Fullstack Monorepo with TypeScript and GraphQL
This post will explore how to build a monorepo which consists of TypeScript, for both frontend and backend, as well as a GraphQL api. It's a stack I've wanted to explore for a while and it's always fun to whip something up quickly just for the sake of exploring and learning new tech. I'm not a newcomer to either TypeScript or GraphQL but I am to having both ends be fully typesafe and building it out in a monorepo.
To be more specific about the project, the stack includes the following:
- Monorepo
  - pnpm
  - Turborepo
- Frontend
  - TypeScript
  - Vite.js (React)
- Backend
  - TypeScript
  - Node.js (Express)
  - GraphQL
  - Postgres
Small disclaimer: this is an exploratory project, so the focus is on the tech stack described above rather than production polish. Right, let's get on with it, shall we?
Introduction
A monorepo is a code repository that contains multiple projects, e.g. a backend API and a frontend client, that can be both developed and deployed together.
Whenever I am learning a new technology—which as a developer feels like daily—I ask one key question which I find helps solidify knowledge in a useful manner: why?
So, why a monorepo? Essentially, it allows us to share code more easily, e.g. a component library used by multiple clients; it gives us easier dependency management; and it allows for a more simplified deployment as well. Biggest perk though? Not over-engineering your application when it's not needed (cough cough, microservices).
Following the thread, why TypeScript? Barring the boring 'professional' answer and instead exposing my own: why not? It's safer, it's fun (trust me, it is) and it allows you to write better code. Maintaining unsafe, ambiguous and unpredictable JavaScript is not fun. Setting up TypeScript is not that hard and will reap far more benefits than annoyances. It already won, so I'd jump in as soon as you can. I did so about a year and a half ago and I do not regret it.
Next up, why GraphQL? This one is trickier. I remember 3 years ago GraphQL was being touted as the REST(ful) killer, as in, we wouldn't write RESTful APIs anymore. I was a wide-eyed junior dev back then and absolutely bought the hype, despite not knowing any better. I'm sure it's a cycle I'll repeat with new incoming tech, but alas, that is the way we learn.
It may not have killed RESTful APIs, but that doesn't mean it hasn't found its place. In fact, I believe the community is still learning where exactly it fits best, e.g. as a services aggregator or a multiple-client API. Point being, the tech is not going away and it has its use cases. I took this as an opportunity to try it again with a more educated mindset, exploring it to find out first-hand its strengths and weaknesses.
That's some context as to the why of this small case-study project.
You can find all the code for this project in this repository. Feel free to clone, PR, critique, praise, scoff and gasp as it is just some put together code!
The final project is hosted here.
Enough table-setting, on to the code!
Setting up the Monorepo
We will use pnpm as our package manager as it supports workspaces out of the box and we'll also use Turborepo for a faster building pipeline and quality of life improvements when dealing with monorepos.
Let's look at the structure of our project:
```
.
├──app
│   ├──api
│   └──client
├──Dockerfile
├──package
│   ├──eslint-config-custom
│   └──tsconfig
├──package.json
├──pnpm-lock.yaml
├──pnpm-workspace.yaml
├──README.md
└──turbo.json
```
The `pnpm-workspace.yaml` file looks like this:

```yaml
packages:
  - "app/**"
  - "package/**"
```
This informs pnpm that any directory with a `package.json` file inside the `app` or `package` directory can be considered a package in itself. The `app` directory will contain our applications, i.e. `api` being our backend web application and `client` being our frontend client application. The `package` directory, arbitrarily and perhaps confusingly named, will contain shared code we can add to our other projects, e.g. a custom ESLint config or a custom TypeScript config.
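For illustration, consuming one of these shared packages from an app is done through pnpm's `workspace:` protocol in that app's `package.json` (the exact entries here are assumptions, not necessarily the repo's):

```json
{
  "devDependencies": {
    "eslint-config-custom": "workspace:*",
    "tsconfig": "workspace:*"
  }
}
```

pnpm symlinks the local package into `node_modules`, and on publish it replaces the `workspace:` range with a concrete version.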
Developing the Backend
In our backend we will develop a Node.js web application using the Express framework and the GraphQL Yoga package to build out our GraphQL API. Let's look at the directory structure of our `api` application:

```
./app/api
├──codegen.ts
├──nodemon.json
├──package.json
├──src
│   ├──app.ts
│   ├──config
│   ├──lib
│   ├──middleware
│   ├──modules
│   ├──routes
│   ├──server.ts
│   ├──tsconfig.json
│   └──types
├──tsconfig.json
└──vitest.config.ts
```
Let's break some of the most important parts down:
- The `server.ts` file is our main entry point where we start our server.
- The `app.ts` file is where our Express application lives.
  - It's nice to decouple your Express app from the actual server initiation, as the modularity helps during development and testing.
- The `routes` directory is where we compose our Express routes.
  - The `graphql.route.ts` file is where we create our GraphQL server with the GraphQL Yoga package.
  - The `api.ts` file is simply our Express router. If we wanted to add more routes to our app we would simply compose a new one inside this directory, e.g. `health/health.route.ts`.
- If you peek inside `graphql.route.ts` you'll find that we merge the different parts of our GraphQL schema when setting up the server. These different parts are modularized through the `modules` directory. Let's explain it a bit more:
  - Each module has a `typedefs/*.graphql` file in which the part of the GraphQL schema definition corresponding to that module is defined. We'll look at an example later below to clear up that confusing sentence.
  - Each module directory also has a `resolvers.ts` file in which we define the business logic for the queries and mutations corresponding to the given module.
  - Some modules have a `model.ts` file. Inside these files we have the business logic for the actual queries we make to our database instance. In this case it's a couple of functions encapsulating queries to a Postgres database through the `pg-promise` package. Most people opt for an ORM such as Prisma nowadays, but I just like writing my own queries.
All well and good, but let's look at the `recipe` module code to understand a bit more how this is connected. Let's first take a look at our `modules/recipe/typedefs/recipe.graphql` schema definition:
```graphql
extend type Query {
  recipe(id: ID!): Recipe
  recipes(limit: Int, offset: Int): [Recipe]!
}

extend type Mutation {
  addRecipe(
    title: String!
    description: String!
    instructions: [String!]!
    ingredients: [String!]!
    image_url: String
    author: ID!
  ): Recipe!
}

type Recipe {
  id: ID!
  title: String!
  description: String!
  instructions: [String!]!
  ingredients: [String!]!
  image_url: String
  author: User!
  created_at: DateTime!
  updated_at: DateTime!
}
```
As you can see, we have both `Query` and `Mutation` definitions in this schema. Also noticeable is that we're only defining the schema corresponding to "Recipes" in our app. This schema will get merged with the other modules' schemas before instantiating the server.
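"Merging" here conceptually means concatenating the per-module type definitions and combining the per-module resolver maps before handing both to the server. The repo likely uses a helper such as `@graphql-tools/merge` for this; the following is just a dependency-free sketch of the idea, with illustrative module names:

```typescript
// Hypothetical per-module pieces (names and fields are illustrative).
const recipe_typedefs = /* GraphQL */ `
  extend type Query {
    recipe(id: ID!): String
  }
`;
const user_typedefs = /* GraphQL */ `
  extend type Query {
    user(id: ID!): String
  }
`;

const recipe_resolvers = { Query: { recipe: () => "recipe" } };
const user_resolvers = { Query: { user: () => "user" } };

// Merge: concatenate the typedefs, shallow-merge each top-level resolver map
// (Query, Mutation, type resolvers) across modules.
function merge_modules(
  modules: Array<{ typedefs: string; resolvers: Record<string, object> }>
) {
  const typeDefs = modules.map((m) => m.typedefs).join("\n");
  const resolvers: Record<string, object> = {};
  for (const m of modules) {
    for (const [type, fields] of Object.entries(m.resolvers)) {
      resolvers[type] = { ...(resolvers[type] ?? {}), ...fields };
    }
  }
  return { typeDefs, resolvers };
}

const schema_parts = merge_modules([
  { typedefs: recipe_typedefs, resolvers: recipe_resolvers },
  { typedefs: user_typedefs, resolvers: user_resolvers },
]);
```

The real helpers do more (conflict detection, AST-level merging), but the shape of the result is the same: one schema string and one combined resolver map.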
Let's now look at the `modules/recipe/resolvers.ts` file. I won't show the whole file, just a snippet in order to make it more comprehensible. Feel free to check the complete file in the repo.
```ts
import { z } from "zod";

import type {
  MutationAddRecipeArgs,
  QueryRecipeArgs,
  QueryRecipesArgs,
  Recipe,
} from "../../types/types";
import { logger } from "../../lib/logger";
import { user_resolver } from "../user/resolvers";
import { createRecipe, getRecipeById, getRecipes } from "./model";

async function recipe(
  _parent: any,
  args: QueryRecipeArgs,
  ctx: any,
  info: any
): Promise<Recipe | void> {
  let id = z.string().parse(args.id);
  try {
    let recipe = await getRecipeById(id);
    recipe.author = await user_resolver.Query.user(
      null,
      { id: recipe.author_id.toString() },
      ctx,
      info
    );
    return recipe;
  } catch (error) {
    logger.error("%o", error);
  }
}

// ...

let recipe_resolver = {
  Query: {
    recipe,
    //...
  },
  Mutation: {
    //...
  },
};

export { recipe_resolver };
```
Let's break some of it down. Also, don't mind the casing inconsistencies in function naming; that's just me trying to convert into a `snake_case` aficionado. In a real-world project you would adhere to whatever style it uses.
- Yes, I know `zod` is a bit redundant here; GraphQL schemas already provide the validation. Honestly, this is just a habit by now of always validating on the server. Not really needed, but you never know what weird edge case you might be covering. Besides, more practice with `zod` can't hurt. On to the important bits.
- Our `recipe_resolver` will be the object containing all the resolvers for both the `Query` and `Mutation` definitions corresponding to the `Recipe` schema.
- One example of such a resolver is the `recipe` query.
  - Inside it we use the `getRecipeById` model function, defined in `./model.ts`, which simply runs a SQL query against our database to retrieve a given recipe by its id.
  - Interestingly though, on the next line we add a new property to the received `recipe` object. As an aside, a more rigorous approach would be not to mutate the object itself, but this works.
    - Notice that to define this property we use the `user_resolver` imported from `modules/user/resolvers.ts`. This pattern is called nested resolvers: we re-use the logic of one resolver inside another, which also allows GraphQL to perform recursive queries.
    - In this case it allows us to query for a recipe and its related author as well.
- We could perform better error handling than simply logging, but logging goes surprisingly far when debugging.
That concludes a quick sneak peek into how the GraphQL server is built. If we look inside `routes/api.ts` we will find that our server is served at the `/v1/graphql` endpoint of our application.
Wait! What about those nice types we had in our resolvers? Where did they come from? Well, they come from a nifty little package called `@graphql-codegen/cli`. This package can read a GraphQL schema—through different methods—and automatically generate types for us that we can then use for type-safe API authoring. It can also be used for client-side code, which is actually the more common use. We won't get into the full depth of it in this post, but know that it's a key part of creating a robust full-stack application with end-to-end type safety.
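For reference, a minimal `codegen.ts` for a setup like this could look as follows. This is a sketch, not necessarily the repo's actual config; the glob path and output file are assumptions (check `app/api/codegen.ts` for the real thing):

```ts
import type { CodegenConfig } from "@graphql-codegen/cli";

const config: CodegenConfig = {
  // Pick up every module's schema slice (path is an assumption).
  schema: "src/modules/**/typedefs/*.graphql",
  generates: {
    // Emit the server-side types (e.g. QueryRecipeArgs) used by the resolvers.
    "src/types/types.ts": {
      plugins: ["typescript", "typescript-resolvers"],
    },
  },
};

export default config;
```

The `dev:codegen` script we'll see later runs this in watch mode, so the generated types stay in sync as the schema files change.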
Developing the Frontend
Whew! That was a lot of information to process for our backend, and we skipped a lot of it! The frontend, however, is thankfully more straightforward.
Let's talk about why we chose Vite.js for our React application. In a personal project I would rather choose a framework like Next.js or Remix and use it as a BFF (Backend for Frontend). However, I wanted to give old SPA React a spin, as a lot of real-world projects still have this setup. Vite.js makes it super easy to create a new React project with all the bundling and TypeScript compilation set up for us; seriously, it's a breeze. Just use one of its templates and start hacking. Serious props to this project; it should be the de-facto starter for any client-side React project.
Not much more to it other than installing some packages, e.g. `react-router-dom` for routing, `tailwindcss` for styling and `@apollo/client` for state management. As you can tell, this is most definitely a backend-focused project.
The `@apollo/client` package is simply a server-state management solution, in the vein of `react-query` and RTK Query (`@reduxjs/toolkit/query`). What does "server-state management solution" mean? In basic terms, it handles fetching, caching and revalidation for you, because when you deal with server state, keeping your UI in sync with your server after every single interaction is both hard and important. These libraries help us with this.
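To make "fetching, caching and revalidation" concrete, here is a toy, dependency-free sketch of the core idea (a stale-while-revalidate cache). This is an illustration of the pattern, not how `@apollo/client` is actually implemented:

```typescript
// Toy server-state cache: serves cached data for a known key immediately
// while kicking off a background refresh (stale-while-revalidate).
type Fetcher<T> = () => Promise<T>;

class QueryCache {
  private cache = new Map<string, unknown>();

  async query<T>(key: string, fetcher: Fetcher<T>): Promise<T> {
    if (this.cache.has(key)) {
      // Serve stale data now, revalidate in the background.
      void fetcher().then((fresh) => this.cache.set(key, fresh));
      return this.cache.get(key) as T;
    }
    // Cache miss: fetch, store, return.
    const data = await fetcher();
    this.cache.set(key, data);
    return data;
  }

  // Mutations would call this so the next query refetches.
  invalidate(key: string) {
    this.cache.delete(key);
  }
}
```

Real libraries layer a lot on top of this (request deduplication, normalized caches, retry and error state), but the mental model is the same.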
Apollo, however, has its niche with GraphQL servers specifically, which are notoriously more difficult to cache and optimize. It has tons of features that help smooth all the rough edges of working with a GraphQL server. In our case we're simply using it for basic querying and mutations.
Let's take a look at how it's set up in our client application:
```tsx
// @client/src/main.tsx
import React from "react";
import ReactDOM from "react-dom/client";
import { RouterProvider, createBrowserRouter } from "react-router-dom";
import { ApolloClient, InMemoryCache, ApolloProvider } from "@apollo/client";

import App from "./App";
import "../styles/tailwind.css";
import Recipe from "./routes/$recipe";
import User from "./routes/$user";

let router = createBrowserRouter([
  {
    path: "/",
    element: <App />,
  },
  {
    path: "/recipe/:id",
    element: <Recipe />,
  },
  {
    path: "/user/:id",
    element: <User />,
  },
]);

let client = new ApolloClient({
  uri: `${import.meta.env.BASE_URL}v1/graphql`,
  cache: new InMemoryCache(),
});

ReactDOM.createRoot(document.getElementById("root") as HTMLElement).render(
  <React.StrictMode>
    <ApolloProvider client={client}>
      <RouterProvider router={router} />
    </ApolloProvider>
  </React.StrictMode>
);
```
Let's break it down:
- We are instantiating a new Apollo client with a basic setup.
  - For the `uri` property we pass the pre-configured `BASE_URL` environment variable that Vite provides for us, as we will be serving this client application through the Express web application, i.e. at `/v1/graphql`.
- We are wrapping our React application, which includes the `RouterProvider` from React Router, with the `ApolloProvider`. This component allows us to access Apollo state throughout our whole application.
Let's take a look at a simple route from our application to see how this works, for example the `$recipe` component, which renders a dynamic recipe given an id in the URL segment:
```tsx
import { Link, useParams } from "react-router-dom";
import { useQuery, gql } from "@apollo/client";
import { nanoid } from "nanoid";

import { ImageWrapper } from "../components/ImageWrapper";

export default function Recipe() {
  let params = useParams();
  let recipe_id = params.id;

  let GET_RECIPE = gql`
    query GetRecipeById {
      recipe(id: ${recipe_id}) {
        id
        title
        description
        instructions
        ingredients
        image_url
        author {
          id
          username
        }
      }
    }
  `;

  let { loading, error, data } = useQuery(GET_RECIPE);

  if (error) {
    return <p>Error: {error.message}</p>;
  }

  if (loading) {
    return <p>Loading...</p>;
  }

  let recipe = data.recipe;

  return (
    <main className="center mlb-xl">
      <article className="stack">
        {loading ? (
          <>
            <h1 className="text-4">
              Hang tight. We're getting your recipe!
            </h1>
          </>
        ) : (
          <RecipePresentation recipe={recipe} />
        )}
      </article>
    </main>
  );
}
```
The `useQuery` hook provided by `@apollo/client` takes in a GraphQL query as its parameter and provides `loading`, `error` and `data` values that allow us to build our UI the way React shines most: declaratively and through state.
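As a side note, the component above interpolates `recipe_id` directly into the query string. That works for numeric ids, but GraphQL variables are the more idiomatic and injection-safe way to pass arguments, and they keep the query document static. A hypothetical refactor of the same query (field selection trimmed for brevity):

```tsx
import { useParams } from "react-router-dom";
import { useQuery, gql } from "@apollo/client";

// A static document: the id arrives at call time through `variables`.
const GET_RECIPE = gql`
  query GetRecipeById($id: ID!) {
    recipe(id: $id) {
      id
      title
      description
    }
  }
`;

export function useRecipe() {
  let params = useParams();
  // Apollo serializes `variables` for us; no string interpolation needed.
  return useQuery(GET_RECIPE, { variables: { id: params.id } });
}
```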
We could keep going on about our client, specifically about caching and revalidation, which are important features of the `@apollo/client` package, but in the interest of time let's move on to deployment!
Building and Deploying
As mentioned above, in reality we will be deploying one application, the `api` application, as it will also serve the bundled code for our client. That is, both our API and the client application will live on the same domain.
We do so by configuring our Vite application to build its output to `app/api/dist/client`. Then we can serve our client application by configuring our `app/api/src/app.ts` file, i.e. our Express application.
```ts
import type { Request, Response } from "express";
import express from "express";
import cors from "cors";
import path from "path";

import { mw_morgan } from "./middleware/morgan.mw";
import { api } from "./routes/api";
import { logger } from "./lib/logger";

let app = express();

// middleware
app.use(mw_morgan);
app.use(
  cors({
    origin: "*",
    credentials: true,
  })
);
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

// serve public content
let publicPath = path.join(__dirname, "./client");
logger.debug(`Public path: ${publicPath}`);
app.use(express.static(publicPath));

app.get("/health", (_req: Request, res: Response) => {
  return res.status(204).send("OK");
});

app.use("/v1", api);

app.get("*", (_req: Request, res: Response) => {
  return res.status(404).send("404 - Not Found");
});

export { app };
```
Now we can simply write corresponding build scripts for both our client and api applications and run them together thanks to Turborepo!
`@app/client/package.json`:

```json
"scripts": {
  "dev:vite": "vite",
  "dev": "run-p dev:*",
  "build": "tsc && vite build",
  "preview": "vite preview",
  "generate:css": "tailwindcss -i styles/tailwind.css -o src/styles/tailwind.css"
},
```
`@app/api/package.json`:

```json
"scripts": {
  "build": "run-s clean build:*",
  "build:prod": "swc src -d dist",
  "dev:swc": "swc src -d dist --watch",
  "dev:codegen": "pnpm generate:codegen --watch",
  "dev:server": "nodemon",
  "dev": "run-p dev:*",
  "clean": "rimraf dist",
  "generate:codegen": "graphql-codegen --config codegen.ts",
  "test": "vitest --run",
  "test:watch": "vitest"
},
```
`/package.json`:

```json
"scripts": {
  "start": "node ./app/api/dist/server.js",
  "build": "turbo run build",
  "dev": "turbo run dev",
  "lint": "turbo run lint",
  "clean": "turbo run clean",
  "format": "prettier --write \"**/*.{ts,tsx,md}\""
},
```
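The root `turbo run` scripts are driven by a `turbo.json` at the repo root. As a sketch (not necessarily the repo's exact file), a minimal pipeline for this setup could look like:

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "dev": { "cache": false },
    "lint": {},
    "clean": { "cache": false }
  }
}
```

`"dependsOn": ["^build"]` tells Turborepo to build a package's workspace dependencies first, and `outputs` declares which artifacts to cache between runs.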
Now that our scripts are set up, we want to deploy this thing! In my case I like deploying Node applications through Docker containers, however serverless is always an alluring alternative as it requires less setup and server management overhead. Platforms like Vercel and Netlify are great if you're going for serverless deployment. With containerized applications you might want to go full DevOps and spin up new boxes on your preferred cloud platform, however a good balance is to use a PaaS. My favourite is Railway. Other options include Render and Fly.io.
In order to have our application containerized we need to create a `Dockerfile`. I won't get too much into it, as it starts to fall outside the scope of this already long post, but feel free to look at the Dockerfile in the repo for reference. Suffice it to say, whenever writing Dockerfiles, try to break them into incremental steps so as to leverage cached layers in your builds, making deployments faster, saving you money and allowing for easier debugging!
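To make the layer-caching point concrete, here is a rough sketch (not the repo's actual `Dockerfile`; the image tag and paths are assumptions). The key idea is copying the lockfile and package manifests before the source, so dependency installation stays cached while only code changes:

```dockerfile
FROM node:18-alpine
# pnpm ships with corepack in modern Node images.
RUN corepack enable
WORKDIR /repo

# 1. Manifests first: this layer only busts when dependencies change.
COPY pnpm-lock.yaml pnpm-workspace.yaml package.json turbo.json ./
COPY app/api/package.json app/api/
COPY app/client/package.json app/client/
RUN pnpm install --frozen-lockfile

# 2. Source next: code changes re-run only from here on.
COPY . .
RUN pnpm build

# 3. Start the built server (serves the API and the bundled client).
CMD ["pnpm", "start"]
```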
Once we have that `Dockerfile` in our repo we can simply point our new Railway project at our GitHub repository, where it will recognize the `Dockerfile` and run the appropriate commands to build and deploy the image in a new virtual box. All you have to do now is commit code and continue developing your full-stack application!
Conclusion
We've covered the basics of building and deploying a full-stack monorepo using TypeScript and GraphQL. With this setup, you can efficiently manage and scale your full-stack applications with ease.
If you have any doubts, questions, suggestions, corrections, grievances or compliments, feel free to write to [email protected] or @sillypoise, and I'll happily answer any of the above.
Until next time, cheers!