Build a documented, type-safe API with Hono, Drizzle, Zod, OpenAPI and Scalar


Building an API from scratch can be seamless and efficient when you leverage the power of Hono and OpenAPI for comprehensive documentation and type safety.

Have you ever come across API documentation like this, where you can see literally every single endpoint documented? You can dive in and see all of the possible response types, view examples of them, and even access schemas. Additionally, you can test those specific endpoints; it actually works and hits the API, allowing you to set the headers and everything else like that.

In this video, I'm going to show you how to set this up and how to build an API from scratch using Hono that fully documents itself using the OpenAPI specification. Our API's specification is served at /doc, where every single endpoint is documented. OpenAPI is actually an ecosystem of tools that can use this specification to create interactive documentation like this or to generate client-side libraries and other resources.

Over the past few weeks, I’ve been building apps with Hono. I specifically have been working with their middleware called Zod Open API, which essentially allows you to use Zod to define all of your schemas and then define your route contracts. This approach provides type safety when you’re defining your request handlers, and all of that gets turned into fully interactive documentation.

Now, we’re not just going to talk about Open API; I’m going to show you all the things I’ve learned about Hono as well and share some best practices. I’ll demonstrate how I set up the app in a nice and clean way, creating all of these little helpers that essentially set up our middlewares and everything else that we need. When we’re setting up Open API, we can do it in a nice scalable way. For instance, we can set up all of our route definitions in one spot and all of our handlers in another, but everything remains type safe.

If we look at our route definitions, we can see that they are organized, and I have helpers that assist us in defining response types for specific status codes. From there, we export the types of these route definitions, which we can then use to define our request handlers. This allows us to define our handlers in a separate file while maintaining full type safety, ensuring that we know exactly what we should be responding with for a specific method based on that specific route.

Beyond that, Hono makes it really easy to test these endpoints because they have a built-in test client that is essentially similar to the Hono RPC client. This client can be used inside client-side code and offers full type safety, providing a really nice testing experience. Furthermore, we can connect it all up to Drizzle. Drizzle Zod allows us to define our tables and then use those table definitions to create our Zod schema. We can use these schemas directly inside our Open API definition, creating a single source of truth where we define our tables, get our schemas, and then use those schemas inside our route definitions. Consequently, all of the types flow through when we’re defining our handlers, resulting in a beautiful experience.

In this video, I’m going to show you how we set this up from scratch. As I mentioned, I’ve been working with Hono for the past few weeks and was doing a lot of boilerplate tasks repeatedly. To streamline this, I created my own library called Stoker, which lets you "stoke the flame" since Hono means flame. This library includes a bunch of built-in utilities, middlewares, and various helpers that reduce the boilerplate when defining your API in this manner. I’ll show you how this library works and the source code, as you can write this code directly into your own app if you prefer not to use the library.

Additionally, I’ve created a starter template called the Hono Open API starter, which is essentially what we’re going to build in this video. It is fully documented and tells you where everything is, so you don’t have to watch the entire video to access the code I’m going to show you. You can start from there and dive into the code directly.

As I said, I’ve figured out a lot of niceties and best practices for working with Hono. Even if you don’t want to use the starter kit or if you’ve already been working with Hono, definitely check the timestamps in the description so you can skip ahead to the parts that interest you and that you can potentially adopt into your own Hono apps. If that sounds good to you, let’s get into it. My name is CJ, and welcome to Syntax.


=> 00:04:04

Start coding smarter, not harder. Use preconfigured tools and best practices to streamline your development process and focus on what really matters.


Now, first up, let's generate an application. If you head over to Hono.dev and click on Getting Started, they have a CLI tool that you can run to generate an app. I'll copy this command and then run it in my terminal. Now, I'll give it a name; we're going to be building an API for tasks, so I'm going to call this the Tasks API. Now we need to choose where we're going to deploy. I'm actually just going to choose Node.js, and I'll show you how we can set this up so that if we want to deploy somewhere else later on, we will be able to do that. But for now, we're just going to choose Node.js. Yes, we'll install dependencies. I'm using pnpm. Here we go!

Now, let's head into that directory and then open it up with VS Code. You can see that by default, we get a source directory with an index.ts that just has a basic Hono app set up inside of here. If we take a look at our package.json, it's just using TSX to serve up that app. Now, before we start writing any code, I want to get ESLint set up so I have all of my preferred formatting settings and also editor settings. For this, I'm actually just going to use Anthony Fu's ESLint config (@antfu/eslint-config), which is preconfigured. If you don't want to change any of the settings, literally all you have to do is install it and then export it, and you don't have to mess around with a bunch of ESLint plugins. So, it's really nice.

Now, he provides a CLI tool, and this will automatically create the config file and also set up your VS Code folder. If we run this command, it'll ask us some questions. It says there are uncommitted changes; that's okay with me. You can use this ESLint config for a lot of different types of projects. We're in a backend app, so I'm not going to select any framework here, and I am going to choose to use the formatter. Now, I'm also going to generate that settings.json, and we should be good to go.

At this point, if we check out our app, first of all, we have some good default settings here in this file. I especially like the one that does automatic fixing of files using ESLint. So, anytime you save your file, it's automatically going to get formatted and linted, which is great. Then, of course, we have our config file, which by default is super basic and has all the default settings. I have some of my own preferred settings, and I just keep this in a GitHub Gist. So, I'm actually just going to overwrite these settings with my preferred settings.
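For reference, the generated flat config file ends up tiny. With @antfu/eslint-config it is essentially just the following (a sketch; my extra preferred settings are options passed to that same function):

```ts
// eslint.config.mjs (generated by the @antfu/eslint-config CLI)
import antfu from "@antfu/eslint-config";

export default antfu({
  // Options and rule overrides go here; the defaults work out of the box.
});
```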

Let's go ahead and update dependencies because it added those dependencies, but it didn't actually install them. If I just do a pnpm install, that'll make sure to download those dependencies. Now, if I head into a source file and then save it, it should use all of my default settings. If this doesn't work automatically, you might need to reload VS Code. If you press Command + Shift + P and then do Reload Window, that will ensure that all of your latest ESLint settings are loaded, and then it should be able to apply those settings.

Lastly, we should set up a lint script. If you head over to the package.json, we can add a new script here called lint, and we'll just run eslint . so it'll run it on the current directory. Now, from the command line, we can do npm run lint, and that'll actually run the linter on all of our files. We can see there are already some issues. If I want to automatically fix these, I can just do npm run lint -- --fix, and anything that's auto-fixable will be fixed for me. Then, I'll just see any warnings or errors that are left behind. Right now, we do have a console statement.

Now, I like to have fix as a separate command, so I'll add that as a separate command here. I can do lint:fix, and this will just be npm run lint -- --fix. So now, if I ever want to fix all of my files automatically, I have a separate command I can run.

=> 00:07:29

Automate your coding workflow by creating custom commands and using path aliases for cleaner imports.


Additionally, for my TypeScript setup, I like to configure it so that I can use import aliases. For instance, instead of using relative paths when importing from src/lib or src/routes, I can simply use @/lib or @/routes, depending on the folder. To achieve this, I will modify my tsconfig file by adding a section called paths, where I will map @/* to anything in the src directory. This change will allow me to have cleaner imports without the hassle of relative paths.

Next, we will install the Hono Zod OpenAPI package (@hono/zod-openapi). Once this package is installed, it will be the dependency I use whenever I need to create a Hono app. To set it up, I will refer to their repository for the installation command. After installation, I can use Zod OpenAPI to create an app. In the documentation, I can see that we need to import OpenAPIHono from @hono/zod-openapi and use it to create the app. This class is a wrapper around Hono, so my app functions the same way.

To keep my code organized, I will extract the app definition into its own file. I will create a new file called app.ts, where I will define the app and then export it as the default export. After that, I will import this exported app into index.ts. Once I save the changes, everything should be set up correctly. Essentially, all the code I write for Hono and Open API will reside in this file, and whenever I need to run it locally, I will reference this file. If I decide to deploy to platforms like Vercel or Cloudflare, I can create an updated entry point that uses a different server while still passing in the same application.
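As a rough sketch of that split (the serve options come from @hono/node-server's documented API):

```ts
// src/app.ts: all of the Hono and OpenAPI code lives here
import { OpenAPIHono } from "@hono/zod-openapi";

const app = new OpenAPIHono();

app.get("/", (c) => c.text("Hello Hono!"));

export default app;
```

```ts
// src/index.ts: the Node-specific entry point; other platforms get their own
import { serve } from "@hono/node-server";

import app from "./app";

const port = 3000;

console.log(`Server is running on port ${port}`);

serve({ fetch: app.fetch, port });
```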

Before starting the server, I will update the log to display localhost: followed by the port number. This adjustment will allow me to click the URL directly from the console. Additionally, I want to allow a console log in this specific instance, so I will disable the linter warning for this line. By using command + period, I can disable the no-console rule for this line, eliminating the linter error.

Now, I can start the app. Referring to my package.json, I see that there is a dev script that watches index.ts. By running npm run dev from the command line, my Hono app should start up and listen for requests. If I navigate to localhost:3000, I should see the index route.

One of the first tasks I like to accomplish when setting up a Hono API is to ensure that I have a not found handler that responds with JSON in a consistent manner, as well as an error handler. By default, there is a built-in not found handler that responds with a 404 status. However, since this will be a pure JSON API, I want to customize the response to also return JSON. According to the Hono documentation, there is a not found method on the app that allows me to pass in a custom handler.

To avoid repeating this setup in every app I create, I developed a custom package called Stoker. As I set up this Hono app, I will utilize some of the built-in methods and utilities from Stoker. The first utility I will use is the not found handler, which I can import from Stoker and pass into the not found method. All of the source code for Stoker is available on GitHub, so if you prefer not to add another dependency to your app, you can simply copy and paste the relevant code.

In the middleware folder of the source directory, I have included several utilities that we will be using. The not found handler essentially responds with an object containing a message that includes the path that was not found. Additionally, my library includes status code constants and status code phrases. For example, it uses the status code not found, which corresponds to 404. This approach helps me avoid hardcoding numbers throughout my codebase, making the status code helpers quite useful.

=> 00:11:02

Consistency in API responses is key for a smooth user experience; always return JSON, even on errors.



Let's install the stoker package. If I do npm install stoker, we get it, and now I can use it. So right here, if I say app.notFound and then search for notFound, you can see that it auto-imports from stoker. Now, with that setup, if we try to go to something that isn't found, like /banana, it's always going to respond with JSON data. This is great because your API clients don't have to worry about figuring out what kind of data your API is responding with; in this case, it's always going to respond with JSON.
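Based on the description above, a hand-rolled equivalent of that notFound handler is only a few lines (a sketch; stoker's actual implementation may differ in detail):

```ts
import type { NotFoundHandler } from "hono";

// Respond to unknown paths with JSON instead of Hono's default plain-text 404.
const notFound: NotFoundHandler = (c) => {
  return c.json(
    {
      message: `Not Found - ${c.req.path}`,
    },
    404,
  );
};

// In src/app.ts:
app.notFound(notFound);
```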

The other thing we want to do is set up an error handler so we can have that consistent JSON response. Built into Hono is another method called onError. You can pass in an error handler and then customize the error response. To see this in action, let's actually create an endpoint that always throws an error. If we do a GET on /error, all this is going to do is just throw an error that says "oh no".

But let's see what happens if you don't have this onError handler set up. By default, if we go to /error, you'll see that it just responds with a generic 500 Internal Server Error. However, we want to have consistent responses, so we can write a custom error handler that always responds with JSON. Again, I have a custom middleware inside of stoker that's called onError, and you can just pass it in.

If you want to take a look at the source code, essentially what it does is check to see if any code before it has set a status code. If it has, that's the status code it will use. So if, for whatever reason, you had set a custom status code earlier with c.status, maybe a 422 or something like that, this piece of code will take that into account and actually use that status code in the response. Aside from that, it's going to grab that error object and include the message. If NODE_ENV is not set to production, it will also show the stack trace.

Now, let's add that error handler. If I do app.onError and then pass in onError, which comes from stoker's middlewares, now if we hit that /error route, it's going to respond with JSON and include the stack trace. This is nice because if, for whatever reason, you forget to do your own error handling, this will always catch it and respond in a consistent way.
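A minimal hand-rolled version looks something like this. Note one simplification: instead of re-using a status code set earlier on the context (which is what stoker's onError is described as doing above), this sketch only special-cases Hono's HTTPException and treats everything else as a 500:

```ts
import type { ErrorHandler } from "hono";
import { HTTPException } from "hono/http-exception";

const onError: ErrorHandler = (err, c) => {
  // HTTPExceptions carry their own status code; anything else becomes a 500.
  const status = err instanceof HTTPException ? err.status : 500;

  return c.json(
    {
      message: err.message,
      // Only expose the stack trace outside of production.
      stack: process.env.NODE_ENV === "production" ? undefined : err.stack,
    },
    status,
  );
};

// In src/app.ts:
app.onError(onError);
```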

Next, I want to set up some request logging. By default, Hono does not log anything when requests come in, but they do have a built-in request logger. If you look in their docs and search for logger, it's built in and pretty basic. But let's go ahead and set it up so we can see how it looks. If I just do app.use and then pass in logger, which comes from hono/logger, then anytime a request comes in, we're going to see it logged out on the command line.

If I make this request and then look back at the command line, you'll see that a request came in for /error and then a response was sent back with status code 422. Now, this logger is perfectly fine if you just want to see the basics of incoming and outgoing requests, but in production, you typically want to see a lot more information about the request so it's easier to debug. The standard for that is to use some kind of structured logging, and Pino is used heavily in the world of Node.js.

This is a generic structured logger, so you can just pass in logs, and by default, it logs as Json. There are various tools that can actually consume these logs and display them visually, so it's a great way to add logging to your app. They have a bunch of helpers for all of the other common web frameworks. There isn't an official one for hono yet, but I did find one that I like. If you search npm for hono Pino, you'll see two different options. I tried them both out and liked this one just because it logs more information about the request and is a bit more customizable.

So, we're going to install hono Pino and also Pino into our app. I'll run that install, and now we can set it up.

=> 00:14:42

Structured logging with Pino transforms your app's logs into clear, customizable JSON, making debugging a breeze.



To set up the logger, I like to create a folder called middlewares. This is where I store any custom middleware or modifications to existing middleware. Inside this folder, I will create a file named PinoLogger.ts. Although the documentation for setting it up is limited, I have examined the source code and identified the configuration interface, which outlines the available options. I will show you the key elements we want to customize.

From this file, we will export a function called pinoLogger, which will return the logger from hono-pino; we'll need to import that manually. By importing the logger from hono-pino, we can access the logger functionality. Let's take a look at what we get by default. I will import this into my app and replace the existing logger with pinoLogger. Now, our app is ready to handle incoming requests. After making a request, we can observe that it logs significantly more information, formatted as JSON objects.

During development, I prefer a more readable format for logs. There are several plugins in the Pino ecosystem, including one called Pino Pretty, which formats the logs for better readability. I will install this plugin using npm install pino-pretty. Once the plugin is installed, we can use it by passing an instance of pretty when creating a Pino instance. After reviewing the examples in the configuration interface for hono Pino, I discovered that we can pass in an object of options, including a Pino instance.

Now, I will import pretty from pino-pretty and pass it into our logger configuration. After making a request to our app, we can see that the logs are now much more readable, complete with colors and other formatting enhancements. This is particularly useful during development, as it allows me to easily identify incoming requests, headers, and response status codes.

Another customization I want to implement is for the request ID. Unique request IDs are crucial for tracking logs, especially when issues arise. By default, the request IDs simply count up, which can be problematic in serverless environments or if the server restarts. To ensure uniqueness, we can pass options to the logger, specifically for the request ID, where we can define a custom function.

I will use the crypto API, which is built into web standards, to generate a random UUID for each incoming request. After implementing this change, I can verify that the request IDs are now unique UUIDs. This is advantageous for distributed deployments or serverless architectures, as it guarantees that all request IDs remain unique, regardless of their origin.


=> 00:18:16

Embrace structured logging for clarity and efficiency; it transforms chaos into organized insights.



Another cool feature of the Pino logger is that you can actually access it inside your request handlers and log using structured logs. For instance, if I'm inside a request handler, I can access the Pino logger instance. By doing c.var.logger, which is the default name, I can then access any Pino logger method.

Pino provides various logging levels such as Trace, Debug, Info, Warn, Error, and Fatal. These are all different levels of logging that you can use now that you have access to the logger. For example, I can say info and log a message like "wow log here." Now, whenever this endpoint gets hit, we will actually see this log in the output of the Pino logs.
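Inside a handler, that might look something like this (a sketch, using the example index route from earlier):

```ts
app.get("/", (c) => {
  // hono-pino puts the logger on the context under "logger", so any pino
  // method (trace/debug/info/warn/error/fatal) is available here.
  c.var.logger.info("wow log here");
  c.var.logger.debug({ userAgent: c.req.header("user-agent") }, "extra debug detail");

  return c.text("Hello Hono!");
});
```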

If I make a request again, we can see that the info log appears right there. This is beneficial because instead of having console logs everywhere, we can have structured logs. If we turn off pretty printing, we can see what these logs look like when loading them into some external tool. By turning off pretty printing and making a request, we can observe that the log isn't just text; it also includes the time it was logged, the process ID, and other relevant information.

The idea with structured logging is that other tools can consume these logs and group them by time and various other attributes. However, I don't want to have pretty printing on all the time. Therefore, I want to enable pretty printing only if we're not in production. To achieve this, we can access it from process.env. Here, I can say process.env.NODE_ENV, which is the standard environment variable that indicates what your environment is. Typically, when deployed to platforms like Vercel or other environments, NODE_ENV will be set to production.

So, we can say if NODE_ENV is equal to production, then we won't do anything, and we will just set it to undefined. Otherwise, we will invoke the pretty instance. We are currently getting a linter error about process.env, which we will fix soon.

If I start up my app and set NODE_ENV to production, by default, those logs will not be pretty printed. This is what you want in production because you want to be able to consume these logs using some sort of log aggregation tool. However, if I'm not running in production, we should see the pretty printed log. If I make a request here, we can see the pretty printed logs.

To fix the type error, we can set up our Hono app so that it recognizes these other variables that have been set using some custom TypeScript types. If you look in the Hono documentation under generics, you can see that we can pass in a generic type to the Hono instance and specify what variables are available.

In the hono-pino source code, it shows that they are setting logger to be a PinoLogger inside the Variables property. We can set that up by creating a custom type, which we will call AppBindings. Inside this type, we have a Variables property. You can set all kinds of custom variables, but in this case, we will just let it know about the Pino logger.

I will specify that logger is an instance of Pino logger and will import that type from Hono Pino. Now, if I use this app bindings type when I access logger, it will be of type Pino logger. The way to use this type is by passing it in as a generic whenever we create the app instance.

By passing in my app bindings, we can see that the type error goes away, and now I get full type completion. I can see all the various methods that I have access to, making it nicer to use since I know what methods are available and I'm not getting a type error.
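A sketch of that type and how it gets passed in (it lives inline for now and moves to types.ts later):

```ts
import { OpenAPIHono } from "@hono/zod-openapi";
import type { PinoLogger } from "hono-pino";

// Custom variables we set on the context; for now, just the hono-pino logger.
export interface AppBindings {
  Variables: {
    logger: PinoLogger;
  };
}

// Passing the bindings as a generic gives c.var.logger the correct type.
const app = new OpenAPIHono<AppBindings>();
```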


=> 00:21:48

Embrace the power of environment variables to streamline your logging and configuration—keep your production clean while debugging with ease.


In hono Pino, it's actually possible to customize the name of the logger. For instance, you could call it my logger, or you could create multiple middlewares that all have a different logger name. However, for my application, we're going to use the same logger everywhere, so I don't need to customize the context key here; I'm just going to use the default name of logger.

Now, one more setting I want to enable for the Pino logger is the log level. Pino accepts some options, and one of those options is level. This can be any one of several levels, and by default, it's set to info. This is nice because, by default, if you have debug logs, they're not automatically going to show up unless you specifically set the level to debug. The levels are ordered from lowest to highest: trace, debug, info, warn, error, fatal. Essentially, whenever I'm in development, I want to see those debug logs, so I'm going to pull the log level from process.env.

I will say process.env.LOG_LEVEL; you can name this whatever you want. Essentially, I'm going to have an environment variable called LOG_LEVEL, and if it's not set, then we'll just have the default of info. Whenever the log level is set, it will use that specific log level. For instance, if I were to do a debug log now, that's a debug log. By default, I don't have an env set up right now, so the level is just going to be info. If we make another request to our app and look at the logs, I'm not going to see that debug log. However, if I start up the app and set the log level to something like debug, then I will see it.

When I make a request to my app, you're going to see that debug log logged out. This is nice because you can leave these logs in your code, and in production, you won't have a super busy output. But if, for whatever reason, something is going wrong and you can't figure it out, you can enable these debug logs, and then you'll be able to see those logs to try and figure out what's going on, even if the code is running in production.
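Putting the pieces from this section together, the middleware file looks roughly like this. It is a sketch: the hono-pino option names (pino and http.reqId) are taken from its configuration interface as I read it, so double-check them against the version you install, and the process.env references get swapped for the type-safe env module later on:

```ts
// src/middlewares/PinoLogger.ts
import { pinoLogger as honoPino } from "hono-pino";
import pino from "pino";
import pretty from "pino-pretty";

export function pinoLogger() {
  return honoPino({
    pino: pino(
      {
        // Show debug logs only when LOG_LEVEL asks for them; default to info.
        level: process.env.LOG_LEVEL || "info",
      },
      // Pretty-print locally, emit raw JSON in production.
      process.env.NODE_ENV === "production" ? undefined : pretty(),
    ),
    http: {
      // A random UUID per request instead of an incrementing counter.
      reqId: () => crypto.randomUUID(),
    },
  });
}
```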

At this point, I actually want to set up an environment variable file, so let's do just that. First, I'll create a file; we'll call it .env. In here, I'm going to set NODE_ENV to development, and I'm also going to set LOG_LEVEL to debug. Additionally, we'll set up the port; I'm going to put this on a different port, like 9999. I actually want to use this PORT variable as well. If we look in our index.ts, right now it's just hard-coded as 3000, but we could also pull it in from process.env.PORT. If that's not set, then we can fall back to port 3000. In this case, I'm just going to parse it as a number because the serve function expects a number.

I'm using environment variables in several places: we've got the port set up here, and then we're using the log level and the NODE_ENV inside of here. Now, let's actually get these ENV variables loaded in. For that, I'm going to use a package called dotenv. I will install dotenv. The other thing is that sometimes I like to use environment variables that require expansion, meaning I want to have environment variables embedded and referenced inside of other environment variables. For that, we actually need a secondary package called dotenv-expand, so I'll install those two things and then set it up.

I'll set this up inside of app.ts right now. We're going to change where it is in a second; I just want to make sure that it's working. If we import config from dotenv, we can do that, and then we also need to call expand. I'll import expand from dotenv-expand like that, and then we can pass config into expand. Now, whenever the app starts up, this will automatically read in that .env file that I have specified here in the folder. It's going to pull in the port number, the log level, and everything else.
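That wiring is only a couple of lines (shown here in app.ts, where it lives for now):

```ts
import { config } from "dotenv";
import { expand } from "dotenv-expand";

// Load .env and expand variables that reference other variables.
expand(config());
```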

Now, if I kill the app, you can see it restarted. It's actually using port 9999 because it read in that .env file.

=> 00:25:30

Create a type-safe environment setup in your app to ensure your variables are always defined and validated, keeping your code clean and error-free.



At this point, I want to use these environment variables in a type-safe way. You can already see that I have a linter error indicating that you should not use process.env anywhere in your application. The reason for this rule is that any app requiring environment variables should access them through a type-safe env module.

To achieve this, we will set up a TypeScript type that provides access to these variables and ensures that the app will not start if the required variables are not set. For this purpose, we will use Zod, a schema validator. We will already be using Zod for validating the schemas of all our endpoints, so it makes sense to use it for our environment variables as well.

I like to create a file called env.ts for this setup. Inside this file, we will import Z from Zod. We should already have Zod installed because, earlier, when we installed hono, it included Zod as a dependency. If you haven't done that, make sure to install Zod.

Now, I can create my env schema as an object. I want to specify all the environment variables that need to be set or provide default values. For instance, I will declare that NODE_ENV is a string with a default value of development. If someone doesn't set it, it will default to development.

Next, I will define PORT as a number using z.coerce.number(). This will read it in as a string and coerce it to a number, allowing us to use it as the number it really is. We will also set the default value for PORT to 9999. Lastly, I will set up LOG_LEVEL using z.enum. For this, I will import the Level type from pino. Since Level is a type, we cannot pass it directly, but we can reference it to see its definition.

Now, I will pass in an array of the log levels, copying the members of that union type and separating them with commas. Although this is a bit of duplication, since we had to pull it out of the pino library, this setup will give me good type completion when writing code that checks the log level.

Additionally, I will move our env code into this file so that everything is handled in one place. I will grab config and expand and include them here. I also want to grab the parsing logic and include it in this file. Now, we will read in the .env, define our schema, and then parse it. I will create a variable called env and set it equal to envSchema.parse while passing in process.env. This is the only place in my entire application where I will access process.env; elsewhere, I will use this type directly.

You can see that if I hover over this, it has a very specific type, which I will use wherever I need my environment variables. I will save this and disable the linter error here, allowing us to use process.env only in this instance. I will then export this as the default option, using export default env.

Next, I will search our codebase for instances of process.env and replace them with our type-safe env. For example, where I previously used process.env.PORT, I will now use env.PORT and import it from env. Since I am setting a default option in my schema, I do not need to parse it again or provide a default option here.


=> 00:29:16

Type safety in your environment variables can save you from runtime errors and make debugging a breeze.



Another place where I was doing it was inside the Pino logger. Here, I actually want to access env like so, and we'll import that from env. Now, I'm accessing my env in a type-safe way. However, let's see what happens if one of these variables isn't defined. Currently, log level does not have a default value, so let's remove it from our env. If we head over to our env and then remove log level, we can kill the app and start it back up. You can see that it actually spits out an error because we're parsing that env whenever the app starts up. This actually kills the app and logs out this Zod error.

To clean that up a little bit, inside of our env file, I'm going to get the type of our env schema. You can do that with z.infer. So, I'm going to export a type called Env, setting that equal to z.infer and passing in the type of env schema. This is a built-in way inside of Zod that allows us to get that type out, and I can just use that type directly.

The next step is to define the variable. First, we'll say let env, which is of type Env, and then we'll assign it below. The reason I want to do this is that parsing can throw an error, but if it doesn't throw, then env is going to be of the right type, so we should be good there. I will have to disable a couple of linter rules here: the one about mutable exports and the one about redeclaration. Now, the exported env is of type Env.

Additionally, I'll wrap this inside of a try-catch block because if it throws, we can log the error out in a nice way. We'll catch this and call it e, knowing that this is a type of Zod error. I can say as ZodError, which comes from Zod. There's a built-in method on the Zod error called flatten, which gives us a nice readable error. We will log that out, so above this, I'll just do console.error and add a nice little X emoji that says "invalid env." Then, we'll also log out the errors here by passing error.flatten into that. Lastly, we'll exit the process, so I'll just say process.exit. If we give an exit code of one, that means there was an error, and now the app will not start up, providing a nice error logged out.

Now, if I try to start up the app, you'll see that it says "invalid env" and tells us specifically what the issue was. In this case, I just want field errors, so I'll update the log to say flatten and then grab those field errors. Now, it tells us explicitly what we're missing. If I head back over to my env and add a log level, let's say I set it to debug, I can save that, and if I restart the app, it should actually start up. This is a nice way to do things because now, if any of those required environment variables are missing, the application won't even start up. This makes it much easier to debug, even in production, if you're missing an env variable because the app literally won't start, and you'll see in your logs exactly which environment variables are missing.
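Putting the whole file together, env.ts ends up looking roughly like this (a sketch following the steps above):

```ts
// src/env.ts: the only place process.env is touched directly
import { config } from "dotenv";
import { expand } from "dotenv-expand";
import { z, ZodError } from "zod";

expand(config());

const envSchema = z.object({
  NODE_ENV: z.string().default("development"),
  PORT: z.coerce.number().default(9999),
  LOG_LEVEL: z.enum(["fatal", "error", "warn", "info", "debug", "trace"]),
});

export type Env = z.infer<typeof envSchema>;

// eslint-disable-next-line -- mutable export on purpose; assigned exactly once below
let env: Env;

try {
  // eslint-disable-next-line -- the one place we read process.env directly
  env = envSchema.parse(process.env);
}
catch (e) {
  const error = e as ZodError;
  console.error("❌ Invalid env:");
  console.error(error.flatten().fieldErrors);
  process.exit(1);
}

export default env;
```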

From this point, anytime I need another environment variable like a database URL or an auth secret, I will add it specifically to my schema to ensure that we're using it in a type-safe way.


=> 00:33:04

Transform your server's favicon into a unique emoji response and streamline your app setup with reusable functions for cleaner code and easier testing.

Another middleware I want to add sets up a favicon. By default, whenever a web browser makes a request to our server, it also requests /favicon.ico, and our server currently responds with a 404 error because that file does not exist.

However, within my stoker package, I have a middleware called serveEmojiFavicon. This middleware allows me to pass in an emoji, so that whenever a request comes in for /favicon.ico, it will respond with an SVG that contains that emoji. I will set this up right before the Pino logger. To do this, I will use app.use to call serveEmojiFavicon, importing it from stoker's middlewares, and then pass in a specific emoji. Since this is a tasks API, I will use the pencil-and-paper emoji. Now, with this middleware in place, any request for /favicon.ico will automatically respond with that emoji. If I navigate directly to /favicon.ico, I will see a giant emoji, and whenever I visit the site in my browser, that emoji will also appear in the URL bar.

At this point, everything is looking pretty good. We have all of our default middlewares, our error handlers, and all of our custom types set up. However, I want to clean things up a bit before we start adding our nested routes. One thing I plan to do is take all the logic that sets up our app—such as adding these middlewares and the not-found and error handlers—and encapsulate it into a reusable function. This will be beneficial later when we are testing our nested routes, as we will have our logger middleware and everything else available when testing our routes in isolation.

To achieve this, I will create a new folder called lib in the source directory, which stands for library. Inside this folder, I will create a file called createApp.ts. Essentially, I will transfer everything from app.ts into this new file but will wrap it in a function. I don't need the handlers or the slash route; I only want the main setup. Therefore, I will encapsulate this in a function called createApp, and then return the app from this function. This will be exported as the default from this file. To update the import, I can use the import alias we set up earlier, so it will know to import from the source directory in the middlewares folder.

After making these changes, I will return to app.ts and remove all the setup code we previously established. I will eliminate all the unused imports, and now I can simply call createApp using the import alias. This gives me a nice reusable function that includes all of my middlewares and everything else, which will be useful for testing later on.

Furthermore, I want to reorganize the app bindings because I may need them in multiple places. Therefore, I will create a file called types.ts, move AppBindings there, and export it from that file. Consequently, I will need to import it from types in my create-app file and remove any unused imports.

Additionally, I like to disable strict mode for my Hono apps. According to the documentation, by default, Hono differentiates between /hello and /hello/, which I do not want for my applications. For instance, if I navigate to /error, it works, but /error/ returns a not found error due to the trailing slash. Therefore, when I set up the Hono app, I will pass in strict: false. This way, when I request /error/, it will resolve to the actual error route as expected.

Next, I will set up OpenAPI with our application. I will remove all the example routes since they are unnecessary and create a function called configureOpenAPI. In this function, I want to pass in my Hono app and add the /doc endpoint, which will respond with the OpenAPI specification at that path. Afterward, I will set up our documentation page that utilizes those docs, but we will handle that in the following steps.

=> 00:36:42

Always keep your API routes clean and organized to avoid confusion and errors.



To begin, I will take in the app. To obtain the type of the app, we can use the same type that we use to instantiate it. However, I will define this inside of types.ts to ensure it is reusable, as we will use it in other places as well. I will export a type called AppOpenAPI and set it equal to this specific type. Consequently, anywhere I need this type, which is specifically an OpenAPIHono instance with these specific bindings, I can simply use my AppOpenAPI type.

Now that I have defined that type, I will use it right here, specifying that this will be a type of app open API. From there, we can call our documentation function. I will grab this and call it, updating the title to just be the tasks API. For the version, I want this to stay in sync with the version of my npm repository. First, I will define the version in our package.json, which is a standard field. I will set this to 1.0.0 and then import that package.json directly into this file to use the version field. This way, anytime we bump the version, this documentation endpoint will always reflect the latest version specified in our package.json.

I will import our package.json from a few directories up and then grab that version field. Right here, I can simply say packageJSON.version. Now, I need to use this function after creating the application. If we head back over to our app.ts, after creating the app, we want to call configureOpenAPI and pass in the app. This means the app gets instantiated, adds all of our middlewares, and then sets up our documentation endpoint.
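A sketch of that helper as described so far (the interactive reference page gets added to it later; the file name is assumed, and the package.json import assumes resolveJsonModule is enabled in tsconfig):

```ts
// src/lib/configure-open-api.ts (file name assumed; the function name comes from the text)
import type { AppOpenAPI } from "@/lib/types";

import packageJSON from "../../package.json";

export default function configureOpenAPI(app: AppOpenAPI) {
  // Serve the OpenAPI specification as JSON at /doc.
  app.doc("/doc", {
    openapi: "3.0.0",
    info: {
      title: "Tasks API",
      // Stays in sync with the version field in package.json.
      version: packageJSON.version,
    },
  });
}
```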

If we navigate to the browser and go to /doc, we should see our specification. Currently, we don't have any endpoints, so it appears as a blank default specification. However, you can see it is bringing in that version number from our package.json. As we add any documented routes, those will be listed here as well.

To see this documentation in action, let’s create an endpoint. I will create a folder called routes and inside of that, I will create a root route named index.route.ts. This file will export a Hono API instance that has a specific documented route. The first step is to create an Open API Hono instance.

Recall that in our create app function, we have this create app method, but I want to create a Hono instance that does not include all of these middlewares; it will be standalone and then mounted on top of this app. Therefore, I will define another function called create router, which will simply return this instance. Anytime I need a new router, it will have the correct typings, and I can just call this function.

We can reuse this function inside of here by saying create router. However, I cannot have two default exports, so I will remove the default from here. Now, this file exports two things: it exports create app as default and also exports the create router function that we can use.
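Here is roughly what lib/createApp.ts looks like at this point, combining the middleware setup from earlier with the new createRouter helper (a sketch; stoker's exact import path is an assumption based on how its middlewares were used above):

```ts
// src/lib/createApp.ts
import { OpenAPIHono } from "@hono/zod-openapi";
import { notFound, onError, serveEmojiFavicon } from "stoker/middlewares";

import type { AppBindings } from "@/lib/types";
import { pinoLogger } from "@/middlewares/PinoLogger";

// A bare router with our bindings and no middleware, used for nested routes.
export function createRouter() {
  return new OpenAPIHono<AppBindings>({ strict: false });
}

// The full app: a router plus favicon, logging, not-found and error handling.
export default function createApp() {
  const app = createRouter();

  app.use(serveEmojiFavicon("📝"));
  app.use(pinoLogger());

  app.notFound(notFound);
  app.onError(onError);

  return app;
}
```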

With that established, if we head to the index.route file, I can create my router by saying createRouter and then export it as default. Now, I can create an endpoint. Referring to the Zod OpenAPI documentation, they provide a built-in method called openapi, where we specify our route definition. This is what will be documented whenever we hit the /doc endpoint, and we will also pass in our handler, or implementation.

=> 00:40:15

Document your API routes clearly and implement them with type safety to ensure smooth development and error handling.



Let's implement this. I will invoke openapi and use the createRoute function, which is built into Hono's Zod OpenAPI package. I will pass in my options; in this case, the method will be GET and the path will be /. Next, we can document how this endpoint will respond. Here, we can add responses, which is an object where the keys represent the HTTP status codes. For a successful response, the HTTP status code will be 200.

We can specify how this endpoint will respond when it returns a 200 status code. We can pass in our options, setting the content to application/json. Within that, we can define our schema, which is a Zod validator that specifies the response structure. I will create a simple object schema with a message property. To do this, I will pull in z and use z.object, making sure to import it from @hono/zod-openapi.

Now, I will pass in my options, indicating that it has a message property, which will be a string. Lastly, I need to add a description property, stating that this is the tasks API index. This describes my endpoint, detailing the method, path, and response structure.

Next, we need to implement the endpoint. The second argument to openapi is the handler that will adhere to this specific contract. Here, we can pass in our handler, which is just a standard Hono handler. However, it will be typed to ensure that we respond correctly. We will receive the context and can write our implementation.

For more complex routes, we will query our databases and perform necessary actions. In this case, we simply need to respond with an object that has a message property. I will return c.json and pass this in. If I press Ctrl + Space, we get type completion, indicating that there should be a message property. This is because the types are flowing through; I defined my contract, stating that we will respond with an object containing a message property.

Now, I will specify that this is the tasks API. Essentially, every route we define within our application will follow this pattern: we will document it and then implement it. A significant advantage of this approach is that if I respond with an incorrect status code, such as 401, I will receive a type error. The error will indicate that I only specified a 200 status code, and I need to document that correctly.

If I wanted to handle other status codes, I could specify that sometimes a 401 response might occur and define what that response would look like. For now, this setup should be sufficient, and we have our root endpoint established.
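Putting that together, the root route file looks roughly like this (a sketch; it uses a plain 200 instead of the status-code helper constants mentioned earlier):

```ts
// src/routes/index.route.ts
import { createRoute, z } from "@hono/zod-openapi";

import { createRouter } from "@/lib/createApp";

const router = createRouter()
  .openapi(
    // The route definition: this is what shows up in the OpenAPI document.
    createRoute({
      method: "get",
      path: "/",
      responses: {
        200: {
          content: {
            "application/json": {
              schema: z.object({
                message: z.string(),
              }),
            },
          },
          description: "Tasks API Index",
        },
      },
    }),
    // The handler, typed against the definition above.
    (c) => {
      return c.json({ message: "Tasks API" }, 200);
    },
  );

export default router;
```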

Next, I need to mount this specific endpoint onto our main application. I will do this in app.ts by creating a variable called routes, which will be an array containing all the routers defined inside the routes folder. I will import the index route manually: import index from "@/routes/index.route". While doing this, I can also update it to use my alias import.

Now that I have this array of routes, I can iterate over it. After configuring OpenAPI with the base application, I can mount each of my individual sub-routers. I will use routes.forEach and, for each route, I will specify the root path and pass in the specific route. This should successfully create a route at the index, and it should also show up inside our documentation.
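In app.ts, that mounting step is just a few lines (a sketch):

```ts
// src/app.ts
import configureOpenAPI from "@/lib/configure-open-api";
import createApp from "@/lib/createApp";
import index from "@/routes/index.route";

const app = createApp();

configureOpenAPI(app);

// Every router defined in src/routes gets listed here and mounted at "/".
const routes = [
  index,
];

routes.forEach((route) => {
  app.route("/", route);
});

export default app;
```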

=> 00:43:44

Building your API with open standards means effortless documentation and interactive testing at your fingertips.


If we refresh our documentation, we should see that this endpoint is now documented: a GET on /, where the possible response is an object with a message property of type string. Therefore, if we go to the index route, we will receive back exactly that object.

Next, I want to set up some interactive documentation. This specification is great because it literally describes all of my endpoints. However, since this is an open API specification, it adheres to a set standard. If you describe your APIs in this way, there is a whole ecosystem of tools that can create interactive documentation or automatically generate client libraries in pretty much any programming language. This is really awesome, and one of the things we are going to set up is interactive documentation.

You have probably seen a UI that looks like this before; this is Swagger UI, which is based on top of the open API specification. If I grab my specification here and copy that JSON, I can go to the Swagger editor and paste it in. I’m not going to convert it to YAML, but you can see that we get interactive documentation over on the right-hand side. We can view the specific schema and our endpoints, and it even gives us the ability to try it out. However, this won’t work because we haven’t set the root URL.

One of the great things about having the specification is that it enables tools like this. While Swagger UI is excellent, there is a newer tool called Scalar, which is really slick. They do have a paid hosted platform, but the actual Scalar docs that you can add to your application are completely open source, and you can host them yourself. If we take our specification and paste it into their interactive editor, we will be able to get documentation that specifies all of our endpoints. It will show all of the documented responses and schemas, and we can also click to test. This is very similar to tools like Postman or Thunder Client for testing APIs, where you can specify headers and the body for POST requests, all within the web browser.

Setting this up is straightforward; you just need an HTML file that pulls in the library from a CDN and points it at your specific documentation endpoint. Fortunately, we don't have to do it manually. There is a package called Scalar Hono API Reference, which we can add directly to our app, instantly setting up that documentation page. Let's proceed by installing the library. After installation, we will grab this and call it inside our configureOpenAPI function, right after we set up /doc. I will also import apiReference from there.

Now, you can see they are specifying the URL, which just needs to point to where your documentation lives. In our case, it lives at /doc, so we will update this to be /doc. Just like that, we should have some interactive documentation ready to go at /reference. However, when I head back over to our API and go to /reference, it breaks. I think I know what I forgot; we have been building our app this whole time while still in CommonJS mode.

If we take a look at our package.json, we see that we don’t have type: module set, which we should because we are using ECMAScript modules and TypeScript. We actually need to set type to module. If we set that and then restart, it should be good to go. We should have set that up earlier to ensure that we are importing everything as ECMAScript modules, which is the code we are writing.

Now that I have that set up, if I head to /reference, here we go! We have our fully documented endpoints. I can see all of the routes on the left, and I can click on them to explore further.

=> 00:47:12

To build a robust API, always set your module type correctly and handle validation errors consistently to avoid messy code.

Clicking on a route and then on Test allows me to change the headers, the body, and so on, so we can actually test out our API. Every endpoint I create in this way is going to automatically get listed here, and I can test it in an interactive way.

The library also makes some options available, like the theme; they have several different themes that you can use. I like the one called Kepler, so I'm going to set the theme to kepler, and with that set, it looks a whole lot better. You can try out all the themes and pick the one that you like. There are actually more options available, so if you head over to the main Scalar repo, they have a section under themes and a section for configuration, which shows more stuff that we can customize.

Specifically, if you look at layout, there are two layouts available: the modern layout, which is two columns, and the classic layout. I actually prefer the classic layout, so let me show you what that looks like. If we set the layout to Classic and then head back over to the docs, it now shows everything top-down. This is just my preference, but I do like to see all of my endpoints here. Once you have a lot of endpoints, even though it's in a single column, you can still use the search box here to filter it down. So yeah, I like the classic layout; choose whichever one you want.

Now, the other thing is we can set the default client library. This is one of the cool things about this test client here: whenever you look at the docs, depending on what you have your test client set as, it'll show the actual code for that. If I choose Node.js, it'll show me how I might make this request in Node or in Python or in JavaScript fetch. So whatever your client language is going to be, you can actually set that as the default, and that's what I'm going to do next.

If you look in the docs and go to the configuration section, they have an option called defaultHttpClient, where you choose a target key and a client key. I'm going to set the target key to javascript and the client key to fetch, so that anytime I go to my docs, they show JavaScript fetch examples instead of curl. With that set, whenever I look at the documentation for any of my endpoints, I'm always going to see the fetch example. This looks good to me.
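Put together, the Scalar setup inside the configure Open API helper can look roughly like this. The option names come from Scalar's configuration docs; depending on the package version, the spec URL may be passed as `spec: { url }` or simply `url`, so treat this as a sketch rather than the exact source:

```ts
import type { OpenAPIHono } from "@hono/zod-openapi";
import { apiReference } from "@scalar/hono-api-reference";

export default function configureOpenAPI(app: OpenAPIHono) {
  // The raw Open API JSON document lives at /doc.
  app.doc("/doc", {
    openapi: "3.0.0",
    info: { title: "Tasks API", version: "1.0.0" },
  });

  // The interactive Scalar reference UI lives at /reference.
  app.get(
    "/reference",
    apiReference({
      theme: "kepler",
      layout: "classic",
      defaultHttpClient: { targetKey: "javascript", clientKey: "fetch" },
      spec: { url: "/doc" },
    }),
  );
}
```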

Now, I want to specify some more options whenever setting up our Open API application instance. You can see in the Zod Open API docs that whenever you create an instance, you can actually pass in the default hook. Essentially, if there's a validation error in any one of your endpoints, we can handle it in a common place. Without this default hook, you actually pass in a third argument whenever you're specifying an Open API endpoint. But every single one of my endpoints is going to be using Zod validators, so I'm going to respond in a consistent way and avoid having a bunch of duplicate code for every single route handler.

Whenever we create our instance, we need to pass in that default hook. I actually have a default hook specified in the Stoker Library. If you want to use this default hook, look in the repo, go into Open API, and then there's a file called default hook. Essentially, if the result was not a success, meaning there was a validation error, then it's going to respond with success false and include the full Zod error. I like including the Zod errors in my API responses because they're pretty easy to iterate over in client-side code and then potentially individually show any errors that pop up. This basically returns the full Zod error and responds with a 422 status code.

=> 00:50:27

Streamline your API development by using default hooks and helper functions to eliminate repetitive code and enhance error handling.

In my application, whenever I create the app, I will pass that default hook in: right where we construct the instance, I can specify defaultHook. Of course, you don't have to use the one from Stoker; you can specify your own default hook. Here, though, we pull it directly from Stoker, ensuring that if there’s any sort of validation error in any of my defined endpoints, it will always respond in the exact same way.
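A minimal sketch of what such a default hook does when wired into the app instance (this mirrors what Stoker's default hook is described as doing above; the exact shape of the error body is up to you):

```ts
import { OpenAPIHono } from "@hono/zod-openapi";

const app = new OpenAPIHono({
  // Runs for every route that uses a Zod validator, so individual endpoints
  // don't need to pass their own third "hook" argument.
  defaultHook: (result, c) => {
    if (!result.success) {
      return c.json(
        {
          success: result.success,
          error: result.error, // the full ZodError — easy to iterate over client-side
        },
        422, // Unprocessable Entity
      );
    }
  },
});

export default app;
```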

Before we start defining some of our other endpoints, I want to show you some of the other helpers that I have built into my Stoker library; I will be using them as we build the application. One of these helpers is called jsonContent. When defining these Open API endpoints, especially if you are creating a pure JSON API, you repeatedly need to set the content to application/json, pass in the schema, and provide a description. Instead of doing this over and over, this helper lets you pass in the schema and the description and returns an object of that shape.

We can use this helper right here by grabbing our schema and wrapping it with jsonContent. It gets pulled in from our helpers, we pass in the schema and then the description, and that eliminates all of this boilerplate code. This cleans things up a bit: anywhere else I need to respond with JSON content, I don’t have to constantly specify application/json; I can just use that helper from Stoker.
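If you would rather not pull in the library, the helper is small enough to write yourself. Roughly (a sketch, not Stoker's exact source):

```ts
import type { ZodSchema } from "zod";

// Wraps a schema in the OpenAPI "content" shape so route definitions stay terse.
const jsonContent = <T extends ZodSchema>(schema: T, description: string) => ({
  content: {
    "application/json": {
      schema,
    },
  },
  description,
});

export default jsonContent;
```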

Another helper that I like to use is for HTTP status codes. Built into Stoker are status codes that come directly from the http-status-codes library. This library is well established, with over 2 million weekly downloads, and is used in many production backend APIs built with JavaScript or TypeScript. However, it exports enums, which don’t play well with the Hono Zod Open API type system. My library exports the same values, but as consts, making them compatible with the Hono Zod Open API type system and Hono's built-in status code type.

For instance, we can pull in the status codes with import * as HttpStatusCodes from "stoker/http-status-codes". We can then use them with a computed property: instead of 200, we can simply say HttpStatusCodes.OK. This is more readable, since we can see exactly which status code the route responds with. You can use those status codes whenever you are responding with JSON as well, so we could respond with OK right here. The types still work; for example, if I were to respond with 401, I would get a type error because the status code needs to match one of the codes that you specified.

In my entire codebase, if I am using a status code, I prefer to use these HTTP status code consts. Another built-in helper inside of Stoker is called createMessageObjectSchema. I find myself needing this schema repeatedly, especially when I have an endpoint that responds with an object containing a message property. Instead of redefining the schema in every single spot, this helper creates it for me, and I can pass in the example message that gets generated.

For example, instead of defining an object with a message property schema, I can simply say createMessageObjectSchema and pass in the example string, such as "Tasks API". Now, I don’t have to define that schema everywhere, which is why I created the helper. If we look at the documentation, we should see that the example includes the example text we passed in, and whenever we test it out, we can confirm that it responds in the right way.
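Putting those helpers together, the index route definition ends up looking something like this. The stoker import paths are from memory of its README, so double-check them for the version you install:

```ts
import { createRoute } from "@hono/zod-openapi";
import * as HttpStatusCodes from "stoker/http-status-codes";
import jsonContent from "stoker/openapi/helpers/json-content";
import createMessageObjectSchema from "stoker/openapi/schemas/create-message-object";

export const index = createRoute({
  method: "get",
  path: "/",
  responses: {
    // Computed property: stays readable and in sync with the named constant.
    [HttpStatusCodes.OK]: jsonContent(
      createMessageObjectSchema("Tasks API"),
      "Tasks API Index",
    ),
  },
});
```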

=> 00:54:01

Streamline your code by creating reusable schemas and grouping routes for cleaner documentation and easier management.

Another important aspect to note when defining these endpoints is the ability to set up tags, which are used to group things together. In createRoute, we can pass in a tags array, specifying which group the route should appear under in the documentation. For this instance, I will label it "Index," since this is the index route. By adding that tag, it will now be grouped specifically under Index, rather than being labeled as Default. Consequently, for every router I define, if I want those routes to be grouped together in the documentation, I simply include that same tag, and the documentation will automatically group them.

At this point, we are all set up. We have our middlewares, our error handlers, our docs being automatically generated, and our reference UI ready for use. Now, we can begin writing the application code. I prefer to give each new route its own folder inside the routes folder. Since we are specifically building a tasks API, I will create a folder named "tasks." Within this folder, I will create a file called tasks.index.ts to establish our router, which we will then export as default. It is also essential to register this router; therefore, I will navigate to the app and add it to the list of routes, importing it from "@/routes/tasks/tasks.index". Because it exists in this array, it will be automatically mounted.

Currently, the router is completely empty. If we refresh our documentation, we will not see any additional endpoints, as we have not created any yet. To address this, I prefer to define my routes in a separate file when dealing with multiple endpoints. Thus, I will create a file named tasks.routes.ts, which will export all of the route definitions that serve as the contracts we will implement inside the handlers. For this tasks API, a GET to /tasks will be the list request, aimed at retrieving all of the tasks within the system. I will create a definition that describes our response as an array of tasks.

To implement this, I will create our list route definition using createRoute, which comes from Hono Zod Open API. I will specify the path as /tasks and the method as GET. Next, we can define our responses. For the 200 status code, I will import the HTTP status codes helper from stoker/http-status-codes and use the OK status code. A successful response will return a JSON array. Again, I will use the jsonContent helper and pass in the schema.

While we plan to set up a database and utilize a Zod schema that corresponds directly to a table from our database, for now, I will define a dummy schema to see this working. I will indicate that the response will be an array of objects. Since these are tasks, each object will have a name property, represented as z.string, and a done property, represented as z.boolean. This is a temporary hardcoding, as we will establish a real database and pull the schema from there later. For now, this setup will suffice to implement our request handler. Additionally, I will include a description stating, "this is the list of tasks." With the route definition complete, I am now ready to define the actual handler that will implement the logic to respond.
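Here is roughly what that first draft of the list route looks like, with the temporary inline schema (it gets swapped for the database-driven schema later; stoker import paths as before are assumptions to verify):

```ts
import { createRoute, z } from "@hono/zod-openapi";
import * as HttpStatusCodes from "stoker/http-status-codes";
import jsonContent from "stoker/openapi/helpers/json-content";

export const list = createRoute({
  path: "/tasks",
  method: "get",
  responses: {
    [HttpStatusCodes.OK]: jsonContent(
      // Placeholder schema — replaced by the drizzle-zod select schema later on.
      z.array(
        z.object({
          name: z.string(),
          done: z.boolean(),
        }),
      ),
      "The list of tasks",
    ),
  },
});
```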

=> 00:57:29

Streamlining your code is key; create reusable types to simplify your handlers and keep your app organized.

To implement the handler, I like to have a file called tasks.handlers.ts, and I define my request handlers inside of there. I'll have a request handler called list, which is essentially a handler that gets access to the context. But we need to make sure we put a type on this thing.

Now, there is a built-in type called RouteHandler, so we can pull that in. It is a generic type; if you look at it, the first type argument is a route config. Essentially, what we need to do is grab the type of this route and then pass it into RouteHandler. From the routes file, I'm going to export a type called ListRoute, which is just typeof list. This is now a type that describes that specific route, and we want to use it when defining our handler.

So if I head back over here, I can then pass in ListRoute. We'll import it from that file, and now we're getting a type error: "Hey, you're responding with the wrong thing, because you need to respond with an array of tasks." So let's do that. We'll return c.json with some example tasks; for instance, the name will be learn Hono, and done will be false. At this point, our type errors should be fixed. We just have a linter error that says this variable is unused, and adding an export fixes that.
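In code, the type export and this first version of the handler look roughly like this (continuing the tasks.routes.ts / tasks.handlers.ts files sketched above):

```ts
// tasks.routes.ts
export type ListRoute = typeof list;

// tasks.handlers.ts
import type { RouteHandler } from "@hono/zod-openapi";
import type { ListRoute } from "./tasks.routes";

export const list: RouteHandler<ListRoute> = (c) => {
  // Must match the schema documented on the route, or TypeScript complains.
  return c.json([{ name: "Learn Hono", done: false }]);
};
```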

However, check this out: if I respond with the wrong type, we get a type error, and it's like, "Hey, you're missing a property; these types are incompatible." So, the property done is missing. Now that we've set up the types in this way, we are ensured that we are actually responding in the correct way based on our type definitions and schemas. So we'll do that, and that looks good.

Now, one more thing we need to do is also pass in the app bindings. The second generic argument here is Env, and this is actually the app bindings that we passed into the Open API app we created earlier. For instance, if I wanted to access c.var.logger, I don't have access to that right now; you can see that I'm getting a type error. To make sure that I don't get a type error whenever I'm accessing logger or any other variables that I set up, I also need to pass in my app bindings here.

If I pull in app bindings from my types, now we should get type completion. So if I do c.var.logger, we get access to it. However, this is a little bit cumbersome because I don't want to have to define this type every single time I have a handler, and I'm going to have a bunch of handlers inside of my app. So, we can create a utility type. This is going to require some generics, but I'll walk you through it; it's going to be just fine.

I'll define it inline so you can see what's happening, and then we'll pull it out to our types file. Basically, I want a type that lets us do this without having to pass in the app bindings every time. If I create a type, let's call it AppRouteHandler, we need one thing: a route config to be passed in. Here, I take R as a generic type parameter and say that it must be a RouteConfig; by saying extends, the type that is passed in must have all the properties of a RouteConfig.

Okay, so that's our generic argument. On the right-hand side, we basically want to do the same thing, but using the generic: instead of hardcoding ListRoute, we just pass in R. Now we have a nice reusable type where all we have to do is pass in the route config type, and it automatically adds those app bindings on. So, in this case, I can use it without passing in app bindings, and the types flow through. Now I can say c.var.logger, and I still get access to that.
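The resulting utility type is only a couple of lines. Here, AppBindings and the hono-pino logger type are assumptions based on the earlier middleware setup (not shown in this section); substitute whatever your app's Env type actually declares:

```ts
import type { RouteConfig, RouteHandler } from "@hono/zod-openapi";
import type { PinoLogger } from "hono-pino";

// The Env type passed to OpenAPIHono earlier; it declares the Variables
// our middleware sets (here, the request-scoped logger).
export interface AppBindings {
  Variables: {
    logger: PinoLogger;
  };
}

// Pass any route config in; get a handler typed with our app's bindings out.
export type AppRouteHandler<R extends RouteConfig> = RouteHandler<R, AppBindings>;
```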

This is really nice because for any route handler I define in this file, all I have to do is pass in the route config, and the same goes anywhere else in my app. So let's extract this out to our types file.

=> 01:01:02

Simplify your route handling by using generics to automatically manage type bindings, making your code cleaner and more efficient.

I will navigate to the types file, add the export at the bottom, and import what we need from Hono Zod Open API. Now, I can import this from my types file, ensuring everything looks good. At this point, I will remove all unused imports to keep the code clean.

Now that we have our route handler and route definition, we need to combine them. Looking back at our route definitions, we need to export this specific route definition to import it into the main router. We have our route config defined and our handler defined, so we will integrate them inside our router. I will import all of my handlers with import * as handlers from "./tasks.handlers" and import * as routes from "./tasks.routes".

Using the built-in openapi method, I will first specify the route config with routes.list. After that, I will specify the handler, linking it to handlers.list. If we have correctly defined our types, we should see that there are no type errors, as our route definition and handler match up correctly.
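The tasks router then reads as a chain of route-config/handler pairs. In this sketch, createRouter stands in for however your app constructs a typed OpenAPIHono instance (a hypothetical helper, not shown here):

```ts
// src/routes/tasks/tasks.index.ts — a sketch; createRouter is a hypothetical
// helper that returns an OpenAPIHono typed with AppBindings and the default hook.
import { createRouter } from "@/lib/create-app";

import * as handlers from "./tasks.handlers";
import * as routes from "./tasks.routes";

const router = createRouter()
  .openapi(routes.list, handlers.list);

export default router;
```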

At this point, we should be good to go. If we head back to our API reference and refresh, we can see our task endpoint and test it out. When I send this request, we receive back an array containing an object, which is great.

One additional step we should take is to give this group a name. We can call this group tasks within the route definitions. I will create a variable called Tags that holds the name of the group and pass this into the route definitions. This way, any route I define in this file will consistently include those same tags, ensuring that when we look at our documentation, it will always be grouped under tasks.

Next, instead of responding with dummy data, we should set up our database. For this, we will use Drizzle ORM along with SQLite. In the documentation, I will navigate to the SQLite section. The libSQL client provided by Turso can communicate with a file on your local machine, so during development we can connect to a SQLite database residing in the same folder. When we deploy, we can create a Turso instance and set our environment variable to that endpoint. For now, we will set it up for local use.

We will need to install all the necessary dependencies and then set up our database instance. I will copy the relevant code and create a folder called DB. Inside this folder, I will create an index.ts file that will serve as our database connection. We will import all required components, create the client, and export the database instance as default.

From here, we will configure these as environment variables. We will specify DATABASE_URL, which will be a string, and we can verify that it is a valid URL with z.string().url(). I want DATABASE_AUTH_TOKEN to be optional because I do not want to specify it when running locally, so I will set it to optional.

However, we can use the refine method for further validation. Essentially, refine gives us access to the parsed env, and we state that if NODE_ENV is production, then DATABASE_AUTH_TOKEN cannot be undefined or empty. We implement a check: if input.NODE_ENV equals production, we return whether input.DATABASE_AUTH_TOKEN is defined, true or false; otherwise, we just return true.

=> 01:04:45

In production, always validate your environment variables to avoid unexpected errors.

When we're running locally, we don't need to specify the database auth token, but when we're running in production, we do need to specify it.

Hello there, editor CJ here. I actually found that there's a better way to handle this error to ensure we get a good error message if we are missing this environment variable. Check out the repo linked in the description. Essentially, instead of using refine, we can use superRefine to add a custom error message here. If we do come across this error, we're going to see a good error message printed out in production.
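A sketch of that env module with the superRefine approach. The variable names NODE_ENV, DATABASE_URL, and DATABASE_AUTH_TOKEN follow the convention used in this video; adapt them to your own setup:

```ts
// src/env.ts
import { z } from "zod";

const EnvSchema = z
  .object({
    NODE_ENV: z.string().default("development"),
    DATABASE_URL: z.string().url(),
    DATABASE_AUTH_TOKEN: z.string().optional(),
  })
  .superRefine((input, ctx) => {
    // Locally the auth token can be omitted; in production it must be set.
    if (input.NODE_ENV === "production" && !input.DATABASE_AUTH_TOKEN) {
      ctx.addIssue({
        code: z.ZodIssueCode.custom,
        path: ["DATABASE_AUTH_TOKEN"],
        message: "DATABASE_AUTH_TOKEN is required when NODE_ENV is 'production'",
      });
    }
  });

export default EnvSchema.parse(process.env);
```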

Now that I have these defined, we can head back over and actually use them. Right here, I can pass in env (imported from our env module): the url will be env.DATABASE_URL, and the authToken will be env.DATABASE_AUTH_TOKEN. Now, if I kill the app and restart it, you're going to see that DATABASE_URL is required. When we're running locally, we can set DATABASE_URL to a file URL; we can simply set it to file:dev.db. Now, if I restart the app, it starts up just fine.
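A minimal version of that database module, assuming the env module above exports the parsed variables (the "@/env" path is hypothetical):

```ts
// src/db/index.ts
import { createClient } from "@libsql/client";
import { drizzle } from "drizzle-orm/libsql";

import env from "@/env"; // hypothetical path to the parsed, type-safe env

const client = createClient({
  url: env.DATABASE_URL,
  authToken: env.DATABASE_AUTH_TOKEN,
});

const db = drizzle(client);

export default db;
```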

Next, let's define our schema and the tasks table that is going to hold all of our tasks. I'm going to copy their example table definition here and then modify it. In db, I'm going to create another file called schema.ts, and in here I'm going to have a table called tasks. If we head over to the Drizzle docs, we can see some examples of an auto-incrementing column under SQLite. For example, we can have an integer ID, set the mode to number, and then set primaryKey with autoIncrement true. This will basically make it so that the ID gets automatically created and incremented whenever we insert a new row.

Another thing we want is a boolean because we want to know whether or not a task is done. In SQLite, there is no Boolean type, but we can create an integer and set the mode to Boolean. With this, the property will be either zero or one—zero for false and one for true—and drizzle will automatically convert that for us. So, right here, we'll say that we want done to be called done; it's going to be a Boolean, it is not null, and the default value is false. By default, when you insert a task, done is going to be set to false.

We also need a task name. If we check out the docs, there is a text type, and that's what we'll use. We'll just say the name of this task is text, and the column name is name. We'll say this cannot be null, and then we'll also add created at and updated at columns.

Now, SQLite doesn't have a timestamp type, but we can use a text type, and there are some built-in helpers. If you head to the Drizzle docs, go to Learn and then Tutorials, there is one on Turso. Here, you can see that they have this updated at column which uses an integer with a mode of timestamp. They also have created at, which uses text, but I think I'm going to make both of mine integers and then use this code.

We can get rid of these and say that updated at is a timestamp, and on update, we'll create a brand new date. This is just a built-in hook that will automatically set that for us. Then, we'll also have created at, which we will call created at timestamp. For this, we can use the default method.

There’s a deprecated function called defaultNow, and if you look at it, it's actually using SQL to set the date. However, they say you should do this yourself because there's no direct way to get the current time in milliseconds inside of SQLite; it only has a timestamp function, which gives you a string. What we can do is call default and, very similarly to how they're doing it, pass in some SQL code: we pull sql in from drizzle-orm and pass a template string with that same cast/unixepoch snippet. Essentially, this is going to give us Unix epoch time anytime a new row is inserted.

=> 01:08:23

Automate your database timestamps for seamless data management.

This looks good to me, and I will do a little formatting here so that everything is easier to read. You can pause the video to make sure that everything looks the same for you, and you can also check out the repository linked in the description. This is our database table.

Next, we will push this schema to our local SQLite database, allowing us to start querying it. We will also use Drizzle Studio to look at the database and insert values, among other tasks. To work with drizzle-kit and create some migrations, we need a drizzle config. You can see in their documentation that they have an example config, so we will go ahead and define this: in our root, we will create a drizzle.config.ts file that default-exports the config, and we will set all of this up.

The dialect here is going to be SQLite, and for our database credentials, we can pass in the URL, which will come from our environment variables. We will pull that in from our type-safe environment and pass in DATABASE_URL. Additionally, we can pass in the auth token, which will be env.DATABASE_AUTH_TOKEN. We also need to ensure we are pointing at the correct schema file. My schema is defined inside src/db/schema.ts, so we want to specify src/db/schema.ts. I prefer all my migrations to reside in the same folder as the rest of my database files, so I will specify that the migrations should be created in src/db/migrations.
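That config ends up looking roughly like this. Depending on your drizzle-kit version, the libSQL/Turso combination may instead want dialect: "turso"; check the drizzle-kit docs for the version you install:

```ts
// drizzle.config.ts
import { defineConfig } from "drizzle-kit";

import env from "./src/env";

export default defineConfig({
  schema: "./src/db/schema.ts",
  out: "./src/db/migrations",
  dialect: "sqlite",
  dbCredentials: {
    url: env.DATABASE_URL,
    authToken: env.DATABASE_AUTH_TOKEN,
  },
});
```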

This config looks good, but I do have one error in my schema. If you head back over to the schema, I actually left off some parentheses when defining created at; I need an opening parenthesis there and another one here to ensure the correct SQL syntax. From here, we can generate our migrations. From the command line, if I run pnpm drizzle-kit generate, this should generate a SQL file, and it does. If we take a look inside db, we have a migrations folder, and we can see that SQL file. That's awesome!

Now, if we run pnpm drizzle-kit push, this should push our schema to the database, and we should now have a database. Since we are working locally and I defined that database URL as file:dev.db, there it is! We have ourselves a little SQLite file where all our data will be written when we are coding locally. While I'm thinking about it, we should add this to our .gitignore because we don't want to push this SQLite file to GitHub. Inside my .gitignore, I will add dev.db to ensure that I don't accidentally commit it. Every developer working on this app generates their own file whenever they are working with it locally.

Now that it exists, we can actually start up Drizzle Studio. If I run pnpm drizzle-kit studio, this is their built-in web-based tool where we can look at our database. If we check it out, we have a tasks table, and now we can use the UI here to add some tasks. Let's create a new task where the name is learn hono and then click save. We have ourselves a row, but I notice that updated at defaults to null. I actually want to set it to the same created at date, so I will update my schema.

We have on update, but we also need to set the default. I will extract this into a variable, calling it defaultNow, and set it up like that. I will also make sure to pass it in here to say default defaultNow.

=> 01:12:06

Keep your database schema and API in sync for seamless development.

Hello again, editor CJ here. I ended up just changing how I worked with the dates. I decided to use JavaScript dates instead of the SQL timestamp code, which I didn't like. Here, I'm using the default function and passing in a new date for both created at and updated at. Additionally, I noticed that the code I copied had the column called update at instead of updated at, so I fixed that as well.
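With the editor's fix applied, the table definition comes out roughly like this — a sketch of the final approach using plain JavaScript Dates via drizzle's $defaultFn and $onUpdate, not the exact source from the repo:

```ts
// src/db/schema.ts
import { integer, sqliteTable, text } from "drizzle-orm/sqlite-core";

export const tasks = sqliteTable("tasks", {
  id: integer("id", { mode: "number" }).primaryKey({ autoIncrement: true }),
  name: text("name").notNull(),
  done: integer("done", { mode: "boolean" }).notNull().default(false),
  createdAt: integer("created_at", { mode: "timestamp" })
    .$defaultFn(() => new Date()),
  updatedAt: integer("updated_at", { mode: "timestamp" })
    .$defaultFn(() => new Date())
    .$onUpdate(() => new Date()),
});
```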

To keep things simple, I recommend checking out the repository and setting up your date columns to look like this. Now that I've set that up, let's go ahead and push the schema. If I run drizzle-kit push, it will update the schema. Now, if I refresh, whenever we insert a row, it should automatically get an updated at value. For instance, if we add a row that says learn open API and then click save, we now get a default updated at field.

Next, I'm going to delete that other entry and add another one, and then we'll save it. Sweet! So we have two tasks in the database now. Let's actually write the code that queries the database whenever we're hitting our API endpoints.

If we head back over to routes, then tasks, and then handlers, we can actually query our database. Right here, I want to grab that db variable and import it. I specifically want to query it: built into Drizzle, you can do db.query, which is a more ORM-like style of querying where you can do things like findMany. However, for that to work, I need to let the db know about my schema.

If we head to the db definition, we can pass in some options, including my schema. To do this, I'll import everything from the schema file with import * as schema from "./schema". If I were to define more tables, everything that's exported from there would be included inside this variable. Now, I'm passing it into the drizzle db instance.

Now that I've done that, if I run db.query, I get access to tasks. In this case, I'm just going to say findMany, which will give me my list of tasks. I will respond with that list and make sure to await it, making this function async. We should be good to go!
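Two small changes make this work; a sketch built on the files above (the "@/lib/types" path for AppRouteHandler is hypothetical):

```ts
// src/db/index.ts — pass the schema so the relational query API is typed:
//   import * as schema from "./schema";
//   const db = drizzle(client, { schema });

// src/routes/tasks/tasks.handlers.ts
import db from "@/db";
import type { AppRouteHandler } from "@/lib/types"; // hypothetical path
import type { ListRoute } from "./tasks.routes";

export const list: AppRouteHandler<ListRoute> = async (c) => {
  // SELECT * FROM tasks, typed from the schema passed to drizzle().
  const tasks = await db.query.tasks.findMany();
  return c.json(tasks);
};
```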

Currently, our route definition states that it will respond with an object that has a name property and a done property. Essentially, we're doing that because that's what our task schema is. Of course, we don't want these two things to be out of sync, so first, we'll ensure that all of this works. Then, I'll show you how we can actually get a Zod schema from our database and use that right here.

Now, let's give it a try! If I head back to my API docs and refresh, let's test out the /tasks endpoint. There we go! We are successfully reading data from our database, which is great. At this point, let's get those types hooked up. Drizzle has a library called drizzle Zod. If we search for it, we can see how it's set up. Once we install it, we can pass our table into it, and it will give us back a Zod validator.

Now, we can essentially go directly from a table definition to a Zod schema and then use that schema inside our Open API definition. Let's install this library, and then we'll use these functions to define our schema. Heading back to our database schema file, we want to use these functions to create our schema. First, I will create a select tasks schema by calling createSelectSchema with our tasks table.

Cool! Then, we'll be sure to export that. Now we have a Zod schema that exactly describes the type coming from our database. If we use this schema inside our Open API definition, it will always stay in sync with our table definition.

So, if we head back over to our route definition, instead of hardcoding this, we can bring in our select tasks schema specifically. We'll import it from there, and just like that, our API is up to date and in sync. If we head back over to our API docs, we can see that it now has all of those properties defined, and those types are flowing through.

If we look at the show schema, it actually generates a JSON schema that represents that exact same database table. Now, it's a beautiful thing—our API docs are completely in sync!

=> 01:15:56

Sync your API with your database schema effortlessly—every change reflects in real-time, keeping your documentation accurate and your development smooth.

This synchronization is beneficial because if we ever add new columns, change types, or make any other modifications, our API docs will remain up to date. The validation types returned here will also stay up to date. Now, let's create a create endpoint so that we can send data to our API to create new tasks. I will copy the route definition, and it will be called "create." The path will still be /tasks, but in this case, the method will be POST.

After creating a task, we will respond with a single task, so we don't want to use z.array; we just want to specify the select task schema. After creating, you will receive exactly back the thing that was created in the database, and the description here will be the created task.

Next, we need to validate the incoming request body. I want to be able to send a POST request with a JSON object that has a name and a done property, and we should validate that before trying to insert it into the database. To specify this, I can define what is coming in the incoming request, particularly validating the incoming body. The way to specify this is almost identical to how we specify responses.

I have a helper built into my Stoker library that creates the same JSON content but also adds a required field. For this specific endpoint, we want to ensure that the user always specifies data in the body; if they do not, it should be invalid. This helper, called jsonContentRequired, creates that content object with the required property set to true, and we can pass in the schema and the description.

We will say jsonContentRequired, pull that in, and then specify the schema. For this schema, we can use the insert schema. If we head back over to our table definition, we can create another variable called insert tasks schema. We will call createInsertSchema, which comes from drizzle-zod, and pass in our table definition. Now, we have a schema that will specifically validate the incoming task.

One thing we can do is further refine the schema to correspond to the actual data coming into our API. Whenever a POST request comes in, we do not want the user to be able to explicitly set the ID, which is automatically generated by the database. We also do not want them to set created at or updated at, as those should automatically be set as well.

Using the built-in omit method from Zod, I can specify the properties that I do not want to exist on this schema. I will specifically omit ID, created at, and updated at. Now, I have a Zod validator that does not allow those properties to be passed in, but it will still validate name as a string and done as a Boolean.
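Both schemas live next to the table definition in schema.ts; a sketch:

```ts
// src/db/schema.ts (below the tasks table definition)
import { createInsertSchema, createSelectSchema } from "drizzle-zod";

// Describes exactly what a row looks like when read from the database.
export const selectTasksSchema = createSelectSchema(tasks);

// Describes what clients may send when creating a task — no id or timestamps.
export const insertTasksSchema = createInsertSchema(tasks).omit({
  id: true,
  createdAt: true,
  updatedAt: true,
});
```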

We can now use this type. If we head back over to the route definition, I want to use the insert task schema. We will import that and pass in the description, which is the task to create. The route definition is now pretty much ready to go.

Let's go ahead and hook it up and see it inside the API documentation. In my tasks index, I will add another endpoint, stating OpenAPI and routes.create. For now, I will just pass in an empty request handler. I am getting a type error because I am not responding with the correct type, but that is just a type error. The app should still be able to spin up, and our documentation should still be updated; it's just that this endpoint won't actually respond with anything.

If I refresh the page, I can see that I now have a POST endpoint for SL tasks, and we can see that the expected response looks like this. If we go to test the request, we can see an example insert. This example only includes the fields that are required. Since the done property has a default value, it is actually not included here, but we can include it if needed.

=> 01:19:44

Building robust APIs means ensuring every endpoint is well-defined and validated. Don't just code; document the journey!

If we head back over to our insert schema, we can use the required method, which is built into Zod, and pass in an object saying that we want the done property to be required.

Now, if I refresh the docs here, we should see that a done property is specified for us. So, let's actually implement that request handler. If we head to our handlers, we now want a handler called create, and this is going to be a handler of type AppRouteHandler<CreateRoute>. We'll have to define that type first: heading back over to the routes file, we also want a type CreateRoute, which is just typeof create.

Now, if we pull that in, we should have a fully typed handler here. The first thing we want to do is get access to that validated data. If I do c.req.valid, we can pass in specifically the thing that we're looking for, which is the JSON data that got sent to our server; so I'll say c.req.valid("json"). We can call this the incoming task, so we'll just call it task.

What's nice about this is the code will never make it here unless the task is actually valid. So, unless the user actually specified a name and specified a done column, the code will never make it here. However, if they do, then the code will make it here, and we can actually insert it into our database. What's also nice is if, for whatever reason, extra properties were sent or anything like that, those all get stripped away, and we only get the data that we specifically care about and that we should be inserting into our database.

At this point, we can actually insert it. Let's do db.insert. I need to specify the table, so I'm going to pull in that tasks table, which is defined inside our schema. Then here, we can specify values, and I can just directly pass in task like that. In this case, I actually want to return the inserted thing, so I will say returning like this, and that should give me back the inserted task.

Now, if we look at this, it has all of the properties that we expect, but it is a list, because technically you can insert multiple values. So, I'm going to wrap this in brackets to destructure it and just grab that first inserted value, and then we can respond with it. I'll return c.json with it, and our type errors go away because we're responding with the thing that we said we would.
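The finished create handler is only a few lines; a sketch using the pieces defined earlier (the "@/lib/types" path is hypothetical, and the stoker import path should be verified):

```ts
// src/routes/tasks/tasks.handlers.ts
import * as HttpStatusCodes from "stoker/http-status-codes";

import db from "@/db";
import { tasks } from "@/db/schema";
import type { AppRouteHandler } from "@/lib/types"; // hypothetical path
import type { CreateRoute } from "./tasks.routes";

export const create: AppRouteHandler<CreateRoute> = async (c) => {
  // Only runs if the body passed the insertTasksSchema validation.
  const task = c.req.valid("json");

  // .returning() gives back an array of inserted rows; we only inserted one.
  const [inserted] = await db.insert(tasks).values(task).returning();

  return c.json(inserted, HttpStatusCodes.OK);
};
```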

If we look back over at our route handler, we said that the response will be a task that was selected, and here we specifically are selecting a task after inserting it. So, we should be good to go now. We still need to hook it up. If you remember, we just have that empty request handler here. Now we can explicitly specify handlers.create, and we can give it a try.

If I refresh my docs over here, head over to test it out, let's try inserting a new one. We'll say the name is learn drizzle and then click Send. We get back the response, and now if we go back over to our list tasks and run this, we should get all of the tasks, including the one that we just created. Awesome!

Now, one other thing we should test is whether the validation is actually working. Let's actually test this out here. What happens if I send a payload that does not have a done property? Does it return an error? It does, and this is actually because of our default hook that we set up earlier. If you can remember that far back, we get a nice Zod error here that says the done property is required.

However, one aspect of this is that we should actually document this response, because if you look at the documented responses right now, we're only saying that this endpoint can respond with a 200 status code, and that's not true: if the user or the client sends invalid data, it's going to respond with 422 Unprocessable Entity. So, we need to handle that inside of our definition.

=> 01:23:33

Always document your API responses accurately; it’s crucial for user clarity and error handling.

To remind you where we set this up, whenever we create the app in lib create app, we are passing in this default hook. This default hook runs if there is ever a validation error from an Open API route and responds accordingly. The default hook comes from my library Stoker. If you take a look on npm, you can import the method, and if you look on GitHub, you can view the source code to see what's happening. If you go into the source under Open API and then default hook, you can see that if the data was not valid, we respond with success as false along with the actual Zod error. We are using the unprocessable entity error code, which is the 422 error code. As long as you have that set up, you should be getting this validation response if you send invalid data to the API.

However, we need to document that invalid response. If we head back over to our route definitions, we are stating that yes, this API can respond with a 200 status code, but we can also respond with a 422 status code. So, let's document it. We can say HttpStatusCodes.UNPROCESSABLE_ENTITY, which is 422. If you're unfamiliar with these status codes, you can hover over them; the documentation includes what they are and what they're used for.

In this case, we want to describe the potential error that we might receive back. It is going to be in JSON format, so we can specify JSON content and then provide a schema. For now, I will give a blank schema just to get this working. Then, we can add a description stating that it will return validation errors.

To document this correctly, I essentially need a schema that describes all of the possible issues that could arise when validating this specific type. I built a helper for this: if you look at the Stoker library, there is a schema helper called createErrorSchema. The idea is that you can pass any Zod schema into it, and it will create examples that correspond to that specific type. Essentially, all I've done under the hood is describe a Zod object that has a success property and an error property (which has an issues property that is an array), and so on.

I don't want to define that manually, so if you look in my repo under Open API and then schemas, you will find create error schema. There is a bit of TypeScript and generics involved, but essentially, there’s that type. The cool thing is that I explicitly create an error type by attempting to parse an empty object. I run parse first to give us back an example error and then pass that in. This way, whenever we look in our API docs, we will be able to see a 422 tab and an example of that error. Anyone reading the docs can see that they could potentially get a name is required error or a done is required error.

We are just going to pull in this helper from my library, but if you don’t want to use my library, you can copy and paste it to create your own. Right here, all we have to do is say createErrorSchema and then pass in the schema, which is the insert tasks schema. Now, if we head back to our documentation, we should see that the 422 status code is specified, and we see an example error indicating that it’s possible that name might be missing or that done might be missing.
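With the 422 documented, the full create route definition reads roughly like this (again using the stoker helpers; the import paths are assumptions to verify against its README):

```ts
import { createRoute } from "@hono/zod-openapi";
import * as HttpStatusCodes from "stoker/http-status-codes";
import jsonContent from "stoker/openapi/helpers/json-content";
import jsonContentRequired from "stoker/openapi/helpers/json-content-required";
import createErrorSchema from "stoker/openapi/schemas/create-error-schema";

import { insertTasksSchema, selectTasksSchema } from "@/db/schema";

export const create = createRoute({
  path: "/tasks",
  method: "post",
  tags: ["Tasks"],
  request: {
    body: jsonContentRequired(insertTasksSchema, "The task to create"),
  },
  responses: {
    [HttpStatusCodes.OK]: jsonContent(selectTasksSchema, "The created task"),
    [HttpStatusCodes.UNPROCESSABLE_ENTITY]: jsonContent(
      createErrorSchema(insertTasksSchema),
      "The validation error(s)",
    ),
  },
});

export type CreateRoute = typeof create;
```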

As I mentioned, basically all I do is run the Zod validator on an empty object or an empty array and get back the response to use as the example. Essentially, whatever your validations might be, it’s just going to be those top-level validations. This is now nicely documented, so when someone comes along and tries to create something without a done property, they will receive this response, and they can already see in the documentation what happens.

=> 01:27:19

Validation is key to preventing empty data from cluttering your database; set minimum and maximum limits to ensure quality input.

One crucial aspect to consider is that once you start describing multiple response types, you must explicitly set your status codes within your request handlers. For instance, when we examine our handlers, we encounter a type error because the status code can be either 200 or 422, yet we did not pass in a status code. To ensure that all types function correctly, we need to explicitly specify the status code. By pulling in HTTP status codes, I can clearly indicate whether the response is an okay response or a 200 status code. Now, TypeScript is satisfied because I've explicitly defined the response code.

Additionally, I would like to update the validator, since I've realized that there is no validation for empty strings or the length of strings. If I were to send a request with the done property but an empty name, it could lead to empty tasks in our database. To address this, we can modify our validator. Returning to our schema, specifically the insert schema, we can adjust the existing validators: the createInsertSchema method accepts a second argument, which allows us to specify any column and modify its existing validator. For example, if I want to add validation to the name field, I can pass in a callback that accesses the Zod schema. Here, I can set the minimum length to one, ensuring that the validator does not accept empty values.

This modification can be applied to any field. For instance, if we were dealing with an email field, Zod provides a built-in email validator, allowing us to chain it easily. In this case, I will set the minimum length to one. Furthermore, we could establish a maximum length to prevent our database from becoming overloaded. For example, I might set the maximum length to 500, meaning that the maximum task size allowed is 500 characters. Consequently, our documentation should also reflect these updates. Upon refreshing the docs, we can see that the validations are now listed, indicating that the name must have a minimum length of one and a maximum length of 500. If I attempt to send an empty name, the response will indicate that the string must contain at least one character. Therefore, I must provide a name, such as learn Zod, and set that to false since we still have some learning to do, allowing it to be inserted into the database.
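One way to express those refinements in schema.ts, replacing the earlier insert schema definition. Note that the shape of the second-argument callback differs between drizzle-zod versions (older versions receive all generated field schemas, newer ones the single field schema), so treat this as a sketch and check the drizzle-zod docs for your version:

```ts
export const insertTasksSchema = createInsertSchema(tasks, {
  // Older drizzle-zod: callback receives the generated field schemas.
  // Newer drizzle-zod: use (schema) => schema.min(1).max(500) instead.
  name: (schema) => schema.name.min(1).max(500),
})
  .required({ done: true }) // surface `done` in the docs and example payloads
  .omit({
    id: true,
    createdAt: true,
    updatedAt: true,
  });
```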

Next, we need to create an endpoint to retrieve a single task by its ID. Currently, when we fetch all tasks, each task has a unique ID, and we want to enable retrieving an individual task by its ID, such as /tasks/2. To set this up, I will define a new route called getOne. I will copy the existing code from the list route, rename it to getOne, and then specify the ID in the path. When a request comes in for /tasks/2, it should return the task with ID 2. Thus, I will add a slash and indicate that we have an ID in the URL.

If we refer to the documentation for Hono Zod Open API, we can find an example illustrating this setup. They demonstrate a route for /users/{id} and provide a validator for the ID, ensuring that it is a number. If, for instance, you have a UUID in the path, you could add a validator specifically for UUIDs or any other necessary validation. This approach allows us to prevent the handler from executing if an invalid ID is provided. The documentation specifies a validator and then chains the openapi method, indicating that this is a parameter named id located in the path. By setting up the schema in this manner, it should function correctly.

This is a common task, so I have added it to my Stoker Library. In the schemas section of Stoker, you will find the ID params schema. If you need to validate that an ID is an integer, you can easily pass that in. For further reference, you can check the code in the Open API schemas under ID params.

=> 01:31:14

Validation is key: prevent errors before they happen by ensuring your IDs are valid from the start.

The schema itself is quite straightforward: it's literally just an ID in the path. One thing I am doing here is using the built-in coercion, which allows me to use the ID as a number inside the request handler; z.coerce.number() can take in a string and convert it to a number if it is indeed a number.

Thus, we can use this params schema without having to define it manually every time I need it. Let’s proceed with the implementation. Here, we can say request and then params, and I will bring in the ID params schema defined inside Stoker. In this case, our valid response is just a single task, not an array, but rather a task that got selected from the database. The description will be the requested task.

Now, let's hook this up to see it in our documentation, and afterward, we will implement it. Over here, I need a new endpoint. We will call .openapi() and pass in routes.getOne. For now, we will just pass in an empty handler, but we will implement that soon. If I go back to my docs and refresh, we now have an endpoint which is tasks/:id. We can see that the ID has to be a number.

Let’s try it out to ensure that the validation of the ID params works. If I pass in a string, which is obviously not a number, and try to hit this endpoint, we receive a validation error stating expected number received NaN. This is responding with a 422 status code, utilizing the same default hook and responding accordingly.

With this in mind, we should add a response indicating that a validation error could potentially occur. This will be very similar to our create route. If we head back to our route definitions, we will want to document an unprocessable entity response whose schema is an error schema created from the ID params schema. If the id param is not a valid number, we will receive an invalid ID error.
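Putting the pieces together, the getOne definition might look roughly like this; file layout and Stoker import paths are my best guess at the conventions used here, so double-check them against the starter repo:

```ts
// routes/tasks/tasks.routes.ts — a sketch of the getOne definition.
import { createRoute } from "@hono/zod-openapi";
import * as HttpStatusCodes from "stoker/http-status-codes";
import jsonContent from "stoker/openapi/helpers/json-content";
import createErrorSchema from "stoker/openapi/schemas/create-error-schema";
import IdParamsSchema from "stoker/openapi/schemas/id-params";

import { selectTasksSchema } from "@/db/schema";

export const getOne = createRoute({
  path: "/tasks/{id}",
  method: "get",
  request: {
    params: IdParamsSchema,
  },
  responses: {
    [HttpStatusCodes.OK]: jsonContent(selectTasksSchema, "The requested task"),
    [HttpStatusCodes.UNPROCESSABLE_ENTITY]: jsonContent(
      createErrorSchema(IdParamsSchema),
      "Invalid id error",
    ),
  },
});

export type GetOneRoute = typeof getOne;
```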

Now that we have this, when we look at the API docs, we can see that it might respond with a 422 when the ID is not a number. Next, we will implement it. We will head back to our handlers and explicitly need a handler called getOne. I will copy the list handler and update it accordingly, renaming it to getOne.

Next, we will need a type called GetOneRoute. Back in my route definitions, I will create a type for the getOne route. Now, we will pull that type in, and we can actually use the findFirst method of Drizzle, passing in a where clause. However, we need to get the params first, which we can do using c.req.valid, just like we did for the body. If I say c.req.valid and pass in "param", you will see this is an object with an id property that is a number.

Just like before, the code will never reach this point unless that ID is indeed a number. I will destructure it, so now I have the ID and can query accordingly. With Drizzle, we can pass in a where clause here, and I will check that the id field is equal to the ID that was passed in. This gets translated into select from tasks where id equals the given id, limit one, since we are only interested in the first match.

We can respond with it, which could either be the task found in the database or undefined, as it may not exist in the database. I will call it task and respond with it here. Just like before, I will specify the status code. Let’s ensure that this works by hooking it up, and then we will handle the not found scenario. Right here, we will say handlers.getOne.
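A sketch of the handler, before any not-found handling, assuming the db instance was created with the schema so the relational query API is available (RouteHandler comes from @hono/zod-openapi; the walkthrough's own helper type may wrap it with the app bindings):

```ts
// routes/tasks/tasks.handlers.ts — a sketch of the getOne handler (no 404 handling yet).
import type { RouteHandler } from "@hono/zod-openapi";
import * as HttpStatusCodes from "stoker/http-status-codes";

import db from "@/db";

import type { GetOneRoute } from "./tasks.routes";

export const getOne: RouteHandler<GetOneRoute> = async (c) => {
  // Validation already ran, so id is guaranteed to be a number here.
  const { id } = c.req.valid("param");

  // select * from tasks where id = ? limit 1
  const task = await db.query.tasks.findFirst({
    where(fields, operators) {
      return operators.eq(fields.id, id);
    },
  });

  return c.json(task, HttpStatusCodes.OK);
};
```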

=> 01:35:10

Efficiency is key in coding; reuse your validators to eliminate redundancy and streamline your API responses.

Now, if we test it out by requesting /tasks/2, it should return the task with ID 2. However, if we pass in an ID that does not exist, we get back undefined.

Currently, it cannot even parse because we are responding with nothing, so we need to handle that. First, we should document it. If we head over to our route documentation, the possible responses for this endpoint are: yes, it's okay (you got back the thing from the database) or no, it's not okay (you passed in an invalid ID or the thing was not found). We will ensure that we document this as well. Here, we will specify that the HTTP status code is not found.

In this case, we can respond with JSON content, and we will return an object with a message property. The schema here will be a z.object, and the message will be a string. The description will simply state "Task not found". We can also document this using .openapi() and provide an example where the message is "Not Found". Now, we have documented a 404 response.

Next, we need to handle this in our handler. After querying the database, if the task is not defined, we need to respond with a not found message. If the task is undefined, we will return c.json, passing in the object. To get type completion, I fill in the status code argument first, which lets TypeScript validate the body type: once the status code is NOT_FOUND, I can specify the message as "Not Found", and our type completion is good to go.
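Continuing that handler sketch, the not-found branch might look like this:

```ts
// still inside the getOne handler, after the findFirst query
if (!task) {
  // hardcoded for now; swapped for the shared HttpStatusPhrases constant shortly
  return c.json({ message: "Not Found" }, HttpStatusCodes.NOT_FOUND);
}

return c.json(task, HttpStatusCodes.OK);
```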

Now, we are handling all cases: whether there is an invalid ID in the parameters, whether the task was not found, and responding with the task itself. If we try it out and send a request for something that doesn't exist in the database, it responds with not found. This is a common practice in our APIs, where we respond with a not found message.

I have a helper built into Stoker that includes a Zod validator for a message property. Instead of defining it inline, I will define some constants so that any other API endpoints needing to respond with a not found message do not have to redefine it each time. Inside the lib folder, I will create a file called constants. In this file, I will create the not found schema using the createMessageObjectSchema helper pulled in from Stoker, along with the HTTP status phrases. Now, I have a schema that specifically validates a message and has an example of "Not Found". I will make sure to export it so that any endpoint needing to respond with not found can reuse this validator, avoiding redundancy.
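A sketch of that constants file; the Stoker import path for the message-object helper is approximate:

```ts
// lib/constants.ts — a shared not-found schema.
import * as HttpStatusPhrases from "stoker/http-status-phrases";
import createMessageObjectSchema from "stoker/openapi/schemas/create-message-object";

// A Zod schema for { message: string } with "Not Found" as the documented example.
export const notFoundSchema = createMessageObjectSchema(HttpStatusPhrases.NOT_FOUND);
```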

In my request handler, I should also use those status phrases. Currently, I am hardcoding "Not Found", but I will replace it with the NOT_FOUND phrase, which is the canonical phrase associated with that status code. This way, I eliminate hardcoded strings. If we check back at the documentation, it should remain the same, but now any other routes defining a not found response can reuse that schema, significantly reducing duplication.

Lastly, the final endpoint we need to implement is patch. I want to be able to update an existing task by ID, so let's set it up. This endpoint will be a combination of the getOne route and the create route because we will be validating both the incoming body and the incoming path parameters. Therefore, I will copy the create route and paste it, renaming it to patch.

=> 01:39:04

Streamline your code by reusing schemas to reduce duplication and improve validation efficiency.

One small thing I noticed along the way: the not found description has a typo — it reads "ask not found" instead of "Task not found" — so let's fix that. Now, back to the patch route. The method will be PATCH, and the path will target a specific ID; for example, PATCH /tasks/5 will partially update the task with ID 5.

We can copy the other relevant parts from the getOne route. For PATCH, the path will be /tasks/{id}. The method is PATCH, not GET, and we want to validate both the parameters and the incoming request body. When the request comes in, the first step is to check that the ID is valid. If it is, we then validate the body. The properties that can be updated are name and done. In this case, we will give the body a better name; this is not the task to create, but rather the task updates.

For responses, we will still use the select tasks schema because we are returning the row from the database, but the description is different since this is the updated task. Now, we have two possible validation errors: the ID could be invalid, or the data sent in the body could be invalid. For this, we need to specify that the unprocessable entity response can be one of two shapes: the error schema created from the task updates schema, or the error schema created from the ID params schema.

Currently, we are getting a type error because my jsonContent helper function doesn't accept Zod unions. I will probably update it by the time you see this video so that you can pass in a union. If we specify it like this, let's check it out. If I refresh my API reference and then head over to PATCH, we can see the possible issues. If we click on "show schema," you can see that it uses anyOf, which does include those two schemas: one schema for validating the body with name and done, and another for the invalid parameter in the URL.

However, anyOf isn't what we actually want to use. There is a good article that discusses using oneOf, anyOf, and allOf inside your OpenAPI schemas. Specifically, it mentions that anyOf creates potential issues for many code generators. Ultimately, what we really want is oneOf; the response can be either one of these two things, not a combination of them.

To get this working, I wrote a helper method inside of Stoker called jsonContentOneOf. You can use this helper and pass in an array of schemas, and the generated docs will include all of the possibilities using oneOf instead of anyOf. The code for this is a little tricky because you have to use methods from the underlying zod-to-openapi library and types from the OpenAPI 3 spec. If you check out the Stoker library, go to openapi, and then look at helpers, specifically at the one-of helper, you can see how I implemented it.

Essentially, I had to pull in the OpenAPI generator, generate the spec for each of these schemas, and return that. The wrapper around it is jsonContentOneOf, which uses that helper to build the array of schemas. To use these functions, we need to install a separate dependency because, behind the scenes, it runs that generator. You can see in the docs that we have this as a dependency, so I will install it like that.

Now, we can try to use it. Instead of calling jsonContent, we will call jsonContentOneOf and pass in an array. This way, instead of using a union with .or, we can just pass in an array containing both of these schemas, and the generated docs will use the oneOf key with these two schemas instead of anyOf. For most purposes, you might not need this, but if you are using any sort of code generation tool with the output of your spec, this is definitely something to consider.
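For illustration, the 422 entry might look roughly like this — shown as a standalone snippet here, though in the real route definition it sits inside the responses object (and insertTasksSchema is swapped for the partial patch schema introduced below); Stoker import paths are approximate:

```ts
import * as HttpStatusCodes from "stoker/http-status-codes";
import jsonContentOneOf from "stoker/openapi/helpers/json-content-one-of";
import createErrorSchema from "stoker/openapi/schemas/create-error-schema";
import IdParamsSchema from "stoker/openapi/schemas/id-params";

import { insertTasksSchema } from "@/db/schema";

// The 422 response of the patch route: the error can match either schema, documented with oneOf.
export const patchValidationErrors = {
  [HttpStatusCodes.UNPROCESSABLE_ENTITY]: jsonContentOneOf(
    [createErrorSchema(insertTasksSchema), createErrorSchema(IdParamsSchema)],
    "The validation error(s)",
  ),
};
```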

=> 01:42:59

To make your API patchable, ensure your schema allows optional properties for updates.

Now that I have that set up, if we head back to our API reference and refresh, we can take a look. When we look at the schema for the 422 response, we can see that it's using oneOf.

The one major difference between a patch request and a create request is that all of our properties are optional. Currently, if we look at the insert tasks schema, both the done and name properties are required. To make this patchable, we need a schema where name and done are optional. Zod has a built-in method for that: we can create a schema called patchTasksSchema by taking the insert tasks schema and calling its partial method. Now, if we look at our schema, it accepts both name and done, but both of those are optional.
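The schema change itself is a one-liner:

```ts
// db/schema.ts — every property becomes optional for PATCH.
export const patchTasksSchema = insertTasksSchema.partial();
```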

Next, let's actually use this schema in our patch route. Wherever we receive the body, it will use the patch schema. We'll import that, and our validation errors should also use the patch schema. With this, we are ready to implement the patch route. We will head over to our handlers, copy the getOne handler, and rename it to patch. We also need the type, so this is going to be the type of the patch route. Let's define that type in the route definitions, and then import it here.

Now, we can attempt to update it. We can grab the incoming body with c.req.valid("json"). If we look at it, this is going to be an object with a name or a done property; both of them can be undefined because this is just a patch — you can send just the name or just done. Then, we can run the update query. For this, we will say db.update, passing in the table, which we already have a reference to in this file. Then we will call set and pass in those values; in this case, I can directly pass in the updates. But we need a where clause: we only want to apply these updates where the ID is equal to the ID that was in the URL. I'll add a where clause here, and we will need to bring in eq, a helper function from drizzle-orm. We pass in the column that we're comparing, in this case tasks.id, and the value that we're comparing it to, which is the ID we got from the URL.

Lastly, we will call returning, which returns a list of the updated rows. Since it's a list, we just want to grab the first one. This also handles our not found scenario for us: if we try to update something in the database and it can't find it by ID, we won't get anything back from the update. Essentially, if nothing came back, we know the row wasn't found in the database, and we can respond with a not found. If it was updated, we get back the actual values, and then we can return them.
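A sketch of the patch handler under the same assumptions as before (it already includes the not-found branch discussed next):

```ts
// routes/tasks/tasks.handlers.ts — a sketch of the patch handler.
import type { RouteHandler } from "@hono/zod-openapi";
import { eq } from "drizzle-orm";
import * as HttpStatusCodes from "stoker/http-status-codes";
import * as HttpStatusPhrases from "stoker/http-status-phrases";

import db from "@/db";
import { tasks } from "@/db/schema";

import type { PatchRoute } from "./tasks.routes";

export const patch: RouteHandler<PatchRoute> = async (c) => {
  const { id } = c.req.valid("param");
  const updates = c.req.valid("json");

  // update tasks set ... where id = ? returning *
  const [task] = await db
    .update(tasks)
    .set(updates)
    .where(eq(tasks.id, id))
    .returning();

  if (!task) {
    return c.json({ message: HttpStatusPhrases.NOT_FOUND }, HttpStatusCodes.NOT_FOUND);
  }

  return c.json(task, HttpStatusCodes.OK);
};
```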

However, I'm remembering that we are getting a type error here because we actually didn't set up the response for when the status code is not found. We need to ensure that we set that up in our route definition to eliminate that type error. If we head back to our routes, we also need the not found response. I'm just going to copy this over from our get one. Now, this endpoint will either respond with the task that was updated, respond with a 404 not found, or respond with validation errors—either validation errors in the URL or validation errors in the body.

With that, our type errors look good. Let's actually hook it up. If we head back over to task.index, we can pass in that handler. If we say handlers.patch, we should be able to update things. After refreshing the docs, we can see all of the documented potential responses: not found or the potential validation messages. Now, let's test it out. First of all, let's just list the tasks in our API again and figure out which one we're going to update.

=> 01:46:53

Success is all about the details—every response matters, from updates to deletions.


We will update task ID 5. Currently, done is set to false, and we want to set that to true. If we test our patch method, pass in ID 5, and in the body set done to true, it should work—and it does! It responds and confirms that yes, it was updated. We can see the updated at column was actually updated.

If we make a request for all tasks, we should see that done is now set to true. Awesome! We can even check our individual route here. If we do a GET request for the task with ID 5 and send it over, done is set to true. Awesome! But let's also test the potential not found situation. Let's try to run a patch for a task with an ID that is not found, and it responds with not found.

Now, the last endpoint we need is for deleting tasks, so let's get that going. If we head over to the route definitions, we need one more definition for delete. For this, I'm just going to copy the patch route because it's going to be fairly similar. I'm going to call this one remove, mainly because delete is a reserved word. The HTTP method will be DELETE, and it will be for a specific ID.

If we receive a DELETE request at some ID, we will attempt to delete it. In this case, we won't accept anything in the body; we're just going to look at the ID parameter. Whatever that ID is, we'll attempt to delete it. For the delete operation, we are actually going to change the success status code to be no content. So, if you delete something, we won't respond with the thing from the database; we'll just say yes, we deleted that, and the body will be completely empty. The status code here is going to be 204, and we will pass in the description.

We can pass an object here, and the description will just be "Task deleted". We will also have a not found response: if someone sends a delete request for an ID that's not in our database, we will send not found. Additionally, we will potentially have a validation error if the ID parameter is wrong. In this case, we can go back to plain jsonContent, pass in the error schema for the ID params, and then update the description to say "Invalid ID error".
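A sketch of the remove definition, reusing the shared notFoundSchema; import paths are approximate:

```ts
// routes/tasks/tasks.routes.ts — a sketch of the remove (DELETE) definition.
import { createRoute } from "@hono/zod-openapi";
import * as HttpStatusCodes from "stoker/http-status-codes";
import jsonContent from "stoker/openapi/helpers/json-content";
import createErrorSchema from "stoker/openapi/schemas/create-error-schema";
import IdParamsSchema from "stoker/openapi/schemas/id-params";

import { notFoundSchema } from "@/lib/constants";

export const remove = createRoute({
  path: "/tasks/{id}",
  method: "delete",
  request: {
    params: IdParamsSchema,
  },
  responses: {
    // 204: empty body, just a description
    [HttpStatusCodes.NO_CONTENT]: {
      description: "Task deleted",
    },
    [HttpStatusCodes.NOT_FOUND]: jsonContent(notFoundSchema, "Task not found"),
    [HttpStatusCodes.UNPROCESSABLE_ENTITY]: jsonContent(
      createErrorSchema(IdParamsSchema),
      "Invalid id error",
    ),
  },
});
```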

Now, while I'm thinking about it, let's go ahead and create the type for the remove route. After that, we'll head over to our handlers and implement remove. I'm just going to copy over our patch, rename it to remove, and then bring in the remove route type. In this case, we just need to pull in the ID of the thing that we're deleting. Instead of update, we can say delete and then remove the set.

To know whether something was deleted, we can just look at the result. If we don't use returning, the result is a result set, and we can see the number of rows modified. We will look at result.rowsAffected, which is a number; if it is zero, nothing was deleted, which means the task was not found. If we did delete it, we just want to respond with nothing.

We can do c.body and pass in null, along with the specific status code. If we use HttpStatusCodes.NO_CONTENT, that's what we need: if it's deleted, we respond with an empty body and a 204 status code. Now, let's hook it up. If we head over here, we can add one more route: routes.remove and then handlers.remove.
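A sketch of the remove handler; rowsAffected is what the libsql driver reports, so adjust if you're on a different driver:

```ts
// routes/tasks/tasks.handlers.ts — a sketch of the remove handler.
import type { RouteHandler } from "@hono/zod-openapi";
import { eq } from "drizzle-orm";
import * as HttpStatusCodes from "stoker/http-status-codes";
import * as HttpStatusPhrases from "stoker/http-status-phrases";

import db from "@/db";
import { tasks } from "@/db/schema";

import type { RemoveRoute } from "./tasks.routes";

export const remove: RouteHandler<RemoveRoute> = async (c) => {
  const { id } = c.req.valid("param");

  const result = await db.delete(tasks).where(eq(tasks.id, id));

  // Zero rows affected means there was nothing to delete.
  if (result.rowsAffected === 0) {
    return c.json({ message: HttpStatusPhrases.NOT_FOUND }, HttpStatusCodes.NOT_FOUND);
  }

  // Successful delete: empty body, 204 No Content.
  return c.body(null, HttpStatusCodes.NO_CONTENT);
};
```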

Let's refresh our documentation. Now we have our delete task here, and we will give it a try. The examples show 204 no content, so if it's a successful deletion, it responds with nothing. It's possible that it's not found, and it’s also possible that you gave us a bad ID in the URL. First of all, let's just test a bad ID. We get an error—awesome!

Next, let's test deleting something that doesn't exist. We get not found. Finally, let's test deleting a task with ID 5. It gets deleted, and we receive 204 no content. Now, if we list the tasks...

=> 01:51:13

Building a robust API isn't just about functionality; it's about seamless testing integration that keeps your code reliable and maintainable.

After listing the tasks again, we can confirm that the task with ID 5 has been removed.

At this point, we have successfully implemented create, read, update, and delete (CRUD) operations for our list of tasks, and everything is fully documented with robust type safety. Before concluding, I would like to demonstrate how we can potentially test these routes. While we have been conducting developer testing using the documentation, a production-ready API should also include tests within the code.

If you look in the Hono documentation under guides, there is a testing section. A notable feature is that any routers we have created already include a request method built into them. This means we can write tests without needing a separate library, which is often required when testing Express applications (for example, supertest). Instead, we can simply use router.request, passing in the URL and options, and then write tests against the response. Additionally, Hono provides a fully typed test helper, allowing us to pass our app into this test client and gain access to a fully typed RPC, similar to the client RPC.

I haven't demonstrated this yet, but Hono supports RPC, meaning if you've defined your endpoints with validators, you can export the type of your app directly and use it on the client side. This test helper essentially provides the same functionality, giving you access to all the methods you have defined in your tests.

To illustrate this, I will write a basic test for the list route. I prefer to place my tests directly next to the files they are testing, as this makes it easier to navigate between the code and the tests. Therefore, I will create a file called tasks.test.ts. To facilitate testing, I will use Vitest, which is straightforward to set up. I can install it with npm install --save-dev vitest and then add a test script in my package.json file. The script will simply run vitest, which will search the entire directory for any file containing the word "test".

Next, I will define our test suite using describe. Specifically, I will describe the list handler, and within it, I will define a unit test using it from Vitest. For this test, I will check whether it responds with an array. As mentioned, this will be a super basic test; we won't write a full test suite just yet.

To proceed, I will import the router defined in tasks/index and directly call request on it. I will make a request for /list and capture the response. To handle this properly, I will make this an async method and examine the result. For now, I will log the result and create a failing test by expecting false to be true. This should yield a failing test, but it will also allow us to see the result of calling GET against /list.

In essence, you can think of request as similar to fetch, where you pass in a URL and can also specify options, including the method. By default, it will perform a GET request. Let's see what happens when I execute this in my terminal.

=> 01:54:53

Testing isn't just about passing; it's about ensuring every piece works together seamlessly.

If I run npm run test, Vitest should find my test file and run it. But, as I was suspicious of earlier, the file has to be called test, not tests, so I'll remove the plural there; now it detects the file and runs the test.

Now we hit an error about one of the "@/" imports — does this file exist? I have a feeling this has to do with our TypeScript aliases. Let me do some searching on the web. I found a Stack Overflow answer explaining that you also have to specify the alias inside your Vitest config, so it's not exactly as instant as I thought it would be. Let me go ahead and define this. At the root, we'll create a vitest.config.ts file. We don't want any plugins, and I don't want test globals; literally all I want is for the @ alias to resolve to the source directory.
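A minimal config along those lines might look like this (assuming the alias is "@" and the source directory is ./src):

```ts
// vitest.config.ts — no plugins, no globals, just resolve "@" to ./src.
import { fileURLToPath } from "node:url";
import { defineConfig } from "vitest/config";

export default defineConfig({
  resolve: {
    alias: {
      "@": fileURLToPath(new URL("./src", import.meta.url)),
    },
  },
});
```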

Let's see if setting up that config does it for us. Yes, okay! So, the tests are actually running. We have a failing test because we expected false to be true, but I'm curious what did we see? This is it, and this is what I was worried about. You can see that when we're logging out the response, it's literally just the text 404 not found. If you remember, we actually set up our app so that even a 404 would respond with a JSON object, but that's not the case right now.

The reason is we are actually accessing that router directly, and this router that we defined doesn't have any of the middlewares that we defined for our overall application. To ensure that everything is behaving in the way that we would expect, we basically need to wrap this in an app that also has all of the middlewares and everything else. If you remember back to the beginning of this video when we created this function, that's why I extracted this function here. I want to do this same thing and then mount my router to create a test client.

From here, I'm just going to export a function called createTestApp. This function accepts a router of type OpenAPIHono with our app bindings. If you remember, we actually did define a reusable type, AppOpenAPI, so we can use that type here and say that our router is of that type — when we create a router, it's an OpenAPIHono with those app bindings. Essentially, we want to create a test app by calling the createApp function we created earlier, and then mount this router at the root, just like we did on our main app.

So, if I say testApp.route, we put it on "/" and pass in that router. Now I have my router, but it has all the same middlewares, error handlers, and everything else that it has when it's running in the context of the full application. This is what we can run our tests against. I do need to import that type — yes, we'll import it from types. From here, I'll return the test app.
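A sketch of that helper, assuming it sits next to the createApp function from earlier and that AppOpenAPI lives in lib/types:

```ts
// lib/create-app.ts — a sketch of the createTestApp helper.
import type { AppOpenAPI } from "./types";

export function createTestApp(router: AppOpenAPI) {
  // createApp() is the function defined earlier in this file: it wires up the
  // middlewares, error handler, and not-found handler used by the real app.
  const testApp = createApp();

  // Mount the router under test at the root, just like the main app does.
  testApp.route("/", router);

  return testApp;
}
```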

Now I can use this function inside of my tests and run the request against it instead. Here, I can say the test router is createTestApp(router). We'll import that function, and now instead of using the router, we'll use the test router. I would expect that when we log the result here, it should actually be a JSON object because it has all of the not found middlewares and everything else.

So, if we take a look, there it is; it is the not found object. Great! Now, I didn't even do this on purpose; /list is the wrong URL. If we really want to test this thing and look at our routes, the actual endpoint is /tasks. So, right here, if I request /tasks, that's going to give us back a response. I know we're getting back JSON data, so I can parse it, and this should be an array. I'm just going to log it so we can actually see the data, and then I'm going to expect that the result is an array using expectTypeOf from Vitest.

=> 01:58:49

Testing your routes is key to ensuring your app runs smoothly; don't skip it!


When we assert that the type of the result is an array, we get a type error, because right now the thing we get back has the any type. So, I'm going to add a @ts-expect-error and allow it for the entire file. Typically, when you are writing tests, you have to ignore TypeScript errors for certain scenarios. With that in place, I can expect it to be an array, and all the tests are passing.

Sweet! This is how you would test essentially each one of your folders that has routes inside of it. You can create a test file, create that test app, and then start poking and prodding at it. Now, we'll test it again, but let's use the built-in test client. So, I'll add another unit test; it responds with an array. Again, I said these are not in-depth tests, but one thing we want to do is now pass this router into the test client.

We can pull in the test client and then pass in that test router like so. Now, if TypeScript is working and I've done everything right, this will have the right type. However, it doesn't, so I've played around with this a lot. We might actually just need to define this in line so that all of our types kind of flow through. Here’s what I'm going to do: I can refactor the create test app functions depending on how this works.

First of all, I'm just going to create an app, so I'll call this the test app without passing in the router. We'll just say create app, right? That’s the function that we defined at the beginning of the video that just gives us a basic Hono app with all of our middlewares. But now, if I say the test router is test app.route and then pass in our router, what I want to see is the actual type. We see the paths, the methods, etc. But on this, I want to see that same type, and I do.

So now, this is what we can pass in there. Ultimately, we could try returning directly test app.route, but that wasn't working for me. You might actually just do this in line, and you could even write a helper function in this file that does something like this. Then we can inline this whole thing: create app and then route at that router, and now we have a fully typed client.

Check this out! Instead of just passing in manual requests, we actually get a typed client that's similar to the RPC client. If we type client followed by a dot, we can see everything that's available. We can say tasks, and then we can see all of the possible calls. In this case, we want to run the GET request, which lists all of our tasks, and I'll call this response. So, we'll await that, and then the JSON is going to be await response.json().

The way this typed client works is that if you check the status code, your JSON data will be of a specific type. Right now, our get all tasks endpoint only has one response type, but for the other routes, where it might respond with 404 not found or unprocessable entity, the JSON can be several different things depending on the response. In this case, we should be able to inspect it directly: I should just be able to expect the type of the JSON to be an array, and we don't have to use @ts-expect-error because it explicitly knows that this is an array of items.
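Pulling that together, the list test with the typed client might look roughly like this; file names are approximate:

```ts
// routes/tasks/tasks.test.ts — a sketch of the list test using Hono's typed test client.
import { testClient } from "hono/testing";
import { describe, expect, expectTypeOf, it } from "vitest";

import createApp from "@/lib/create-app";

import router from "./tasks.index";

// Inlining createApp().route(...) keeps the route types flowing into the client.
const client = testClient(createApp().route("/", router));

describe("tasks list", () => {
  it("responds with an array", async () => {
    const response = await client.tasks.$get();
    const json = await response.json();

    // GET /tasks documents a single response type, so json is already typed as an array.
    expectTypeOf(json).toBeArray();
    expect(Array.isArray(json)).toBe(true);
  });
});
```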

If we look at our tests, we can see that it has passed. Now, one thing that's happening as well is that you're seeing all of the request logs, like the debug logs, while the tests run. If you don't want to see those, you can update the log level: inside of the package.json test script, we can set the log level to silent. If we kill this and rerun it, everything breaks.

It looks like when I set this up, I didn't include silent in the enum. I'm all over the place, but let's fix that. If we go back to our env schema, we allow the log level to be silent. Let's save that, kill this, and restart it. Okay, it's happy, and now it runs my tests without all of the debug logs.

=> 02:02:41

Testing your code in isolation is the key to reliable software development.


That's one way to do it. I will say, if you're running this on Windows, you might need to pull in the package called cross-env before you can pass environment variables like this. Let me show you that really quick. If you check out cross-env on npm, you'd want to install this package, and then inside of your scripts, prefix the command with cross-env. This is especially useful if you have multiple people on your team using different operating systems. But for mine, this works perfectly fine.

Now, one more test I'll show you is the possibility of having different types of responses based on the Response Code. Let's create a test that validates the ID parameter. We will create that same test client and then attempt to do tasks at a specific ID. When we call get on that, we get a type error because it actually wants us to pass that parameter in. Here, I need to pass in an object, then I pass in a param, and then I can specify the ID.

If I pass in "w", this should respond with that 422 status code: unprocessable entity. First of all, let's write a failing test. I could expect response.status to be 200. I need to make sure that I pull expect in from Vitest, and now we should have a failing test when we run the tests. Yes, that endpoint responds with 422 because the ID is not a number.

To get a passing test, we can expect this to be 422 instead. Cool! As I said, I'm not going to write all of these tests because we've basically been testing manually, but this is how you can get tests going and achieve really nice type completion. I would especially combine this with @ts-expect-error; I'm going to write another test just to demonstrate this.

Let's say we're sending data to the API. We can say it validates the body when creating. If we attempt to make a request to tasks.post, we will try to create a new task. We'll get a type error because we need to pass in the JSON body. In this case, the way you pass it in is with the json property, and then we can specify the properties there. You'll see we're getting type completion, so when I'm creating a new task, I need to pass in the name and done.

Here, I'll set the name to "learn vitest", but let's say I want a 422 status code. If I don't pass in the done property, I should expect this to be 422. However, let's expect it to be 200 first just so we get a failing test. And yes, it is 422 because it's actually validating that. Then we get a passing test.
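Continuing the same test file, that body-validation test might look like this (the @ts-expect-error marks the deliberately missing property):

```ts
it("validates the body when creating", async () => {
  // @ts-expect-error — deliberately omitting the required `done` property
  const response = await client.tasks.$post({ json: { name: "learn vitest" } });

  expect(response.status).toBe(422);
});
```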

This is a scenario where, if you're using this typed test client, you would throw a @ts-expect-error right here because you're explicitly testing that it validates missing properties. So, this is a scenario where you can ignore the TypeScript errors.

Okay, this video is taking longer to make than I thought. Anyway, I might as well show you how to get your tests up and running. If you take a look at the starter repo that's linked in the description, you can see all of the code for how this is going. I'm just going to point at some of the more relevant parts now.

Essentially, what I did is I pulled out this test client that's fully typed and put it at the top of my test file. Now, in any one of my tests, I can use that fully typed test client. From there, I want my tests to be able to use their own isolated test database, so I also set up a test env file, which essentially just points to a test DB file. Whenever we run our tests, we explicitly load in that env file.

If you look at our type-safe env, we are checking to see if we're in the test environment. If we are in the test environment, we load in that test environment variable file; otherwise, we load in the normal one. Essentially, whenever these tests are running, they create their own isolated test database and do all of their testing against that database.

=> 02:06:25

Creating a fully typed test client with an isolated test database is the key to ensuring robust and reliable endpoint testing.


Before the tests start running, we execute drizzle-kit to push the schema to that database. After the tests are completed, we simply delete that database. For all of these tests, we specifically check the status code to get good type safety. The way the Hono client works is that if you have an if statement checking the status code, the type returned from response.json() will correspond to the type specified in your schema for that status code.

In most of these tests, you can see that I am checking the status code and then examining the responses to ensure they are what I expect. I have a full test suite that tests all of the endpoints. When you run npm test, it creates the test database, runs all the tests against it, and then deletes the test database. This serves as a really good starting point if you want to ensure that you have well-tested endpoints. You can use this file as a reference for all of the other route groups that you create.

One last thing I want to show is how we can actually obtain a typed client for our entire application. Up until this point, we have only been testing in the browser. However, what's nice about Hono is that we can infer the types of all defined routes and get a fully typed client that we can use inside our client-side code, such as in React. It works exactly the same way as I demonstrated with the test client, where it documents all of the endpoints and provides type errors for everything you pass in.

To get this set up, we head over to the app. Essentially, we want a type that represents every single router inside of an array. That’s why I’ve defined it in this array. As our application grows, we will create more routers, and by placing them in this array, we can infer the type of everything inside it. The first thing I will do is set this as const, which lets TypeScript know that this array will not change at runtime. This allows us to get the type out of it.

To obtain that type, we can define a type called AppType. What we want is a union of the types of everything inside this array, so we can say typeof routes[number], which gives us the type of every element indexed by a number, such as 0 or 1. If we look at AppType, it now represents all of the possible routes within my application. If I export this, all I need to do is import this type into my client-side code and then pass it into the Hono client — just like they pass in the AppType in the docs, this is the type I would pass in.
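A sketch of that setup; the router import paths are illustrative:

```ts
// app.ts — deriving AppType from the array of routers.
import createApp from "@/lib/create-app";
import index from "@/routes/index.route";
import tasks from "@/routes/tasks/tasks.index";

const app = createApp();

// `as const` keeps the tuple type so the element types aren't widened.
const routes = [
  index,
  tasks,
] as const;

routes.forEach((route) => {
  app.route("/", route);
});

// A union of every router's type — this is what the Hono client consumes.
export type AppType = (typeof routes)[number];

export default app;
```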

This is one way I found to do it. I prefer this method, but another approach is to define these routes not in an array but by chaining the route method. For example, we can say app.route and pass in the index route, then chain it by saying .route again and passing in tasks, and so on. You would store the return value, calling it something like _app, and then get its type by saying export type AppType = typeof _app. This works in exactly the same way because, by chaining these methods, the types of each individual route get inferred.

However, I prefer the array method because as you add more routes, the chain will grow longer and longer. It’s quite convenient that if you have everything defined in a single array, you can just iterate through it.

=> 02:09:54

Chaining routes in your API not only simplifies your code but also keeps your types organized and easy to manage.


Now, to show an example of using this type, let me create a client file in the lib directory and import it to create a Hono client. We can import hc, the Hono client, and then try to create a client; in this case, we're at port 9999. We will also import that AppType from app.

When running this inside client-side code, it’s crucial to ensure that you import the type and not the full thing. If you’re running client code, all of that will error out. Often, you might be doing this inside a mono repo, where you would have one folder for your API and another for your client, allowing you to share the types between them. In this case, I’m just writing some quick client code to demonstrate how it works. Now that we've passed that type in, I can do client. and see all the available options. The index route is just the homepage that returns a JSON object, so we can access that. We can also access tasks and get to all those endpoints just like we did inside the tests. This is great because you can actually do this inside client-side code.
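A quick sketch of that client file, using port 9999 as in the walkthrough:

```ts
// lib/client.ts — a typed client for the whole API.
import { hc } from "hono/client";

// Import the *type* only, so no server code is pulled into a client bundle.
import type { AppType } from "@/app";

const client = hc<AppType>("http://localhost:9999");

// Fully typed calls, just like the test client:
export async function listTasks() {
  const response = await client.tasks.$get();
  return response.json();
}
```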

However, I’m going to delete that file because one of the aspects of using Open API is that there are actually generators for every type of language. You can literally take this specification, pass it into those client generators, and they will generate a client SDK. For instance, you could generate an SDK called my tasks API, which will be fully typed and have all the methods, validators, and everything else. They support tons of different languages, so if you’re building your backend with TypeScript but your client isn’t—like maybe it’s a Swift app or an Android app—you could use one of those generators to create an SDK and call your API in a fully typed way as well.

As I mentioned at the beginning of the video, our Hono app is defined inside its own file, and this is completely agnostic of anything specific to any of these runtimes. If you want to deploy to any of these runtimes, essentially what you need to do is take a look at index.ts and adapt this to deploy to the specific runtime you’d like. I encountered some issues when I attempted to deploy this to Vercel, mainly due to the fact that I am using path aliases, which they don’t support unless you are building and bundling your code. I will save that for a future video, so let me know in the comments if that’s something you want to see.

For now, we will reserve deployment for a future video. That’s all the time I have for this video. Thank you so much for watching! Hopefully, you learned something new, even if you’re an existing Hono developer. If you’re brand new to Hono, I hope I provided a good introduction and that you’re inspired to build some awesome APIs. Definitely check out the link in the description; it has a link to the starter repo where everything is documented. You can look through all of this code and also give it a star if you like it. Additionally, check out the package I wrote called Stoker, which basically removes a lot of the boilerplate when building out these APIs with Hono and Hono's Open API.

There’s a ton more I could cover about Hono and Open API. I want to discuss topics like Authentication, tokens, and token expiry, among other things that we didn’t get to talk about here. If there’s anything specific you’d like to know about building APIs with Hono or Open API, let me know in the comments, and I’ll be sure to work on that in a future video. Alright, that’s it for this one. I’ll see you in the next one!