## Overview
OneStarter is designed as a monorepo with several standalone components.
```text
├── backend          # Backend CF worker service
├── db               # Supabase
├── email-templates  # Email templates and previews
├── frontend         # Frontend React web app
├── infra            # IaC Terraform infrastructure
└── reverse-proxy    # Reverse proxy CF worker service
```
Developing locally, you will mostly be working with the `backend`, `frontend`, and `db` folders. This document gives a quick overview of each and how to work with them locally.
The philosophy with OneStarter is that, wherever possible, you can run everything locally. This means you can develop, test, and deploy your project without needing to rely on external services. The only exception (currently) is the AI functionality, which requires network access to Cloudflare's service.
## backend directory
The backend is written to run on Cloudflare workers using the Hono framework. It is a REST API that handles the following functions out of the box:
- server-side authentication/session management
- email sending
- AI functions
- user-tenant context provision
- administrative actions (impersonation, control panel data, etc.)
- webhook handling (Supabase Auth email sending, payment processing, etc.)
- file storage (S3 compatible)
and can be extended to any additional functionality needed that cannot be provided by the Supabase REST API.
You can run the project locally:

```shell
bun run dev
```

This will start the backend on `localhost:8787`, and also boot up an email bridge for local SMTP handling (see email for more details).
### Configuring

#### Infrastructure
You can configure the backend's infrastructure through the `wrangler.toml` file.
Cloudflare's AI functionality requires you to log in to an account to be billed for usage. If you do not need Cloudflare's AI functionality at all, you can disable it in the `wrangler.toml` by commenting out the following lines:

```toml
[ai]
binding = "AI"
```
Please note: you will also need to ensure you are not using the `/api/v1/ai/*` endpoints if you do this, or have written an alternative.
The `wrangler.toml` also contains:

- environment variables
- bindings to other services
  - LLMs (Cloudflare AI)
  - R2 (S3-compatible storage)
Please review the wrangler documentation for further details.
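As a sketch of how these pieces fit together (the worker name, variable, and R2 binding/bucket names below are illustrative, not taken from the repo; only the `[ai]` binding name is from this document), the relevant `wrangler.toml` sections might look like:

```toml
# Hypothetical example -- check your own wrangler.toml for the real names
name = "my-app-backend"

[vars]
# plain environment variables (non-secret configuration)
LOG_LEVEL = "info"

[ai]
# Cloudflare AI binding; comment out if you do not use the /api/v1/ai/* endpoints
binding = "AI"

[[r2_buckets]]
# R2 (S3-compatible) storage binding
binding = "STORAGE"
bucket_name = "my-app-files"
```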
#### Environment secrets
Environment secrets are managed in the `.dev.vars` file. You can copy the `.dev.vars.example` file to `.dev.vars` and fill in the values as needed.
When you add an additional environment secret, ensure you update the `Bindings` type in `src/types.ts`, and your CI/CD as well.
For example, let us add a new environment secret `MY_ENV_VAR`. We've updated our `.dev.vars`, and now need to update the types and our CI/CD:

```typescript
// src/types.ts
export type Bindings = {
  MY_ENV_VAR: string;
  // rest ...
};

// this allows you to access it through the Hono context
app.post("/my-endpoint", async (c) => {
  return c.json({ myEnvVar: c.env.MY_ENV_VAR });
});
```
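Since a missing binding only surfaces at request time, one optional pattern (a hypothetical helper, not part of OneStarter) is to assert early that the secrets declared in `Bindings` are actually present:

```typescript
// Hypothetical helper -- not part of OneStarter; key names are illustrative.
// Throws if an expected secret is missing or empty on the environment object.
function requireEnv<T extends Record<string, unknown>>(
  env: T,
  keys: (keyof T & string)[],
): void {
  const missing = keys.filter((key) => env[key] === undefined || env[key] === "");
  if (missing.length > 0) {
    throw new Error(`Missing environment secrets: ${missing.join(", ")}`);
  }
}

// Usage: call in a middleware (or per handler) before reading secrets
const env = { MY_ENV_VAR: "some-value" };
requireEnv(env, ["MY_ENV_VAR"]); // passes silently
```

This fails loudly the first time a deployment forgets to wire a secret through CI/CD, rather than returning undefined values downstream.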
In our GitHub Actions files, we would set it here:

```yaml
# .github/workflows/backend-cf-deploy.yml
- name: Deploy to Cloudflare Workers
  uses: cloudflare/wrangler-action@v3
  with:
    apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
    workingDirectory: backend
    environment: ${{ github.ref == 'refs/heads/main' && 'production' || 'staging' }}
    command: deploy
    secrets: |
      # ..other secrets
      MY_ENV_VAR
  env:
    # ...other secrets
    MY_ENV_VAR: ${{ github.ref == 'refs/heads/main' && secrets.PRODUCTION_MY_ENV_VAR || secrets.STAGING_MY_ENV_VAR }}
```
## db directory
The `db` directory contains everything to do with our Supabase setup.
You can find the configuration file under `db/supabase/config.toml`, and the documentation for it here.
There are some helper commands available locally from the `db` directory. You can see them by running `just` in the terminal.
### Running the db locally

```shell
just up
```

This spins up Supabase if it is not running. You can restart it with:

```shell
just restart
```

If you wish to clear your local database and start fresh, you can do:

```shell
just reset
```
### Adding a new migration

A new migration can be generated in the `db/supabase/migrations` directory by running:

```shell
just new-migration my_migration_name
```
Now you can edit the SQL directly in the generated file.
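As a hedged sketch of what such a file might contain (the table name, columns, and the reference to a `tenants` table are illustrative, not taken from the starter's actual schema), a tenant-scoped migration could look like:

```sql
-- Hypothetical migration -- table and column names are illustrative
create table if not exists public.my_new_table (
  id uuid primary key default gen_random_uuid(),
  tenant_id uuid not null references public.tenants (id),
  created_at timestamptz not null default now()
);

-- enable row level security so tenant policies can be attached later
alter table public.my_new_table enable row level security;
```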
### Running migrations locally

You can run the migrations locally by running:

```shell
just migrate
```
This will apply the migrations to the local database, generate the frontend types in the `frontend/src` folder, and export a schema dump for ease of PR review of differences.
## email-templates directory
Email templates are powered with React Email. You can preview the emails locally by running:

```shell
bun run dev
```

This will start the email preview server on `http://localhost:3333/`. You can then navigate to the email you wish to preview.
Emails are localized by default with locales found under `src/locales`. If you wish to add a new language, you can do so by adding a new file in the `src/locales` directory, and updating the `src/types.ts` and `src/root-locales.ts` files.
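The lookup-with-fallback behaviour such a locale setup typically implies can be sketched as follows (a hypothetical illustration, not the actual `src/root-locales.ts` code; the locale data and key names are invented):

```typescript
// Hypothetical sketch of locale resolution with a default-language fallback.
// The real data lives under src/locales; these entries are illustrative.
const rootLocales: Record<string, Record<string, string>> = {
  en: { welcomeSubject: "Welcome!" },
  fr: { welcomeSubject: "Bienvenue !" },
};

function translate(locale: string, key: string, fallbackLocale = "en"): string {
  // prefer the requested locale, fall back to the default, then to the raw key
  return rootLocales[locale]?.[key] ?? rootLocales[fallbackLocale]?.[key] ?? key;
}

translate("fr", "welcomeSubject"); // "Bienvenue !"
translate("de", "welcomeSubject"); // falls back to "Welcome!"
```

Adding a new language is then just a matter of adding a new entry (file) and widening the types.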
Adding a new email template requires adding a new folder under the `src/emails` folder and a child component called `Email.tsx`. The naming is important as this is what is used to generate the code in the backend.
There is a CI/CD pipeline that picks up on changes and adds the generated code to the backend. You can also manually trigger it locally via:

```shell
bun run export
```
## frontend directory
The frontend is a React web app bootstrapped with Vite. It is designed to be a standalone application that can be deployed to Cloudflare Pages.
For the purposes of local development, the project runs on port 3000, with a proxy to the backend for requests routed at `/api`. You can run the project via:

```shell
bun run dev
```
The Vite configuration sets the local development port and how to access the backend. If you wish to change these settings, you can do so in the `vite.config.ts` file.
Mantine components are used throughout the project. You can find the documentation for Mantine here.
### Environment variables
All environment variables for the frontend are by default not treated as secrets. Any variable exposed via the `VITE_` prefix will be available in the browser. You can set your environment variables in your `.env.local` file. See the `.env.placeholder` for an example.
If you do add secrets that you do not wish to be exposed to the browser, make sure you test thoroughly that they are only used at build time.
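A minimal `.env.local` sketch illustrating the split (the variable names here are invented for illustration; see `.env.placeholder` for the real list):

```shell
# .env.local -- hypothetical example, variable names are illustrative
VITE_API_URL=/api          # VITE_ prefix: exposed to the browser bundle
BUILD_ONLY_TOKEN=secret    # no VITE_ prefix: available only at build time
```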
### Routing
Routing is handled client-side via React Router. You can find the routing configuration in the `src/modules/Router.tsx` file.
Client-side pages are dynamically loaded using loadable.
The `<GuardPage>` component can be used to handle page-level access. Granular access control can be managed via the `<Can>` component. These components check for both permissions and feature access based on billing plans.
Layouts are handled as parent elements in a router with an `<Outlet/>` element from React Router.
### Adding a new protected feature route
```tsx
// src/modules/Router.tsx
const MyNewPage = loadable(() => import("@/modules/new-module/MyNewPage"));

// as a child of the tenant layout, for example
[
  // ...
  {
    id: "newModule",
    path: "my-new-page",
    handle: {
      permissions: ["select/my-new-table"],
      features: ["new-module"],
    },
    element: (
      <GuardPage>
        <MyNewPage />
      </GuardPage>
    ),
  },
  // ...
]
```
This will check that:

- The user is authenticated
- The user has the `select/my-new-table` permission under a given user-tenant context
- The user has the `new-module` feature enabled under a given user-tenant context

If all of these match, then the page will be served.
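These checks can be sketched as a pure function (a hypothetical illustration of the logic, not the actual `<GuardPage>`/`<Can>` implementation):

```typescript
// Hypothetical sketch of the page-level access decision. The real components
// read the user-tenant context; this models the same checks as plain data.
type AccessContext = {
  authenticated: boolean;
  permissions: string[]; // permissions in the current user-tenant context
  features: string[];    // features enabled by the tenant's billing plan
};

function canAccess(
  ctx: AccessContext,
  required: { permissions?: string[]; features?: string[] },
): boolean {
  if (!ctx.authenticated) return false;
  const hasPermissions = (required.permissions ?? []).every((p) =>
    ctx.permissions.includes(p),
  );
  const hasFeatures = (required.features ?? []).every((f) =>
    ctx.features.includes(f),
  );
  return hasPermissions && hasFeatures;
}

const ctx: AccessContext = {
  authenticated: true,
  permissions: ["select/my-new-table"],
  features: ["new-module"],
};
canAccess(ctx, { permissions: ["select/my-new-table"], features: ["new-module"] }); // true
```

Note that both permissions and features must match: a user on a plan without the `new-module` feature is denied even if they hold the permission.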
### Querying & State Management
The frontend uses Jotai for state management, combined with TanStack React Query for data fetching.
These libraries allow for powerful reactive programming patterns with little mental overhead. For example, here we can see how a query can cascade to an atom:
```typescript
// src/modules/tenants/tenants.queries.ts
export const activeTenantQueryAtom = atomWithQuery((get) => ({
  queryKey: ["currentTenant", get(tenantSlugAtom)],
  queryFn: async ({ queryKey: [, tenantSlug] }) => {
    if (!tenantSlug) {
      return null;
    }
    const { data, error } = await supabase
      .from("tenants")
      .select("*")
      .eq("slug", tenantSlug)
      .single();
    if (error) {
      throw error;
    }
    return data;
  },
}));

export const activeTenantAtom = atom((get) => get(activeTenantQueryAtom)?.data);
export const tenantIdAtom = atom((get) => get(activeTenantAtom)?.id);
```
The `tenantIdAtom` can then, for example, be used to just get the tenant id. This can be particularly useful in the context of tenant-based queries. Here is an example from the in-app notifications:
```typescript
// src/modules/in-app-notifications/in-app-notifications.queries.ts
export const useInAppNotifications = (
  pageIndex: number,
  showReadNotifications: boolean,
) => {
  const tenantId = useAtomValue(tenantIdAtom);
  return useQuery({
    queryKey: [
      "inAppNotifications",
      tenantId,
      pageIndex,
      showReadNotifications,
    ],
    queryFn: () =>
      fetchNotifications({
        tenantId,
        pageIndex,
        showReadNotifications,
      }),
    enabled: !!tenantId,
  });
};

export const inAppNotificationsPreviewAtom = atomWithQuery((get) => ({
  queryKey: ["inAppNotificationsPreview", get(tenantIdAtom)],
  queryFn: async ({ queryKey: [, tenantId] }) => {
    if (!tenantId) {
      return {
        data: [],
        pagination: {
          total: 0,
          pageSize: 3,
          page: 1,
        },
      };
    }
    return await fetchNotifications({
      tenantId: tenantId as string,
      pageIndex: 0,
      pageSize: 3,
      showReadNotifications: false,
    });
  },
}));
```
When using a query that requires page-level arguments, a React hook is preferred over using a Jotai atom. Examples of this would be things like table pagination, search, or filters. Jotai query atoms are better when the state is shared across pages and can cascade.
### Commands

Some useful commands can be found below:
```shell
# build the frontend
$ bun run build
bunx --bun tsc -b && bunx --bun vite build

# extract the localization keys from the source code
$ bun run i18n:extract
i18next 'src/**/*.{ts,tsx}' [-oc]

# preview the production built project locally
$ bun run preview
bunx --bun vite preview
```
## infra directory

The `infra` directory contains the Terraform configuration for the project.

The Terraform configuration sets up:
- Supabase configurations (Staging and Production)
- Empty Cloudflare Pages for your frontend
- Cloudflare WAF rule for backend webhooks route
All Terraform state is managed in your preferred `cloudflare_r2_bucket_name` bucket.
Because Cloudflare's `wrangler.toml` already provides most of the infrastructure configuration you need, this is a pretty basic setup.
If you wish, you can even disable this altogether and manually set up Cloudflare and Supabase.
## reverse-proxy directory
The primary goal of the reverse proxy is to serve your frontend and backend requests under the same domain.
Secondarily, it performs basic session injection and adds security headers. Locally, you do not need it when developing, since Vite provides a proxy service from the frontend directory, but you can always test it anyhow.
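Conceptually, the proxy's routing decision reduces to a path check (a hypothetical sketch of the idea, not the worker's actual code, which additionally handles session injection and headers):

```typescript
// Hypothetical sketch: decide which upstream a request path belongs to.
// Mirrors the same convention Vite's dev proxy uses for the /api prefix.
function routeFor(pathname: string): "backend" | "frontend" {
  // Requests under /api go to the backend worker; everything else is the SPA.
  return pathname === "/api" || pathname.startsWith("/api/")
    ? "backend"
    : "frontend";
}

routeFor("/api/v1/ai/chat"); // "backend"
routeFor("/dashboard");      // "frontend"
```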
You can run it locally with:

```shell
bun run dev
```

This will start the reverse proxy on `localhost:8788`. If you have the backend and frontend running now, you can see that all your requests will go through `localhost:8788`.
As with the backend, the `wrangler.toml` file contains the configuration for the reverse proxy.
Ensure that, when naming your services, your `BACKEND_SERVICE` binding for staging and production matches the names of your services. This is how Cloudflare communicates with them.

```toml
services = [
  { binding = "BACKEND_SERVICE", service = "<app>-backend-staging" }
]
```
Out of the box, the reverse proxy has no secrets; all environment variables can be found in the `wrangler.toml` file.
## Common Concepts

### Linting

Linting and formatting are done using Biome. TypeScript is used extensively. Directories with a `package.json` have the following commands for linting/validation (exception: email templates).
```shell
$ bun run check:ci
biome check --changed src/

$ bun run check:all
biome check src/

$ bun run check
biome check --staged src/

$ bun run format:all
biome check --write src/

$ bun run format
biome check --staged --write src/

$ bun run typecheck
bunx --bun tsc --noEmit --pretty

$ bun run validate
bun run check:ci && bun run typecheck
```
### Environment variables vs secrets
In general, environment variables should be managed in your `wrangler.toml` file when applicable, and secrets in your environment secret files. Environment variables are for configuration, and secrets are for sensitive data.
Do not commit secrets to your repository. Use GitHub secrets or other secure methods to manage your secrets.
In some instances, e.g. frontend deployment, where you are not working with a `wrangler.toml` file, you can set environment variables in the GitHub Actions environment variables section instead of secrets.