App Server Tasks
Deploy PathMX sites to Cloudflare at *.path.app. See V2 plan for architecture and design.
Code: apps/server/
1. Package boundary cleanup
- add @pathmx/core/shared with worker-safe Router, types, ChangeEvent
- add @pathmx/server/shared with PathMXServer, PathMXStorage, resolveRequest
- split resolveRequest out of utils.ts into resolve.ts (worker-safe)
- fix Router extension parsing (local getExtension helper, no Node path)
- export ChangeEvent cleanly for Worker use
2. Worker read path
- create apps/server project with package.json
- add wrangler.toml with R2 + SpaceDO bindings
- configure TypeScript (tsconfig.json + wrangler types)
- implement R2Storage (apps/server/src/storage.ts)
- scaffold Worker entry with subdomain parsing (apps/server/src/worker.ts)
- scaffold SpaceDO stub (apps/server/src/space-do.ts)
- scaffold CloudflareServer (apps/server/src/server.ts)
- add SPACE_KV binding to wrangler.toml + wrangler types
- implement SpaceDO with SQLite storage + KV write-through
- wire handleServe in Worker (reads SpaceState from KV, not the DO)
- serve one manually seeded site end to end
SpaceDO SQLite storage
SpaceDO uses the Durable Object's built-in SQLite via this.ctx.storage.sql. Define the schema in the constructor and expose methods that match the SpaceDOApi interface from the plan.
In apps/server/src/space-do.ts:
import { DurableObject } from "cloudflare:workers"
import type { BuildRootsJSON, ChangeEvent } from "@pathmx/core/shared"
type SpaceState = {
deployId: string
roots: BuildRootsJSON
updatedAt: string
}
type DeployRecord = {
deployId: string
roots: BuildRootsJSON
createdAt: string
isActive: boolean
meta?: { commit?: string; branch?: string }
}
export class SpaceDO extends DurableObject<Env> {
private initialized = false
private subdomain = ""
private ensureSchema() {
if (this.initialized) return
this.ctx.storage.sql.exec(`
CREATE TABLE IF NOT EXISTS deploys (
deploy_id TEXT PRIMARY KEY,
roots_json TEXT NOT NULL,
meta_json TEXT,
is_active INTEGER NOT NULL DEFAULT 0,
created_at TEXT NOT NULL
)
`)
this.initialized = true
}
async getActiveState(): Promise<SpaceState | null> {
this.ensureSchema()
// .one() throws when the query returns no rows; toArray() lets us
// treat "no active deploy" as null instead
const [row] = this.ctx.storage.sql
.exec(
"SELECT deploy_id, roots_json, created_at FROM deploys WHERE is_active = 1",
)
.toArray()
if (!row) return null
return {
deployId: row.deploy_id as string,
roots: JSON.parse(row.roots_json as string),
updatedAt: row.created_at as string,
}
}
private async writeToKV(state: SpaceState) {
await this.env.SPACE_KV.put(
`space:${this.subdomain}`,
JSON.stringify(state),
)
}
private activate(deployId: string) {
this.ctx.storage.sql.exec(
"UPDATE deploys SET is_active = 0 WHERE is_active = 1",
)
this.ctx.storage.sql.exec(
"UPDATE deploys SET is_active = 1 WHERE deploy_id = ?",
deployId,
)
}
async commitPublish(input: {
subdomain: string
roots: BuildRootsJSON
meta?: DeployRecord["meta"]
events: ChangeEvent[]
}): Promise<SpaceState> {
this.subdomain = input.subdomain
this.ensureSchema()
const deployId = crypto.randomUUID()
const now = new Date().toISOString()
const rootsJson = JSON.stringify(input.roots)
const metaJson = input.meta ? JSON.stringify(input.meta) : null
this.ctx.storage.sql.exec(
"INSERT INTO deploys (deploy_id, roots_json, meta_json, created_at) VALUES (?, ?, ?, ?)",
deployId,
rootsJson,
metaJson,
now,
)
this.activate(deployId)
const state: SpaceState = { deployId, roots: input.roots, updatedAt: now }
await this.writeToKV(state)
this.broadcast(input.events)
return state
}
async listDeploys(): Promise<DeployRecord[]> {
this.ensureSchema()
const rows = [
...this.ctx.storage.sql.exec(
"SELECT deploy_id, roots_json, meta_json, is_active, created_at FROM deploys ORDER BY created_at DESC",
),
]
return rows.map((row) => ({
deployId: row.deploy_id as string,
roots: JSON.parse(row.roots_json as string),
createdAt: row.created_at as string,
isActive: row.is_active === 1,
meta: row.meta_json ? JSON.parse(row.meta_json as string) : undefined,
}))
}
async rollbackDeploy(input: {
subdomain: string
deployId: string
events: ChangeEvent[]
}): Promise<SpaceState> {
// set subdomain so writeToKV targets the right key even on a cold DO
this.subdomain = input.subdomain
this.ensureSchema()
// .one() throws on zero rows; toArray() lets the not-found check below work
const [row] = this.ctx.storage.sql
.exec(
"SELECT roots_json FROM deploys WHERE deploy_id = ?",
input.deployId,
)
.toArray()
if (!row) throw new Error(`Deploy ${input.deployId} not found`)
this.activate(input.deployId)
const state: SpaceState = {
deployId: input.deployId,
roots: JSON.parse(row.roots_json as string),
updatedAt: new Date().toISOString(),
}
await this.writeToKV(state)
this.broadcast(input.events)
return state
}
// WebSocket (covered in phase 4)
private broadcast(events: ChangeEvent[]) {
for (const ws of this.ctx.getWebSockets()) {
for (const event of events) {
ws.send(JSON.stringify(event))
}
}
}
}
Wire handleServe in the Worker
The read path should be pure edge -- no DO RPC. The Worker reads SpaceState from KV (SPACE_KV) and uses it to build or retrieve a cached PathMXServer. The DO is only involved in write paths (publish, rollback) and WebSocket connections.
Add a KV namespace to wrangler.toml:
[[kv_namespaces]]
binding = "SPACE_KV"
id = "..." # wrangler kv namespace create SPACE_KV
Then wrangler types to regenerate Env.
In apps/server/src/worker.ts:
import { PathMXServer } from "@pathmx/server/shared"
import { R2Storage } from "./storage"
type SpaceState = {
deployId: string
roots: import("@pathmx/core/shared").BuildRootsJSON
updatedAt: string
}
// per-isolate cache: keyed by deployId
const serverCache = new Map<string, PathMXServer>()
async function handleServe(
request: Request,
env: Env,
subdomain: string,
): Promise<Response> {
const raw = await env.SPACE_KV.get(`space:${subdomain}`)
if (!raw) return new Response("Space not found", { status: 404 })
const state: SpaceState = JSON.parse(raw)
let server = serverCache.get(state.deployId)
if (!server) {
const storage = new R2Storage(env.BUCKET)
server = new PathMXServer({ storage, roots: state.roots })
await server.setup()
serverCache.set(state.deployId, server)
}
return server.fetch(request)
}
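The subdomain parsing the Worker entry needs before calling handleServe can live in a small pure helper. This is a sketch under the assumption that spaces sit exactly one label under path.app; parseSubdomain is an illustrative name, not an existing export:

```typescript
// Extract the space subdomain from a request's Host header.
// Returns null for the apex domain, unrelated hosts, and nested subdomains.
export function parseSubdomain(
  host: string,
  baseDomain = "path.app",
): string | null {
  const hostname = host.replace(/:\d+$/, "") // strip port (e.g. wrangler dev)
  if (!hostname.endsWith(`.${baseDomain}`)) return null
  const prefix = hostname.slice(0, -(baseDomain.length + 1))
  // exactly one label: "demo.path.app" -> "demo", "a.b.path.app" -> null
  if (prefix.length === 0 || prefix.includes(".")) return null
  return prefix
}
```

In the fetch handler this would gate the serve path: a null result falls through to the API routes or a 404.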
How this stays fast:
- KV read -- edge-cached globally. After first read at a PoP, subsequent reads are served from edge cache (~1ms).
- In-memory serverCache -- keyed by deployId. Once built, subsequent requests in the same isolate skip KV entirely (only content blob fetch from R2 remains).
- R2 -- content blobs are immutable and benefit from Cloudflare's built-in edge caching.
- No DO on the read path -- DOs are single-region. The DO only participates in publish/rollback (write path) and WebSocket connections.
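One caveat: serverCache is a plain Map keyed by deployId and never evicts, so a long-lived isolate accumulates servers for superseded deploys. A minimal LRU-style bound could cap it -- a sketch, with BoundedCache as a hypothetical helper name:

```typescript
// Bounded LRU-ish cache. Map preserves insertion order, so the first key
// is always the least recently used entry.
export class BoundedCache<V> {
  private map = new Map<string, V>()
  constructor(private maxSize: number) {}

  get(key: string): V | undefined {
    const value = this.map.get(key)
    if (value !== undefined) {
      // refresh recency: move the key to the end of the insertion order
      this.map.delete(key)
      this.map.set(key, value)
    }
    return value
  }

  set(key: string, value: V) {
    if (this.map.has(key)) this.map.delete(key)
    this.map.set(key, value)
    if (this.map.size > this.maxSize) {
      // evict the least recently used entry (first key in order)
      this.map.delete(this.map.keys().next().value!)
    }
  }
}
```

Dropping this in for the Map (e.g. `new BoundedCache<PathMXServer>(32)`) would keep at most a few dozen built servers per isolate.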
Consistency: KV is eventually consistent (~60s global propagation). After a publish, far-away PoPs may serve the old deploy briefly. This is fine for a deploy workflow. Connected WebSocket clients get instant updates via the DO.
Manual seeding test
Before building publish, verify the read path works by seeding data manually:
# build a site locally
pmx build path/to/site -o .pathmx
# upload blobs to R2 (each file by its hash from manifest.json)
wrangler r2 object put pathmx-storage/<hash> --file .pathmx/<root-slug>/<path> --local
# seed SpaceDO via a quick test script or curl to a temporary /api/seed endpoint
# that calls stub.commitPublish({ roots: <roots.json contents>, events: [] })
# then visit http://localhost:8787 with wrangler dev
wrangler dev
3. Publish path
- implement POST /api/publish/begin (returns missing hashes + signed R2 upload URLs)
- implement POST /api/publish/commit (forward to SpaceDO.commitPublish)
- validate auth (API key via env var)
- add pmx publish CLI command
Publish API in the Worker
The publish flow returns upload URLs from /api/publish/begin so the CLI only uploads blobs the server is missing. True presigned R2 URLs would let the CLI bypass the Worker entirely (avoiding Worker body size limits and reducing latency); the first pass streams uploads through a thin Worker endpoint instead.
Add a PUBLISH_SECRET env var in wrangler.toml:
[vars]
PUBLISH_SECRET = "dev-secret"
Run wrangler types to regenerate Env after adding it.
In apps/server/src/publish.ts:
function unauthorized(): Response {
return new Response("Unauthorized", { status: 401 })
}
function checkAuth(request: Request, env: Env): boolean {
const token = request.headers.get("Authorization")?.replace("Bearer ", "")
return token === env.PUBLISH_SECRET
}
export async function handlePublishBegin(
request: Request,
env: Env,
): Promise<Response> {
if (!checkAuth(request, env)) return unauthorized()
const body = await request.json<{ subdomain: string; hashes: string[] }>()
// check which hashes are missing from R2 in parallel
const results = await Promise.all(
body.hashes.map(async (hash) => ({
hash,
exists: (await env.BUCKET.head(hash)) !== null,
})),
)
const missing = results.filter((r) => !r.exists)
// True presigned URLs would need R2's S3-compatible endpoint
// (https://<account-id>.r2.cloudflarestorage.com/<bucket-name>) plus
// aws4-style request signing. For now, point the CLI at a thin Worker
// endpoint that streams each blob into R2 -- no S3 credentials on the
// client. Revisit presigned URLs when upload volume warrants it.
const uploads = missing.map(({ hash }) => ({
hash,
uploadUrl: `/api/publish/upload?hash=${hash}`,
}))
return Response.json({ missing: uploads })
}
export async function handlePublishUpload(
request: Request,
env: Env,
): Promise<Response> {
if (!checkAuth(request, env)) return unauthorized()
const hash = new URL(request.url).searchParams.get("hash")
if (!hash) {
return new Response("Missing hash param", { status: 400 })
}
await env.BUCKET.put(hash, request.body)
return new Response("OK", { status: 200 })
}
export async function handlePublishCommit(
request: Request,
env: Env,
): Promise<Response> {
if (!checkAuth(request, env)) return unauthorized()
const body = await request.json<{
subdomain: string
roots: import("@pathmx/core/shared").BuildRootsJSON
meta?: { commit?: string; branch?: string }
events: import("@pathmx/core/shared").ChangeEvent[]
}>()
const doId = env.SPACE_DO.idFromName(body.subdomain)
const stub = env.SPACE_DO.get(doId)
const state = await stub.commitPublish({
subdomain: body.subdomain,
roots: body.roots,
meta: body.meta,
events: body.events,
})
return Response.json({
url: `https://${body.subdomain}.path.app`,
deployId: state.deployId,
})
}
Wire into the Worker's API routing in worker.ts:
import {
handlePublishBegin,
handlePublishUpload,
handlePublishCommit,
} from "./publish"
// inside the fetch handler, in the /api branch:
if (url.pathname === "/api/publish/begin" && request.method === "POST") {
return handlePublishBegin(request, env)
}
if (url.pathname === "/api/publish/upload" && request.method === "POST") {
return handlePublishUpload(request, env)
}
if (url.pathname === "/api/publish/commit" && request.method === "POST") {
return handlePublishCommit(request, env)
}
CLI pmx publish command
In packages/cli/src/commands/publish.ts. Follows the same registerXCommand pattern as build.ts.
The CLI reads directly from the build output directory -- no Bundler step needed. The build already produces roots.json, manifest.json, and all content-addressed artifacts.
import type { Command } from "commander"
import { ensureBuild, resolveRootPath } from "./utils"
import path from "path"
type PublishOptions = {
outDir: string
subdomain: string
server: string
}
async function publishCommand(rootPath: string, options: PublishOptions) {
const { outDir, subdomain, server } = options
const token = process.env.PATHMX_PUBLISH_TOKEN
if (!token) {
console.error("PATHMX_PUBLISH_TOKEN env var is required")
process.exit(1)
}
// 1. build
console.log(`[pathmx] building ${rootPath}`)
const build = await ensureBuild(rootPath, {
outDir,
clean: false,
perf: false,
})
// 2. read roots + manifest from build output
const buildDir = build.rootOutDir
const roots = build.roots.toJSON()
const manifest = build.manifest.toJSON()
// 3. collect all content hashes
const allHashes: string[] = Object.values(manifest.files).map((f) => f.hash)
// include routes + manifest blob hashes from roots
for (const root of Object.values(roots.roots)) {
allHashes.push(root.routes, root.manifest)
}
// 4. POST /api/publish/begin -- get missing hashes + upload URLs
console.log(`[pathmx] checking ${allHashes.length} hashes`)
const beginRes = await fetch(`${server}/api/publish/begin`, {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${token}`,
},
body: JSON.stringify({ subdomain, hashes: allHashes }),
})
if (!beginRes.ok) {
console.error(`[pathmx] publish/begin failed: ${beginRes.status}`)
process.exit(1)
}
const { missing } = await beginRes.json<{
missing: { hash: string; uploadUrl: string }[]
}>()
console.log(
`[pathmx] uploading ${missing.length} new blobs (${allHashes.length - missing.length} cached)`,
)
// 5. upload missing blobs
// build a hash -> file path lookup from manifest
const hashToPath = new Map<string, string>()
for (const [filePath, entry] of Object.entries(manifest.files)) {
if (!hashToPath.has(entry.hash)) {
hashToPath.set(entry.hash, path.join(buildDir, filePath))
}
}
for (const { hash, uploadUrl } of missing) {
const filePath = hashToPath.get(hash)
if (!filePath) {
console.warn(`[pathmx] no file found for hash ${hash}, skipping`)
continue
}
const blob = Bun.file(filePath)
const url = uploadUrl.startsWith("http")
? uploadUrl
: `${server}${uploadUrl}`
const res = await fetch(url, {
method: "POST",
headers: { Authorization: `Bearer ${token}` },
body: blob,
})
if (!res.ok) {
console.error(`[pathmx] upload of ${hash} failed: ${res.status}`)
process.exit(1)
}
}
// 6. POST /api/publish/commit
const commitRes = await fetch(`${server}/api/publish/commit`, {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${token}`,
},
body: JSON.stringify({
subdomain,
roots,
events: [
{ type: "artifacts-changed", paths: Object.keys(manifest.files) },
],
}),
})
const result = await commitRes.json<{ url: string; deployId: string }>()
console.log(`[pathmx] published: ${result.url} (deploy: ${result.deployId})`)
}
export function registerPublishCommand(program: Command) {
program
.command("publish [rootPath]")
.requiredOption("-s, --subdomain <name>", "Target subdomain (e.g. my-site)")
.option("-o, --outdir <dir>", "Output directory", ".pathmx")
.option("--server <url>", "Server URL", "https://pathmx.path.app")
.description("Publish the project to path.app")
.action((rootPath, options) => {
rootPath = resolveRootPath(rootPath)
return publishCommand(rootPath, {
outDir: options.outdir,
subdomain: options.subdomain,
server: options.server,
})
})
}
Register in packages/cli/src/pmx.ts:
import { registerPublishCommand } from "./commands/publish"
// ...
registerPublishCommand(program)
4. Live updates
- implement /_events WebSocket upgrade in SpaceDO
- route WebSocket connections to SpaceDO(subdomain)
- broadcast events on publish/rollback
- verify open clients update correctly after publish
WebSocket handling in SpaceDO
Uses the Durable Object WebSocket Hibernation API so sockets survive DO hibernation.
Add to SpaceDO in space-do.ts:
// handle incoming fetch for /_events
async fetch(request: Request): Promise<Response> {
const url = new URL(request.url)
if (url.pathname === "/_events") {
// reject plain HTTP requests to the WebSocket endpoint
if (request.headers.get("Upgrade") !== "websocket") {
return new Response("Expected WebSocket upgrade", { status: 426 })
}
const pair = new WebSocketPair()
// accept via the hibernation API so sockets survive DO eviction
this.ctx.acceptWebSocket(pair[1])
return new Response(null, { status: 101, webSocket: pair[0] })
}
return new Response("Not found", { status: 404 })
}
// hibernation callbacks
async webSocketMessage(ws: WebSocket, message: string) {
// no client->server messages needed yet
}
async webSocketClose(ws: WebSocket) {
// cleanup handled automatically
}
The broadcast method (already shown in phase 2) iterates this.ctx.getWebSockets() and sends each event as JSON. This is already called from commitPublish and rollbackDeploy.
The /_events routing is already scaffolded in worker.ts -- it forwards the request to stub.fetch(request) which hits the fetch method above.
ChangeEvents are computed at publish time. A simple first pass: emit a full artifacts-changed event with all paths. This can be refined later to diff old vs new manifests.
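That refinement -- diffing old vs new manifests -- could look like the sketch below. The ManifestFiles shape is assumed from how the CLI reads manifest.files; only changed, added, or removed paths would go into the artifacts-changed event:

```typescript
type ManifestFiles = Record<string, { hash: string }>

// Paths that differ between two manifests: added, re-hashed, or removed.
export function diffManifests(
  oldFiles: ManifestFiles,
  newFiles: ManifestFiles,
): string[] {
  const changed: string[] = []
  for (const [path, entry] of Object.entries(newFiles)) {
    const prev = oldFiles[path]
    // added or edited files
    if (!prev || prev.hash !== entry.hash) changed.push(path)
  }
  for (const path of Object.keys(oldFiles)) {
    // deleted files
    if (!(path in newFiles)) changed.push(path)
  }
  return changed
}
```

The CLI would then send `events: [{ type: "artifacts-changed", paths: diffManifests(oldFiles, newFiles) }]`, which requires fetching the previously published manifest first.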
5. History and rollback
- implement GET /api/deploys
- implement POST /api/publish/rollback
- add pmx publish --rollback
- verify rollback updates open clients correctly
Deploy list endpoint
In publish.ts (or a new deploys.ts):
export async function handleListDeploys(
request: Request,
env: Env,
): Promise<Response> {
const subdomain = new URL(request.url).searchParams.get("subdomain")
if (!subdomain) {
return new Response("Missing subdomain param", { status: 400 })
}
const doId = env.SPACE_DO.idFromName(subdomain)
const stub = env.SPACE_DO.get(doId)
const deploys = await stub.listDeploys()
return Response.json(deploys)
}
Wire in the Worker:
if (url.pathname === "/api/deploys" && request.method === "GET") {
return handleListDeploys(request, env)
}
Rollback endpoint
export async function handlePublishRollback(
request: Request,
env: Env,
): Promise<Response> {
if (!checkAuth(request, env)) return unauthorized()
const body = await request.json<{
subdomain: string
deployId: string
events: import("@pathmx/core/shared").ChangeEvent[]
}>()
const doId = env.SPACE_DO.idFromName(body.subdomain)
const stub = env.SPACE_DO.get(doId)
const state = await stub.rollbackDeploy({
subdomain: body.subdomain,
deployId: body.deployId,
events: body.events,
})
return Response.json({
url: `https://${body.subdomain}.path.app`,
deployId: state.deployId,
})
}
SpaceDO.rollbackDeploy() is already defined in the phase 2 SpaceDO code above.
CLI --rollback flag
Add to the publish command in packages/cli/src/commands/publish.ts:
program
.command("publish [rootPath]")
.requiredOption("-s, --subdomain <name>", "Target subdomain")
.option("--rollback <deployId>", "Rollback to a previous deploy")
// ...existing options...
.action((rootPath, options) => {
if (options.rollback) {
return rollbackCommand(
options.subdomain,
options.rollback,
options.server,
)
}
// ...normal publish...
})
async function rollbackCommand(
subdomain: string,
deployId: string,
server: string,
) {
const token = process.env.PATHMX_PUBLISH_TOKEN
if (!token) {
console.error("PATHMX_PUBLISH_TOKEN required")
process.exit(1)
}
const res = await fetch(`${server}/api/publish/rollback`, {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${token}`,
},
body: JSON.stringify({
subdomain,
deployId,
events: [{ type: "artifacts-changed", paths: [] }],
}),
})
const result = await res.json<{ url: string; deployId: string }>()
console.log(`[pathmx] rolled back to ${result.deployId}: ${result.url}`)
}
6. Production hardening
- apply final cache headers for stable routes vs hashed assets
- test repeated publish on the same route without stale HTML
- test rollback on the same route without stale HTML
- test multiple subdomains
- add request and publish logging
- enable wildcard routing after staged validation
Cache headers
In resolveRequest (or the Worker's response layer), set headers based on content type:
- HTML pages (stable URLs like /lecture-1): Cache-Control: public, max-age=0, must-revalidate + ETag: "<hash>"
- Hashed assets (CSS/JS/images with hash in manifest): Cache-Control: public, max-age=31536000, immutable
The resolveRequest function already accepts a cacheControl param. In the Worker, determine which to use based on content type or whether the URL matches a manifest file with a hash in the path.
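The decision can be a small pure helper fed by the manifest. A sketch -- hashedAssetPaths and the helper name are illustrative assumptions, and the ETag would still be set separately from the content hash:

```typescript
// Pick the Cache-Control value for a resolved URL. `hashedAssetPaths`
// would be built from the manifest: the set of paths whose URL embeds a
// content hash and is therefore safe to cache forever.
export function cacheControlFor(
  pathname: string,
  hashedAssetPaths: Set<string>,
): string {
  if (hashedAssetPaths.has(pathname)) {
    // content-addressed asset: new content always gets a new URL
    return "public, max-age=31536000, immutable"
  }
  // stable route (HTML): always revalidate; rely on ETag for cheap 304s
  return "public, max-age=0, must-revalidate"
}
```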
Wildcard routing
When ready for production, update wrangler.toml:
[[routes]]
pattern = "*.path.app"
zone_name = "path.app"
Validation
- one site serves from R2 through the Worker
- a second site serves from the same Worker with a different SpaceDO
- publish uploads only missing blobs
- publish creates a new deploy record
- publish activates the new deploy
- publish emits WebSocket events to the correct site only
- open clients update correctly after publish
- rollback activates an older deploy without rebuild or upload
- rollback emits WebSocket events to the correct site only
- stable routes do not serve stale HTML after publish or rollback