Deploying a Claude Agent SDK Agent to Railway

Package your Slack agent in Docker, configure persistent volumes so sessions survive redeploys, push to GitHub, and get your agent running 24/7 in the cloud with Railway.

Why local is not enough

Running your agent on a local server means it only works when your laptop is open and your terminal is running. The moment you close the lid, the agent goes offline.

Railway changes that. It is a cloud hosting platform built for server-side applications — lightweight, fast to deploy, and priced to match the low compute requirements of an Agent SDK server. The Slack agent you build with this SDK uses almost no CPU at idle. A free Railway account covers it. Even the hobby plan is more than enough for a team deployment.

> On Railway vs other platforms: If you have used Vercel, think of Railway as Vercel for servers rather than frontend apps. Same simplicity, different use case. Open Claude or OpenCode is also commonly deployed to Railway for the same reasons.

Two prerequisites for cloud deployment

Before writing a single line of deployment code, two things need to be in place:

An Anthropic API key. Your Claude.ai subscription works for local development but not for cloud deployments. Go to console.anthropic.com, create an API key, and keep it ready. Treat it like a password — it controls your API usage and billing.

A GitHub account with your project repo. Railway deploys from GitHub. You will push your code to a repository and Railway will pull from it automatically on every commit.
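Because a cloud deployment fails silently when a credential is absent, it can help to fail fast at startup. Below is a minimal sketch of such a check; the variable names are the ones this tutorial uses, and `missingEnvVars` is a hypothetical helper, not part of the SDK:

```typescript
// Names assumed from this tutorial's setup; adjust to match your .env.
const REQUIRED_VARS = [
  "ANTHROPIC_API_KEY",
  "SLACK_BOT_TOKEN",
  "SLACK_APP_TOKEN",
];

// Returns the names that are unset or empty in the given environment.
function missingEnvVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED_VARS.filter((name) => !env[name]);
}

// At the top of bot.ts you could fail fast:
const missing = missingEnvVars(process.env);
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(", ")}`);
}
```

A check like this turns a cryptic mid-request failure into a one-line log message you can act on immediately.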

Adding a Dockerfile

Railway needs to know how to package and run your code. You tell it through a Dockerfile at the root of your project.

Create Dockerfile (capital D) at the root level:

FROM oven/bun:latest

WORKDIR /app

# Install git and adjust user permissions for Agent SDK
RUN apt-get update && apt-get install -y git && \
    groupmod -g 2000 bun && \
    usermod -u 2000 bun

# Copy dependency files and install
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile

# Copy source code and the entrypoint script (the CMD below needs it inside the image)
COPY src/ ./src/
COPY entrypoint.sh ./

# Switch to root for bypass permissions mode in cloud
USER root

# Entry point
CMD ["/bin/sh", "entrypoint.sh"]

The user permission changes are specific to the Agent SDK. The SDK prevents certain commands from running as root by default — a safety measure. These steps create a user configuration that lets bypass permissions mode work correctly in the cloud environment.

Adding an entrypoint script

Create entrypoint.sh at the root:

#!/bin/sh

# Ensure the persistent volume directories exist
mkdir -p /data/workspace
mkdir -p /data/sessions

# Start the bot server
bun src/bot.ts

This script runs when the Docker container starts. It creates the directories your persistent volume will use, then launches your server.

Configuring for persistent volumes

Persistent volumes survive between deployments. Without one, every time you push new code, your session history and any files the agent created are deleted.

Two changes to your code are needed before deploying:

In agent.ts, add a cwd option that reads from an environment variable:

const sessionCwd = process.env.SESSION_CWD;

for await (const message of query({
  prompt: messages(),
  options: {
    model: "claude-haiku-4-5",
    systemPrompt: SYSTEM_PROMPT,
    permissionMode: "bypassPermissions",
    dangerouslyAllowBypassPermissions: true,
    cwd: sessionCwd,
    ...(sessionId ? { resume: sessionId } : {}),
  },
})) {
  // handle each streamed message
}

In bot.ts, update your session store initialization to use an environment variable path:

const sessionStore = new SessionStore(
  process.env.SESSION_DB_PATH ?? "sessions.db"
);
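The two changes follow the same pattern: read a path from the environment, fall back to a local default. One way to keep that logic in a single place is a small helper like the sketch below. The `resolveStoragePaths` name and the `StoragePaths` shape are assumptions for illustration; the variable names come from this tutorial:

```typescript
// Sketch: resolve storage paths once, using local-dev defaults
// whenever the Railway variables are not set.
interface StoragePaths {
  sessionDb: string; // SQLite file for session history
  workspace: string; // working directory handed to the agent via cwd
}

function resolveStoragePaths(
  env: Record<string, string | undefined>
): StoragePaths {
  return {
    sessionDb: env.SESSION_DB_PATH ?? "sessions.db",
    workspace: env.SESSION_CWD ?? process.cwd(),
  };
}
```

On Railway, with `SESSION_DB_PATH=/data/sessions.db` and `SESSION_CWD=/data/workspace`, both paths land on the persistent volume; locally, the fallbacks preserve the behavior you already have.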

Setting up your GitHub repository

git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/your-username/your-repo.git
git push -u origin main

Before committing, verify your .gitignore includes:

.env
node_modules/
*.db

Never commit your .env file. If your Anthropic API key ends up in a public repository, rotate it immediately.

Deploying on Railway

1. Go to railway.com and create an account if you do not have one.
2. Click New Project → Deploy from GitHub repo.
3. Select your repository. Railway detects the Dockerfile automatically and starts building.
4. The first build will fail — that is expected. The environment variables are not set yet.
5. Go to Variables in your project and add:
   - ANTHROPIC_API_KEY — your Anthropic key
   - SLACK_BOT_TOKEN — your Slack bot token
   - SLACK_APP_TOKEN — your Slack app token
   - SESSION_DB_PATH — /data/sessions.db
   - SESSION_CWD — /data/workspace
6. Click Deploy to trigger a rebuild with the variables in place.

Adding the persistent volume

After the deployment succeeds:

1. Click + Add in your Railway project dashboard.
2. Select Volume.
3. Attach it to your service.
4. Set the mount path to /data.
5. Click Deploy again.

The /data directory is now persistent. Session history, files your agent creates, and the SQLite database all survive between deployments.
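If you want direct evidence that the volume persists, a small boot log works well. The sketch below is a hypothetical addition to your server startup, and the `/data/boot.log` path is an assumption based on the mount path above:

```typescript
// Sketch: append a timestamp to a file on the volume at every boot.
// If the volume is mounted correctly, the list keeps growing across
// redeploys; if it resets to a single entry, persistence is broken.
import { appendFileSync, readFileSync } from "node:fs";

function recordBoot(logPath: string): string[] {
  appendFileSync(logPath, `${new Date().toISOString()}\n`);
  return readFileSync(logPath, "utf8").trim().split("\n");
}
```

Calling `console.log(recordBoot("/data/boot.log"))` at startup prints every boot timestamp so far; after your next redeploy, the earlier entries should still be in the list.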

Verifying the deployment

Open Slack and send a message to your agent. You should see a response. If it fails, click into the deployment in Railway and check the Deploy Logs for error messages. The most common cause is a missing or incorrectly formatted environment variable.
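A Socket Mode Slack bot does not expose an HTTP port, which makes "is the container actually up?" harder to answer than for a web app. One option, sketched below as a hypothetical addition to bot.ts rather than part of this tutorial's code, is a tiny health endpoint; `node:http` also runs under Bun:

```typescript
// Sketch: an optional /health endpoint you can open in a browser
// (or curl) to confirm the container is alive.
import { createServer, type Server } from "node:http";

function healthResponse(): { status: string; uptime: number } {
  return { status: "ok", uptime: process.uptime() };
}

function startHealthServer(port: number): Server {
  const server = createServer((req, res) => {
    if (req.url === "/health") {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(healthResponse()));
    } else {
      res.writeHead(404);
      res.end();
    }
  });
  return server.listen(port);
}
```

Starting it with `startHealthServer(Number(process.env.PORT ?? 3000))` lets you rule out "the container never started" before digging through Deploy Logs.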

> Production reality check: This setup — Dockerfile, persistent volume, environment variables, GitHub-triggered deploys — is the same infrastructure pattern used for production agent deployments. The agent you just deployed is not a prototype. It runs 24 hours a day, handles requests from wherever you have Slack access, and maintains conversation history across sessions. That is a real deployed agent.

---

Author: FractionalSkill
