Your first API in Rust with Axum and SQLx

#Rust #Backend #API #Axum #SQLx #PostgreSQL

Rust is a strong fit for backend services: zero-cost abstractions, memory safety without a garbage collector, and a type system that catches bugs at compile time. This post walks through building a small REST API in Rust using Axum for HTTP and SQLx for talking to PostgreSQL. You can swap Axum for Actix-web if you prefer; the patterns are similar.

Prerequisites

  • Rust toolchain (rustup, stable)
  • PostgreSQL running locally or in a container
  • Basic familiarity with Rust (ownership, Result, Option)

Project setup

Create a new binary crate and add the dependencies:

bash
cargo new rust-api --bin
cd rust-api

Add to Cargo.toml:

toml
[dependencies]
axum = { version = "0.7", features = ["json"] }
tokio = { version = "1", features = ["full"] }
sqlx = { version = "0.7", features = ["runtime-tokio", "postgres"] }
serde = { version = "1", features = ["derive"] }
tower-http = { version = "0.5", features = ["cors"] }

Axum gives you routing and extractors. SQLx provides compile-time checked queries and a connection pool. Tokio is the async runtime. Serde is for JSON. Tower-http is for CORS if you call the API from a browser.

Project structure

Keep the API small but structured from the start:

text
src/
├── main.rs
├── config.rs
├── db.rs
├── handlers/
│   └── mod.rs
├── models.rs
└── routes.rs
  • config: load host, port, database URL from env.
  • db: create the SQLx pool and run migrations.
  • handlers: HTTP handlers that return JSON.
  • models: shared structs and Serde (de)serialization.
  • routes: wire paths to handlers.

Configuration

Load settings from the environment so you can change them per environment without recompiling:

rust
// config.rs
use std::env;

pub struct Config {
    pub host: String,
    pub port: u16,
    pub database_url: String,
}

impl Config {
    pub fn from_env() -> Result<Self, env::VarError> {
        Ok(Config {
            host: env::var("HOST").unwrap_or_else(|_| "0.0.0.0".to_string()),
            port: env::var("PORT")
                .unwrap_or_else(|_| "3000".to_string())
                .parse()
                .unwrap_or(3000),
            // Propagate a missing DATABASE_URL as an error instead of panicking,
            // so main can report it through its Result.
            database_url: env::var("DATABASE_URL")?,
        })
    }
}

Require DATABASE_URL; allow defaults for host and port.
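The port fallback above is easy to get subtly wrong (a bad value should fall back, not panic). A crate-free sketch of just that parsing step, runnable on its own; `port_from_env` is a hypothetical helper extracted for illustration:

```rust
use std::env;

// Mirrors the fallback chain in from_env: read PORT, default to 3000
// if the variable is unset or not a valid u16.
fn port_from_env() -> u16 {
    env::var("PORT")
        .unwrap_or_else(|_| "3000".to_string())
        .parse()
        .unwrap_or(3000)
}

fn main() {
    // Unset: the default applies.
    env::remove_var("PORT");
    assert_eq!(port_from_env(), 3000);

    // Garbage value: falls back rather than panicking.
    env::set_var("PORT", "not-a-number");
    assert_eq!(port_from_env(), 3000);

    // Valid value: used as-is.
    env::set_var("PORT", "8080");
    assert_eq!(port_from_env(), 8080);
}
```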

Database layer

Create a single pool per process and clone it into each handler; PgPool is internally reference-counted, so clones are cheap. Migrations are run separately below:

rust
// db.rs
use sqlx::postgres::PgPoolOptions;
use sqlx::PgPool;

pub async fn create_pool(database_url: &str) -> Result<PgPool, sqlx::Error> {
    PgPoolOptions::new()
        .max_connections(5)
        .connect(database_url)
        .await
}

Run migrations with SQLx CLI in development:

bash
cargo install sqlx-cli
export DATABASE_URL=postgres://user:pass@localhost/dbname
sqlx migrate add create_items

Then add SQL in migrations/ and run sqlx migrate run before starting the app. The pool is created once in main and passed into the router state.
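The handlers below read and write an items table with id, name, and created_at. A minimal migration for it might look like this; the column types are assumptions chosen to match the model struct, and the filename will include a timestamp generated by sqlx-cli:

```sql
-- migrations/<timestamp>_create_items.sql
CREATE TABLE items (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    created_at TIMESTAMPTZ DEFAULT now()
);
```

created_at is left nullable here to match the Option in the Item struct; make it NOT NULL if you prefer and drop the Option.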

Models

Define structs that match your tables and derive Serde for JSON:

rust
// models.rs
use serde::{Deserialize, Serialize};

#[derive(Debug, sqlx::FromRow, Serialize)]
pub struct Item {
    pub id: i32,
    pub name: String,
    pub created_at: Option<chrono::DateTime<chrono::Utc>>,
}

#[derive(Debug, Deserialize)]
pub struct CreateItem {
    pub name: String,
}

Use FromRow for mapping rows to Item. Use Deserialize for request bodies. Add chrono to Cargo.toml if you use timestamps.
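Concretely, the created_at field needs two feature flags that the earlier Cargo.toml does not include. A sketch of the additions (version numbers are assumptions):

```toml
[dependencies]
# chrono's "serde" feature lets Serialize handle DateTime<Utc>.
chrono = { version = "0.4", features = ["serde"] }
# sqlx's "chrono" feature lets FromRow decode TIMESTAMPTZ columns.
sqlx = { version = "0.7", features = ["runtime-tokio", "postgres", "chrono"] }
```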

Handlers

Handlers are async functions that take extractors and return responses. Use State to get the pool and Json for bodies:

rust
// handlers/mod.rs
use axum::{
    extract::{Path, State},
    http::StatusCode,
    Json,
};
use sqlx::PgPool;
use crate::models::{CreateItem, Item};

pub async fn list_items(State(pool): State<PgPool>) -> Result<Json<Vec<Item>>, StatusCode> {
    let items = sqlx::query_as::<_, Item>("SELECT id, name, created_at FROM items ORDER BY id")
        .fetch_all(&pool)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    Ok(Json(items))
}

pub async fn get_item(
    State(pool): State<PgPool>,
    Path(id): Path<i32>,
) -> Result<Json<Item>, StatusCode> {
    let item = sqlx::query_as::<_, Item>("SELECT id, name, created_at FROM items WHERE id = $1")
        .bind(id)
        .fetch_optional(&pool)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
        .ok_or(StatusCode::NOT_FOUND)?;
    Ok(Json(item))
}

pub async fn create_item(
    State(pool): State<PgPool>,
    Json(body): Json<CreateItem>,
) -> Result<(StatusCode, Json<Item>), StatusCode> {
    let item = sqlx::query_as::<_, Item>(
        "INSERT INTO items (name) VALUES ($1) RETURNING id, name, created_at",
    )
    .bind(&body.name)
    .fetch_one(&pool)
    .await
    .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    Ok((StatusCode::CREATED, Json(item)))
}

Return appropriate status codes. Prefer fetch_optional for single rows and map None to 404.

Routes and app state

Attach handlers to paths and share the pool via State:

rust
// routes.rs
use axum::{
    routing::{get, post},
    Router,
};
use sqlx::PgPool;
use crate::handlers;

pub fn router(pool: PgPool) -> Router {
    Router::new()
        .route("/items", get(handlers::list_items).post(handlers::create_item))
        .route("/items/:id", get(handlers::get_item))
        .with_state(pool)
}

Compose routers with Router::merge if you split routes by domain, or mount a sub-router under a path prefix with Router::nest.

Main

Tie config, pool, and router together. Run migrations at startup or as a separate deploy step; here we assume they are already applied:

rust
// main.rs
mod config;
mod db;
mod handlers;
mod models;
mod routes;

use std::net::SocketAddr;
use tower_http::cors::CorsLayer;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = config::Config::from_env()?;
    let pool = db::create_pool(&config.database_url).await?;

    let app = routes::router(pool).layer(CorsLayer::permissive());

    // Bind to the configured host and port rather than hardcoding the address.
    let addr: SocketAddr = format!("{}:{}", config.host, config.port).parse()?;
    axum::serve(tokio::net::TcpListener::bind(addr).await?, app).await?;

    Ok(())
}

Use a stricter CORS policy in production. Bind to 0.0.0.0 only if you need to accept external connections.

Running the API

Set DATABASE_URL and run:

bash
export DATABASE_URL=postgres://user:password@localhost:5432/rust_api
cargo run

Hit the endpoints:

bash
curl -X POST http://localhost:3000/items -H "Content-Type: application/json" -d '{"name":"First item"}'
curl http://localhost:3000/items
curl http://localhost:3000/items/1

Actix-web alternative

If you prefer Actix-web, the flow is the same: define app state with the pool, register routes, and run the server. Actix uses different extractors and response types, but the idea of shared state, handlers, and a central router is the same. Axum tends to have a smaller API surface and composes well with the Tower ecosystem; Actix has a long history and a large set of examples. Both are production-ready.

Next steps

  • Add validation for request bodies (e.g. with validator crate).
  • Use a proper error type and Result all the way so you can return consistent error JSON.
  • Add logging (tracing) and health checks (e.g. /health that pings the DB).
  • Run SQLx in offline mode in CI so builds do not require a live database.
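On the error-type bullet: the core idea is a single place that turns every failure into a status code, so handlers stop sprinkling map_err everywhere. A crate-free sketch of that mapping (the enum and its variants are hypothetical; in the real app this would be an AppError implementing Axum's IntoResponse):

```rust
// One shared error type for the whole API.
#[derive(Debug)]
enum AppError {
    NotFound,
    BadRequest(String),
    Database(String),
}

// The single match every handler reuses, instead of ad hoc map_err calls.
fn status_code(err: &AppError) -> u16 {
    match err {
        AppError::NotFound => 404,
        AppError::BadRequest(_) => 400,
        AppError::Database(_) => 500,
    }
}

fn main() {
    assert_eq!(status_code(&AppError::NotFound), 404);
    assert_eq!(status_code(&AppError::BadRequest("name is empty".into())), 400);
    assert_eq!(status_code(&AppError::Database("pool timeout".into())), 500);
}
```

With IntoResponse implemented for the real type, handlers can return Result<Json<Item>, AppError> and the mapping happens once.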

You now have a minimal but structured Rust API with Axum and SQLx: one pool, clear handlers, and room to grow.