
How I Built a Real-Time Trading Starter Kit with Django, React, and Free Market Data

11 min read
django react alpaca quant opensource trading


It started as a small weekend idea:
“What if I could stream real market data locally without paying a dime?”

That was the whole spark. No business plan. No big architecture diagram.
Just curiosity — and a healthy frustration with tutorials that call themselves “real-time trading apps” but don’t even have a WebSocket.

I wanted to build something that felt real, not flashy.
Something that would make me understand how market data actually moves,
and maybe even teach me how far a single laptop can go before you truly need a cloud.


The itch

Every developer hits this point: you want to explore live data,
but you don’t want to build a full hedge-fund pipeline just to get a chart moving.

When I looked around, most “examples” fell into two extremes:
either 5,000-line quant backtesting engines that needed AWS credits and Kafka,
or cute front-ends that faked the price updates with setInterval.

I didn’t want either.
I wanted a system I could reason about.

That’s how I found Alpaca.
Their free API lets you:

  • open one WebSocket connection,
  • subscribe to up to 30 live instruments, and
  • pull 5 years of historical 1-minute candles.

That’s enough to build something meaningful.
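To give you a feel for the feed, here’s roughly what subscribing looks like in raw form. This is a sketch using the Python websockets library against the v2 IEX endpoint, not the repo’s actual code, and the environment variable names are placeholders:

    import asyncio
    import json
    import os

    import websockets

    # Free-tier market data comes from the IEX feed on Alpaca's v2 stream.
    STREAM_URL = "wss://stream.data.alpaca.markets/v2/iex"

    async def stream_bars(symbols):
        async with websockets.connect(STREAM_URL) as ws:
            # Authenticate with your API key pair.
            await ws.send(json.dumps({
                "action": "auth",
                "key": os.environ["ALPACA_API_KEY"],
                "secret": os.environ["ALPACA_API_SECRET"],
            }))
            # Subscribe to per-minute bars (the free plan allows up to 30 symbols).
            await ws.send(json.dumps({"action": "subscribe", "bars": symbols}))
            async for raw in ws:
                for msg in json.loads(raw):
                    if msg.get("T") == "b":  # bar message
                        print(msg["S"], msg["c"])  # symbol, close price

    asyncio.run(stream_bars(["AAPL", "TSLA"]))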

So I decided:
I’ll make a stack that connects to Alpaca, fetches historical data,
streams live ticks, and displays real-time charts — the right way.
And it should run locally, end-to-end, on one command.


Choosing the stack

I didn’t want to overthink the tech choices.
I already knew my way around Django, so that became the backbone.
But I also wanted it to feel modern — asynchronous, reactive, dev-friendly.

Here’s what I picked and why:

  • Django + DRF + Channels — a comfortable base that can talk HTTP and WebSocket in the same project. Channels gives you async without breaking Django’s soul.
  • Celery + Redis — because backfills and candle computations don’t belong in request-response land.
  • Postgres — durable, boring, reliable. The best kind of database.
  • React + Vite — quick reload, instant feedback.
  • NX — to keep the frontend and backend under one roof, so I don’t lose my mind switching folders.
  • Docker Compose — because I didn’t want “works on my machine” to ever cross my mind again.

The goal was simple:

I should be able to run one command — npm run dev — and have everything come alive.


The first “it works”

Getting Alpaca connected took an evening.
Once I saw those tick messages flowing through my terminal, I felt that stupid grin every engineer gets when a socket connection stays alive longer than 5 seconds.

The first prototype was crude.
Django received the ticks, stored them in memory, and broadcast them to the frontend.
There was no persistence, no backfill, no candle logic.
But I had a moving price.
That’s the moment any idea becomes addictive — when it moves.
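The fan-out side of that prototype is surprisingly small. Here’s a minimal sketch of the idea with Django Channels; the names are illustrative, not the project’s real consumers:

    # consumers.py: every connected browser joins a "ticks" group.
    from asgiref.sync import async_to_sync
    from channels.generic.websocket import AsyncJsonWebsocketConsumer
    from channels.layers import get_channel_layer

    class TickConsumer(AsyncJsonWebsocketConsumer):
        async def connect(self):
            await self.channel_layer.group_add("ticks", self.channel_name)
            await self.accept()

        async def disconnect(self, code):
            await self.channel_layer.group_discard("ticks", self.channel_name)

        async def tick_message(self, event):
            # Runs for every group message sent with type "tick.message".
            await self.send_json(event["data"])

    # Called from the streamer whenever a tick arrives:
    def broadcast_tick(symbol, price):
        layer = get_channel_layer()
        async_to_sync(layer.group_send)(
            "ticks", {"type": "tick.message", "data": {"symbol": symbol, "price": price}}
        )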

So I built a small React watchlist page, just to see it in the browser.
When the first candle updated in real time, I realized:
okay, this is going to consume my next few weeks.


The first big mess (a.k.a. the history problem)

The first real challenge hit when I added a new symbol mid-session.
Alpaca’s WebSocket gave me ticks for it instantly,
but I had no history for that symbol in my database.

My chart looked like someone started drawing a line halfway across the screen.
The solution sounded simple:

“When you add a new instrument, just fetch its history from Alpaca.”

Easy, right?
Until I learned that Alpaca only lets you fetch 10,000 bars per request.
Five years of 1-minute bars is about 1.8 million candles.

That’s 180 requests per instrument.

So I built a backfill job that fetched them chunk by chunk, newest first.
And that’s when I discovered the next problem:
live ticks and historical inserts don’t mix well.
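The fetch loop itself was the easy part. Here’s a rough sketch of the chunked request against Alpaca’s historical bars endpoint; the persistence helper is hypothetical and the newest-first ordering is left out:

    import os
    import requests

    BARS_URL = "https://data.alpaca.markets/v2/stocks/{symbol}/bars"
    HEADERS = {
        "APCA-API-KEY-ID": os.environ["ALPACA_API_KEY"],
        "APCA-API-SECRET-KEY": os.environ["ALPACA_API_SECRET"],
    }

    def backfill(symbol, start, end):
        # Alpaca caps each response at 10,000 bars, so page until the range is covered.
        params = {"timeframe": "1Min", "start": start, "end": end, "limit": 10000}
        while True:
            resp = requests.get(BARS_URL.format(symbol=symbol), headers=HEADERS, params=params)
            resp.raise_for_status()
            payload = resp.json()
            for bar in payload.get("bars") or []:
                save_candle(symbol, bar)  # hypothetical: upsert the 1m bar into Postgres
            token = payload.get("next_page_token")
            if not token:
                break
            params["page_token"] = token

    def save_candle(symbol, bar):
        ...  # omitted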


When real-time becomes too real

While backfill was writing historical candles,
the WebSocket stream was still receiving live ticks for the same symbol.
Both were trying to update the “latest candle.”

One was old, one was current.
Together, they produced chaos.

Candles overlapped.
Open/close times went out of order.
Sometimes, the same candle had three different closes.

It was ugly.
And it taught me the most valuable lesson in this project:
real-time systems break in invisible ways.

The fix came from a simple realization:

You can’t stop the stream, but you can stop writing it down for a while.


Redis locks, or how I finally slept at night

I added a Redis lock per symbol.
When a backfill starts for, say, AAPL,
the WebSocket streamer sets lock:AAPL = 1.

While the lock is active:

  • live ticks are still received but buffered in a Redis list,
  • historical data is fetched and written to Postgres,
  • higher timeframes (5m, 15m, 1H, 1D) are derived from the 1m base.

Once the backfill finishes,
the lock is released, and all buffered ticks are replayed sequentially.
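In code, the whole idea fits in a couple of functions. A sketch with redis-py, where the key names and candle helpers are illustrative rather than the repo’s actual implementation:

    import json
    import redis

    r = redis.Redis()

    def handle_tick(symbol, tick):
        """Called by the streamer for every live tick."""
        if r.exists(f"lock:{symbol}"):
            # Backfill in progress: park the tick instead of writing it.
            r.rpush(f"buffer:{symbol}", json.dumps(tick))
        else:
            update_latest_candle(symbol, tick)

    def run_backfill(symbol):
        r.set(f"lock:{symbol}", 1)
        try:
            fetch_and_store_history(symbol)   # chunked REST backfill into Postgres
            derive_higher_timeframes(symbol)  # 5m/15m/1H/1D from the 1m base
        finally:
            r.delete(f"lock:{symbol}")
            # Replay buffered ticks in arrival order now that history exists.
            while (raw := r.lpop(f"buffer:{symbol}")) is not None:
                update_latest_candle(symbol, json.loads(raw))

    def update_latest_candle(symbol, tick): ...   # omitted: Postgres upsert
    def fetch_and_store_history(symbol): ...      # omitted: chunked backfill
    def derive_higher_timeframes(symbol): ...     # omitted: resample the 1m base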

For the first time, my data stayed consistent — even when I added new instruments mid-market.

I spent a good ten minutes just watching the candles roll,
appreciating how satisfying it felt to see order in what used to be chaos.
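For completeness, the “derived from the 1m base” step above is basically an OHLCV resample. Something in the spirit of this pandas snippet, though the project’s actual task code may differ:

    import pandas as pd

    def resample_ohlcv(df_1m, rule):
        """df_1m: DataFrame indexed by timestamp with open/high/low/close/volume columns."""
        out = df_1m.resample(rule).agg({
            "open": "first",
            "high": "max",
            "low": "min",
            "close": "last",
            "volume": "sum",
        })
        return out.dropna(subset=["open"])  # drop empty buckets outside market hours

    # e.g. resample_ohlcv(df, "5min"), then "15min", "1h", "1D"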


Wiring the world together

By this point I had too many moving parts.
Starting everything manually was ridiculous:

  • Django API
  • Celery worker
  • Celery beat
  • Redis
  • Postgres
  • WebSocket streamer
  • Flower
  • React dev server

So I dockerized everything.
Each service got its own container, with shared .env config.
Then NX handled the orchestration: npm run dev now meant
“start the Docker world and the frontend together.”
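To give a sense of scale, here’s a trimmed-down sketch of what the Compose file looks like in spirit. The project name (config), the streamer management command, and the build paths are assumptions, not the repo’s exact layout:

    services:
      db:
        image: postgres:16
        env_file: .env
      redis:
        image: redis:7
      api:
        build: ./backend
        command: daphne -b 0.0.0.0 -p 8000 config.asgi:application
        env_file: .env
        ports: ["8000:8000"]
        depends_on: [db, redis]
      worker:
        build: ./backend
        command: celery -A config worker -l info
        env_file: .env
        depends_on: [db, redis]
      beat:
        build: ./backend
        command: celery -A config beat -l info
        env_file: .env
        depends_on: [redis]
      streamer:
        build: ./backend
        command: python manage.py run_streamer   # hypothetical management command
        env_file: .env
        depends_on: [db, redis]
      flower:
        build: ./backend
        command: celery -A config flower --port=5555
        env_file: .env
        ports: ["5555:5555"]
        depends_on: [redis]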

That single command became my favorite part.
Every time I ran it, the whole system spun up in under a minute.
Watching all containers go “healthy” in sequence felt like running a private exchange.


The watchlist that made it feel human

After all that backend plumbing, the UI needed some life.

I built a basic login/register page using Django auth,
so watchlists could be user-scoped.

The main dashboard showed your watchlists; each had tiles for instruments.
Click an instrument, and it opened a chart showing 1-minute candles updating in real time.
From there, you could switch timeframes or explore historical data.
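Under the hood it’s a very plain data model. A rough sketch of what user-scoped watchlists and candles could look like; the field names are illustrative and the real schema may differ:

    from django.conf import settings
    from django.db import models

    class Instrument(models.Model):
        symbol = models.CharField(max_length=16, unique=True)

    class Watchlist(models.Model):
        user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
        name = models.CharField(max_length=64)
        instruments = models.ManyToManyField(Instrument, related_name="watchlists")

    class Candle(models.Model):
        instrument = models.ForeignKey(Instrument, on_delete=models.CASCADE)
        timeframe = models.CharField(max_length=8)   # "1m", "5m", "15m", "1H", "1D"
        timestamp = models.DateTimeField(db_index=True)
        open = models.DecimalField(max_digits=12, decimal_places=4)
        high = models.DecimalField(max_digits=12, decimal_places=4)
        low = models.DecimalField(max_digits=12, decimal_places=4)
        close = models.DecimalField(max_digits=12, decimal_places=4)
        volume = models.BigIntegerField()

        class Meta:
            unique_together = ("instrument", "timeframe", "timestamp")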

It wasn’t a trading app — it was more like a developer’s playground.
Something between a Bloomberg Terminal and a system monitor.

Sometimes I’d just leave it running in the corner of my screen.
Watching the candles tick forward became its own weird kind of meditation.


The architecture that finally made sense

After a few rewrites and naming disasters, the final flow looked like this:

[Architecture diagram]

The frontend stayed on my host machine for fast HMR;
everything else ran inside Docker for consistency.

Flower lived on port 5555 so I could see all tasks in motion —
backfills, candle derivations, stream health checks.

It looked simple on paper, but getting here took weeks of small, annoying fixes:
container networking issues, timezone bugs, database indexes that weren’t being used.
Real-time systems aren’t hard because of algorithms — they’re hard because every small delay ripples outward.


The quiet satisfaction of “it just works”

The day everything finally clicked, I ran npm run dev, opened the dashboard, added ten instruments, and walked away.
Two hours later I came back. Everything was still streaming, no errors, no gaps.

That moment — the absence of failure — felt like success.

There’s a weird calm in watching something you built run by itself,
doing exactly what it’s supposed to do.
No “exception in thread asyncio loop.”
No “database locked.”
Just quiet logs rolling by:


[AAPL] candle updated: 2025-10-14 15:58:00
[TSLA] candle updated: 2025-10-14 15:58:00

I think that’s what most engineers chase: not glory, not views,
just a system that hums along and doesn’t need you anymore.


The small design choices that mattered more than I expected

A few little things ended up defining how pleasant the project feels to work on:

1. Historical first, live second

Every symbol always loads its history before subscribing to live data.
It makes the chart feel instant and prevents “blank screen” syndrome (a small sketch of this ordering follows the list).

2. Explicit ports

Every service has its own fixed port (5173, 8000, 8001, 5555, etc.).
When something breaks, you immediately know which process to blame.

3. Flower as a first-class citizen

I open Flower before I open Chrome now.
Seeing your workers, queues, and retries is oddly comforting.

4. Readable logging

Color-coded prefixes for every container.
It’s small, but when you’re juggling 7 services, clarity is a superpower.
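And here’s the small sketch promised under “Historical first, live second”: one way to express that ordering as a Celery chain, with hypothetical task names and stubbed bodies:

    from celery import chain, shared_task

    @shared_task
    def backfill_history(symbol):
        ...  # chunked REST backfill into Postgres
        return symbol

    @shared_task
    def derive_timeframes(symbol):
        ...  # build 5m/15m/1H/1D from the 1m base
        return symbol

    @shared_task
    def subscribe_live(symbol):
        ...  # tell the streamer to add the symbol to its WebSocket subscription

    def on_symbol_added(symbol):
        # Each task receives the previous task's return value (the symbol).
        chain(backfill_history.s(symbol), derive_timeframes.s(), subscribe_live.s()).delay()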


What I learned about “real-time”

I went in thinking this would be a stack problem:
“React meets Django meets WebSocket.”
It turned out to be a temporal problem.

Everything that can go wrong in a real-time system happens because something arrives too early, too late, or out of order.

Locks, queues, backfills, buffers — they’re all ways of telling time who’s in charge.

Redis taught me more about time than any programming course I’ve taken.


What surprised me the most

1. You don’t need Kafka to feel like a quant

If you design your pipeline cleanly, Redis + Celery already cover 90% of what you’d use Kafka for in a small system.

2. Frontend state is harder than backend math

Keeping the chart consistent while new data arrives was trickier than streaming ticks.
Even a small delay in WebSocket updates could desync the UI.
I ended up debouncing updates and redrawing the current candle every 3 seconds to stay stable.

3. 5 years of 1-minute bars is huge

Alpaca’s limit forced me to think in batches.
Processing that much data locally reminded me that even “free” APIs have costs — time, memory, patience.

4. “Working” doesn’t mean “done”

There’s always one more bug hiding in the edges of async systems.
You just stop seeing it until the next race condition shows up.


The road ahead

I’m not done.
Projects like this don’t really finish — they just evolve until they feel like home.

Here’s what’s next on my list:

  • A small indicator service: when a new candle closes, compute moving averages and cache them server-side.
  • A backfill reconciliation command that runs after market close to check for gaps.
  • A lightweight auth veneer so I can host a public demo without leaking keys.
  • A chart overlay panel that reads from the API, not the client.
  • Maybe even a notebook layer, for experimenting with strategies on stored data.

But even if none of that happens soon, the core works — and that alone feels good.


Why this project matters to me

I’ve built a lot of things, but this one changed how I think about engineering.
It forced me to slow down and see the invisible parts —
the moments between requests, the time between ticks,
the patience between “it’s broken” and “I understand why.”

It’s easy to talk about scaling, performance, or design patterns.
What’s harder is to build something you can trust to tell the truth.

That’s what Alpaca-Main became for me —
a truthful little ecosystem.
When a candle moves, it’s because the market moved, not because I faked it.
That kind of honesty in software feels rare now.


The philosophical bit I didn’t expect to write

Real-time systems are just compressed life.
You set up boundaries, make promises about order,
try to keep chaos buffered until you’re ready to deal with it.
And if you design well, the chaos becomes predictable enough to watch calmly.

That’s how building Alpaca-Main felt —
like learning how to stay calm while the world keeps updating.


If you want to try it

You don’t need anything fancy.

npm install
npm run dev

Visit http://localhost:5173
Create an account, make a watchlist, add AAPL, and just watch the candles move.

No paid plans, no SaaS dashboards, no fake data.
Everything you see is coming straight from Alpaca’s feed,
handled by Django, stored in Postgres, aggregated by Celery,
and rendered live in React.

That’s it. That’s the entire loop.


What I’d tell someone building their own

Start small.
Build one path end-to-end before you generalize.
Make the data flow visible.
Add logging you actually like reading.
Expect your first few attempts at “real-time” to be half real, half wishful thinking.

And don’t underestimate Redis — it’s smarter than it looks.


The end (for now)

It’s funny how a side project meant to test an API
ended up teaching me more about distributed systems than my grad courses did.

Alpaca-Main isn’t perfect.
It’s not production-ready.
But it’s real, and every part of it exists for a reason.

If you’re a developer who enjoys seeing how data actually moves,
clone it, run it, break it.
And maybe, somewhere between the locks and the logs,
you’ll find the same quiet satisfaction I did.


Last updated: October 21, 2025
Repo: github.com/naveedkhan1998/alpaca-main