How I Built a Real-Time Trading Starter Kit with Django, React, and Free Market Data
It started as a small weekend idea:
"What if I could stream real market data locally without paying a dime?"
That was the whole spark. No business plan. No big architecture diagram.
Just curiosity, and a healthy frustration with tutorials that call themselves "real-time trading apps" but don't even have a WebSocket.
I wanted to build something that felt real, not flashy.
Something that would make me understand how market data actually moves,
and maybe even teach me how far a single laptop can go before you truly need a cloud.
The itch
Every developer hits this point: you want to explore live data,
but you don't want to build a full hedge-fund pipeline just to get a chart moving.
When I looked around, most "examples" fell into two extremes:
either 5,000-line quant backtesting engines that needed AWS credits and Kafka,
or cute front-ends that faked the price updates with setInterval.
I didn't want either.
I wanted a system I could reason about.
That's how I found Alpaca.
Their free API lets you:
- open one WebSocket connection,
- subscribe to up to 30 live instruments, and
- pull 5 years of historical 1-minute candles.
That's enough to build something meaningful.
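Here is roughly what the live half looks like with the alpaca-py SDK. This is a minimal sketch under the assumption that you use that SDK; the repo itself may wire the stream up differently (for example with a raw websocket client).

```python
# Minimal sketch of subscribing to Alpaca's live feed via the alpaca-py SDK.
# The keys are placeholders; the project may use a different client entirely.
from alpaca.data.live import StockDataStream

stream = StockDataStream("YOUR_API_KEY", "YOUR_SECRET_KEY")

async def on_trade(trade):
    # Each trade message carries the symbol, price, size, and exchange timestamp.
    print(trade.symbol, trade.price, trade.timestamp)

# One connection, up to 30 subscribed instruments on the free tier.
stream.subscribe_trades(on_trade, "AAPL", "TSLA")
stream.run()  # blocks and keeps the websocket alive
```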
So I decided:
I'll make a stack that connects to Alpaca, fetches historical data,
streams live ticks, and displays real-time charts, the right way.
And it should run locally, end-to-end, on one command.
Choosing the stack
I didn't want to overthink the tech choices.
I already knew my way around Django, so that became the backbone.
But I also wanted it to feel modern: asynchronous, reactive, dev-friendly.
Here's what I picked and why:
- Django + DRF + Channels: a comfortable base that can talk HTTP and WebSocket in the same project. Channels gives you async without breaking Django's soul.
- Celery + Redis: because backfills and candle computations don't belong in request-response land.
- Postgres: durable, boring, reliable. The best kind of database.
- React + Vite: quick reload, instant feedback.
- NX: to keep the frontend and backend under one roof, so I don't lose my mind switching folders.
- Docker Compose: because I didn't want "works on my machine" to ever cross my mind again.
The goal was simple:
I should be able to run one command, npm run dev, and have everything come alive.
The first "it works"
Getting Alpaca connected took an evening.
Once I saw those tick messages flowing through my terminal, I felt that stupid grin every engineer gets when a socket connection stays alive longer than 5 seconds.
The first prototype was crude.
Django received the ticks, stored them in memory, and broadcast them to the frontend.
There was no persistence, no backfill, no candle logic.
But I had a moving price.
That's the moment any idea becomes addictive: when it moves.
So I built a small React watchlist page, just to see it in the browser.
When the first candle updated in real time, I realized:
okay, this is going to consume my next few weeks.
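For the curious: the "broadcast them to the frontend" step is mostly Django Channels boilerplate. A minimal sketch, assuming a Redis channel layer; the consumer and group names here are illustrative, not necessarily what the repo uses.

```python
# Sketch of a Channels consumer that fans ticks out to connected browsers.
# Assumes a configured Redis channel layer; all names are illustrative.
from channels.generic.websocket import AsyncJsonWebsocketConsumer

class TickConsumer(AsyncJsonWebsocketConsumer):
    async def connect(self):
        await self.channel_layer.group_add("ticks", self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        await self.channel_layer.group_discard("ticks", self.channel_name)

    async def tick_message(self, event):
        # Called for every group_send with {"type": "tick.message", ...}.
        await self.send_json(event["payload"])
```

The streamer side then only has to call group_send("ticks", {"type": "tick.message", "payload": tick}) on the channel layer for every tick it receives.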
The first big mess (a.k.a. the history problem)
The first real challenge hit when I added a new symbol mid-session.
Alpacaâs WebSocket gave me ticks for it instantly,
but I had no history for that symbol in my database.
My chart looked like someone started drawing a line halfway across the screen.
The solution sounded simple:
"When you add a new instrument, just fetch its history from Alpaca."
Easy, right?
Until I learned that Alpaca only lets you fetch 10,000 bars per request.
Five years of 1-minute bars is about 1.8 million candles.
Thatâs 180 requests per instrument.
So I built a backfill job that fetched them chunk by chunk, newest first.
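The shape of that job is simple even if the volume is not. Here is a rough sketch against Alpaca's v2 stock bars REST endpoint, with save_candles() as a hypothetical bulk-insert helper; the actual Celery task in the repo walks the range newest-first and does more bookkeeping.

```python
# Rough sketch of a chunked backfill against Alpaca's v2 bars endpoint.
# save_candles() is a hypothetical bulk-insert helper, not the repo's code.
import requests

BARS_URL = "https://data.alpaca.markets/v2/stocks/{symbol}/bars"
HEADERS = {"APCA-API-KEY-ID": "YOUR_KEY", "APCA-API-SECRET-KEY": "YOUR_SECRET"}

def backfill(symbol: str, start: str, end: str) -> None:
    params = {"timeframe": "1Min", "start": start, "end": end, "limit": 10000}
    while True:
        resp = requests.get(BARS_URL.format(symbol=symbol), headers=HEADERS, params=params)
        resp.raise_for_status()
        payload = resp.json()
        save_candles(symbol, payload.get("bars") or [])
        token = payload.get("next_page_token")
        if not token:
            break  # the whole range has been covered
        params["page_token"] = token  # fetch the next chunk of up to 10,000 bars
```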
And that's when I discovered the next problem:
live ticks and historical inserts don't mix well.
When real-time becomes too real
While backfill was writing historical candles,
the WebSocket stream was still receiving live ticks for the same symbol.
Both were trying to update the "latest candle."
One was old, one was current.
Together, they produced chaos.
Candles overlapped.
Open/close times went out of order.
Sometimes, the same candle had three different closes.
It was ugly.
And it taught me the most valuable lesson in this project:
real-time systems break in invisible ways.
The fix came from a simple realization:
You can't stop the stream, but you can stop writing to it.
Redis locks, or how I finally slept at night
I added a Redis lock per symbol.
When a backfill starts for, say, AAPL,
the WebSocket streamer sets lock:AAPL = 1.
While the lock is active:
- live ticks are still received but buffered in a Redis list,
- historical data is fetched and written to Postgres,
- higher timeframes (5m, 15m, 1H, 1D) are derived from the 1m base.
Once the backfill finishes,
the lock is released, and all buffered ticks are replayed sequentially.
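Condensed into a sketch, the streamer side looks roughly like this, assuming redis-py; the key names mirror the ones above, and update_latest_candle() stands in for the code that actually writes candles to Postgres.

```python
# Sketch of the per-symbol lock and tick buffer, using redis-py.
# update_latest_candle() is a hypothetical helper that writes to Postgres.
import json
import redis

r = redis.Redis()

def handle_tick(symbol: str, tick: dict) -> None:
    if r.exists(f"lock:{symbol}"):
        # A backfill owns this symbol right now: park the tick instead of writing it.
        r.rpush(f"buffer:{symbol}", json.dumps(tick))
    else:
        update_latest_candle(symbol, tick)

def finish_backfill(symbol: str) -> None:
    # Backfill done: replay buffered ticks in arrival order, then release the lock.
    while (raw := r.lpop(f"buffer:{symbol}")) is not None:
        update_latest_candle(symbol, json.loads(raw))
    r.delete(f"lock:{symbol}")
```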
For the first time, my data stayed consistent, even when I added new instruments mid-market.
I spent a good ten minutes just watching the candles roll,
appreciating how satisfying it felt to see order in what used to be chaos.
Wiring the world together
By this point I had too many moving parts.
Starting everything manually was ridiculous:
- Django API
- Celery worker
- Celery beat
- Redis
- Postgres
- WebSocket streamer
- Flower
- React dev server
So I dockerized everything.
Each service got its own container, with shared .env config.
Then NX handled the orchestration: npm run dev now meant
"start the Docker world and the frontend together."
That single command became my favorite part.
Every time I ran it, the whole system spun up in under a minute.
Watching all containers go "healthy" in sequence felt like running a private exchange.
The watchlist that made it feel human
After all that backend plumbing, the UI needed some life.
I built a basic login/register page using Django auth,
so watchlists could be user-scoped.
The main dashboard showed your watchlists; each had tiles for instruments.
Click an instrument, and it opened a chart showing 1-minute candles updating in real time.
From there, you could switch timeframes or explore historical data.
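The user-scoping itself is plain Django. A minimal sketch of what the models could look like; the field and model names are illustrative, not necessarily the repo's schema.

```python
# Illustrative models for user-scoped watchlists; not the repo's exact schema.
from django.conf import settings
from django.db import models

class Instrument(models.Model):
    symbol = models.CharField(max_length=16, unique=True)

class Watchlist(models.Model):
    owner = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    name = models.CharField(max_length=64)
    instruments = models.ManyToManyField(Instrument, related_name="watchlists")
```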
It wasn't a trading app; it was more like a developer's playground.
Something between a Bloomberg Terminal and a system monitor.
Sometimes I'd just leave it running in the corner of my screen.
Watching the candles tick forward became its own weird kind of meditation.
The architecture that finally made sense
After a few rewrites and naming disasters, the final flow settled down: the WebSocket streamer pushes Alpaca ticks into Redis, Celery workers turn them into candles in Postgres, and Django serves both the REST API and the Channels socket the React frontend listens to.
The frontend stayed on my host machine for fast HMR;
everything else ran inside Docker for consistency.
Flower lived on port 5555 so I could see all tasks in motion:
backfills, candle derivations, stream health checks.
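Candle derivation is the task I watch most often there. A stripped-down sketch, assuming a single hypothetical Candle model keyed by symbol, timeframe, and timestamp; the repo's real schema and task layout may differ.

```python
# Sketch of deriving one 5-minute candle from the 1-minute base.
# Candle is a hypothetical model with symbol/timeframe/timestamp/OHLCV fields.
from celery import shared_task
from django.db.models import Max, Min, Sum

from app.models import Candle  # hypothetical import path

@shared_task
def derive_5m_candle(symbol, window_start, window_end):
    bars = (Candle.objects
            .filter(symbol=symbol, timeframe="1m",
                    timestamp__gte=window_start, timestamp__lt=window_end)
            .order_by("timestamp"))
    if not bars.exists():
        return
    agg = bars.aggregate(high=Max("high"), low=Min("low"), volume=Sum("volume"))
    Candle.objects.update_or_create(
        symbol=symbol, timeframe="5m", timestamp=window_start,
        defaults={
            "open": bars.first().open,
            "close": bars.last().close,
            "high": agg["high"],
            "low": agg["low"],
            "volume": agg["volume"] or 0,
        },
    )
```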
The architecture looked simple on paper, but getting here took weeks of small, annoying fixes:
container networking issues, timezone bugs, database indexes that weren't being used.
Real-time systems aren't hard because of algorithms; they're hard because every small delay ripples outward.
The quiet satisfaction of "it just works"
The day everything finally clicked, I ran npm run dev, opened the dashboard, added ten instruments, and walked away.
Two hours later I came back. Everything was still streaming, no errors, no gaps.
That moment, the absence of failure, felt like success.
There's a weird calm in watching something you built run by itself,
doing exactly what it's supposed to do.
No "exception in thread asyncio loop."
No "database locked."
Just quiet logs rolling by:
[AAPL] candle updated: 2025-10-14 15:58:00
[TSLA] candle updated: 2025-10-14 15:58:00
I think that's what most engineers chase: not glory, not views,
just a system that hums along and doesn't need you anymore.
The small design choices that mattered more than I expected
A few little things ended up defining how pleasant the project feels to work on:
1. Historical first, live second
Every symbol always loads its history before subscribing to live data.
It makes the chart feel instant and prevents "blank screen" syndrome.
2. Explicit ports
Every service has its own fixed port (5173, 8000, 8001, 5555, etc.).
When something breaks, you immediately know which process to blame.
3. Flower as a first-class citizen
I open Flower before I open Chrome now.
Seeing your workers, queues, and retries is oddly comforting.
4. Readable logging
Color-coded prefixes for every container.
It's small, but when you're juggling 7 services, clarity is a superpower.
What I learned about "real-time"
I went in thinking this would be a stack problem:
"React meets Django meets WebSocket."
It turned out to be a temporal problem.
Everything that can go wrong in a real-time system happens because something arrives too early, too late, or out of order.
Locks, queues, backfills, buffers: they're all ways of telling time who's in charge.
Redis taught me more about time than any programming course I've taken.
What surprised me the most
1. You don't need Kafka to feel like a quant
If you design your pipeline cleanly, Redis + Celery already cover 90% of what you'd use Kafka for in a small system.
2. Frontend state is harder than backend math
Keeping the chart consistent while new data arrives was trickier than streaming ticks.
The smallest latency in WebSocket updates could desync the UI.
I ended up debouncing updates and redrawing the current candle every 3 seconds to stay stable.
3. 5 years of 1-minute bars is huge
Alpaca's limit forced me to think in batches.
Processing that much data locally reminded me that even "free" APIs have costs: time, memory, patience.
4. "Working" doesn't mean "done"
There's always one more bug hiding in the edges of async systems.
You just stop seeing it until the next race condition shows up.
The road ahead
I'm not done.
Projects like this don't really finish; they just evolve until they feel like home.
Here's what's next on my list:
- A small indicator service: when a new candle closes, compute moving averages and cache them server-side (sketched after this list).
- A backfill reconciliation command that runs after market close to check for gaps.
- A lightweight auth veneer so I can host a public demo without leaking keys.
- A chart overlay panel that reads from the API, not the client.
- Maybe even a notebook layer, for experimenting with strategies on stored data.
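For the indicator service, the first cut will probably be no bigger than this sketch: on candle close, compute a simple moving average and cache it in Redis. Everything here is hypothetical, since that service does not exist yet.

```python
# Sketch of the planned indicator task: cache a 20-period SMA when a candle closes.
# Candle is the same hypothetical model as before; none of this is in the repo yet.
import redis
from celery import shared_task

from app.models import Candle  # hypothetical import path

r = redis.Redis()

@shared_task
def on_candle_close(symbol: str, timeframe: str) -> None:
    closes = list(
        Candle.objects.filter(symbol=symbol, timeframe=timeframe)
        .order_by("-timestamp")
        .values_list("close", flat=True)[:20]
    )
    if len(closes) == 20:
        r.set(f"sma20:{symbol}:{timeframe}", sum(closes) / 20)
```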
But even if none of that happens soon, the core works, and that alone feels good.
Why this project matters to me
I've built a lot of things, but this one changed how I think about engineering.
It forced me to slow down and see the invisible parts:
the moments between requests, the time between ticks,
the patience between "it's broken" and "I understand why."
It's easy to talk about scaling, performance, or design patterns.
What's harder is to build something you can trust to tell the truth.
That's what Alpaca-Main became for me:
a truthful little ecosystem.
When a candle moves, it's because the market moved, not because I faked it.
That kind of honesty in software feels rare now.
The philosophical bit I didn't expect to write
Real-time systems are just compressed life.
You set up boundaries, make promises about order,
try to keep chaos buffered until you're ready to deal with it.
And if you design well, the chaos becomes predictable enough to watch calmly.
That's how building Alpaca-Main felt:
like learning how to stay calm while the world keeps updating.
If you want to try it
You don't need anything fancy.
- Clone the repo: github.com/naveedkhan1998/alpaca-main
- Put your Alpaca keys in .env.example
- Run: npm install, then npm run dev
- Visit http://localhost:5173
Create an account, make a watchlist, add AAPL, and just watch the candles move.
No accounts, no SaaS dashboards, no fake data.
Everything you see is coming straight from Alpaca's feed,
handled by Django, stored in Postgres, aggregated by Celery,
and rendered live in React.
That's it. That's the entire loop.
What I'd tell someone building their own
Start small.
Build one path end-to-end before you generalize.
Make the data flow visible.
Add logging you actually like reading.
Expect your first few attempts at "real-time" to be half real, half wishful thinking.
And don't underestimate Redis; it's smarter than it looks.
The end (for now)
Itâs funny how a side project meant to test an API
ended up teaching me more about distributed systems than my grad courses did.
Alpaca-Main isn't perfect.
It's not production-ready.
But it's real; every part of it exists for a reason.
If you're a developer who enjoys seeing how data actually moves,
clone it, run it, break it.
And maybe, somewhere between the locks and the logs,
you'll find the same quiet satisfaction I did.
Last updated: October 21, 2025
Repo: github.com/naveedkhan1998/alpaca-main