Escaping Myfxbook: Architecting a Custom PostgreSQL Database for Live Trade Tracking

The two-day nightmare of verifying my Forex.com account on Myfxbook was the breaking point. As I detailed in the previous post, the archaic process of opening a pending ‘BUY LIMIT’ order on a live Expert Advisor just to inject a specific “magic number” into the comment string was not just frustrating—it was fundamentally broken. Relying on a third-party black-box platform to track, calculate, and display my own hard-earned alpha is a massive operational vulnerability.

When you hand your trading history over to retail tracking sites, you are entirely at their mercy. If their servers go down during a high-volatility market event, your dashboard goes dark. If they change their API limits, your data ingestion breaks. If they decide to alter how they calculate maximum drawdown, your publicly verified track record is suddenly distorted.

In the high-stakes arena of quantitative trading, data sovereignty is not an optional luxury; it is a mandatory defensive mechanism. Your execution metrics, your latency logs, and your historical trade yields are the most valuable assets your operation produces. They belong in your own database, entirely under your control, hosted on your own infrastructure.

This post marks the beginning of our “Custom Quant Dashboard” series. Over the next three guides, we are going to completely sever our reliance on third-party tracking sites. Today, we start by laying the absolute bedrock: architecting a secure, high-performance PostgreSQL database on our Oracle Cloud ARM server to capture live execution data from our remote Python bots and MT5 terminals.

Section 1: The Core Architecture of a Quant Data Pipeline

Before we write a single line of SQL or spin up a database container, we must define the architecture. The biggest mistake novice developers make is putting their database on the exact same server that executes their trading logic.

If you are running a heavy MetaTrader 5 terminal on a Hyonix Windows VPS, or an asynchronous Binance execution script on a Vultr Linux node, installing a database engine directly alongside them is a recipe for disaster. Databases are notoriously resource-hungry. During a massive market liquidation event where your bot is executing dozens of trades per second, a local database writing to the disk will cause massive I/O bottlenecks, spike the CPU, and potentially stall your trading algorithm.

We must practice Separation of Concerns. The architecture we are building looks like this:

  1. The Execution Layer (The Frontend of Trading): Your Hyonix Windows VPS running MT5, or your Vultr Ubuntu node running Python execution scripts. Their only job is to execute trades and instantly fire off a lightweight web request containing the trade data.
  2. The Data Layer (The Backend): Our high-performance Oracle Cloud ARM instance (4-Core / 24GB RAM). This server sits geographically separated from the execution layer. Its sole purpose is to receive incoming trade data, securely store it, and eventually serve it to our web dashboard.

By isolating the database on the heavy Oracle instance we secured in the previous post, we guarantee that database write-locks or heavy querying will never impact the critical latency of our live trading algorithms.
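To make the execution layer concrete, here is a minimal Python sketch of the "fire off a lightweight web request" step. Everything named here is an assumption for illustration: the endpoint URL, the payload fields, and the helper functions are placeholders that we will formalize when we build the ingestion API in a later post.

```python
import json
import urllib.request
from datetime import datetime, timezone

# Hypothetical ingestion endpoint; the real one is built in the next post.
INGEST_URL = "https://quant.example.com/api/trades"

def build_trade_payload(bot_name, exchange, symbol, order_type,
                        entry_price, position_size):
    """Assemble a minimal trade record for the data layer."""
    return {
        "bot_name": bot_name,
        "exchange": exchange,
        "asset_symbol": symbol,
        "order_type": order_type,
        "entry_time": datetime.now(timezone.utc).isoformat(),  # always UTC
        "entry_price": str(entry_price),    # strings avoid float rounding
        "position_size": str(position_size),
    }

def post_trade(payload, timeout=2.0):
    """POST with a short timeout so the trading loop never stalls on logging."""
    req = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req, timeout=timeout)

payload = build_trade_payload("TrendFollower_v2", "BINANCE", "BTCUSDT",
                              "BUY", 64250.50, 0.015)
```

The key design choice is the short timeout: if the data layer is ever unreachable, the bot drops the log line and keeps trading rather than blocking on a hung socket.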

Section 2: Why PostgreSQL? (And Why Not MySQL or SQLite)

When you decide to build a database, you are immediately faced with a choice of engines. For a WordPress blog, MySQL is the default. For a simple local Python script, SQLite is incredibly convenient. But for a global algorithmic trading operation, PostgreSQL is the undisputed industry standard.

Here is the raw, technical reality of why we are choosing PostgreSQL over the alternatives:

The SQLite Concurrency Death Trap

SQLite stores the entire database in a single file on your disk. It is brilliant for local backtesting. However, SQLite handles concurrency very poorly. If you have a multi-exchange statistical arbitrage bot, or a grid of five different MT5 EAs, there is a high probability that multiple bots will attempt to write a “trade closed” record to the database at the exact same millisecond during a volatility spike. SQLite will lock the database for one process, causing the other scripts to throw “database is locked” exceptions, potentially crashing your Python execution loops.
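The failure mode is easy to reproduce with a few lines of stock Python. The table name and values below are made up for the demonstration; the error message is SQLite's real one:

```python
import os
import sqlite3
import tempfile

# Minimal reproduction of the single-writer limitation.
db_path = os.path.join(tempfile.mkdtemp(), "trades.db")

# "Bot A" connects in autocommit mode so we can take the write lock explicitly.
bot_a = sqlite3.connect(db_path, isolation_level=None)
bot_a.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, pnl REAL)")
bot_a.execute("BEGIN IMMEDIATE")  # grabs the one and only write lock
bot_a.execute("INSERT INTO trades (pnl) VALUES (12.5)")

# "Bot B" arrives mid-spike. timeout=0 means fail instantly instead of
# blocking its event loop while waiting for the lock to free up.
bot_b = sqlite3.connect(db_path, timeout=0)
try:
    bot_b.execute("INSERT INTO trades (pnl) VALUES (-3.2)")
    error = None
except sqlite3.OperationalError as exc:
    error = str(exc)  # "database is locked"

bot_a.execute("COMMIT")
print(error)
```

PostgreSQL, by contrast, handles concurrent writers as a matter of course: each bot gets its own server-side connection and row-level locking does the arbitration.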

The PostgreSQL JSONB Advantage over MySQL

While MySQL is a robust engine, PostgreSQL offers a specific data type that is an absolute game-changer for quantitative developers: JSONB.

Cryptocurrency exchange APIs (like Binance or Bybit) frequently update their JSON response structures. Sometimes they add new fields; sometimes they change the nesting of fee structures. If you are using a rigid SQL table, a changed API response will break your ingestion script. With PostgreSQL’s JSONB format, you can store the entire, raw, unmodified JSON response directly from the exchange into a single column, while still retaining the ability to query it later using SQL. This means you can log exactly what the exchange sent you, ensuring you never lose data due to an unexpected API update.
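Here is a sketch of what that looks like in practice. The payload below is a simplified, hypothetical exchange fill (real field names vary by venue), while the query string uses PostgreSQL's actual JSON operators (`->`, `->>`, `jsonb_array_elements`) to reach inside the stored document:

```python
import json

# Hypothetical, simplified exchange fill payload; real field names vary by venue.
raw_response = {
    "orderId": 987654321,
    "symbol": "BTCUSDT",
    "fills": [
        {"price": "64250.50", "qty": "0.010", "commission": "0.00001"},
        {"price": "64251.00", "qty": "0.005", "commission": "0.000005"},
    ],
}

# On the hot path the bot does no selective parsing: the entire payload is
# serialized and dumped into the raw_api_response JSONB column as received.
stored = json.dumps(raw_response)

# Later, PostgreSQL can reach inside the stored document without any schema
# migration, e.g. to audit per-fill fees:
AUDIT_QUERY = """
SELECT internal_id,
       fill ->> 'price'      AS fill_price,
       fill ->> 'commission' AS fee
FROM live_trades,
     jsonb_array_elements(raw_api_response -> 'fills') AS fill;
"""

# The round trip is lossless even if the exchange adds new fields tomorrow.
assert json.loads(stored) == raw_response
```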

Furthermore, PostgreSQL is heavily optimized for complex analytical queries and time-series data. When we eventually build our dashboard to calculate the Sharpe ratio, Sortino ratio, and maximum drawdown of our live bots on the fly, PostgreSQL’s advanced mathematical functions will handle the heavy lifting natively.

Section 3: Deploying the Database Engine via aaPanel

In the previous post, I strongly advocated for using aaPanel to manage the massive Oracle Cloud ARM instance. This is where that strategy pays off. Installing a production-grade PostgreSQL server from the command line requires configuring users, setting up initialization scripts, and managing pg_hba.conf files—a tedious process for someone whose primary job is quantitative research, not DevOps.

Through aaPanel, the deployment is streamlined:

  1. Navigate to the aaPanel App Store.
  2. Search for PostgreSQL Manager and click install. (Select version 14 or higher; JSONB is supported in any modern release, but newer versions bring better query performance and longer support windows.)
  3. Once installed, use the visual interface to create a new database. Let’s name it quant_live_ledger.
  4. Generate a highly secure, random 32-character password. Do not use a memorable password. This database will eventually hold your proprietary trading history.
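For step 4, you do not need an online generator; Python's standard secrets module does the job locally. The letters-and-digits alphabet here is my own convention (it avoids characters that need escaping in connection strings), not an aaPanel requirement:

```python
import secrets
import string

# Letters and digits only: no characters that need escaping in connection
# strings or .env files. 32 chars from this 62-symbol alphabet is ~190 bits.
ALPHABET = string.ascii_letters + string.digits

def generate_db_password(length: int = 32) -> str:
    """Cryptographically secure random password for the database role."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

password = generate_db_password()
print(password)
```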

The Security Imperative: Locking Down Port 5432

By default, PostgreSQL listens on port 5432. Every malicious botnet scanning the internet knows this. The moment you spin up a PostgreSQL instance, automated scripts from around the world will begin brute-forcing that port trying to gain access.

You must immediately restrict access. Through the aaPanel security tab (and your Oracle Cloud Virtual Cloud Network firewall), you must implement a strict IP Whitelist. The database should drop all connections from the public internet. It should only accept incoming connections from two explicit sources:

  1. Your personal home IP address (for running SQL queries via software like DBeaver or pgAdmin).
  2. The specific, static IP addresses of your execution servers (Your Hyonix VPS and your Vultr node).

If a connection attempt does not originate from these specific IPs, the server firewall must silently drop the packet. Security in quantitative infrastructure must be paranoid by default.
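Expressed in pg_hba.conf terms (the file aaPanel manages under the hood), that whitelist reduces to a handful of host rules. The IP addresses below are RFC 5737 documentation placeholders, and quant_user is a stand-in for whatever role name you actually created:

```
# pg_hba.conf -- example only; substitute your real static IPs and role name.
# TYPE   DATABASE           USER        ADDRESS            METHOD
host     quant_live_ledger  quant_user  203.0.113.10/32    scram-sha-256   # home IP (DBeaver/pgAdmin)
host     quant_live_ledger  quant_user  198.51.100.20/32   scram-sha-256   # Hyonix MT5 VPS
host     quant_live_ledger  quant_user  198.51.100.30/32   scram-sha-256   # Vultr Python node
# No other 'host' lines: any connection from an unlisted address is rejected.
```

Note that this file is your second line of defense; the Oracle VCN firewall should already be dropping non-whitelisted packets before they ever reach PostgreSQL.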

Section 4: Architecting the Database Schema for Quant Trades

A database is only as useful as its structure. If you design your tables poorly, querying your data for performance metrics will become a computationally expensive nightmare.

We need a schema that is flexible enough to handle both the strict, ticket-based system of MetaTrader 5 and the asynchronous, order-ID based systems of crypto exchanges like Binance. Below is the master blueprint for the live_trades table.

SQL

CREATE TABLE live_trades (
    internal_id SERIAL PRIMARY KEY,
    bot_name VARCHAR(50) NOT NULL,
    exchange VARCHAR(50) NOT NULL,
    asset_symbol VARCHAR(20) NOT NULL,
    order_type VARCHAR(10) NOT NULL, -- 'BUY' or 'SELL'
    entry_time TIMESTAMP WITH TIME ZONE NOT NULL,
    exit_time TIMESTAMP WITH TIME ZONE,
    entry_price NUMERIC(18, 8) NOT NULL,
    exit_price NUMERIC(18, 8),
    position_size NUMERIC(18, 8) NOT NULL,
    realized_pnl NUMERIC(18, 2),
    commission NUMERIC(18, 4),
    magic_number BIGINT, -- Specifically for MT5 EAs
    exchange_order_id VARCHAR(100), -- Specifically for Crypto APIs
    raw_api_response JSONB, -- Storing the unadulterated exchange data
    status VARCHAR(20) DEFAULT 'OPEN'
);

Breaking Down the Architecture:

  1. High-Precision Numerics: Notice that we are not using FLOAT or DOUBLE for our pricing columns. Floating-point math introduces microscopic rounding errors, which are unacceptable when dealing with financial ledgers. We use NUMERIC(18, 8) to ensure absolute precision, matching the exact 8-decimal output of Bitcoin pricing.
  2. Time Zone Enforcement: The TIMESTAMP WITH TIME ZONE format is non-negotiable. Your MT5 broker might be operating on Eastern European Time (EET), your Vultr server is in Tokyo (JST), and your database is in London (GMT). If you do not enforce strict UTC timezone logging at the database level, reconciling timestamps and calculating trade durations becomes an error-prone guessing game. Always log everything in absolute UTC.
  3. The Magic Number and Exchange ID: This schema unifies Forex and Crypto. An MT5 EA will populate the magic_number column and leave the exchange_order_id null. A Python Binance bot will populate the exchange_order_id and ignore the magic number.
  4. The Raw JSONB Vault: As discussed, the raw_api_response column is our ultimate fallback. Whatever JSON payload the broker or exchange returns upon order execution is dumped here. If we ever need to audit our slippage, review unexpected API fees, or debug a failed order sequence months later, the exact forensic evidence is permanently preserved here.
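Point 1 is worth seeing once with your own eyes. Python's float behaves exactly like a FLOAT column, and its Decimal type behaves like NUMERIC:

```python
from decimal import Decimal

# Classic binary floating-point drift: three partial fills of 0.1 BTC.
float_total = 0.1 + 0.1 + 0.1
print(float_total)       # 0.30000000000000004 -- not 0.3

# Decimal mirrors NUMERIC: exact base-10 arithmetic, no drift.
decimal_total = Decimal("0.1") + Decimal("0.1") + Decimal("0.1")
print(decimal_total)     # 0.3
```

Over thousands of trades these dust errors compound in a float-based ledger, which is why the pricing columns are NUMERIC and why the payload examples in this series send prices as strings.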

Section 5: Indexing for High-Performance Queries

When your database only has 100 trades, any query will execute instantly. But when your high-frequency arbitrage bot logs 10,000 trades a month, and you want your web dashboard to instantly render a 12-month equity curve, your database will begin to choke if it is not indexed properly.

Indexing is like creating a table of contents for your database. Instead of scanning every single row to find the trades executed by your ‘TrendFollower_v2’ bot, the index directs the database engine to the exact disk location instantly.

For our specific use case, we must apply indexing to the columns we will query the most for our future web dashboard:

SQL

CREATE INDEX idx_bot_name ON live_trades (bot_name);
CREATE INDEX idx_exit_time ON live_trades (exit_time);
CREATE INDEX idx_status ON live_trades (status);

By indexing bot_name and exit_time, calculating the daily Profit and Loss (PnL) for a specific algorithm becomes a millisecond operation, regardless of how large the database grows. This is the difference between a sluggish, loading-screen dashboard and a hyper-responsive, professional quant portal.
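You can watch an index change a query plan without touching the live server. The sketch below uses Python's built-in sqlite3 purely as a stand-in so it runs anywhere; PostgreSQL's EXPLAIN reports the equivalent switch from a sequential scan to an index scan:

```python
import sqlite3

# Stand-in demo: sqlite3 ships with Python, so this runs with zero setup.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE live_trades (bot_name TEXT, exit_time TEXT, realized_pnl REAL)"
)
conn.executemany(
    "INSERT INTO live_trades VALUES (?, ?, ?)",
    [(f"bot_{i % 5}", f"2024-01-{i % 28 + 1:02d}", i * 0.1) for i in range(1000)],
)

query = "SELECT SUM(realized_pnl) FROM live_trades WHERE bot_name = 'bot_2'"

before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
conn.execute("CREATE INDEX idx_bot_name ON live_trades (bot_name)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(before[0][-1])  # e.g. "SCAN live_trades"  (full table scan)
print(after[0][-1])   # e.g. "SEARCH live_trades USING INDEX idx_bot_name ..."
```

The same before-and-after check on PostgreSQL (EXPLAIN ANALYZE against live_trades) is a good habit whenever a dashboard query feels slow.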

Conclusion: The Foundation is Set

We have successfully escaped the limitations of shared hosting, and now we have escaped the black-box tracking of third-party verification sites. By architecting an isolated, highly secure PostgreSQL database on our Oracle ARM instance, we have taken absolute ownership of our trading data.

We now have a vault capable of handling thousands of concurrent writes, preserving raw JSON API responses, and serving complex equity curve calculations at lightning speeds. The bedrock is solid.

However, a database is fundamentally passive. It waits for data to arrive. Our remote Python scripts and MT5 terminals need a secure, standardized way to transmit their trade data across the internet and inject it into this PostgreSQL vault without exposing the database directly.

In the next post, we will bridge this gap. We will build a lightweight, ultra-fast backend API using FastAPI in Python. This API will serve as the secure gatekeeper, receiving webhooks from our trading servers, verifying their authenticity, and writing the data safely into our new PostgreSQL ledger. The era of data sovereignty has officially begun.