I wrote a QuestDB REST API Python Client, CLI and REPL Shell
- Author: Teddy Xinyuan Chen
QuestDB comes with a very nice web console, but there's no CLI, so I wrote one (can't live without the terminal!).
The REST API is well defined (https://questdb.com/docs/reference/api/rest/), with only 3 documented endpoints. I also implemented one undocumented endpoint, /chk, which checks whether a table exists; I found the route while trying to ingest CSV via the web console.
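Under the hood these are plain HTTP routes, so you don't even need a client library to poke at them. Here's a minimal stdlib-only sketch (not the questdb_rest library's actual API; `run_query` and `rows_as_dicts` are hypothetical helper names) against the documented GET /exec endpoint, assuming a local instance on QuestDB's default REST port 9000:

```python
import json
import urllib.parse
import urllib.request

QUESTDB = "http://localhost:9000"  # default REST port; adjust to your instance

def rows_as_dicts(payload: dict) -> list[dict]:
    """Zip /exec's parallel columns/dataset arrays into one dict per row."""
    names = [col["name"] for col in payload["columns"]]
    return [dict(zip(names, row)) for row in payload["dataset"]]

def run_query(sql: str) -> list[dict]:
    """Run SQL via the documented GET /exec endpoint, which returns JSON
    of the form {"query": ..., "columns": [...], "dataset": [...], "count": N}."""
    url = f"{QUESTDB}/exec?" + urllib.parse.urlencode({"query": sql})
    with urllib.request.urlopen(url) as resp:
        return rows_as_dicts(json.load(resp))
```

/exp works the same way but streams CSV instead of JSON; /chk is undocumented, so I won't guess at its parameters here.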
Demo (6 min)
Watch me execute ~20 commands against my production instance, from CSV importing/exporting and one-off SQL queries (multiple queries are supported in one go via sqlparser's preprocessing, which isn't demoed here) to questdb-shell: https://www.youtube.com/watch?v=l_1HBbAHeBM
Screenshots
Full screen: https://gg.teddysc.me/?g=16651cccc351b1d2742a4bddaee1c62d&b
CLI Features
questdb-cli --help
usage: questdb-cli [-h] [-H HOST] [--port PORT] [-u USER] [-p PASSWORD]
[--timeout TIMEOUT] [--scheme {http,https}] [-i | -D] [-R]
[--config CONFIG] [--stop-on-error | --no-stop-on-error]
{imp,exec,exp,chk,schema,gen-config} ...
QuestDB REST API Command Line Interface.
Logs to stderr, outputs data to stdout.
Uses QuestDB REST API via questdb_rest library.
positional arguments:
{imp,exec,exp,chk,schema,gen-config}
Available sub-commands
imp Import data from file(s) using /imp.
exec Execute SQL statement(s) using /exec (returns JSON).
Reads SQL from --query, --file, --get-query-from-python-module, or stdin.
exp Export data using /exp (returns CSV to stdout or file).
chk Check if a table exists using /chk (returns JSON). Exit code 0 if exists, 3 if not.
schema Fetch CREATE TABLE statement(s) for one or more tables.
gen-config Generate a default config file at ~/.questdb-rest/config.json
options:
-h, --help Show this help message and exit.
-H HOST, --host HOST QuestDB server host.
--port PORT QuestDB REST API port.
-u USER, --user USER Username for basic authentication.
-p PASSWORD, --password PASSWORD
Password for basic authentication. If -u is given but -p is not, will prompt securely unless password is in config.
--timeout TIMEOUT Request timeout in seconds.
--scheme {http,https}
Connection scheme (http or https).
-i, --info Use info level logging (default is WARNING).
-D, --debug Enable debug level logging to stderr.
-R, --dry-run Simulate API calls without sending them. Logs intended actions.
--config CONFIG Path to a specific config JSON file (overrides default ~/.questdb-rest/config.json).
--stop-on-error, --no-stop-on-error
Stop execution immediately if any item (file/statement/table) fails.
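Because it logs to stderr and writes data to stdout, the CLI composes well in scripts. A hypothetical wrapper (`exec_argv` and `run` are my names; the flag spellings come from the help text above, and localhost:9000 is an assumed default):

```python
import subprocess

def exec_argv(query: str, host: str = "localhost", port: int = 9000) -> list[str]:
    """Assemble a questdb-cli invocation: global -H/--port flags,
    then the exec subcommand with its --query flag."""
    return ["questdb-cli", "-H", host, "--port", str(port),
            "exec", "--query", query]

def run(query: str, **kw) -> str:
    """Run the CLI and capture the JSON it writes to stdout
    (logs go to stderr, so they don't pollute the capture)."""
    result = subprocess.run(exec_argv(query, **kw), check=True,
                            capture_output=True, text=True)
    return result.stdout
```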
The subcommands provide full coverage of the REST API spec; for example, the imp
subcommand:
questdb-cli imp --help
usage: questdb-cli imp [-h] [-n NAME] [--name-func {stem,add_prefix}] [--name-func-prefix NAME_FUNC_PREFIX] [--schema-file SCHEMA_FILE | -s SCHEMA] [-P {NONE,YEAR,MONTH,DAY,HOUR,WEEK}] [-t TIMESTAMP] [-o | --overwrite | --no-overwrite]
[-a {skipCol,skipRow,abort}] [-d DELIMITER] [-F | --forceHeader | --no-forceHeader] [-S | --skipLev | --no-skipLev] [--fmt {tabular,json}] [-O O3MAXLAG] [-M MAXUNCOMMITTEDROWS] [-c | --create | --no-create]
files [files ...]
positional arguments:
files Path(s) to the data file(s) to import.
options:
-h, --help Show this help message and exit.
-n NAME, --name NAME Explicit table name. Overrides --name-func. Applied to ALL files. (default: None)
--name-func {stem,add_prefix}
Function to generate table name from filename (ignored if --name set). Available: stem, add_prefix (default: None)
--name-func-prefix NAME_FUNC_PREFIX
Prefix string for 'add_prefix' name function. (default: )
--schema-file SCHEMA_FILE
Path to JSON schema file. Applied to ALL files. (default: None)
-s SCHEMA, --schema SCHEMA
JSON schema string. Applied to ALL files. Use quotes. (default: None)
-P {NONE,YEAR,MONTH,DAY,HOUR,WEEK}, --partitionBy {NONE,YEAR,MONTH,DAY,HOUR,WEEK}
Partitioning strategy (if table created). (default: None)
-t TIMESTAMP, --timestamp TIMESTAMP
Designated timestamp column name (if table created). (default: None)
-o, --overwrite, --no-overwrite
Overwrite existing table data/structure. (default: None)
-a {skipCol,skipRow,abort}, --atomicity {skipCol,skipRow,abort}
Behavior on data errors during import. (default: skipCol)
-d DELIMITER, --delimiter DELIMITER
Specify CSV delimiter character. (default: None)
-F, --forceHeader, --no-forceHeader
Force treating the first line as a header. (default: None)
-S, --skipLev, --no-skipLev
Skip Line Extra Values. (default: None)
--fmt {tabular,json} Format for the response message to stdout. (default: tabular)
-O O3MAXLAG, --o3MaxLag O3MAXLAG
Set O3 max lag (microseconds, if table created). (default: None)
-M MAXUNCOMMITTEDROWS, --maxUncommittedRows MAXUNCOMMITTEDROWS
Set max uncommitted rows (if table created). (default: None)
-c, --create, --no-create
Automatically create table if it does not exist. (default: True)
questdb-shell with autocompletion
questdb-shell is just a simple bash script that plugs questdb-cli exec --psql --query into rlwrap with carefully selected flags.
The autocompletion keyword list is provided by https://github.com/questdb/sql-grammar.
The SQL in the thumbnail of the demo
WITH LatestComponents AS (
-- First, get the latest record for each fund_ticker from components
SELECT
fund_ticker,
as_of_date
FROM
components LATEST ON as_of_date PARTITION BY fund_ticker
)
-- Now, join the latest components with the ETF list to get the fund name
SELECT
lc.fund_ticker AS "fund",
etf.fund_name AS "fund_name", -- Added fund_name from etf_scraper_etf_list
to_str(lc.as_of_date, 'yyyy-MM-dd EE') AS "last_scraped_on"
FROM
LatestComponents lc -- Use the CTE result
LEFT JOIN -- Use LEFT JOIN in case an ETF is in components but not in the list yet
etf_scraper_etf_list etf ON lc.fund_ticker = etf.ticker -- Join condition
ORDER BY
CASE
WHEN lc.fund_ticker IN ('SPY', 'QQQ', 'DIA', 'XLF') THEN 1
ELSE 0
END DESC,
lc.as_of_date -- Order by the latest date
LATEST ON
is a time series DB feature that comes in handy. :)
Schemas of the two tables involved:
CREATE TABLE 'components' (
name VARCHAR,
ticker SYMBOL CAPACITY 1024 NOCACHE,
weight DOUBLE,
fund_ticker SYMBOL CAPACITY 32 NOCACHE,
as_of_date TIMESTAMP,
cusip VARCHAR,
sedol VARCHAR,
sector VARCHAR,
amount DOUBLE,
market_currency VARCHAR,
market_value DOUBLE,
security_type VARCHAR
) timestamp(as_of_date) PARTITION BY DAY WAL
WITH maxUncommittedRows=500000, o3MaxLag=600000000us
DEDUP UPSERT KEYS(ticker,fund_ticker,as_of_date);
CREATE TABLE 'etf_scraper_etf_list' (
fund_name VARCHAR,
inception_date VARCHAR,
ticker VARCHAR,
cusip VARCHAR,
isin VARCHAR,
asset_class VARCHAR,
subasset_class VARCHAR,
country VARCHAR,
region VARCHAR,
product_url VARCHAR,
product_id VARCHAR,
net_assets DOUBLE,
fund_type VARCHAR,
provider VARCHAR,
benchmark VARCHAR,
exchange VARCHAR
)
WITH maxUncommittedRows=500000, o3MaxLag=600000000us;