
Get started with TimescaleDB in under 10 minutes. This guide will help you run TimescaleDB locally, create your first hypertable with columnstore enabled, write data to the columnstore, and see instant analytical query performance.
psql client (included with PostgreSQL) or any PostgreSQL client like pgAdmin

You have two options to start TimescaleDB:
The easiest way to get started:
Important: This script is intended for local development and testing only. Do not use it for production deployments. For production-ready installation options, see the TimescaleDB installation guide.
Linux/Mac:
```shell
curl -sL https://tsdb.co/start-local | sh
```
This command downloads and runs a setup script that pulls the TimescaleDB Docker image and starts a local container for you.
Alternatively, you can run TimescaleDB directly with Docker:
```shell
docker run -d --name timescaledb \
  -p 6543:5432 \
  -e POSTGRES_PASSWORD=password \
  timescale/timescaledb-ha:pg18
```
Note: We use port 6543 (mapped to container port 5432) to avoid conflicts if you have other PostgreSQL instances running on the standard port 5432.
Wait about 1-2 minutes for the image to download and TimescaleDB to initialize.
Connect using psql:
```shell
psql -h localhost -p 6543 -U postgres
# When prompted, enter password: password
```
You should see the PostgreSQL prompt. Verify TimescaleDB is installed:
```sql
SELECT extname, extversion FROM pg_extension WHERE extname = 'timescaledb';
```
Expected output:
```
   extname   | extversion
-------------+------------
 timescaledb | 2.x.x
```
Prefer a GUI? If you'd rather use a graphical tool instead of the command line, you can download pgAdmin and connect to TimescaleDB using the same connection details (host: localhost, port: 6543, user: postgres, password: password).
Let's create a hypertable for IoT sensor data with columnstore enabled:
```sql
-- Create a hypertable with automatic columnstore
CREATE TABLE sensor_data (
  time        TIMESTAMPTZ NOT NULL,
  sensor_id   TEXT NOT NULL,
  temperature DOUBLE PRECISION,
  humidity    DOUBLE PRECISION,
  pressure    DOUBLE PRECISION
) WITH (
  tsdb.hypertable
);

-- Create an index to speed up per-sensor queries
CREATE INDEX idx_sensor_id_time ON sensor_data (sensor_id, time DESC);
```
`tsdb.hypertable` - Converts this table into a TimescaleDB hypertable, automatically partitioned into time-based chunks
See more:
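Under the hood, the hypertable is split into time-based chunks. If you are curious, you can inspect them once data has been inserted; the exact output depends on your data, but a sketch of the metadata queries looks like this:

```sql
-- List the chunks backing the hypertable
SELECT show_chunks('sensor_data');

-- Hypertable metadata, including the number of chunks
SELECT hypertable_name, num_chunks
FROM timescaledb_information.hypertables;
```

Chunk-based partitioning is what lets TimescaleDB prune data outside a query's time range instead of scanning the whole table.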
Let's add some sample sensor readings:
```sql
-- Enable timing to see query execution time
\timing on

-- Insert sample data for multiple sensors.
-- SET timescaledb.enable_direct_compress_insert = on writes data directly
-- to the columnstore (columnar format) for better performance.
SET timescaledb.enable_direct_compress_insert = on;

INSERT INTO sensor_data (time, sensor_id, temperature, humidity, pressure)
SELECT
  time,
  'sensor_' || ((random() * 9)::int + 1),
  20 + (random() * 15),
  40 + (random() * 30),
  1000 + (random() * 50)
FROM generate_series(
  NOW() - INTERVAL '90 days',
  NOW(),
  INTERVAL '1 second'
) AS time;

-- Once data is inserted into the columnstore, optimize its order and
-- structure: this compacts and orders the data in each chunk for optimal
-- query performance and compression.
DO $$
DECLARE
  ch TEXT;
BEGIN
  FOR ch IN SELECT show_chunks('sensor_data') LOOP
    CALL convert_to_columnstore(ch, recompress := true);
  END LOOP;
END $$;
```
This generates roughly 7.8 million readings (one per second) spread across 10 sensors over the past 90 days.
Verify the data was inserted:
```sql
SELECT COUNT(*) FROM sensor_data;
```
Now let's run some analytical queries that showcase TimescaleDB's performance:
```sql
-- Enable query timing to see performance
\timing on

-- Query 1: Average readings per sensor over the last 7 days
SELECT
  sensor_id,
  COUNT(*) AS readings,
  ROUND(AVG(temperature)::numeric, 2) AS avg_temp,
  ROUND(AVG(humidity)::numeric, 2) AS avg_humidity,
  ROUND(AVG(pressure)::numeric, 2) AS avg_pressure
FROM sensor_data
WHERE time > NOW() - INTERVAL '7 days'
GROUP BY sensor_id
ORDER BY sensor_id;

-- Query 2: Hourly averages using time_bucket.
-- Time buckets let you aggregate hypertable data by time interval
-- and calculate summary values.
SELECT
  time_bucket('1 hour', time) AS hour,
  sensor_id,
  ROUND(AVG(temperature)::numeric, 2) AS avg_temp,
  ROUND(AVG(humidity)::numeric, 2) AS avg_humidity
FROM sensor_data
WHERE time > NOW() - INTERVAL '24 hours'
GROUP BY hour, sensor_id
ORDER BY hour DESC, sensor_id
LIMIT 20;

-- Query 3: Daily statistics across all sensors
SELECT
  time_bucket('1 day', time) AS day,
  COUNT(*) AS total_readings,
  ROUND(AVG(temperature)::numeric, 2) AS avg_temp,
  ROUND(MIN(temperature)::numeric, 2) AS min_temp,
  ROUND(MAX(temperature)::numeric, 2) AS max_temp
FROM sensor_data
GROUP BY day
ORDER BY day DESC
LIMIT 10;

-- Query 4: Latest reading for each sensor.
-- Highlights the value of SkipScan: this executes in under 100 ms,
-- while without SkipScan it takes over 5 s.
SELECT DISTINCT ON (sensor_id)
  sensor_id,
  time,
  ROUND(temperature::numeric, 2) AS temperature,
  ROUND(humidity::numeric, 2) AS humidity,
  ROUND(pressure::numeric, 2) AS pressure
FROM sensor_data
ORDER BY sensor_id, time DESC;
```
Notice how fast these analytical queries run, even with aggregations across millions of rows. This is the power of TimescaleDB's columnstore.
TimescaleDB automatically partitions your data into time-based chunks, stores it in compressed columnar format, and excludes chunks that fall outside a query's time range.
See more:
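You can also check how much space the columnstore saves. As a sketch (the function is available in TimescaleDB 2.x; recent releases additionally expose it under the name `hypertable_columnstore_stats`):

```sql
-- Compare the hypertable's size before and after compression
SELECT
  pg_size_pretty(before_compression_total_bytes) AS before,
  pg_size_pretty(after_compression_total_bytes)  AS after
FROM hypertable_compression_stats('sensor_data');
```

Columnar compression typically reduces storage by 90% or more for time-series data, which is also why scans over it are fast.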
Now that you've got the basics, explore more:
Continuous aggregates make real-time analytics run faster on very large datasets. They continuously and incrementally refresh a query in the background, so that when you run such a query, only the data that has changed needs to be computed, not the entire dataset. This is what makes them different from regular PostgreSQL materialized views, which cannot be incrementally materialized and have to be rebuilt from scratch every time you want to refresh them.
Let's create a continuous aggregate for hourly sensor statistics:
```sql
CREATE MATERIALIZED VIEW sensor_data_hourly
WITH (timescaledb.continuous) AS
SELECT
  time_bucket('1 hour', time) AS hour,
  sensor_id,
  AVG(temperature) AS avg_temp,
  AVG(humidity) AS avg_humidity,
  AVG(pressure) AS avg_pressure,
  MIN(temperature) AS min_temp,
  MAX(temperature) AS max_temp,
  COUNT(*) AS reading_count
FROM sensor_data
GROUP BY hour, sensor_id;
```
This creates a materialized view that pre-aggregates your sensor data into hourly buckets. The view is automatically populated with existing data.
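Since the view is populated on creation, you can run a quick sanity check right away. With the sample dataset above (90 days, 10 sensors), you should see on the order of 21,600 rows, one per sensor per hour:

```sql
SELECT COUNT(*) FROM sensor_data_hourly;
```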
To keep the continuous aggregate up-to-date as new data arrives, add a refresh policy:
```sql
SELECT add_continuous_aggregate_policy(
  'sensor_data_hourly',
  start_offset      => INTERVAL '3 hours',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour'
);
This policy runs every hour (`schedule_interval`) and refreshes the aggregate for data between 3 hours and 1 hour old (`start_offset` to `end_offset`), leaving the newest, still-changing hour out of each refresh.
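To confirm the policy was registered, you can inspect TimescaleDB's background job list (a quick sanity check; the column list here is trimmed for readability):

```sql
SELECT job_id, application_name, schedule_interval
FROM timescaledb_information.jobs
WHERE application_name LIKE 'Refresh Continuous Aggregate%';
```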
Now you can query the pre-aggregated data for much faster results:
```sql
-- Get hourly averages for the last 24 hours
SELECT
  hour,
  sensor_id,
  ROUND(avg_temp::numeric, 2) AS avg_temp,
  ROUND(avg_humidity::numeric, 2) AS avg_humidity,
  reading_count
FROM sensor_data_hourly
WHERE hour > NOW() - INTERVAL '24 hours'
ORDER BY hour DESC, sensor_id
LIMIT 50;
```
Compare the performance difference:
```sql
-- Query the raw hypertable (slower on large datasets)
\timing on
SELECT
  time_bucket('1 hour', time) AS hour,
  AVG(temperature) AS avg_temp
FROM sensor_data
WHERE time > NOW() - INTERVAL '60 days'
GROUP BY hour
ORDER BY hour DESC
LIMIT 24;

-- Query the continuous aggregate (much faster)
SELECT hour, avg_temp
FROM sensor_data_hourly
WHERE hour > NOW() - INTERVAL '60 days'
ORDER BY hour DESC
LIMIT 24;
```
Notice how the continuous aggregate query is significantly faster, especially as your dataset grows!
See more:
Learn TimescaleDB with complete, standalone examples using real-world datasets. Each example includes sample data and analytical queries.
Or try some of our workshops
Tiger Cloud is the modern PostgreSQL data platform for all your applications. It enhances PostgreSQL to handle time series, events, real-time analytics, and vector search, all in a single database alongside transactional workloads. You get one system that handles live data ingestion, late and out-of-order updates, and low-latency queries, with the performance, reliability, and scalability your app needs. Ideal for IoT, crypto, finance, SaaS, and myriad other domains, Tiger Cloud allows you to build data-heavy, mission-critical apps while retaining the familiarity and reliability of PostgreSQL. See our whitepaper for a deep dive into Tiger Cloud's architecture and how it meets the needs of even the most demanding applications.
A Tiger Cloud service is a single optimized 100% PostgreSQL database instance that you use as is, or extend with capabilities specific to your business needs. The available capabilities are:
We welcome contributions to TimescaleDB! See Contributing and Code style guide for details.
Tiger Data is the fastest PostgreSQL for transactional, analytical and agentic workloads. To learn more about the company and its products, visit tigerdata.com.
```shell
# Check if the container is running
docker ps -a

# View container logs (use the appropriate container name)
# For the one-line install:
docker logs timescaledb-ha-pg18-quickstart
# For the manual Docker command:
docker logs timescaledb

# Stop and remove the existing container
# For the one-line install:
docker stop timescaledb-ha-pg18-quickstart && docker rm timescaledb-ha-pg18-quickstart
# For the manual Docker command:
docker stop timescaledb && docker rm timescaledb

# Start fresh
# Option 1: Use the one-line install
curl -sL https://tsdb.co/start-local | sh
# Option 2: Use the manual Docker command
docker run -d --name timescaledb -p 6543:5432 -e POSTGRES_PASSWORD=password timescale/timescaledb-ha:pg18
```
```shell
# Check that the container is running
docker ps

# Check what is listening on port 6543
lsof -i :6543

# Try connecting explicitly over IPv4
psql -h 127.0.0.1 -p 6543 -U postgres
```

The timescale/timescaledb-ha:pg18 image has TimescaleDB pre-installed and pre-loaded. If you see errors, ensure you're using the correct image.
When you're done experimenting:
```shell
# Stop the container
docker stop timescaledb-ha-pg18-quickstart

# Remove the container
docker rm timescaledb-ha-pg18-quickstart

# Remove the persistent data volume
docker volume rm timescaledb_data

# (Optional) Remove the Docker image
docker rmi timescale/timescaledb-ha:pg18
```
```shell
# Stop the container
docker stop timescaledb

# Remove the container
docker rm timescaledb

# (Optional) Remove the Docker image
docker rmi timescale/timescaledb-ha:pg18
```
Note: If you created a named volume with the manual Docker command, you can remove it with docker volume rm <volume_name>.