209 points | by exAspArk | 1 month ago
I wonder up to what scale you can do this, and is there anything better we can do before using Debezium to sync the data via CDC?
Edit: add code permalink
I planned initially to do chunks on S3 and do the analytical queries using duckdb, I'm wondering if your tool would be a good replacement?
For now I don't have that many analytical queries, I'm mostly doing visualization of the data points by querying a range (eg last 2 weeks of data for a device)
Does it then make sense to use columnar storage or am I better off with "regular Postgres"?
Or in my case does your approach provide "best of both worlds" in the sense that I could do some occasional analytical queries on past data stored on S3, and regularly access "last 3 months" data for visualization using the data stored in the regular Postgres?
However, DuckDB (non-profit foundation) != MotherDuck (VC funded). These are two separate organizations with different goals. I see DuckDB as a tool, not as a SaaS or a VC-funded company. My hope is that it'll be adopted by other projects and not associated with just a single for-profit company.
It:
- Auto-archives old Postgres data to Parquet files on S3
- Keeps recent data (default 90 days) in Postgres for fast viz queries
- Uses year/month partitioning in S3 for basic analytical queries
- Configures with just PG connection string and S3 bucket
Currently batch-only archival (no real-time sync yet). Much lighter than running a full analytical DB if you mainly need timeseries visualization with occasional historical analysis.
Let me know if you try it out!
- Can you then easily query it with DuckDB / ClickHouse / something else? What do you use yourself? Do you have a tutorial / toy example to check out?
- Would it be complicated to also store the real-time data somehow on S3, so that querying historical data which includes the current day's data would be "transparent"?
- What typical "batch data" size makes sense? I guess "day batches" might be a bit small and will incur too many "read" operations (if I have a moderate amount of daily data), compared to "week batches"? But then the time lag increases?
Querying the data: Yes, you can easily query the Parquet files with DuckDB. The files are stored in a year/month partitioned structure (e.g., year=2024/month=03/iot_data_20240315_143022.parquet), which makes it efficient to query specific time ranges. I personally use DuckDB for ad-hoc analysis since it works great with Parquet. Here's a quick example:
SELECT *
FROM read_parquet('s3://my-iot-archive/year=*/month=*/iot_data_*.parquet')
WHERE timestamp BETWEEN '2023-01-01' AND '2023-12-31'
Real-time data on S3: Currently, the tool is batch-focused. Adding real-time sync would require some architectural changes - either using CDC (Change Data Capture) or implementing a dual-write pattern. I kept it simple for now since most IoT visualization use cases I've seen focus on recent data in Postgres. If you need this feature, I'd be happy to take a look at what you want.
For data processing the tool works like this:
It identifies records older than 90 days (configurable retention period)
Processes these records in batches of 100 (also configurable) to manage memory usage
Creates Parquet files partitioned by year/month in S3
Deletes the archived records from Postgres
The key is that you always have your recent 90 days in Postgres for fast querying, while maintaining older data in a cost-effective S3 storage that you can still query when needed. You can adjust both the retention period and batch size based on your specific needs.
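The four steps above can be sketched in Python. Everything here (function names, the in-memory record handling) is illustrative, not the tool's actual implementation; in reality the rows would be fetched from Postgres, written to Parquet under the partition prefix, and then deleted:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # configurable retention period
BATCH_SIZE = 100     # configurable batch size, to manage memory usage

def partition_prefix(ts: datetime) -> str:
    # S3 key prefix for the year/month partition a record belongs to
    return f"year={ts.year}/month={ts.month:02d}"

def archive_batches(records, now=None):
    """Yield (s3_prefix, batch) pairs for records past the retention window.

    `records` is a list of (timestamp, payload) tuples standing in for rows
    fetched from Postgres; the real tool writes each batch to a Parquet file
    under the prefix and then deletes the archived rows from Postgres.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    expired = sorted((r for r in records if r[0] < cutoff), key=lambda r: r[0])
    for i in range(0, len(expired), BATCH_SIZE):
        batch = expired[i:i + BATCH_SIZE]
        # simplification: a batch is filed under its first record's partition
        yield partition_prefix(batch[0][0]), batch
```

Since the records are sorted by timestamp before batching, each batch stays close to a single year/month partition, which keeps the Parquet files aligned with the S3 layout.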
Let me know if you'd like me to clarify anything or if you have other questions!
COPY (SELECT * FROM iot_data WHERE timestamp > current_date - interval '90 days')
TO 's3://bucket/recent/iot_data.parquet' (FORMAT 'parquet')
Then query everything together in DuckDB:
SELECT * FROM read_parquet([
  's3://bucket/year=*/month=*/iot_data_*.parquet', -- archived data
  's3://bucket/recent/iot_data.parquet'            -- recent data
])

Much simpler than implementing real-time sync, and you still get a unified view of all your data for analysis (just with a small delay on recent data).
Yes, absolutely!
1) You could use BemiDB to sync your Postgres data (e.g., partition time-series tables) to S3 in Iceberg format. Iceberg is essentially a "table" abstraction on top of columnar Parquet data files with a schema, history, etc.
2) If you don't need strong consistency and are fine with delayed data (the main trade-off), you can use just BemiDB to query and visualize all data directly from S3. From a query perspective, it's like DuckDB that talks Postgres (wire protocol).
Feel free to give it a try! And although it's a new project, we plan to keep building and improving it based on user feedback.
- Can you give me more info about the strong consistency and delayed data, so I can better picture it with a few examples?
- Also, is it possible to do the sync with the columnar data in "more-or-less real-time" (eg do a NOTIFY on a new write in my IoT events table, and push in the storage?)
- Would your system also be suited for a kind of "audit-log" data? Eg. if I want to have some kind of audit-table of all the changes in my database, but only want to keep a few weeks worth at hand, and then push the rest on S3, or it doesn't make much sense with that kind of data?
We use logical replication and this exact approach with our other project related to auditing and storing Postgres data changes: https://github.com/BemiHQ/bemi. We're thinking about combining these approaches to leverage a scalable and affordable separate storage layer on S3.
Lmk if that makes sense or if you had any more questions!
Ideally it would just sync in real-time and buffer new data in the Bemi binary (with some WAL-like storage to make sure data is preserved on binary crash/reload), and when it has enough, push it to S3, etc.
Is this the kind of approach you're going to take?
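A rough sketch of that buffering idea in Python, with all names hypothetical and a local list standing in for the S3 upload:

```python
import os

class BufferedArchiver:
    """Toy sketch of the WAL-buffering idea above: append incoming events to
    a local write-ahead log so they survive a crash, then flush to object
    storage (stubbed out here) once enough data accumulates. Not BemiDB code.
    """

    def __init__(self, wal_path: str, flush_bytes: int = 1 << 20):
        self.wal_path = wal_path
        self.flush_bytes = flush_bytes
        self.flushed = []  # stand-in for objects pushed to S3

    def append(self, event: bytes) -> None:
        with open(self.wal_path, "ab") as wal:
            wal.write(event + b"\n")
            wal.flush()
            os.fsync(wal.fileno())  # durable on disk before acknowledging
        if os.path.getsize(self.wal_path) >= self.flush_bytes:
            self.flush()

    def flush(self) -> None:
        with open(self.wal_path, "rb") as wal:
            # real code would convert to Parquet and upload to S3 here
            self.flushed.append(wal.read())
        # truncate the WAL only after the data is safely handed off
        open(self.wal_path, "wb").close()
```

On restart, anything left in the WAL file can simply be re-flushed, which is what makes the buffer crash-safe.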
1. All the benchmarks (and most of the companies) assume the data already exists in one shot and then try querying/compressing it in different formats, which is far from reality.
2. Do you rewrite the Parquet data every time new data comes in? Or is it partitioned by something? There are no examples.
3. How do updates/deletes work? Updates might be a niche case, but deletion/data retention/truncation is a must, and I don't see how you support that.
We load data from postgres tables that are used to build Clickhouse Dictionaries (a hash table for JOIN-ish operations).
The big tables do not arrive via real-time-ish sync from postgres but are bulk-appended using a separate infrastructure.
> (multi-TB databases under load) is where logical replication won't be able to sync your tables in time
I think the ceiling for logical replication (and optimization techniques around it) is quite high. But I wonder what people do when it doesn't work and scale?
I think your project is great. I suspect incremental updates will be a big feature for uptake (one we would need before trying this out, at least).
While it’s technically true that it’s an OSI license, it’s mostly used to scare away competing cloud vendors from hosting the software, which isn’t in the spirit of OSS.
Have you looked into the more modern choices?
Like the Business Source License that MariaDB created and uses or the Functional Source License that Sentry created as an improvement over the Business Source License? https://fsl.software/
Both those licenses have a fair source phase that automatically resolves into an open source phase over time.
Thus one gets the best of two worlds: An honest descriptive license for protecting one’s business model + a normal permissive OSS license that ensures longevity and prevents lock-in.
AGPL couldn’t be more in the spirit of OSS. The entire free software movement started to defend the _users_ freedom, not individual companies’.
AGPL is a poor “fair source” license and a controversial OSS license.
AGPL in “fair source” projects is always paired with Contributor License Agreements etc. that ensure the company behind it owns all the rights to the project and can re-license it however it wants, without having to abide by AGPL itself. That is not in the spirit of OSS at all. And if the company goes out of business, AGPL makes it really hard for anyone else to eventually pick up and continue the “fair source” business model of the project.
Using a proper “fair source” license like the Business Source License or the Functional Source License will make for a better and more honest “fair source” phase of the project and both those licenses will resolve to non-controversial OSS licenses over time.
So:
- Is “fair source” in the spirit of “open source”? No
- Is AGPL an “open source” license? Yes
- Is AGPL often used to OSS-wash “fair source” projects? Yes
- Are all AGPL projects “fair source” projects? No, there exists proper OSS-projects with AGPL licenses
- Is AGPL in startup projects almost always paired with CLAs that make the startup play by different rules than everyone else? Yes
- Is it more in the spirit of “open source” or “fair source” to have the startup play by different rules than everyone else? In the spirit of “fair source”
- Is AGPL a good choice for “fair source” projects? No, it perpetuates the different rules between the startup and the community forever, even after the startup has ceased to exist, whereas proper “fair source” licenses convert to less extreme OSS licenses over time
Edit: The only company that I know that does an AGPL project in an “open source” spirit rather than a “fair source” spirit is Sourcehut: https://sourcehut.org/blog/2022-10-09-ip-assignment-or-lack-...
Would you be able to share why the AGPL license is a no-go for you? I'm genuinely curious about your use case. In simple words, it'd require a company to open source their BemiDB code only if they made modifications and distributed it to other users (modifying it and using it internally carries no restrictions).
Because AGPL itself cannot be relicensed, and it even needed special wording when created to become GPL-compatible.
Since you’re a startup I believe that you use AGPL to achieve the “fair source” idea (https://fair.io/) – where you yourself can provide a hosted service without providing all your source code while hoping others won’t be as they will need to provide all theirs.
In simplified terms: It’s an anti-AWS defense. Helping you avoid being outcompeted by a big cloud vendor using your project without paying anything for it.
And AGPL is a poor “fair source” license, especially as it was never designed to be a “fair source” license but also since it doesn’t give the impression of “fair source” but rather the impression of “open source”, giving an almost deceptive and dishonest look.
And since AGPL is perpetual, unlike eg BSL and FSL, one is stuck with the “fair source” license forever, rather than having it be converted into a mainstream OSS license over time. It can eg make it hard for someone else to eventually, if your company goes away and the project gets abandoned (which is sadly the most likely outcome of most startups), pick up the project and build a company around that while using similar “fair source” principles like you.
If you are doing like what SourceHut is doing (https://sourcehut.org/blog/2022-10-09-ip-assignment-or-lack-...) and going all in on AGPL and playing by its rules yourself as well and treating it as “open source” rather than “fair source”, then well done!
I still would likely want to avoid the legalese complexity of AGPL in my stack though and try to generally stick to permissive licenses and the occasional GPL-licensed projects, like eg Linux. And eg Google has similar guidelines when using OSS-code internally.
I'm not crying that "it's not for you".
I want to keep the legalese in my setups at a manageable level, avoid needless lock-in (AGPL is one of the least compatible licenses out there; code from an AGPL project can only ever be used in another AGPL project), and have it be clear when I'm contributing to a proprietary project in disguise (which most startup AGPL projects are, thanks to CLAs) versus an actual OSS project.
A couple questions, if you have time:
1. How do you guys handle multi-dimensional arrays? I've had issues with a few postgres-facing interfaces (libraries or middleware) where they believe everything is a 1D array!
2. I saw you are using pg_duckdb/duckdb under the hood. I've had issues calling plain-SQL functions defined on the postgres server, when duckdb is involved. Does BemiDB support them?
Thanks for sharing, and good luck with it!
Great questions:
1. We currently don't support multi-dimensional arrays, but we plan to add support for such complex data structures.
2. Would you be able to share what type of user-defined functions these are? Do they modify the data or just read it?
Out of interest, do you know any good resources covering the current state of data engineering? I find the area quite impenetrable compared to software engineering. Almost like much of it is trade secrets and passed down knowledge and none of it written down.
You're right, the data engineering world is complex, constantly evolving, and has many different solutions. I'd also like to know about any good resources that people use :)
For us, we mostly talked to many potential users, asking about their data setups and challenges, and had many conversations with friends and experts in this field. I also read a few weekly newsletters and Substacks, and follow people in this space on X (many recently started posting on Bluesky). For deeper research: reading docs and specs, experimenting, watching talks, listening to podcasts, reading subreddits, etc.
And can you then have Glue Catalog auto-crawl them and expose them in Athena? Or are they DuckDB-managed Iceberg tables essentially?
The Iceberg tables are created separately from the DuckDB query engine. So you should be able to read these Iceberg tables by using any other Iceberg-compatible tools and services like AWS Athena.
Though I'm a little sceptical of embedding DuckDB. It is easier and better to isolate read/write paths, and that has a lot of other benefits.
We actually separate read/write paths. BemiDB reads by leveraging DuckDB as a query engine, and it writes to Iceberg completely separately from DuckDB. I'm curious if that's what you imagined.
Smart. Imma test this out for sure.
The biggest difference is a Postgres extension vs a separate OLAP process. We want anyone with just Postgres to be able to perform analytics queries without affecting resources in the transactional database, without building and installing extensions (which might not be possible with some hosting providers), without dealing with dependencies and their versions when upgrading Postgres, without manually syncing data from Postgres to S3, etc.
How does it handle updates?
The pg_analytics Postgres extension partially supports different file formats. We bet big on Iceberg open table format, which uses Parquet data files under the hood.
Our initial approach is to do periodic full table resyncing. The next step is to support incremental Iceberg operations like updates. This will involve creating a new "diff" Parquet file and using the Iceberg metadata to point to the new file version that changes some rows. Later this will enable time travel queries, schema evolution, etc.
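To make the incremental approach concrete, here is a toy model of snapshot-based table metadata in Python. Real Iceberg metadata is far richer (manifest lists, schemas, sequence numbers), so treat this purely as an illustration of why appending a "diff" file plus a new snapshot, rather than rewriting old files, is what enables time travel:

```python
# Toy model: table metadata is a list of snapshots, each listing the data
# files visible at that point in time. File names are made up for the example.

def new_snapshot(metadata: dict, added_files, removed_files=()) -> dict:
    """Append a snapshot: carry forward current files, minus removed, plus added."""
    current = metadata["snapshots"][-1]["files"] if metadata["snapshots"] else []
    files = [f for f in current if f not in removed_files] + list(added_files)
    metadata["snapshots"].append({"id": len(metadata["snapshots"]) + 1,
                                  "files": files})
    return metadata

def files_at(metadata: dict, snapshot_id: int):
    """Time travel: the data files visible at an older snapshot."""
    return metadata["snapshots"][snapshot_id - 1]["files"]

table = {"snapshots": []}
new_snapshot(table, ["data-00001.parquet"])    # initial full table sync
new_snapshot(table, ["diff-00002.parquet"])    # incremental append ("diff" file)
new_snapshot(table, ["data-00003.parquet"],    # rewrite after an update
             removed_files=["data-00001.parquet"])
```

Old snapshots keep pointing at the old files, so querying "the table as of snapshot 1" needs no extra bookkeeping; schema evolution works the same way by versioning the schema alongside the file list.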
How does the latency of Iceberg-on-S3 compare to say an EBS volume?
Could I easily point it to Iceberg tables on another storage target?
BemiDB natively supports two storage layers, a local disk and S3 (we assumed that most people would choose this in production environments to simplify management).
When I query Iceberg tables stored on SSD, it works superfast.
How will it be created differently for columnar access?
The BemiDB storage layer produced ~300MB columnar Parquet files (with ZSTD compression) vs 1.6GB of data in Postgres.
Edit: I seemingly don't have these benchmarks anymore, and I'm not going to re-run them now, but I found a very (_very_) roughly similar SF10 run clocking in around seven minutes total. So that's the order of magnitude I would be expecting, given ten times as much data.
> DuckDB:
> - Designed for OLAP use cases. Easy to run with a single binary.
> - Limited support in the data ecosystem (notebooks, BI tools, etc.). Requires manual data syncing and schema mapping for best performance.
Here is the link that briefly describes pros and cons of different alternatives for analytics https://github.com/BemiHQ/BemiDB#alternatives
DuckDB is neat, and I understand why a company like BemiDB would build their product on top of it, but as a prospective customer, embedded databases seem a weird choice for serious workloads when there are other good open-source solutions like Clickhouse available.
I can imagine this product is a very elegant solution for many types of companies/teams/workloads.
Example:
CREATE DATABASE test;
USE test;
CREATE TABLE hackernews_history UUID '66491946-56e3-4790-a112-d2dc3963e68a'
(
`update_time` DateTime DEFAULT now(),
`id` UInt32,
`deleted` UInt8,
`type` Enum8('story' = 1, 'comment' = 2, 'poll' = 3, 'pollopt' = 4, 'job' = 5),
`by` LowCardinality(String),
`time` DateTime,
`text` String,
`dead` UInt8,
`parent` UInt32,
`poll` UInt32,
`kids` Array(UInt32),
`url` String,
`score` Int32,
`title` String,
`parts` Array(UInt32),
`descendants` Int32
)
ENGINE = ReplacingMergeTree(update_time)
ORDER BY id
SETTINGS disk = disk(readonly = true, type = 's3_plain_rewritable', endpoint = 'https://clicklake-test-2.s3.eu-central-1.amazonaws.com/', use_environment_credentials = false);
And you can try it right now. Install ClickHouse:
curl https://clickhouse.com/ | sh
./clickhouse local
Run the query above to attach the table. The table is updated in real time. For example, here is your comment:
:) SELECT * FROM hackernews_history WHERE text LIKE '%Clickhouse is amazing%' ORDER BY update_time \G
Row 1:
──────
update_time: 2024-04-06 16:35:28
id: 39785472
deleted: 0
type: comment
by: mightybyte
time: 2024-03-21 22:59:20
text: I'll second this. Clickhouse is amazing. I was actually using it today to query some CSV files. I had to refresh my memory on the syntax so if anyone is interested:<p><pre><code> clickhouse local -q "SELECT foo, sum(bar) FROM file('foobar.csv', CSV) GROUP BY foo FORMAT Pretty"
</code></pre>
Way easier than opening in Excel and creating a pivot table which was my previous workflow.<p>Here's a list of the different input and output formats that it supports.<p><a href="https://clickhouse.com/docs/en/interfaces/formats" rel="nofollow">https://clickhouse.com/docs/en/interfaces/formats</a>
dead: 0
parent: 39784942
poll: 0
kids: [39788575]
url:
score: 0
title:
parts: []
descendants: 0
Row 2:
──────
update_time: 2024-04-06 18:07:34
id: 31334599
deleted: 0
type: comment
by: richieartoul
time: 2022-05-11 00:54:31
text: Not really. Clickhouse is amazing, but if you want to run it at massive scale you’ll have to invest a lot into sharding and clustering and all that. Druid is more distributed by default, but doesn’t support as sophisticated of queries as Clickhouse does.<p>Neither Clickhouse nor Druid can hold a candle to what Snowflake can do in terms of query capabilities, as well as the flexibility and richness of their product.<p>That’s just scratching the surface. They’re completely different product categories IMO, although they have a lot of technical / architectural overlap depending on how much you squint.<p>Devil is in the details basically.
dead: 0
parent: 31334527
poll: 0
kids: [31334736]
url:
score: 0
title:
parts: []
descendants: 0
Row 3:
──────
update_time: 2024-11-07 22:29:09
id: 42081672
deleted: 0
type: comment
by: maxmcd
time: 2024-11-07 22:13:12
text: Using duckdb and apache iceberg means that you can run read replicas without any operational burden. Clickhouse is amazing, but they do not allow you to mount dumb read replicas to object storage (yet).<p>I can imagine this product is a very elegant solution for many types of companies/teams/workloads.
dead: 0
parent: 42080385
poll: 0
kids: []
url:
score: 0
title:
parts: []
descendants: 0
3 rows in set. Elapsed: 3.981 sec. Processed 42.27 million rows, 14.45 GB (10.62 million rows/s., 3.63 GB/s.)
Peak memory usage: 579.26 MiB.
Query id: daa202a3-874c-4a68-9e3c-974560ba4624
Elapsed: 0.092 sec.
Received exception: Code: 499. DB::Exception: The AWS Access Key Id you provided does not exist in our records. (Code: 23, S3 exception: 'InvalidAccessKeyId'): While processing disk(readonly = true, type = 's3_plain_rewritable', endpoint = 'https://clicklake-test-2.s3.eu-central-1.amazonaws.com/', use_environment_credentials = false). (S3_ERROR)
And how does the real-time update work? Could I make it so that my latest data is incrementally sync'd on S3 (eg "the last 3-months block" is incrementally updated efficiently each time there is new data) ?
Do you have example code / setup for this?
To insert data into ClickHouse, you use the INSERT query to insert data as frequently as you'd like.
Alternatively, you can set up continuous replication from Postgres to ClickHouse, which is available in ClickHouse Cloud.
Since you were running ./clickhouse local, does that mean the query downloaded 14.45 GB out of S3 to your machine? The 3.981 sec seems to imply "no," but I struggle to think what meaning that output would otherwise try to convey.
ReadBufferFromS3Bytes 8.95 GB
The amount of compressed data read from S3 is 8.95 GB. Note: it sounds quite large; it's interesting why compression is less than 2x on this dataset. Most likely, it uses just lz4. I recommend trying it because it works everywhere: you can run ClickHouse on your laptop, on a VM, or even in CloudShell in AWS.
What’s with the avoidance of clickhouse or duckdb paired with insanely fast EBS or even physically attached storage? You can still backup to s3, but using s3 for live analytics queries is missing out on so much of the speed.
- Compute and storage separation simplifies managing a system making compute "ephemeral"
- Compute resources can be scaled separately without worrying about scaling storage
- Object storage provides much higher durability (99.999999999% on S3) compared to disks
- Open table formats on S3 become a universal interface in the data space, allowing you to bring in many other data tools if necessary
- Costs at scale can actually be lower since there is no data transfer cost within the same region. For example, you can check out WarpStream (Kafka on object storage) case studies that claim 5-10x savings
S3 is the cheapest, fully managed storage you can get that can scale infinitely. When you're already archiving to S3, doubling it for analytics saves cost and simplifies data management.
Clickhouse is amazing but I still struggle getting it working efficiently on s3, especially writes.
Clickhouse on local NVMe is one possible solution, but then you are married to that solution. An S3 interface is more universal and allows you to mix and match your tools, even though this comes at some expense.
And ideally have the whole thing open source and be able to run it in CI
We tried peerdb + clickhouse but Clickhouse materialized views are not refreshed when joining tables.
Right now we’re back to standard materialized views inside Postgres refreshed once a day but the full refreshes are pretty slow… the operational side is great though, a single db to manage.
We love the pg_mooncake extension (and pg_duckdb used under the hood), although our approaches are slightly different. Long-term, we want to allow anyone to use BemiDB via native Postgres logical replication without installing any extensions (many Postgres hosting providers impose their own restrictions, upgrading versions might be challenging, OLAP queries may affect OLTP performance if run within the same database, etc.)
P.S. The integration with something like Neon is really cool to see.