16 comments

  • rockwotj1 day ago
    Lots of Go-based CDC stuff going on these days. Redpanda Connect (formerly Benthos) recently added support for Postgres CDC [1], and MySQL is coming soon too [2].

    1: https://github.com/redpanda-data/connect/pull/2917

    2: https://github.com/redpanda-data/connect/pull/3014

    • andruby1 day ago
      Sequinstream is written in Elixir and also pretty recent.

      https://github.com/sequinstream/sequin

      Any reason we're seeing so many CDC tools pop up?

      • jitl1 day ago
        The status quo tool Debezium is annoying/heavy because it’s a banana that comes attached to the Java, Kafka Connect, Zookeeper jungle - it’s a massive ecosystem and dependency chain you need to buy into. The Kafka clients I’ve looked at outside of Java-land are all sketchy - in Node, KafkaJS went unmaintained for years, and Confluent recently started maintaining an rdkafka-based client that’s somehow slower than the pure-JS one and breaks every time I try to upgrade it. The Rust Kafka client has months-old issues in the latest release where half the messages go missing and APIs seem to no-op, and any version will SIGSEGV if you hold it wrong - obviously memory unsafe. The official rdkafka Go client depends on system C library package versions “matching up”, meaning you often need a newer librdkafka and libsasl, which is annoying; the unofficial pure-Go one looks decent though.

        Overall the Confluent ecosystem feels targeted at “data engineer” use-cases, so if you want to build a reactive product it’s not a great fit. I’m not sure what performance target the Debezium Postgres connector maintainers have, but I get the sense it’s not ambitious, because there’s so little documentation about performance optimization; the data ecosystem still feels contemporary with the “nightly batch job” kind of thing, versus product people today who want ~0ms latency.

        If you look at backend infrastructure there’s a clear trope of “good idea implemented in Java becomes standard, but re-implementing it in $AOT_COMPILED_LANGUAGE gives a big advantage”:

        - Cassandra -> ScyllaDB

        - Kafka -> RedPanda, …

        - Zookeeper -> ClickHouse Keeper, Consul, etcd, …

        - Debezium -> All these thingies

        There’s also a lot of hype around Postgres right now, so there’s a bit of a VC-funded Cambrian explosion going on, and I think a lot of these will die off as a clear winner emerges.

      • nijave11 hours ago
        >Any reason we're seeing so many CDC tools pop up?

      When I looked for something ~1 year ago to dump changes to S3 (object storage), they all sucked in some way.

      I'm also of the opinion that Postgres gives you a pretty "raw" interface with logical replication, so a decent amount of building is needed, and each person is going to have slightly different requirements/goals.
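
      To give a sense of how raw it is, here's a minimal sketch (Go with the lib/pq driver; the slot name is made up, and it assumes wal_level=logical plus a role with replication privileges) that creates a logical slot with the built-in test_decoding plugin and polls it for changes. Everything a real tool needs on top of this - typed events, snapshots/full load, TOAST handling, schema changes - is yours to build:

        package main

        import (
            "database/sql"
            "fmt"
            "log"

            _ "github.com/lib/pq"
        )

        func main() {
            // Needs a role with the REPLICATION attribute (or superuser).
            db, err := sql.Open("postgres", "dbname=app sslmode=disable")
            if err != nil {
                log.Fatal(err)
            }
            defer db.Close()

            // One-time: create a logical slot using the built-in test_decoding plugin.
            if _, err := db.Exec(`SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding')`); err != nil {
                log.Fatal(err)
            }

            // Poll the slot; each `data` row is a plain-text description of one change.
            // (Real tools speak the streaming replication protocol instead of polling.)
            rows, err := db.Query(`SELECT data FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL)`)
            if err != nil {
                log.Fatal(err)
            }
            defer rows.Close()
            for rows.Next() {
                var change string
                if err := rows.Scan(&change); err != nil {
                    log.Fatal(err)
                }
                fmt.Println(change)
            }
        }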

      I haven't looked recently, but hopefully these do a better job handling edge cases like TOAST'd values, schema changes, and ideally full load.

    • mebcitto1 day ago
      Of the newish Go-based Postgres CDC tools, I know about:

      * pgstream: https://github.com/xataio/pgstream

      * pg_flo: https://github.com/pgflo/pg_flo

      Are there others? Each of them has slightly different angles and messaging, but it is interesting to see.

  • dewey1 day ago
    I’ve recently looked into tools like that: I have a busy Postgres table with a lot of updates on one column, and it’s overwhelming Debezium.

    I’ve tried many things, and looked into excluding that column from replication with a publication filter, but the updates still cause “events”.

    Anyone have some pointers on CDC for busy tables?

    • hundredwatt1 day ago
      That’s one of the cases where query-based CDC may outperform log-based CDC (as long as you don’t care about seeing every intermediate change that happened to a row between syncs).
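
      For illustration, a rough sketch of the query-based approach (Go; busy_table and its indexed, reliably-set updated_at column are hypothetical): poll on a watermark and ship only the latest version of each changed row, which is exactly the trade-off above:

        package main

        import (
            "database/sql"
            "log"
            "time"

            _ "github.com/lib/pq"
        )

        func main() {
            db, err := sql.Open("postgres", "dbname=app sslmode=disable")
            if err != nil {
                log.Fatal(err)
            }
            defer db.Close()
            pollChanges(db)
        }

        // pollChanges picks up rows modified since the last watermark. Intermediate
        // updates between polls collapse into the latest version of the row, and
        // deletes are invisible unless you soft-delete.
        func pollChanges(db *sql.DB) {
            since := time.Time{} // persist this watermark durably in practice
            for {
                rows, err := db.Query(
                    `SELECT id, payload, updated_at FROM busy_table WHERE updated_at > $1 ORDER BY updated_at`,
                    since)
                if err != nil {
                    log.Fatal(err)
                }
                for rows.Next() {
                    var id int64
                    var payload string
                    var updatedAt time.Time
                    if err := rows.Scan(&id, &payload, &updatedAt); err != nil {
                        log.Fatal(err)
                    }
                    // "Ship downstream" stand-in: just log the latest version.
                    log.Printf("row %d changed at %s: %s", id, updatedAt, payload)
                    if updatedAt.After(since) {
                        since = updatedAt
                    }
                }
                rows.Close()
                time.Sleep(5 * time.Second)
            }
        }
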
  • mitchbregs1 day ago
    Good to see another Postgres CDC solution. I have used both Debezium and PeerDB before. Currently, I am using PeerDB at work to replicate data to ClickHouse and have been loving the experience. The pace at which it performs the initial load is impressive, and the replication latency to ClickHouse is just a few seconds. I haven’t tested PeerDB with other targets, such as Kafka, where the latency might be lower.
  • gregw21 day ago
    I like the "type=history" mode which can auto-build a slowly changing dimension ("SCD type 2") for you; more CDC solutions should do that: https://streamer.kuvasz.io/streaming-modes/

    That said, their implementation is kinda poor since it allows overlapping dates for queries when a row gets updated multiple times per day. When you SQL join to that kind of SCD2 by a given date you can easily get duplicates.

    This can be avoided by A) updating old rows to end-date yesterday rather than today, and B) if a row begins and ends on the same day, the start date or end date can be NULL or a hardcoded ancient or far-future end-date, such as having the record from "2023-01-01 to 2023-01-01" instead be "2023-01-01 to 0001-01-01". Those rows won't show up in joins, but the change remains visible/auditable and you do get the last row available for every given date (and only one such row.)
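
    A sketch of that scheme (Go with database/sql; the dim_customer table, the business_key/valid_from/valid_to DATE columns, and the '9999-12-31' current-row marker are all hypothetical), applied inside a transaction whenever a change arrives:

      package scd2

      import (
          "database/sql"
          "time"
      )

      // applySCD2Change closes out the current row with yesterday's date (A) and,
      // if that leaves a row starting and ending on the same day, pushes its end
      // date to the 0001-01-01 sentinel (B) so date-range joins skip it while the
      // change stays auditable. '9999-12-31' marks the current row.
      func applySCD2Change(tx *sql.Tx, key, attrs string, changeDate time.Time) error {
          yesterday := changeDate.AddDate(0, 0, -1)

          // A) End-date the current row to yesterday, not today.
          if _, err := tx.Exec(`UPDATE dim_customer SET valid_to = $2
                                 WHERE business_key = $1 AND valid_to = '9999-12-31'`,
              key, yesterday); err != nil {
              return err
          }

          // B) Same-day rows (valid_from now > valid_to) get the ancient sentinel.
          if _, err := tx.Exec(`UPDATE dim_customer SET valid_to = '0001-01-01'
                                 WHERE business_key = $1 AND valid_from > valid_to`,
              key); err != nil {
              return err
          }

          // Insert the new current version.
          _, err := tx.Exec(`INSERT INTO dim_customer (business_key, attrs, valid_from, valid_to)
                             VALUES ($1, $2, $3, '9999-12-31')`,
              key, attrs, changeDate)
          return err
      }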

  • datadeft1 day ago
    Kuvasz is a Hungarian dog breed if anyone is wondering.
  • davidarenas1 day ago
    What kind of delivery guarantees does this offer? And does it provide data replay?

    Currently evaluating https://sequinstream.com/ which claims sub-200ms latency, but has a lot of extras that I don’t need and a lighter weight alternative would be nice.

    • jitl1 day ago
      I looked at Sequin just now and wish they published their benchmark code. I’m curious how they configure Debezium. At 500 changes/s, I get ~90ms average latency with my Debezium cluster after fiddling with Debezium options a bunch.
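
      For reference, the sort of knobs that fiddling involves - illustrative values only, not necessarily what anyone runs in production - registering a Postgres connector through the Kafka Connect REST API (hostnames and credentials are placeholders):

        package main

        import (
            "bytes"
            "encoding/json"
            "log"
            "net/http"
        )

        func main() {
            connector := map[string]any{
                "name": "pg-cdc",
                "config": map[string]string{
                    "connector.class":   "io.debezium.connector.postgresql.PostgresConnector",
                    "plugin.name":       "pgoutput",
                    "database.hostname": "db.internal",
                    "database.port":     "5432",
                    "database.user":     "debezium",
                    "database.password": "changeme",
                    "database.dbname":   "app",
                    "topic.prefix":      "app", // database.server.name on older Debezium
                    // The knobs that tend to matter for end-to-end latency:
                    "poll.interval.ms": "10", // how long the connector waits between polls (default 500)
                    "max.batch.size":   "2048",
                    "max.queue.size":   "8192",
                },
            }
            body, _ := json.Marshal(connector)
            resp, err := http.Post("http://connect:8083/connectors", "application/json", bytes.NewReader(body))
            if err != nil {
                log.Fatal(err)
            }
            defer resp.Body.Close()
            log.Println("kafka connect responded:", resp.Status)
        }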

      I don’t love Debezium, but I also don’t love Erlang; plenty of us have scars from RabbitMQ’s weird sharp edges…

      • _acco1 day ago
        Sequin engineer here. We'll publish our benchmark repo soon! Indeed, we're still doing a lot of fiddling with Debezium ourselves to make sure we cover different configurations, deployments, etc.

        The main thing we want to communicate is that we're able to keep up with workloads Debezium can handle.

        (And, re: RabbitMQ, I wouldn't write off a platform based on a single application built on that platform :) )

    • __s1 day ago
      You can get single-digit (millisecond) latency with almost anything for CDC-to-queue if you properly colocate your services and your CDC ingestion keeps up with the Postgres slot.
      • davidarenas1 day ago
        Should have specified; this is for tracking bursts of up to 1.5k TPS, not in the same AZ or even VPC.
  • CAP_NET_ADMIN2 days ago
    Does anyone know a battle-tested tool that would help with (almost) online migrations of PostgreSQL servers to other hosts? I know it can be done manually, but I'd like to avoid that.
    • nijave10 hours ago
      In my experience, everyone's setup is slightly different so it's hard to find a generic solution. pgcopydb is pretty good

      I can't remember the name but I saw a Ruby based tool on Hacker News a few months ago that'd automate logical rep setup and failover for you

    • andruby1 day ago
      PG's built-in WAL-level replication? Replicate the primary to a read replica, then switch the read replica to become the primary. You'll have a bit of downtime while you stop connections on the original server, promote the new server to primary, and update your app config to connect to the new server.

      I believe that's a pretty standard way to provide "HA" postgres. (We use Patroni for our HA setup)

      https://github.com/patroni/patroni
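
      A rough sketch of the cutover step (assuming Postgres 12+, where pg_promote() exists, and that the old primary has already been stopped; hostnames are made up):

        package main

        import (
            "database/sql"
            "log"

            _ "github.com/lib/pq"
        )

        func main() {
            // Connect to the streaming replica that is about to take over.
            replica, err := sql.Open("postgres", "host=replica.internal dbname=app sslmode=disable")
            if err != nil {
                log.Fatal(err)
            }
            defer replica.Close()

            // Promote the replica; pg_promote() waits for promotion to finish (or time out).
            var promoted bool
            if err := replica.QueryRow(`SELECT pg_promote()`).Scan(&promoted); err != nil || !promoted {
                log.Fatalf("promotion failed: %v", err)
            }

            // Sanity check: the node should no longer be in recovery.
            var inRecovery bool
            if err := replica.QueryRow(`SELECT pg_is_in_recovery()`).Scan(&inRecovery); err != nil {
                log.Fatal(err)
            }
            log.Printf("in recovery: %v; now repoint the app at the new primary", inRecovery)
        }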

      • dikei1 day ago
        We use the same setup, though with PgBouncer, so after switching the primary we just force-reconnect all clients from PgBouncer instead.

        The clients will have to retry ongoing transactions, but that's a basic fault-tolerance requirement anyway.

    • hans_castorp1 day ago
      pglogical can do that (or at least minimize the manual steps as much as possible)

      I am not entirely sure, but I think CloudNativePG (a Kubernetes operator) can also be used for that.

  • zsoltkacsandi2 days ago
    From the domain I suspect it’s a fellow Hungarian developer?
    • _zoltan_1 day ago
      I was surprised to learn: no. So why pick a Hungarian name? A mystery.
  • boomskats1 day ago
    Are there any benchmarks documented for this, possibly comparing it to alternatives like pgstream or debezium? The "Test report" link on the website[0] returns a 404.

    [0]: https://streamer.kuvasz.io/report.html

  • mdaniel1 day ago
    One will want to be cognizant of its AGPLv3 license https://github.com/kuvasz-io/kuvasz-streamer/blob/v1.19.2/LI...
    • withinboredom1 day ago
      This is becoming pretty standard for these types of projects that want to be open source but don't want to end up finding out their product got sucked up by AWS and friends. Usually, they offer a commercial license as well.
      • immibis1 day ago
        It should have always been standard, but corps like AWS managed to convince developers to give them free labour, for a while.
    • 1oooqooq20 hours ago
      Why would you want to keep infra helper code private anyway? Is it that core to your business?
  • rbanffy1 day ago
    When I was a teenager, my uncle had a Kuvasz. She was the most loving dog I ever met, and would easily win a Miss Congeniality contest against any Golden Retriever.
  • arcticfox1 day ago
    Looks nice! How does this compare to PeerDB?
  • rednafi2 days ago
    Love seeing a CDC tool that doesn't aim to replicate the universe and then fail to do any of it correctly.
  • olavgg1 day ago
    How does it compare versus Debezium?
    • dikei1 day ago
      It'd be more similar to Debezium Server, which runs everything in the same process, than to regular Kafka Connect-based Debezium.

      However, this only does Postgres-to-Postgres, so it's a lot more limited compared to Debezium.

  • sushidev2 days ago
    Seems very useful. Can't this stuff already be done with pg replication?
    • jitl1 day ago
      I think it’s targeted at [many -> one] database consolidation, versus Postgres replication which is more suited to [one -> one]. I’m sure you can do [many -> one] with just Postgres and some shenanigans, but it’s probably quite Rube Goldberg. Also, in Postgres <16 you can’t do logical replication off a replica; using a CDC tool to pipe data around, you don’t have that restriction.