But if you take the idea of a universal system for rewrite rules seriously, and add just enough semantics to make it work as a logic, you might arrive at abstraction logic [1]. That is actually a great way of explaining abstraction logic to computer scientists, come to think of it!
The question at hand is motivating, and getting benchmarks for, the different approaches and engines for equational reasoning / rewriting. I sometimes lose sight of motivations and get discouraged. Doing associativity, commutativity, and constant fusion for the nth time, I start to worry it's all useless.
1. AL is more general than first-order rewriting, because it has general binders, that is, general variable-binding constructs (see the sketch after this list).
2. Unlike higher-order logic, which also has general binders, it doesn't have types, so rewrite rules in AL can be taken at face value and have no hidden type annotations in them, just like in first-order rewriting (that is a consequence of AL having a single mathematical universe).
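To illustrate the binder point with a standard higher-order rewriting example (not taken from the AL paper): map fusion needs a bound variable on its right-hand side, so it cannot be stated as a first-order rule:

    map f (map g xs)  ->  map (\x. f (g x)) xs

In a typed higher-order setting the rule additionally carries (often implicit) type information for f, g, xs and the bound x; the claim for AL is that the rule really is just the syntax you see.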
I must admit, I am quite excited about implementing rewriting for AL! My PhD advisor co-authored a book about (mostly) first-order term rewriting [1]. Higher-order rewriting based on the lambda calculus is mentioned briefly in it (and of course implemented in Isabelle), and I am wondering whether rewriting based on AL instead will bring new insights and/or techniques.
[1] Term Rewriting and All That. https://doi.org/10.1017/CBO9781139172752
1. JoGo's OBJ and Maude: algebras on a rewrite engine
https://cseweb.ucsd.edu/~goguen/sys/obj.html
2. SPECWARE: a Category Theory approach from SRI Kestrel Institute
https://www.specware.org/research/specware/
https://github.com/KestrelInstitute/Specware/blob/master/Lib...
But to be a bit more specific, I've been involved in the egraphs community https://github.com/philzook58/awesome-egraphs and we don't currently have a shared database of rewrite rules for benchmarking, nor do we collect the rules from different projects. Seeing the actual files is helpful.
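For concreteness, here is a sketch of what one entry in such a shared rule file could look like (a hypothetical format, loosely modelled on egg-style s-expression patterns with ?-variables, not an existing standard):

    # rules.py -- hypothetical portable rule database: each rule is plain data,
    # a name plus left-hand and right-hand patterns over ?-pattern-variables.
    RULES = [
        ("add-comm",  "(+ ?a ?b)",        "(+ ?b ?a)"),
        ("add-assoc", "(+ (+ ?a ?b) ?c)", "(+ ?a (+ ?b ?c))"),
        ("mul-zero",  "(* ?a 0)",         "0"),
    ]

Keeping the rules as data rather than code is what would make them shareable across engines and usable for benchmarking.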
On a slightly different front, I'm also trying to collect rules and interesting theories in my Python interactive theorem prover Knuckledragger https://github.com/philzook58/knuckledragger . Rewrite-rule- or equational-theory-looking things are easier to work with than things with deeply nested quantifiers or tons of typeclass / mathematical abstraction, like what you find when you try to translate out of Coq, Lean, or Isabelle.
There are also different approaches to declarative rewrite rules in major compilers. GCC, LLVM, and Cranelift have their own declarative rewrite-rule systems for describing instruction selection and peephole optimizations, but of course there is also lots of code that does this programmatically. I also want to collate these sorts of things. Working on fun, clean systems while not confronting the ugly realities of the real and useful world ultimately feels empty. Science is about observing and systematizing. Computer science ought to include occasionally observing what people actually do or need.
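To make the declarative-vs-programmatic contrast concrete, a minimal peephole-style rewriter over a toy tuple IR might look like this (hypothetical rules and IR, not any of those compilers' actual rule languages):

    # A minimal, hypothetical peephole rewriter over a toy tuple IR.
    # Terms are ("op", arg, arg) tuples; leaves are ints or variable names.
    def rewrite(term):
        """Apply a few hard-coded algebraic rules bottom-up, one pass."""
        if not isinstance(term, tuple):
            return term
        op, *args = term
        args = [rewrite(a) for a in args]
        if op == "add" and args[1] == 0:           # x + 0 -> x
            return args[0]
        if op == "mul" and args[1] == 2:           # x * 2 -> x << 1
            return ("shl", args[0], 1)
        if all(isinstance(a, int) for a in args):  # constant folding
            if op == "add":
                return args[0] + args[1]
            if op == "mul":
                return args[0] * args[1]
        return (op, *args)

    assert rewrite(("add", ("mul", "x", 2), 0)) == ("shl", "x", 1)

The declarative systems (e.g. match.pd in GCC, ISLE in Cranelift) essentially factor an if-chain like this out into data that a matcher generator consumes.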
https://fosdem.org/2025/schedule/event/fosdem-2025-6839-ligh...
Also see main working page:
https://www.earth.org.uk/RSS-efficiency.html
I can point you to bits of config (it's a bit scattered in my blogs at the moment) like this:
https://www.earth.org.uk/note-on-site-technicals-83.html#202...
One interesting case for `f` is the Gray code conversion, whose inverse requires a (finite) loop. This generalizes to shifts other than 1, and such maps are common as components of RNGs.
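A minimal sketch in Python, assuming the `f` in question is x ^ (x >> k), where k = 1 is the binary-to-Gray-code conversion:

    def f(x, k=1):
        # k = 1: binary -> Gray code; other k give similar xorshift-style maps
        return x ^ (x >> k)

    def f_inverse(g, k=1, bits=64):
        # x = g ^ (g >> k) ^ (g >> 2k) ^ ...; the doubling loop accumulates
        # that telescoping XOR in O(log(bits/k)) iterations
        s = k
        while s < bits:
            g ^= g >> s
            s *= 2
        return g

    assert all(f_inverse(f(x)) == x for x in range(1 << 12))

The forward map is a single xor, but the inverse needs the (finite) loop; the generalization to shifts k other than 1 is the same kind of xorshift-style map that shows up inside RNGs.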
1: use rust
x = x + x - x
x = x + x - x + x - x
...
you get the point
This isn't deep - there's a reason why ZFC stands out amongst all possible sets of "foundations": in lieu of an analytic cost function (which isn't possible), we have consensus.
It's decidable whether a finite, terminating TRS is confluent, i.e. whether, whenever a normal form ("fixpoint") exists, exactly one exists. Or what do you mean?
What you're saying is akin to "not all functions are interesting". Well, yes...
But that's true for any model of computation.