111 points | by gnabgib | 1 month ago
BunnySDK.net.http.serve(async (request: Request): Promise<Response> => {
  return new Response("Hello World");
});
I love that pretty much all the JS runtimes have settled on `(Request): Response`[0], but I really wish they would standardize starting the server as well. Would make writing cross-runtime services easier.

Seems pretty good on paper. There's no free allowance like you get with Workers though.
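For what it's worth, the divergence really is only in how you start the server. A quick sketch of the same handler across a few runtimes (the start calls are from each runtime's own docs and belong in separate entry files, so they're shown as comments; not Bunny-specific beyond the line quoted above):

// The shared, standard handler shape the runtimes have converged on.
const handler = (request: Request): Response | Promise<Response> =>
  new Response("Hello World");

// How each runtime starts a server with that same handler:
// Deno:                Deno.serve(handler);
// Bun:                 Bun.serve({ fetch: handler });
// Cloudflare Workers:  export default { fetch: handler };
// Bunny (from TFA):    BunnySDK.net.http.serve(handler);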
Another positive side effect: you'd then have paid dual redundancy too.
Backblaze is another neighbour that plays nice with Bunny.
Cloudflare is effectively impossible to compare because it's all "free until you get an email from sales".
I lead the Supabase Edge Functions product, a similar offering also built on top of the Deno runtime. We have open-sourced our runtime (https://github.com/supabase/edge-runtime), and it's self-hostable. It supports npm, Node built-ins, pluggable storage, and web sockets. We also have a built-in API for AI inference (https://supabase.com/blog/ai-inference-now-available-in-supa...)
Supabase Edge Runtime is easy to self-host (works great as a multi-threaded JS web server). We love community contributions :) Let us know if you would like to collaborate.
That's the one place where I can dock some points for Bunny.
If anybody from Bunny is reading this, what's the ETA?
These days "edge" more commonly refers to the "edge of the cloud", i.e. still a datacenter, just not in us-east-1.
Serverless also doesn't mean no servers; it means no sysadmins.
Serverless is definitely a misnomer, but it means that you don't 'own' the server your thing is running on: there are some restrictions, and you can't run everything you could on an actual VPS or hardware box. So in a way the server is abstracted away. You just use resources, but those could be anywhere, running on any node of the edge network.
Turns out, they meant installing modems in people's houses. Edge, it would seem, is a very versatile buzzword.
Serverless really should mean the client does the work, but it seems pretty equivalent to shared hosting. Dreamhost (and the shell account you used to get with an ISP!) was serverless before it was cool?
I'm aware that what they usually mean is significantly less interesting.
Cloudflare is building an insanely good platform, and I think it is one worth betting on into the future. I have no idea where this company came from. Maybe it's a rebrand, because they seem to have a serious customer base and perhaps a network footprint.
Bunny's ~119 PoPs are significantly fewer (less than half) than Cloudflare's presence, and Cloudflare has queues, streaming, D1 (a database), R2, and all sorts of other things. Workers' DX can't be beaten.
Just my 2c. If the creators are here, I'd love to know why you decided to design a new API. That is so upsetting.
Cloudflare doesn't execute workers in all their PoPs.
I'm in central Mexico and my workers execute in DFW even though there's a Cloudflare PoP not even 30 mins away from here (QRO).
Yes we do!
> I'm in central Mexico and my workers execute in DFW even though there's a Cloudflare PoP not even 30 mins away from here (QRO).
I think you will find that even if you turned off Workers, your site would still be routed to DFW. Some of our colos don't have enough capacity to serve all traffic in their local region, so we selectively serve a subset of sites from that colo and reroute others to a bigger colo further away. There are a lot of factors that go into the routing decision but generally sites on the free plan or lower plan levels are more likely to be rerouted. In any case, the routing has absolutely nothing to do with whether you are using Workers. Every single machine in our edge network runs Workers and is prepared to serve traffic for any site, should that traffic get routed there.
(Additionally, sometimes ISP network connectivity doesn't map to geography like you'd think. It's entirely possible that your ISP has better connectivity to our DFW location than the QRO location.)
The CDN does cache stuff on QRO often but Workers and KV are a completely different story.
We're not on the free plan. We pay both for Workers and the CF domain plan.
Maybe all PoPs have the technical capacity to run Workers, but if for whatever reason they don't actually run them there, then that capacity is irrelevant.
I don't know of any way that requests to the same hostname could go to QRO for cache but not for Workers. Once the HTTP headers are parsed, if the URL matches a Worker, that Worker will run on the same machine. This could change in the future! We could probably gain some efficiency by coalescing requests for the same Worker onto fewer machines. But at present, we don't.
I do believe you that you haven't seen your Workers run in QRO, but the explanation for that has to be something unrelated to Workers itself. I don't know enough about your configuration to know what it might be.
> Not all sites will be in all cities. Generally you’re correct that Free sites may not be in some smaller PoPs depending on capacity and what peering relationships we have.
https://x.com/eastdakota/status/1254118993188642816
> The higher the plan the higher the priority, so if capacity is an issue (for whatever issue, from straight up usage to DDoSes) free sites will get dropped from specific locations sooner. Usually you will still maintain the main locations.
https://x.com/itsmatteomanf/status/1261028088919609352
So I ended up getting a paid plan, but the behavior still hasn't changed. I've tried with different ISPs and locations, and I've never seen a Worker execute in Mexico (QRO, GDL, MEX) or any of the US PoPs closer than DFW (MFE, SAT, AUS, IAH).
I think it's pretty good, but yeah, not ideal. I'm also building a product on Workers, using D1, KV, R2, and Queues, and am pretty happy with the DX. Running remote previews is pretty neat.
If you read the article: Bunny uses Deno, while CF uses a cut-down version of Chromium (each instance is like a browser tab; isolated). Thus the API difference.
But I do agree, CF is building out more of a suite.
workerd is open source: https://github.com/cloudflare/workerd
I personally am not a fan of Deno because of how it split the Node.js ecosystem, so that is not a benefit in my eyes. Of course, Workers can run Rust.
Nothing you said here necessitates an API difference.
Yeah, but the headache is usually from database, cache and other shared resource servers.
Scaling HTTP has been very easy for most applications for the last 15 years or so.
I have to confess I really don't see the appeal of edge workers in general outside of specific applications where latency is of high concern. Such applications do exist, of course, but this kind of offering is treated so generally that I feel like I'm either immune to the marketing or I'm missing something important.
I agree, it mostly seems like a fad/gimmick.
+ A/B testing (see the sketch after this comment)
+ cookie warnings just for the EU but not everyone else
+ proxy; helpful if you want to hide where your API is hosted, or a username/pass
+ route redirects
+ take some workload off your server
+ mini applets (e.g. signup forms are a great edge use case)
ref: this is my old repo: https://github.com/lukeed/awesome-cloudflare-workers
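To make one item on that list concrete, here's a rough sketch of the A/B-testing-plus-proxy case written against the standard (Request) => Response handler. The cookie name and the two origin hosts are made up for illustration, and it only forwards the method and headers (no request body) to keep things short:

async function handle(request: Request): Promise<Response> {
  // Reuse an existing bucket if the visitor already has one, else assign one.
  const cookies = request.headers.get("cookie") ?? "";
  let bucket = /ab_bucket=(a|b)/.exec(cookies)?.[1];
  const isNewVisitor = bucket === undefined;
  if (isNewVisitor) bucket = Math.random() < 0.5 ? "a" : "b";

  // Proxy the request path to one of two (hypothetical) upstreams.
  const origin = bucket === "a" ? "https://a.example.com" : "https://b.example.com";
  const url = new URL(request.url);
  const upstream = await fetch(origin + url.pathname + url.search, {
    method: request.method,
    headers: request.headers,
  });

  // Re-wrap the response so the bucket can be pinned with a cookie.
  const response = new Response(upstream.body, {
    status: upstream.status,
    headers: new Headers(upstream.headers),
  });
  if (isNewVisitor) {
    response.headers.append("set-cookie", `ab_bucket=${bucket}; Path=/; Max-Age=2592000`);
  }
  return response;
}

Whichever runtime you deploy on then registers it, e.g. BunnySDK.net.http.serve(handle) from the snippet at the top of the thread.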
I spent 2 evenings brainstorming this, but haven’t come up with anything.
Compared to Cloudflare Workers, which has free bandwidth, Bunny's bandwidth is not that cheap at $0.01/GB.
So while their example suggests stream-encoding video is possible, it would probably be cost-prohibitive (at $0.01/GB, pushing 100 TB of video in a month is about $1,000 in egress alone).
* Pun intended.
This is very exciting.
Anyone?
As far as app logic goes, it depends on how much you can get the workers to do in their allotted time (which is short, IIRC), so yeah, IMO you still need heavier resources in a DC.