I know this type of work can be challenging, to say the least. My own dabbling with Zig and PinePhone hardware drivers reminded me of some of the pain of poorly documented hardware, but what a reward when it works.
My own M1 was only purchased because of this project and Alyssa's efforts with OpenGL+ES. It only ever boots Asahi Linux. Thank you very much for your efforts.
Second, since it's open source, Apple themselves are probably paying attention; I didn't read the whole thing because it's going over my head, but she discussed missing features in the chip that are being worked around.
When considering longevity, I will agree that Thinkpads are probably the only device that can compete with MacBooks. But there are trade-offs.
MacBooks are going to be lighter, have better battery life, and have better displays. Not to mention macOS, which is my preferred OS.
Thinkpads usually have great Linux support and swappable hardware for those who like to tinker with their devices. They also tend to be more durable, but this adds more weight.
This is no longer true for me. I've been an Apple fan since the Apple ][ days, and reluctantly left the ecosystem last year. The hardware walled garden, with soldered-on components and parts tied to specific units for ostensible privacy and security reasons (I don't buy those reasons), combined with the steadily degrading attention to detail in OS polish, meant that for me personally I could no longer justify the cognitive load of continuing with a Mac laptop as my daily driver. While others might point to a cost and/or value differential, I'm in the highly privileged position of being insensitive to those factors.
The last straw was a board-soldered SSD that quit well before I was willing to upgrade, and even Louis Rossmann's shop said it would cost far more to desolder and solder on a new one than the entire laptop is worth. I bought a Framework the same day; when it arrived I restored my data files to it and have been running it as my daily driver ever since. The Mac laptop is still sitting here, as I keep hoping to find time to develop my wave soldering skills and try my hand at saving it from the landfill, or else break down and unsustainably pay for the repair (I do what I can to avoid perpetuating dark patterns, but it is a Sisyphean effort).
I found myself having to think more and more about working around the Mac ecosystem instead of working invisibly within it (like a fish in water not having to think about water), to the point that it no longer made sense to stick with it. It has definitively lost the "It Just Works" polish that bound me so tightly to the ecosystem in the past. I see no functional difference in my daily work patterns using a Mac laptop versus a Framework running Fedora.
To be sure, there are a lot of areas I have to work around on the Framework-Fedora daily driver, but for my personal work patterns and needs, I judged them to cost roughly the same amount of time and cognitive load I spent on the Mac. Maybe Framework-Fedora is slightly worse, but it's close enough that I'd rather throw my hat into the more open ring than into the increasingly closed walled garden Apple's direction is definitely taking us toward, which does not align with my vision for our computing future. It doesn't hurt that experimenting with local LLMs and the various DevOps tooling for my work's Linux-based infrastructure is far easier and more frictionless on Fedora for me, though YMMV for certain. It has already been an interesting journey; it has been fun so far and brought back some fond memories of my early Apple ][, Macintosh 128K, and Mac OS X days.
I still have an old iPhone 8 that I test with, and it still runs well. In the same timeframe I've had numerous Android devices die, slow to a crawl, or at best become erratic in performance.
You also don't have to get the base model. You can stay under $1000 while increasing to 24 GB of RAM.
Memory (both system and GPU) is usually the best thing to spend on to future-proof a computer at purchase time, especially now that it's not user-replaceable anymore.
It got worse with Gen4/5 which now have an awful hump (reverse notch) like a smartphone.
The long life of the X220 depends on the build quality, but also on the five-year replacement-part support: new batteries and a new palm rest (cracked during a journey). It's not just quality you pay for, it's this level of support. And of course more memory. Apple still fails in this regard and barely does anything unless forced by the European Union. Anyway, Apple doesn't officially support Linux, so I cannot buy them for work.
This is the part which saddens me: they do good work, and yet the next MacBook will still not run fully with Linux. This kind of catch-up game by hackers cannot be won until the vendor decides you're a valuable customer. Therefore, don't buy them on the assumption that you can run Linux. Maybe you can. But these devices are made for macOS only.
But if you want to run Linux on a MacBook? Talk to your politicians! And send "messages" with your money to Apple, for example by buying ThinkPads, Dell's Developer Edition, Purism, System76, and so on :)
Just curious, how does it feel better? My framework apparently has an aluminium lid and a magnesium base, and the mg feels “smoother” than the slightly more textured al… however my iPad is apparently aluminium too and is smooth to the touch.
The hardware on Thinkpad T-models should last longer than just 5 years in general.
My daily-driver laptop at home is a T420 from 2011 with a Core 2 Duo, SSD and 8GB RAM. Works fine still.
I run Linux + OpenBox, so it is a somewhat lightweight setup to be fair.
I am not sure I would be productive with that. Any Core 2 Duo is 10x slower single core and 20x slower multi-core than a current generation laptop CPU at this point.
Eg: https://browser.geekbench.com/v6/cpu/compare/8588187?baselin...
I think it would mostly be good as an SSH terminal, but doing any real work locally on it seems frankly unfeasible.
I do development and DevOps on it. Sure there are some intense workloads that I probably couldn’t run, but it works just fine as my daily driver.
I also have a corporate/work laptop from Dell with 32GB RAM, 16 cores @ 4.x GHz etc. - a beast - but it runs Windows (+ antivirus, group policy crap etc.) and is slower in many aspects.
Sure I can compile a single file faster and spin up more pods/containers etc. on the Dell laptop, but I am usually not constrained on my T420.
I generally don’t spend much time waiting for my machine to finish things, compared to the time I spend e.g. writing text/code/whatever.
So, yes, a lot of this comes down to software and a massive waste of cycles. I remember one bug in Electron/Atom where a blinking cursor caused something like 10% CPU load. They fixed it, but it tells you a lot about how broken the entire software stack was at the time, and it hasn't gotten better since.
I mean, think about this: I used 1280x1024 on a 20" screen back in the mid '90s on (Unix!) machines that are insanely less powerful than even this X200s. The biggest differences: now you can move windows around visually, whereas back then you moved the outline of the window to the new place and it got redrawn there. The formatting options in browsers are better, i.e. it is easier to design the layout you want. And there is no need for palette changes when switching windows anymore ("true color"). Overall productivity hasn't kept up with the increase in computing power, though. Do you think a machine with 100x the performance will give you 100x the productivity? With some exceptions, the weak link in the chain was, is, and always will be humans, and when there are delays, we are almost always talking about badly "optimized" software (aka bloat). That was an issue back then already and, unfortunately, it hasn't gotten better.
I'm usually fairly careful with my things, so my gen 8 HP EliteBook still has all its bits together, but I've never really enjoyed using it. The screen, in particular, has ridiculous viewing angles, to the point that it's impossible not to have a color cast on some region of it.
I considered upgrading, but it's hard to care because my M1 is just so good for what I need it for.
Are your laptops not lasting 10 years? (battery swaps are a must though)
The only reason I switched laptops was that I wanted to do AI Art and local LLMs.
I have so many old laptops and desktops that each of my 5 kids have their own. They are even playing half-modern games on them.
In the case of MacBooks, it's the fact that they refuse to provide an official GPU driver for Linux, and generally poor support for anything outside the walled garden. The Asahi stuff is cool and all, but come on: is a 3.4 trillion dollar company really going to just stand there and watch some volunteers struggle to support their undocumented hardware without doing anything substantial to help? That sounds straight-up insulting to me, especially for such a premium product.
For iPhones, it's the fact that you are not allowed to run your own code on YOUR OWN DEVICE without paying the Apple troll toll and passing the honestly ridiculous App Store requirements.
And of course, in both cases, they actively sabotage third party repairs of their devices.
As someone who's been coding for more than 20 years, the happiest and most depressing moments in my career both came during a hardware project I participated in for only 4 months.
It's basically all emulated. One of the reasons GPU manufacturers are unwilling to open source their drivers is because a lot of their secret sauce actually happens in software in the drivers on top of the massively parallel CUDA-like compute architecture.
Calling it "all emulated" is very very far from the truth.
You can independently verify this by digging into open source graphics drivers.
I have signed NDAs and don't feel comfortable going into any detail, other than saying that there is a TON going on inside GPUs that is not "basically all emulated".
I don't know why you think there's anything resembling "good stories" (I don't even know what would constitute a good story; swashbuckling adventures?). It's just grimy-ass runtime/driver/firmware code interfacing with hardware features/flaws.
EDIT: to be precise, yes, of course every chip is a massively parallel array of compute units, but CUDA has absolutely nothing to do with it, and no, not every company buries the functionality in the driver.
It is also true, however, that advances in APIs and HW designs mean that some parts that were troublesome at the time of geometry shaders are not so troublesome anymore.
But mesh shaders are fairly new, will take a few years for the hardware and software to adapt.
AMD GPUs have them starting with RDNA 2.
Do they? I can't remember ever seeing any mention of Geometry Shader performance in a GPU review I've read/watched. The one thing I've ever heard about it was about how bad they were.
With Apple's knowledge of the internal documentation, they are the best positioned to produce an even better low-level implementation.
At this point the main roadblock is the opinionated stance that porting to Metal is the only officially supported way to go.
If Valve pulls off a witch-crafted way to run AAA games on Mac without Apple's support, that would be an interesting landscape. And it might force Apple to reconsider their approach if they don't want to be cornered on their own platform...
https://youtu.be/pDsksRBLXPk?t=2895
The whole thing is worth watching to be honest, it's a privilege to watch someone share their deep knowledge and talent in such an engaging and approachable way.
In fairness to you I think a lot of the stuff involving hardware goes over everyone's heads :D
I've seen comments in a number of articles (and I think a few comments in this thread) saying that there are a few features in Vulkan/OpenGL/Direct3D that were standardized ("standardized" in the D3D case?) or required, but that turned out to be really expensive to implement, hard to make fast in hardware anyway, and not necessarily useful in practice. I think geometry shaders may have been one of those cases, but I can't recall for sure.
> Where is it appropriate to post a subscriber link?
> Almost anywhere. Private mail, messages to project mailing lists, and blog entries are all appropriate. As long as people do not use subscriber links as a way to defeat our attempts to gain subscribers, we are happy to see them shared.
In order to subscribe people need to know that LWN exists.
FWIW an LWN subscription is pretty affordable and supports some of the best in-depth technical reporting about Linux and linux-related topics available.
(I am not affiliated with LWN, just a happy subscriber - I also credit some of my career success to the knowledge I've gained by reading their articles).
So n=1 it’s an effective advertising tactic even though I can read the specific article for free.
"I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Montreal for XDC."
The LWN paywall is unique in that all the content becomes freely available after a week. The subscriber links are there to encourage you to subscribe if you are in a position to do so.
> The following subscription-only content has been made available to you by an LWN subscriber.
I might be wrong but I read that as there being funding to make the previously paywalled content available, probably on an article-specific basis. Does anyone know?
> What are subscriber links
> A subscriber link is a mechanism by which LWN subscribers may grant free access to specific LWN articles to others. It takes the form of a special link which bypasses the subscription gate for that article.
Users here seem to not care about those "ethics"
It's a pretty absurd expectation.
...so far. The presenter is only 23 apparently. Maybe I'm speaking only for myself here, but I think career unhingedness does not go down over time as much as one might hope.
In all seriousness, she does really impressive work, so when she says this 2,000 lines of C++ is inscrutable, that gives one pause. Glad it's working nonetheless.
The original 2600+ lines of C++: https://gitlab.freedesktop.org/asahi/mesa/-/blob/main/src/ga...
The translated code: https://gitlab.freedesktop.org/asahi/mesa/-/blob/main/src/as...
A lot of the size is just because the code deals with a lot of 3/4-dimensional stuff, and also because some things that are a bit more verbose in code translate to something short in assembly.
Have to say I do enjoy all the old school style whimsy with the witch costume and whatnot.
I tried googling, but trying to find the specific result I'm interested in amongst all the blog spam garbage related to powerpoint is beyond me. Even googles own AI couldn't help. Sad times!
Maybe marcan42 just wants to present themselves that way.
But he clearly doesn't want to be linked/equated to her for whatever reason, so I don't know why GP brought this up beyond stirring up drama.
It's truly stunning that anyone could do what she did, let alone a teenager (yes I know, she's not a teenager anymore, passage of time, etc :D)
oh my.
> Since this was going to be the first Linux Rust GPU kernel driver, I had a lot of work ahead! Not only did I have to write the driver itself, but I also had to write the Rust abstractions for the Linux DRM graphics subsystem. While Rust can directly call into C functions, doing that doesn’t have any of Rust’s safety guarantees. So in order to use C code safely from Rust, first you have to write wrappers that give you a safe Rust-like API. I ended up writing almost 1500 lines of code just for the abstractions, and coming up with a good and safe design took a lot of thinking and rewriting!
Also https://github.com/AsahiLinux/linux/blob/de1c5a8be/drivers/g... where "drm" is https://en.wikipedia.org/wiki/Direct_Rendering_Manager
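For a feel of what such an abstraction looks like, here is a minimal sketch in the same spirit. None of these names are the real kernel or Asahi DRM symbols; they are made up for illustration. The idea is that a raw C allocate/free pair is hidden behind a Rust type whose Drop impl guarantees cleanup, so the rest of the driver never touches the unsafe calls directly.

```rust
// Hypothetical sketch of wrapping a C API in a safe Rust abstraction.
// The `bindings` module stands in for auto-generated (bindgen-style) FFI
// declarations; none of these names are actual kernel/DRM symbols.
mod bindings {
    #[allow(non_camel_case_types)]
    #[repr(C)]
    pub struct drm_gem_object {
        _private: [u8; 0],
    }

    extern "C" {
        // Returns a raw pointer that may be null on allocation failure.
        pub fn gem_object_create(size: usize) -> *mut drm_gem_object;
        pub fn gem_object_free(obj: *mut drm_gem_object);
    }
}

/// Safe owner of a GEM object: creation checks for null, and Drop
/// guarantees the C-side free is called exactly once.
pub struct GemObject {
    ptr: core::ptr::NonNull<bindings::drm_gem_object>,
}

impl GemObject {
    pub fn new(size: usize) -> Result<Self, ()> {
        // SAFETY: the C function either returns a valid object or null.
        let raw = unsafe { bindings::gem_object_create(size) };
        core::ptr::NonNull::new(raw)
            .map(|ptr| GemObject { ptr })
            .ok_or(())
    }
}

impl Drop for GemObject {
    fn drop(&mut self) {
        // SAFETY: `ptr` came from gem_object_create and is freed only here.
        unsafe { bindings::gem_object_free(self.ptr.as_ptr()) }
    }
}
```

Multiply that pattern (plus lifetimes, locking rules, and error paths) across a whole subsystem and you can see where "almost 1500 lines of code just for the abstractions" comes from.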
That's incredibly arrogant. The whole industry is adopting ray tracing, and it is a very desired feature people are upgrading video cards to get working on games they play.
So I think calling it "a bit of a gimmick" is accurate for many of the games it shipped in, even if not all of them.
Replacing all that effort with raytracing and having one unified lighting system would be a _major_ time saver, and would allow much more dynamic lighting than was previously possible. So yeah, some current games don't look much better with RT, but their gameplay and art direction were designed without raytracing in mind in the first place, and had a _lot_ of work put into them to get those results.
Sure fully pathtraced graphics might not be 100% usable currently, but the fact that they're even 70% usable is amazing! And with another 3-5 years of algorithm development and hardware speedups, and developers and artists getting familiar with raytracing, we might start seeing games require raytracing.
Games typically take 4+ years to develop, so anything you're seeing coming out now was probably started when the best GPU you could buy for raytracing was an RTX 2080 TI.
Aside from Cyberpunk 2077 and a handful of ancient games with NVIDIA-sponsored remakes, what even offers fully path traced lighting as an option? The way it went for CP2077 makes your "70% usable" claim seem like quite an exaggeration: performance is only good if you have a current-generation GPU that cost at least $1k, the path tracing option didn't get added until years after the game originally shipped, and they had to fix a bunch of glitches resulting from the game world being built without path tracing in mind. We're clearly still years away from path tracing being broadly available among AAA games, let alone playable on any large portion of gaming PCs.
For the foreseeable future, games will still need to look good without fully path traced lighting.
That's why I said games aren't currently designed with only pathtracing in mind, but in 3-5 years, with faster hardware and better algorithms, we'll probably start to see it become more widespread. That's typically how graphics features develop; something that's only for high-end GPUs eventually becomes accessible to everyone. SSAO used to be considered extremely demanding, and now it runs with good enough quality on even the weakest phone GPU.
Again the fact that it's feasible at all, even if it requires a $1000 GPU, is amazing! 5 years ago real time path tracing would've been seen as impossible.
> The way it went for CP2077 makes your "70% usable" claim seem like quite an exaggeration
Based on the raw frame timing numbers and temporal stability, I don't think it is. RT GI is currently usually around ~4ms, which is at the upper edge of usable. However the temporal stability is usually the bigger issue - at current ray counts, with current algorithms, either noise or slow response times is an inevitable tradeoff. Hence, 70% usable. But with another few years of improvements, we'll probably get to the point where we can get it down to ~2.5ms with the current stability, or 4ms and much more stable. Which would be perfectly usable.
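To put those numbers in perspective, here is a trivial back-of-the-envelope calculation (my own arithmetic; only the 4 ms and 2.5 ms figures come from the comment above): at 60 fps the whole frame budget is about 16.7 ms, so a 4 ms GI pass eats roughly a quarter of it.

```rust
// Rough frame-budget arithmetic: what share of a frame a GI pass of a given
// cost consumes. The 4.0 / 2.5 ms figures are the ballpark numbers quoted
// above, not measurements of any particular game.
fn main() {
    let gi_costs_ms = [4.0_f64, 2.5];
    let targets_fps = [30.0_f64, 60.0, 120.0];

    for &fps in &targets_fps {
        let budget_ms = 1000.0 / fps; // total time available per frame
        for &gi in &gi_costs_ms {
            println!(
                "{:>3.0} fps -> {:>5.2} ms budget, {:.1} ms GI = {:>4.1}% of frame",
                fps,
                budget_ms,
                gi,
                100.0 * gi / budget_ms
            );
        }
    }
}
```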
Maybe you should be saying 70% feasible rather than 70% usable. And you seem to be very optimistic about what kind of improvements we can expect for affordable, low-power GPU hardware over a mere 3-5 years. I don't think algorithmic improvements to denoisers and upscalers can get us much further unless we're using very wrong image quality metrics to give their blurriness a passing grade. Two rays per pixel is simply never going to suffice.
Right now, an RTX 4090 ($1700+) runs at less than 50 fps at 2560x1440, unless you lie to yourself about resolution using an upscaler. So the best consumer GPU at the moment is about 70-80% of what's necessary to use path tracing and hit resolution and refresh rate targets typical of high-end gaming in 2012.
Having better-than-4090 performance trickle down to a more mainstream price point of $300-400 is going to take at least two more generations of GPU hardware improvements even with the most optimistic expectations for Moore's Law, and that's the minimum necessary to do path tracing well at a modest resolution on a game that will be approaching a decade old by then. It'll take another hardware generation for that level of performance to fit in the price and power budgets of consoles and laptops.
And in 7-10 years when the software stack is matured, we'll be thanking ourselves for doing this in hardware the right way. I don't understand why planning for the future is considered so wasteful - this is an architecture Apple can re-use for future hardware and scale to larger GPUs. Maybe it doesn't make sense for Macs today, but in 5 years that may no longer be the case. Now people don't have to throw away a perfectly good computer made in these twilight years of Moore's law.
For non-games applications like Blender or Cinema4D, having hardware-accelerated ray tracing and denoising is already a game-changer. Instead of switching between preview and render layers, you can interact with a production-quality render in real time. Materials are properly emissive and transmissive, PBR and normal maps composite naturally instead of needing different settings, and you can count the time it takes before getting an acceptable frame in milliseconds, not minutes.
I don't often give Apple the benefit of the doubt, but hardware-accelerated ray tracing is a no-brainer here. If they aren't going to abandon Metal, and they intend to maintain their minuscule foothold in PC gaming, they have to lay the groundwork for future titles to get developed on. They have the hardware investment, they have the capital to invest in their software, and their competitors like Khronos (apparently) and Microsoft both had ray tracing APIs for years when Apple finally released theirs.
So I guess it's just a "gimmick" in that relatively few games properly take advantage of this currently, rather than the effect not being good enough.
Unfortunately most people’s experience with raytracing is turning it on for a game that was not designed for it, but it was added through a patch, which results in worse lighting. Why? Because the rasterized image includes baked-in global illumination using more light sources than whatever was hastily put together for the raytracing patch.
WoW Burning Crusade launched in *2006* originally. The "Classic" re-release of the game uses the modern engine but with the original game art assets and content.
Does it do anything in the 'modern' WoW game? Probably! In Classic though all it did was tank my framerate.
Since then I also played the unimaginable disaster that was Cyberpunk 2077. For as "pretty" as I suppose the game looked, I can't exactly say whether the ray tracing improved anything.
However, it is important to put things in context. Something can be a 'gimmick' on several-year-old integrated mobile hardware that can't run most games at a reasonable FPS even without it, and not a 'gimmick' on cutting-edge space heaters.
Personally I think it’s useful for a few things, but it’s not the giant game changer I think they want you to think it is.
Raytraced reflections are a very nice improvement. Using it for global illumination and shadows is also a very good improvement.
But it’s not exactly what the move to multi texturing was, or the first GPUs. Or shaders.
It's not "just math", it's data structures and algorithms for tree traversal, with a focus on memory cache hardware friendliness.
The math part is trivial. It's the memory part that's hard.
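To make the "memory part" concrete, here's a simplified sketch of the kind of structure involved: a flat array of BVH nodes and an iterative traversal with an explicit stack. The layout and field names are illustrative only, not any vendor's actual hardware format; the per-node math (a slab test) is trivial next to the cost of fetching nodes from memory.

```rust
// Simplified sketch of a bounding-volume-hierarchy (BVH) traversal.
// Real GPU ray-tracing units walk a compressed, vendor-specific structure;
// this just shows why it's a memory/tree-traversal problem, not heavy math.

#[derive(Clone, Copy)]
struct Aabb {
    min: [f32; 3],
    max: [f32; 3],
}

impl Aabb {
    // Standard slab test: cheap arithmetic; the real cost is fetching the node.
    fn hit(&self, origin: [f32; 3], inv_dir: [f32; 3]) -> bool {
        let (mut t_near, mut t_far) = (f32::NEG_INFINITY, f32::INFINITY);
        for axis in 0..3 {
            let t0 = (self.min[axis] - origin[axis]) * inv_dir[axis];
            let t1 = (self.max[axis] - origin[axis]) * inv_dir[axis];
            t_near = t_near.max(t0.min(t1));
            t_far = t_far.min(t0.max(t1));
        }
        t_near <= t_far && t_far >= 0.0
    }
}

// Nodes live in one flat array so children are found by index, not pointer
// chasing; `first_child_or_prim` is a child index for inner nodes and a
// primitive index for leaves.
#[derive(Clone, Copy)]
struct BvhNode {
    bounds: Aabb,
    first_child_or_prim: u32,
    prim_count: u32, // 0 => inner node with two children at first_child_or_prim
}

fn traverse(nodes: &[BvhNode], origin: [f32; 3], inv_dir: [f32; 3]) -> Vec<u32> {
    let mut hits = Vec::new();
    let mut stack = vec![0u32]; // explicit stack instead of recursion
    while let Some(idx) = stack.pop() {
        let node = nodes[idx as usize];
        if !node.bounds.hit(origin, inv_dir) {
            continue;
        }
        if node.prim_count > 0 {
            // Leaf: record candidate primitives for exact intersection later.
            hits.extend(node.first_child_or_prim..node.first_child_or_prim + node.prim_count);
        } else {
            stack.push(node.first_child_or_prim);
            stack.push(node.first_child_or_prim + 1);
        }
    }
    hits
}

fn main() {
    // One leaf node holding two primitives, just to exercise the traversal.
    let nodes = [BvhNode {
        bounds: Aabb { min: [-1.0; 3], max: [1.0; 3] },
        first_child_or_prim: 0,
        prim_count: 2,
    }];
    let candidates = traverse(&nodes, [0.0, 0.0, -5.0], [f32::INFINITY, f32::INFINITY, 1.0]);
    println!("candidate primitives: {:?}", candidates);
}
```

The hard part in real hardware and drivers is building, compressing, and laying out that tree so the incoherent memory accesses during traversal stay cache-friendly.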
Ray tracing hardware and acceleration structures are highly specific to ray tracing and not really usable for other kinds of spatial queries. That said, ray tracing has applications outside of computer graphics. Medical imaging for example.
The other parts of ray tracing like shading and so on, are usually just done on the general compute.
The problem with doing so, though, and with GPU physics in general is that it's too high latency to incorporate effectively into a typical game loop. It'll work fine for things that don't impact the world, like particle simulations or hair/cloth physics, but for anything interactive the latency cost tends to kill it. That and also the GPU is usually the visual bottleneck anyway so having it spend power on stuff the half-idle CPU could do adequately isn't a good use of resources.
Ray tracing is, by comparison, the holy grail of all realtime lighting effects. Global illumination makes the current raster lighting techniques look primitive by comparison. It is not an exaggeration to say that realtime graphics research largely revolves around using hacks to imitate a fraction of ray tracing's power. They are piling on pipeline-after-pipeline for ambient occlusion, realtime reflections and shadowing, bloom and glare as well as god rays and screenspace/volumetric effects. These are all things you don't have to hack together when your scene is already path tracing the environment with the physical properties accounted for. Instead of stacking hacks, you have a coherent pipeline that can be denoised, antialiased, upscaled and post-processed in one pass. No shadowmaps, no baked lighting, all realtime.
There is a reason why even Apple quit dragging their feet here - modern real-time graphics are the gimmick, ray tracing is the production-quality alternative.
Path tracing is the gold standard for computer graphics. It is a physically based rendering model that is based on how lighting actually works. There are degrees of path tracing of varying quality, but there is nothing else that is better from a visual quality and accuracy standpoint.
Your modern AAA title does a massive amount of impressive hacks to get rasterization into the uncanny valley, as rasterization has nothing to do with how photons work. That can all be thrown out and replaced with “model photons interacting with the scene” if the path tracing hardware was powerful enough. It’d be simpler and perfectly accurate. The end result would not live in the uncanny valley, but would be indistinguishable from reality.
Assuming the hardware was fast enough. But we’ll get there.
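For anyone curious what "model photons interacting with the scene" means in code, the core of a path tracer is a Monte Carlo estimate of the rendering equation. Here's a minimal, self-contained sketch of that estimator for a single diffuse bounce; the "scene" is a made-up constant sky function purely so it runs, not any real renderer's code.

```rust
// Minimal sketch of the Monte Carlo estimator at the heart of path tracing:
//   L_o = L_e + (1/N) * sum( BRDF * L_i * cos(theta) / pdf )
// For a Lambertian BRDF (albedo/pi) with cosine-weighted sampling
// (pdf = cos(theta)/pi) the cosine and pi cancel, leaving albedo * L_i.

use std::f64::consts::PI;

// Tiny deterministic pseudo-random generator so the example is self-contained.
fn lcg(state: &mut u64) -> f64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    ((*state >> 11) as f64) / ((1u64 << 53) as f64)
}

// Incoming radiance from a sampled direction; a real renderer would trace a
// ray into the scene here. We just pretend the sky is brighter near zenith.
fn incoming_radiance(cos_theta: f64) -> f64 {
    0.5 + 0.5 * cos_theta
}

fn main() {
    let albedo = 0.7; // diffuse reflectance of the surface
    let samples = 100_000;
    let mut rng = 42u64;
    let mut sum = 0.0;

    for _ in 0..samples {
        // Cosine-weighted hemisphere sample: cos(theta) = sqrt(u).
        let u = lcg(&mut rng);
        let cos_theta = u.sqrt();
        let pdf = cos_theta / PI;
        let brdf = albedo / PI;
        sum += brdf * incoming_radiance(cos_theta) * cos_theta / pdf;
    }

    // Converges to the albedo times the cosine-weighted average incoming radiance.
    println!("estimated outgoing radiance: {:.4}", sum / samples as f64);
}
```

A real path tracer repeats this estimate recursively along each bounce and for every pixel, which is exactly why the hardware budget matters so much.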
Why waste computing power on 'physically correct' algorithms when the 'cheap hacks' can do so much more in less time, while producing nearly the same result?