Death To Shading Languages

May 15, 2024

All shading languages suck.
Moreover, they’re an outdated concept and they’re failing us.


Shading languages as we know them descend from GLSL, which translated the existing concept of shading languages to the early, very limited first-generation programmable graphics hardware that was coming out at the time. GLSL was part of the now long-forgotten OpenGL 2.0 effort that 3DLabs was spearheading.

The prior art at that time was the OpenGL ARB assembly language, which, on top of being useless for writing programs in directly, isn’t anything like real assembly programming either [1]. GLSL, in contrast, provided high-level syntax, mutable local variables, if and loop control-flow, first-order functions, and a useful standard library for all sorts of vector operations. Great!

But it’s 2024 today, and we’re still dealing with GLSL and its descendants. In many ways this is an unpleasant state of affairs:

1. Shading languages are stuck copying GLSL

Pick any shading language you like:

  • GLSL 4.60.x
  • HLSL 202x
  • Slang
  • WGSL
  • SPIR-V shaders [2]

In any of these, the essence of GLSL remains, and you can see the impact of its decisions: control-flow is C-like, there are no pointers [3], there are no function pointers. Chances are that language lowers pretty directly to GLSL. Chances are this was part of the design requirements [4].

Oh sure, we got new features. We got sized integers. Templates! Generics (templates, but type-safe). We saw additions to support all sorts of new graphics pipelines and features. There is an enormous, somewhat concerning, and ever-growing pile of stuff that we’ve bolted onto shading languages.

But none of these things really challenge the fundamentals, in no small part because making a clean break here would destroy the relative ease of translating between all these similarly restrictive languages. Thanks to the effort of certain industry stakeholders, there is no true one-size-fits-all portable option and we’re instead spending valuable human capital on solving fiddly transpiling issues, perpetuating the legacy limitations forever more.

In many ways, what was forward-looking 20 years ago has become cemented as all that is and all that can be: we still largely only have mutable local variables, if and loop control flow, and first-order functions.

2. Shading languages are bad programming languages

Naturally what makes a good programming language is buried in a three-mile deep hellpit of strongly-held opinions so impossible to untangle that only a fool would try.

So, at the risk of being that fool: what makes a programming language, or really any language, good is how effective it is at expressing ideas. In the case of computer languages, we traditionally want to be rather specific about those ideas, and we also like the language to help us detect defects as soon as possible.

Dysfunctional abstractions

One could, in many ways, argue that programming, or even computer science at large, is mostly about building abstractions: useful abstractions, correct abstractions, maintainable abstractions and yes, performant abstractions. These abstractions come in many forms, but the most fundamental one is the function: there is an entire paradigm of programming centered on building and composing functions.

Shading languages only support first-order functions: functions you can call by name. They’re not first-class: unlike in a functional language, functions aren’t values; you can’t pass them to other functions or store them as part of data structures.

There also are no function pointers: the only way to write a higher-order function such as Map is to write it as a macro or template instead, and specialize it. This comes with code size issues, but more fundamentally it stops you from thinking about functions as your base building block, and instead requires you to think in terms of a zoo of ad-hoc constructs to build the abstraction you want, usually at the cost of type safety, and always at the cost of simplicity and readability.
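
For contrast, here is a minimal sketch of what a higher-order Map looks like in CUDA, where callables are ordinary values that can be handed to GPU code. The kernel and helper names are made up for illustration; the point is only that no macro or per-use specialization machinery is needed on the programmer’s side.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A generic "map" kernel: applies any callable F to every element.
// This is plain C++ generic programming; the shading-language version
// would have to be a macro or a hand-specialized copy per function.
template <typename F>
__global__ void map(float* data, int n, F f) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = f(data[i]);
}

int main() {
    const int n = 1024;
    float* data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = float(i);

    // Pass a device lambda as the function to map.
    // (Requires nvcc's --extended-lambda flag.)
    map<<<(n + 255) / 256, 256>>>(data, n,
        [] __device__ (float x) { return x * 2.0f; });
    cudaDeviceSynchronize();

    printf("data[10] = %f\n", data[10]); // 20.0
    cudaFree(data);
    return 0;
}
```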

Inadequate data structures

The lack of support for pointers is, in my opinion, the smoking gun of this pattern of perpetuating GLSL design decisions long past the time they should have been reconsidered. On primitive early-2000s programmable GPUs, there was no notion of “global” or “shared” memory. Local variables lived only in hardware registers, and there was no stack to spill them to.

Programs were also much smaller in both scope and code size, by necessity and because being able to program the GPU at all was novel. That is to say: there is merit to the idea that, back then, we couldn’t put pointers in GLSL, or didn’t need to.

The only problem, once again, is that a lot has changed in the intervening two decades. The hardware can all do it: OpenCL and CUDA have pointers and they run on all currently relevant GPU architectures, mobile and desktop.

There is also no doubt about their usefulness: pointers and their safe cousins [5] are critical to building pretty much any data structure. There’s often a way to get around the lack of pointers by using arrays and indices instead, but this imposes an additional layer of mental overhead, exactly what a good language should avoid.
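
As an illustration (the structure and field names here are invented for the example), compare a tree node written with real pointers, as CUDA or OpenCL would allow, against the index-based encoding shading languages force on you:

```cuda
#include <cuda_runtime.h>  // for float3

// With real pointers, a tree is just a tree.
struct BVHNode {
    float3 lo, hi;
    BVHNode* left;    // nullptr for leaves
    BVHNode* right;
    int primitive;    // only meaningful for leaves
};

// The shading-language workaround: emulate pointers with indices into
// one big array. Every "dereference" becomes nodes[i], "null" becomes a
// sentinel, and the bookkeeping lives entirely in your head.
struct BVHNodeIndexed {
    float3 lo, hi;
    int left, right;  // -1 means "no child"
    int primitive;
};

// Pointer-chasing traversal, written as ordinary device code.
__device__ int countLeaves(const BVHNode* n) {
    if (n == nullptr) return 0;
    if (n->left == nullptr && n->right == nullptr) return 1;
    return countLeaves(n->left) + countLeaves(n->right);
}
```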

Concept Zoos and Legacy Burdens

There is something pretty unfortunate to the accumulation of concepts and features in shading languages:

Due to their co-evolution with graphics APIs, they have to expose every single feature that gets added, and they have accumulated a bunch of warts along the way. This is particularly problematic with things like binding models: GLSL, for example, has over half a dozen ways to pass data into a shader. In the compute world, you pass data to the shader by … passing arguments to the entry point.
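
The contrast is easy to see side by side. Below is a hedged sketch with made-up parameter names: the comment lists some of GLSL’s separate data-passing mechanisms, while the CUDA kernel underneath receives the equivalent inputs as nothing more than entry-point arguments.

```cuda
#include <cuda_runtime.h>

// In GLSL, these inputs would be spread across distinct mechanisms:
// plain uniforms, uniform blocks (UBOs), shader storage blocks (SSBOs),
// push constants, samplers/images, vertex attributes, ...
// In the compute world they are simply parameters of the entry point:
__global__ void shade(const float3* normals,    // "an SSBO"
                      float3 lightDir,          // "a push constant"
                      float  intensity,         // "a uniform"
                      float3* outColor,         // "another SSBO"
                      int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float d = fmaxf(0.0f, normals[i].x * lightDir.x +
                          normals[i].y * lightDir.y +
                          normals[i].z * lightDir.z);
    outColor[i] = make_float3(d * intensity, d * intensity, d * intensity);
}
```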

This fantastic article covers the same issue in HLSL and coins the term “buffer zoo”. I believe graphics APIs and shading languages have a “concept zoo” problem more broadly: arrays of descriptors versus array images, “buffer textures”, inout parameters, the various weird memory layout rules, the distinction between uniform and buffer storage, the failed experiment of pipeline stages like geometry shaders, …

All these things deserve to be either nuked from orbit or at least hidden from view. Their occasional niche usefulness isn’t worth the considerable mental bandwidth that gets dedicated to them, and they are not worth polluting the minds of the vast majority of graphics programmers with.

3. Shading languages as DSLs

Fundamentally, shading languages are (1) domain-specific languages that (2) are C-like in syntax and semantics.

Domain-specific languages are great, and they’re also inevitable. Writing vertex and fragment shaders is a task that is incredibly well-suited to having a DSL designed for it, and GLSL and its brethren have been very successful at showing that.

Beyond the conventional graphics pipeline, a number of things like vector and matrix math, opaque image types and subgroup intrinsics are uniquely relevant for GPU programming in general and having first-class support for these things is very important.

The sticking point for me is how the typical shading language is constructed. The ones cited earlier are all external domain-specific languages, meaning their grammar, syntax, semantics and type system are all bespoke and exist independently of other languages, such as whatever you use in the CPU portion of your application.

Unfortunately that means these languages are duplicating effort to create and maintain entire compilers that are only useful for this one task. It also means that expanding the scope of shading languages means doing a lot of work to replicate and compete with existing general-purpose languages.

As the rest of this article has already made very clear, I’m not impressed with the showing so far, but more fundamentally I don’t think we should even be trying.

Embedded DSLs are better than External DSLs

The alternative to external DSLs is the embedded domain-specific language: you take a general-purpose language such as C++, Rust or Scala and write a library of abstractions and intrinsics that models whatever the domain is, letting the programmer express their ideas just as naturally.

The benefits of embedded DSLs are enormous: established general-purpose languages are stable and well-maintained, and many of them offer powerful enough language features that inventing custom syntax can be done without touching their parser or type system.

Nothing that shading languages do actually requires a bespoke external language, and this is demonstrated in practice by the existence of embedded shading languages at the fringes: Apple’s Metal Shading Language is little more than a lightly modified variant of C++. Our own project Vcc is an attempt to do the same with Vulkan.

The really puzzling thing to me is how this idea failed to take hold in the graphics world, despite being universal in the compute one. Both CUDA and the various dialects of OpenCL are based on C or C++, with minimal modifications. The success of those approaches is not in question.

4. Shading languages maintain an outdated programming model

Beyond the mere fact that CUDA is an embedded DSL and GLSL is not, there is another fundamental difference: CUDA (and SYCL, and even OpenCL, to some extent) makes a considerable effort to make GPU programming not a separate pursuit, but one part of writing heterogeneous programs: single programs that execute on different types of processors but nonetheless form a single shared codebase.
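
A minimal single-source sketch of what that looks like in CUDA (the types and names are illustrative, not taken from any particular codebase): one struct definition and one helper function are shared between host and device, and the kernel is launched like a function call rather than bound through an API.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One definition, visible to both processors: no duplicated shader-side
// copy to keep in sync, no layout rules to reconcile by hand.
struct Particle {
    float3 position;
    float3 velocity;
};

// Shared library code, callable from both host and device.
__host__ __device__ float3 advance(float3 p, float3 v, float dt) {
    return make_float3(p.x + v.x * dt, p.y + v.y * dt, p.z + v.z * dt);
}

__global__ void step(Particle* particles, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        particles[i].position =
            advance(particles[i].position, particles[i].velocity, dt);
}

int main() {
    const int n = 256;
    Particle* particles = nullptr;
    cudaMallocManaged(&particles, n * sizeof(Particle));
    for (int i = 0; i < n; ++i)
        particles[i] = {make_float3(0, 0, 0), make_float3(1, 0, 0)};

    step<<<1, n>>>(particles, n, 0.016f);  // launched like a function call
    cudaDeviceSynchronize();

    printf("x after one step: %f\n", particles[0].position.x);
    cudaFree(particles);
    return 0;
}
```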

This is diametrically opposed to how graphics APIs have framed their shaders: as tiny “islands of computation” that you either insert into some complex pre-defined pipeline, or run on the “special” processor when you find you have a task that is worth the overhead of actually moving there. In realtime graphics, there is an immense engineering cost, both real and perceived, to doing anything on the GPU rather than on the host.

On the other hand, game engines almost universally treat shaders more like art assets or, at best, scripts. They’re not seen as integral parts of the game codebase and don’t get to share APIs with it either; one can count themselves lucky if there is a simple way to share even a single common data structure definition between host and device. Often the defects of the shading language are hacked around in ways that only exacerbate the bespoke-ness of shaders, augmenting the syntax with some custom hand-rolled preprocessing to implement some ad-hoc specialization mechanism.

In graphics APIs, shaders and CPU code are like oil and water, and this fundamental design decision has not been challenged in twenty years of programmable shading. It’s part of the untold, unchallenged “culture” of graphics programming. I believe this maintains a number of blind spots in this field, and also a number of compounding inefficiencies.

Code Duplication Hell

Because shaders are treated as self-contained units, there is no way to meaningfully share common code: where host programs can take advantage of shared libraries, graphics pipelines get compiled individually and cannot call into each other. Code duplication is inevitable, and even encouraged, thanks to the lack of function pointers and to received knowledge insisting that everything be specialized, all the time, for maximal performance.

However, this presents a massive scaling issue, one that is increasingly difficult to hide and ignore. Sooner or later something will need to be done about it, or else we will literally run out of VRAM to store all the shader program code.

The only shading language cited here that actually acknowledges this specialization hell is Slang. However, it does not actually implement function pointers in Vulkan, instead merely providing yet another ad-hoc abstraction mechanism that can compile to either übershaders or specialized variants, neither of which is considered optimal.

Retiring Shading Languages As A Concept

Shading languages, and GLSL in particular, thank you for your service.

You’ve carried us outside of the fixed-function stagnation into a new and exciting era for computer graphics, and successfully unlocked a new chapter of incredible visuals, high-performance rendering and even whole new artforms. Your contributions are real and significant, but I’m afraid that as a concept, you have outlived your usefulness.

You’re just programming languages now. Bad ones.

What you can do can be done just as well or better in general-purpose languages.

What you can’t do is increasingly ridiculous and causes untold problems.

Your very existence prevents us from thinking about CPU and GPU programming holistically.

It’s time for you to go. Goodbye.


  1. It was more of a common subset of shared capabilities. ARB Assembly 1.0 could not even do loops. ↩︎

  2. Yes, I know SPIR-V isn’t a shading language, it’s an IR. Unfortunately, apart from being syntax-agnostic, SPIR-V shaders inherited many of the issues we have to deal with in shading languages today. You can learn more about this by watching my Vulkanised 2024 talk. ↩︎

  3. inout parameters are not pointers, since they have copy semantics. In the one case where we did, mercifully, get real global-memory pointers, they are almost universally exposed through some awful bespoke syntax instead of the familiar */& grammar. ↩︎

  4. WGSL was designed to intentionally map back and forth to SPIR-V. It didn’t target GLSL directly but SPIR-V is itself very close to GLSL. ↩︎

  5. Rust references, C++ references, object references in most OO languages … it’s all pointers at the lower levels! ↩︎