Xplain: Regional Geometry

*cough* *cough* Is this thing still on?

I don’t write much here anymore, partly because I don’t see it as a platform where I have much voice or volume, and also because the things I most want to write about don’t fit in this blog thematically.

But a few years ago when I first released Xplain, I promised everyone that when I updated my Xplain series, since it didn’t naturally have an RSS feed, I’d write something here instead. I have released a new article on Xplain, and as such, I’m here to fill up your feed reader with a link telling you to go look elsewhere.

I’m particularly happy with the way this article came out, and for those of you still watching this space, I’d really appreciate it if you read it. Thank you.



Six months ago, I left Red Hat to join a small company on the other side of the country to help them launch a product based on GNOME. I haven’t had much to say in that time, but rest assured, I’ve been very busy.

Today, it has all become real. The small team here has built something amazing. For the next 30 days, you have the opportunity to own one. To help seed sales and build awareness, we’ve launched a Kickstarter for our product.


We have much more planned for release, including a site for developers, but we’re swamped with responding to the Kickstarter today. Our source code is available on GitHub.

If you have any questions, feel free to leave a comment, or contact us through Kickstarter. I’m one of the people responding to Kickstarter directly.

Thank you.

Why Package Managers are not my Ideal Software Distribution Mechanism

Those who have spoken to me know that I’m not a big fan of packages for shipping software. Once upon a time, I was wowed that I could simply emerge blender and have a full 3D modelling suite running in a few minutes, without the fuss of wizards, unchecking boxes, or being told to go read the README. But today, iOS and Android have redefined the app installation experience, and packages seem like a step backwards.

I’m not alone in this. If you’ve seen the recent conversations about the systemd team’s proposal for shipping Linux software differently, they’re the product of the same lunchtime conversations and gripes on IRC.

My goal here is to explain the problems we’ve seen, map out some goals for a new solution to supersede packages, and open up an avenue for discussion.

As a user

Dealing with packages as a normal user can be really frustrating. Just last week I was upgrading my Debian system when it decided to stop in the middle and ask me which of two sshd configuration files I wanted to keep. I left it like that and went to lunch, and when I got back I accidentally hit the power strip with my foot. After much cursing, I eventually had to reinstall the OS from scratch.

It should never be possible to completely hose your OS by turning it off during normal operation, and I should be able to upgrade my OS without having the computer ask me incomprehensible questions I don’t understand.

And on my Fedora laptop, I can’t upgrade my system because Blender was built against an older libjpeg than the rest of my system. It gave me some error about packages conflicting and then aborted. And today, as I’m writing this, I’m on an old, insecure Fedora installation because upgrading it takes too much manual effort.

Today’s package managers do not see the OS independently from the applications that make it up: all packages are just combined to create one giant filesystem tree. This scheme works great when you have a bunch of open-source apps you can rebuild at every ABI break, but it’s not great when trying to build a world-class OS.

It’s also partially because package installations aren’t reproducible: installing package A and then package B does not guarantee the same filesystem tree as installing package B, then A.

Packages are effectively composed of three parts: metadata about the package (its name, version, dependencies, and plenty of other information), a bunch of files to place in the filesystem tree (known as the “payload”), and a set of scripts to run when installing, uninstalling and upgrading the package (known as the “triggers”). It’s because of these scripts that packages are dangerous.
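To make those three parts concrete, here’s a toy model in Python. It’s entirely hypothetical, not any real package format, but it shows why arbitrary trigger scripts make installs order-dependent in a way that plain payload unpacking isn’t:

```python
# Toy model of a package: metadata, payload, and triggers.
# Entirely hypothetical -- not any real package format -- just to
# illustrate why trigger scripts make installs order-dependent.

class Package:
    def __init__(self, name, payload, post_install=None):
        self.name = name
        self.payload = payload            # {path: file contents}
        self.post_install = post_install  # arbitrary script, runs as root

def install(tree, package):
    tree.update(package.payload)          # unpacking the payload is predictable
    if package.post_install:
        package.post_install(tree)        # the trigger can do anything at all

# Two packages whose triggers both rewrite a shared config file.
a = Package("a", {"/usr/bin/a": "a"},
            post_install=lambda t: t.__setitem__("/etc/shared.conf", "a"))
b = Package("b", {"/usr/bin/b": "b"},
            post_install=lambda t: t.__setitem__("/etc/shared.conf", "b"))

tree1, tree2 = {}, {}
install(tree1, a); install(tree1, b)      # install A, then B
install(tree2, b); install(tree2, a)      # install B, then A
print(tree1 == tree2)                     # same packages, different trees
```

Without the triggers, both orders would converge on the same tree; with them, the last trigger to run wins, and the result depends on history.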

It would be great if developers could ship their apps directly to users. But, unfortunately, packaging gets in the way. The typical way to do things is to package up the source code, and then let community members who are interested make their own package for their favorite “distribution”. Each distribution usually has its own package format, build system, different payloads and triggers, leading to a frustrating fragmentation problem for both users and developers.

The developers of Chromium, for instance, don’t allow any bugs to be reported for any builds but their official version, since they can’t be sure what patches the community has made. And in some cases, the community has patched a lot. (Side note: I find it personally disappointing that a great app like Chromium isn’t shipped in Fedora because of disagreements in how the app is developed. Fedora should stand for freedom and choice for the user to use whatever apps they want, not try to force its engineering practices on the world.)

As a developer

That said, packages are amazing when doing development. Want to read PNGs? apt-get install libpng-dev. Want a database? Instead of hunting around for SQLite binaries, just yum install 'pkg-config(sqlite3)'.

Paired with pkg-config, I think the ease of use has made it quite possibly the most attractive development environment out there today. In fact, other projects like node’s npm, Ruby’s gems, and Python’s pip have stolen the idea of packages and made it their own. Even Microsoft has endorsed NuGet as the easiest way of developing great apps on top of their .NET platform.

Development packages solve a lot of the typical problems. These libraries are directly uploaded by developers, and typically are installed per-project, not globally across the entire system, meaning I can have one app built against an older SQLite, and another built against something more modern. Upgrading these packages doesn’t run arbitrary scripts as root; it just unpacks new files in a certain location.

I’ve also been doing a lot of development recently on my ThinkPad and my home computer, both equipped with SSDs without a lot of disk space. While I’d happily welcome HP’s memristors hitting shelves and providing data storage in sizes and speeds better than today’s SSDs, I think it’s worth thinking about how to provide a great experience for those of us not fortunate enough to be able to waste another gig on duplicated libraries.

Working towards a solution

With all of this in mind, we can start working on a solution that solves these problems and meets these goals. As such, you might have seen different things trickle out of the community here. The amazing Colin Walters was the first to actually do anything about it when he built OSTree, which allows fully atomic system upgrades. You can never get your system into a hosed state with it.

At Endless Mobile, we want to ship a great OS that upgrades automatically, without ever breaking if the power gets cut or if the user unplugs it from the wall. We’ve been using OSTree successfully in production, and we’ve never seen a failed upgrade in the wild. It would be great to see the same applied to applications.

As mentioned, we’ve also seen some work starting on the app experience. Lennart Poettering started working on Sandboxed Applications for GNOME back in 2013, and work has steadily been progressing, both on building kdbus for sandboxed IPC, and on a more concrete proposal for how this experience will look and fit together.

Reading closely, you might pick up that I, personally, am not entirely happy with this approach, since there are no development packages, and I have a number of other minor technical criticisms. But I haven’t really talked about it to Lennart or the rest of the team building it yet.


I also know that this is controversial. Wars have been fought over package management systems and distributions, and it’s very off-putting for someone who just wants to develop software for our platform and our OS.

Package managers aren’t magic, they’re a set of well-understood technical tools, with tradeoffs and limitations like every other system out there. I hope we can move past our differences, recognize issues in existing technology, and build something great together.

As always, these opinions are my own. I do not speak for anybody mentioned in this article, anybody else in the GNOME community, the opinion of GNOME in general, and I certainly don’t speak for either my current employer or my former employer.

Please feel free to express opinions in the comments, for or against, however strong, as I’m honestly trying to open an avenue of discussion. However, I will not tolerate comments that make personal attacks on anybody. My blog is not the place for that.

Xplain: Adding Transparency

The next article in my “Xplain” series is now complete and has been published: “Adding Transparency”. It’s an explanation of how exactly we added transparent windows to the X server, explaining the COMPOSITE X extension, along with other things like RENDER and TFP, together with live demos.

Any and all feedback welcome. I’m having a lot of fun doing these, and I recently got some downtime at work, so the next one might come even quicker than expected.

XNG: GIFs, but better, and also magical

It might seem like the GIF format is the best we’ll ever see in terms of simple animations. It’s a quite interesting format, but it doesn’t come without its downsides: quite old LZW-based compression, a limited color palette, and no support for using old image data in new locations.

Two competing specifications for animations were developed: APNG and MNG. The two camps have fought wildly and we’ve never gotten a resolution, and different browsers support different formats. So, for the widest range of compatibility, we have just been using GIF… until now.

I have developed a new image format which I’m calling “XNG”, which doesn’t have any of these restrictions, and has the possibility to support more complex features, and works in existing browsers today. It doesn’t require any new features like <canvas> or <video> or any JavaScript libraries at all. In fact, it works without any JavaScript enabled at all. I’ve tested it in both Firefox and Chrome, and it works quite well in either. Just embed it like any other image, e.g. <img src="myanimation.xng">.

It’s magic.

Have a few examples:

I’ve been looking for other examples as well. If you have any cool videos you’d like to see made into XNGs, write a comment and I’ll try to convert them. I wrote all of these XNG files out by hand.

Over the next few days, I’ll talk a bit more about XNG. I hope all you hackers out there look into it and notice what I’m doing: I think there’s certainly a lot of unexplored ideas in what I’ve developed. We can push this envelope further.

EDIT: Yes, guys, I see all your comments. Sorry, I’ve been busy with other stuff, and haven’t gotten a chance to moderate all of them. I wasn’t ever able to reproduce the bug in Firefox about the image hanging, but Mario Klingemann found a neat trick to get Firefox to behave, and I’ve applied it to all three XNGs above.

Shellshock will happen again

As usual, I’m a month late: the big Bash bug known as Shellshock has come and gone, and the world was left confused as to why it ever happened in the first place. It’s been fixed for a few weeks now, and the questions have started: Why did nobody spot this earlier? Can we prevent it? Are the maintainers overworked and underfunded? Should we donate to the FSF? Should we switch to another shell by default? Can we ever trust bash again?

During the whole thing, there’s a big piece of evidence that I didn’t see anybody point out. And I think it helps answer all of these questions. So here it is.

I present to you the upstream git log for bash: http://git.savannah.gnu.org/cgit/bash.git/log/

Every programmer who has just clicked that link is now filled with disgust and disappointment.

It’s all crystal clear now: Nobody would have spotted this earlier. No, we can’t really prevent it. No, the maintainers aren’t overworked and underfunded. No, we shouldn’t donate to the FSF. Perhaps we should switch to another shell. No, we cannot trust bash. Not until a serious change in its management comes along.

For those of you who aren’t programmers, you might be staring at that page, not quite understanding what it all means. And that’s OK. Let me help explain it to you.

There’s a saying in the open-source development community: “With enough eyeballs, all bugs are shallow”. I don’t believe in it as strongly as I used to, but I think there’s some truth to it. It can be found in other disciplines as well: in science, it’s known as “peer-review”, where all papers and discoveries should be rigorously double-checked by peers to make sure you didn’t make any mistakes. In other sorts of writing, the person reviewing it is the editor. Basically, “have somebody else double-check your work”.

The issue with this, though, is that you need enough eyeballs to double-check your work. And while it was assumed before that all open-source software had enough eyeballs, that doesn’t seem to be the case. That said, you can certainly design and maintain a software project in certain ways to attract enough eyeballs. And unfortunately, bash isn’t doing these things.

First of all, you can see that there are zero eyeballs on the code: one person, Chet Ramey, is writing it, and nobody double-checks his work. Because there’s only one developer, we might assume that there’s no big motivation to do code cleanups or to make the code accessible to anybody other than Chet, since nobody else is working on it. And that’s true. This makes the eyeballs wander elsewhere.

But, in fact, that isn’t even true: Florian Weimer of the Red Hat Security Team has developed multiple fixes for the Shellshock bug, but his work was included in bash uncredited. Developers really need to be credited for their work. This makes the eyeballs wander elsewhere.

The code isn’t all that actively developed, either. At the bottom of that page, we see dates and times from 2012. It seems like nobody actually cares about this code anymore, and nobody is really trying to fix bugs and modernize it. This makes the eyeballs wander elsewhere.

There are no detailed descriptions of what changed between versions. Which commits in that log are serious fixes for CVEs, and which just fix minor documentation bugs? It’s impossible to tell. This makes the eyeballs wander elsewhere.

And even with the corresponding code change in front of you, it can be difficult to tell whether a specific commit is an important security fix, a new feature, or a minor bug fix. There’s no explanation in the commit message for why the change was made, nor any sort of changelog, which makes it hard for people redistributing and patching bash to know which fixes are important and which aren’t. The eyeballs will wander elsewhere.

In comparison, look at the commit log for the Linux kernel. There’s a large number of different people contributing, and all of them explain what changes they make and why they’re making them. To use a recent example (at the time of this writing), this NFS change describes in perfect detail why the change was made (for compatibility with Solaris hosts), and includes a link to a bug report with further information from debugging. As a result, even though bash is more commonly used and included in more things than the Linux kernel itself, the Linux kernel has more eyeballs and more developers.

So, what should we do? How should we fix this situation? I don’t really know. Moving to a new shell isn’t really a solution, and neither is a fork of bash. The best case scenario would be for bash to indeed change its development practices to be more like the Linux kernel, and adopt a thriving community of its own. I don’t have enough power or motivation to enact such a change. I can only hope that I can convince enough people of the right way to maintain a project.

Perhaps, Chet, if you’re out there, do you want to talk about it? This is a discussion we really should be having about the future of your project, and your responsibilities.

Hanging up the hat

Hello. It’s been quite a while. I’ve been meaning to post for a while, but I’ve been too busy trying to get GNOME 3.14 finished up, with Wayland all done for you. I also fixed the last stability issue in GNOME, and now both X11 and Wayland are stable as a rock. If you’ve ever had GNOME freeze up on you when switching windows or Alt-Tabbing, well, that’s fixed, and that was actually the same bug that was crashing Wayland. This was my big hesitation in shipping Wayland, and with that out of the way, I’m really happy. Please try out GNOME 3.13.90 on Wayland and let me know how it goes.

I promise to post a few more Xplain articles before the end of the year, and I have another blog post coming up about GPU rendering that you guys are going to enjoy. Promise. Even though, well…


I have a new job. Next Tuesday, the 26th, is my final day at Red Hat, and after that I’m going to be starting at Endless Mobile. Working at Red Hat has been a wonderful, life-changing experience, and it was a really hard decision to leave the incredible team that made GNOME what it is today. Thank you, each and every one of you. All of you are incredible, and I hope I keep working with you every single day; it would be an absolute shame if I didn’t.

Endless Mobile is a fantastic new startup that is focused on shipping GNOME to real end users all across the world, and that’s too exciting of an opportunity to pass by. We, the GNOME community, can really improve the lives of people in developing countries. Let’s make it happen.

I’ll still be around on IRC, mailing lists, reddit, blogging, all the usual places. I’m not planning on leaving the GNOME community. If you have any questions at all, feel free to ask.



Wayland 1.5 is released. It’s a pretty exciting release, with plenty of features, but the most exciting thing about it is that we can begin work on Wayland 1.6!

… No, I’m serious. Wayland 1.6’s release schedule matches up pretty well with GNOME’s. Wayland 1.6 will be released in the coming weeks before GNOME 3.14, the first version of GNOME with full Wayland support out of the box.

Since development is opening again, we can resume work on xdg-shell, the new desktop shell protocol to replace wl_shell. I, alongside Kristian Hoegsberg, have been prototyping and implementing it in toolkits and Wayland compositors. We’re extremely happy with our current revision of the bare-bones protocol, so it’s at this point that we want to start evangelizing and reaching out to other communities to make sure that everybody can use it. We’ve been working closely with and taking input from the Wayland community, which means the Qt/KDE and Enlightenment/EFL Wayland teams, but anybody who isn’t paying close attention to the Wayland community is out of the loop. This needs to change.

Ironically, as the main Wayland developer for GNOME, I haven’t talked too much about the Wayland protocol. My only two posts on Wayland were a user post about the exciting new features, and one about the legacy X11 backwards compatibility mode, XWayland.

Let’s start with a crash course in Wayland protocols.


As odd as it sounds, Wayland doesn’t have a built-in way to get something like a desktop window system, with draggable, resizable windows. As a next-generation display server, Wayland’s protocol is meant to be a bit more generic than that. Wayland can already be found on mobile devices as part of SailfishOS through the hard work of Jolla and other companies. Engineers at Toyota and Jaguar/Land Rover use Wayland for media centers in cars, as part of a custom Linux distribution called GENIVI. I’m also told that LG’s webOS, as used in its smart TVs, is investigating Wayland as a display server as well. Dragging and resizing tiny windows on a phone, in a car, or on a TV just isn’t going to be a great experience. Wayland was designed, from the start, to be flexible enough to support a wide variety of use cases.

However, that doesn’t mean that Wayland is all custom protocols: there’s a common denominator between all of these cases. Wayland has a core protocol object called a wl_surface, on which clients can show some pixels for output and receive various kinds of input. This is similar to the concept of X11’s “windows”, which I explain in Xplain. However, the wl_surface isn’t simply a subregion of the overall front buffer. Instead of owning parts of the screen, Wayland clients create their own pixel buffers, draw to them, and then “attach” them to the wl_surface, causing a new pixel buffer to be displayed. The wl_surface concept is fairly versatile, and is used any time we need a “live surface” to play around with. For instance, the mouse cursor is done simply by providing the Wayland compositor with a wl_surface. The same thing is done for drag-and-drop icons as well.
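A rough sketch of that attach-then-display model, in Python rather than the real C API (the class and method names below are made up for illustration; the real interface is libwayland-client’s wl_surface):

```python
# Sketch of Wayland's client-owned-buffer model: the client draws into
# its own buffer, attaches it to a surface, and the attached buffer
# only becomes the displayed one on commit. Names are invented for
# illustration; the real API is the C libwayland-client interface.

class Surface:
    def __init__(self):
        self.pending = None   # buffer attached but not yet committed
        self.current = None   # buffer the compositor actually displays

    def attach(self, buffer):
        self.pending = buffer

    def commit(self):
        # Atomically apply the pending state, like wl_surface.commit.
        if self.pending is not None:
            self.current = self.pending
            self.pending = None

surface = Surface()
frame = bytearray(640 * 480 * 4)    # client-allocated pixel buffer
surface.attach(frame)
assert surface.current is None      # nothing shown until commit
surface.commit()
assert surface.current is frame     # compositor now shows our buffer
```

The important property is that the compositor is handed whole client-owned buffers, rather than clients scribbling into a shared front buffer.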

An interesting aside is that the model taken by Wayland with wl_surface can actually require fewer copies and be more efficient than X11 on modern systems. More and more GPUs have interesting and fancy hardware at scanout time. With the rise of low-power phones that require rich graphics, we’re seeing a resurgence in fixed-function alpha blending and compositing hardware when doing scanout, similar to what game consoles like the NES and SNES had (but they called them “sprites”). X11’s model of a giant front buffer that apps draw to means that we must eventually copy all contents to the front buffer with the CPU, while Wayland’s model means that applications can simply hand us their pixel buffers, and we can choose to show one as an overlay, which removes any copy. And if an application is full-screen, we can simply tell the GPU to scan out from that application’s buffer directly, instead of having to copy.


OK, so I’ve talked about wl_surface. How does this relate to xdg-shell? Since a wl_surface can be used for lots of different purposes, like cursors, simply creating the wl_surface and attaching a buffer doesn’t put it on the screen. Instead, first, we need to let the Wayland compositor know that this wl_surface is intended to be a desktop-style window that can be dragged and resized around. It should appear in Alt-Tab, and clicking on it should give it keyboard focus, etc.

Wayland’s approach here is a bit odd, but to give a wl_surface a role, we construct a new wrapper object which has all of our desktop-level protocol functions, and then hand it the wl_surface. In this case, the protocol that we use to create this role is known as “xdg-shell”, and the wrapper object is known as an “xdg_surface”. The name is a reference to the FreeDesktop Group, an open mailing list where cross-desktop standards are discussed between all the different desktops. For historical reasons, it’s abbreviated as “XDG”. Members from the XDG community have all been contributing to xdg-shell.

The approach of a low-level structure with a high-level role is actually fairly similar to the approach taken in X11. X11 simply provides a data structure called a “window”, as I explained in Xplain: a tool that you can use to construct your interface by pushing pixels here, and getting input there. An external process called a “window manager” turns this window from a simple region of the front buffer into a window with a title and icon that the user can move around, resize, minimize and maximize with keyboard shortcuts and a taskbar. The window manager and the client applications both agree to cooperate and follow a series of complex standards like the ICCCM and EWMH that allow you to provide this “role”. Though I’ve never actually worked on any environments other than traditional desktops, I’d imagine that in more special-case environments, different protocols are used instead, and the concept of the window manager is completely dropped.

X11 has no easy, simple way of creating protocol extensions. Something as simple as a new request or a new event requires a bunch of byte-marshalling code in client libraries, extra support code added in the server, in toolkits, and a new set of APIs. It’s a pain, trust me. Instead, X11 does provide a generic way to send events to clients, and a series of key/value pairs on windows called “properties”, so standards like these often use the generic mechanisms rather than building an actual new protocol, since the effort is better spent elsewhere. It’s an unfortunate way that X11 was developed.

Wayland makes it remarkably easy to create a new protocol extension involving new objects and custom methods. You write up a simple XML description of your protocol, and an automatic tool, wayland-scanner, generates server-side and client-side marshalling code for you. All that you need to do is write the implementation side of things: on the client, that means creating the object and calling methods on it. Because it’s so easy to write custom extensions in Wayland, we haven’t even bothered creating a generic property or event mechanism. Having real structure gives us a lot more stability and rigidity.
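To give a flavor of what one of those XML descriptions looks like, here’s a minimal made-up extension. The interface and request names are invented; see the real wayland.xml and xdg-shell.xml protocol files for the genuine article:

```xml
<!-- A minimal, invented protocol extension. wayland-scanner would
     generate the client- and server-side marshalling code from this. -->
<protocol name="example_extension">
  <interface name="example_manager" version="1">
    <request name="get_example_surface">
      <arg name="id" type="new_id" interface="example_surface"/>
      <arg name="surface" type="object" interface="wl_surface"/>
    </request>
  </interface>
  <interface name="example_surface" version="1">
    <request name="set_title">
      <arg name="title" type="string"/>
    </request>
    <event name="configure">
      <arg name="width" type="int"/>
      <arg name="height" type="int"/>
    </event>
  </interface>
</protocol>
```

Note the same wrapper-object pattern described above: a manager interface hands out a role object for an existing wl_surface.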


Long-time users or developers of Wayland might notice this sounds similar to an older protocol known as wl_shell or wl_shell_surface. The intuition is correct: xdg-shell is a direct replacement for wl_shell. wl_shell_surface had a number of frustrating limitations, and due to its inclusion in the Wayland 1.0 core, it is harder to change and make better. As Fred Brooks told us, “write one to throw away”.

xdg-shell can be seen as a replacement for wl_shell_surface. It solves a number of fundamental issues and race conditions which I’d prefer not to go into here (but if you ask nicely in the comments, I might oblige!), so I guess you’ll have to trust me when I say that they caused highly visible user bugs: weird lagginess and flickering when using Wayland. We’re happy that these are gone.

A call to arms

The last remaining ticket item I have to work on in xdg-shell is related to “window geometry”: a way of communicating where the user’s concept of the edge of the window is. This requires significant reworking of the code in weston, mutter, and GTK+. After that, it will serve the needs of GTK+ and GNOME perfectly.

Does it serve the needs of your desktop? The last thing we want to do is to solidify the xdg-shell protocol, only to find that a few months later, it doesn’t quite work right for tiling WMs or for EFL or KDE applications. We’re always experimenting with things like this to make sure everything can work, but really, we’re only so many people, and others testing it out and porting their toolkits and compositors over can’t ever hurt.

So, any and all help is appreciated! If you have any feedback on xdg-shell so far, or need any help understanding it, feel free to post to the Wayland mailing list or poke me on IRC (#wayland on Freenode; my nick there is Jasper).

As always, if anybody has any questions, no matter how dumb or stupid, please let me know! Comments are open, and I always try to reply.

Xplain: Advanced Window Techniques

Hey! I promised I’d try to blog once a month, and it’s getting overdue now. I just went ahead and published a new article in the Xplain series: Advanced Window Techniques.

It contains information on the window tree, and its (initial) use in building complex systems. I wanted to write more about window managers and wrap up the window tree before moving onto other stuff, but this article was getting a bit long already, so I just cut it in two. I know a lot of people wanted to see the WM part of things, and I’m sorry. I’ll try and get the next article out by the end of June.

As before, I’d prefer all feedback to be emailed to me directly at jstpierre@mecheye.net, but I’ll leave comments enabled on this blog post as well. You guys had some excellent feedback on the first article, so a big thanks goes out to everyone who asked questions, sent in typo fixes, and told me they loved it! You guys rock!


Last week I wrote about Wayland in 3.12 and promised that I’d be writing again soon. I honestly didn’t expect it to be so soon!

But first, a quick notice. Some people let me know that they were having issues with running Wayland on top of F20 with the GNOME 3.12 COPR. I’ve been testing on rawhide, and since it worked fine for me, I thought the same would be true for the GNOME 3.12 COPR. It seems this isn’t the case. I tried last night to get a system to test with and failed. I’m going to continue to investigate, but I first have to get a system up and running to test with. That may take some time.

Sorry that this happened. I know about it, though, and I’ll get to the bottom of this one way or another. And hey, maybe it will be magically solved by…

A new Xwayland

Last night, something very, very exciting happened: Xwayland landed in the X server. I’m super thrilled to see this land; I honestly thought it would be at least another year before we’d see it upstream. Keep in mind, it’s been in the works for three years now.

So, why did it succeed so fast? To put it simply, Xwayland has been completely rearchitected to be leaner, cleaner, faster, and better than ever before. It’s not done yet; direct rendering (e.g. games using OpenGL) and by extension 2D acceleration aren’t supported yet, but it’s in the pipeline.

I also talked about this somewhat in the last blog post, and in The Linux Graphics Stack, but since it’s the result of a fairly recent development, let’s dive in.

The new architecture

Traditionally, in the Xorg stack, even within the FOSS graphics stack, there are a number of different moving parts.

The X server codebase is large, but it’s somewhat decently structured. It houses several different X servers for different purposes. The one you’re used to, and the one you log into, is Xorg, and it lives in the hw/xfree86 directory; it’s named like that for legacy reasons. There’s also Xnest and Xephyr, which implement a nested testing environment. Then there are the platform adaptations like hw/xwin and hw/xquartz, which are Win32 and OS X servers designed to be seamless: the X11 windows that pop up look and behave like any other window on your system.

There’s plenty of code that can be shared across all the different servers. If somebody presses a key on their keyboard, the code to calculate the key press event, do keysym translation, and then send it to the right application should be shared between all the different servers. And it is: this code lives in a part of the source tree called Device-Independent X, or “DIX” for short. A lot of common functionality related to implementing the protocol is done in here.

The different servers, conversely, are named “Device-Dependent X”s, or “DDX”en, and that’s what the hw/ directory path means. They hook into the DIX layer by installing various function pointers in different structs, and exporting various public functions. The architecture isn’t 100% clean; there’s mistakes here and there, but for a codebase that’s over 30 years old, it’s fairly modern.

Since the Xorg server is what most users have been running on, it’s the biggest and most active DDX codebase by far. It has a large module system to have hardware-specific video and input drivers loaded into it. Input is a whole other topic, so let’s just talk about video drivers today. These video drivers have names like xf86-video-intel, and plug directly into Xorg in the same way: they install function pointers in various structs that override default functionality with something hardware-specific.

(Sometimes we call the xf86- drivers themselves the “DDX”en. Technically, these are the parts of the Xorg codebase that actually deal with device-dependent things. But really, the nomenclature is just there as a shorthand. Most of us work on the Xorg server, not in Xwin, so we say “DDX” instead of “xf86 video driver”, because we’re lazy. To be correct, though, the DDX is the server binary, e.g. Xorg, and its corresponding directory, e.g. hw/xfree86.)

What do these video drivers actually do? They have two main responsibilities: managing modesetting and doing accelerated rendering.

Modesetting is the task of configuring the displays: choosing a resolution and refresh rate for each monitor, and setting which buffer is scanned out to it. This is one of those things you would think would have been simple and standardized a long time ago, but for a few reasons that never happened. The only two standards here are the VESA BIOS Extensions and their replacement, the UEFI Graphics Output Protocol. Unfortunately, neither of these is powerful enough for the features we need to build a competitive display server, like an event for when the monitor has vblanked, or flexible support for hardware overlays. Instead, we have a set of hardware-specific implementations in the kernel, along with a common userspace API. This is known as kernel modesetting, or “KMS”.

The first responsibility has now been lifted out of the video driver. We can simply use KMS as a hardware-independent modesetting API. It isn’t perfect, of course, but it’s usable. This is what the xf86-video-modesetting driver does, for instance, and you can get a somewhat-credible X server up and running that way.

So now we have a pixel buffer being displayed on the monitor. How do we get the pixels into the pixel buffer? While we could do this with software rendering with a library like pixman or cairo, it’s a lot better if we can use the GPU to its fullest extent.

Unfortunately, there’s no industry standard API for accelerated 2D graphics, and there likely never will be. There are plenty of options: in the web stack we have Flash, CSS, SVG, VML, <canvas>, PostScript, and PDF. On the desktop side we have GDI, Direct2D, Quartz 2D, cairo, Skia, AGG, and plenty more. The one attempt at a hardware-accelerated 2D rendering standard, OpenVG, ended in disaster. NVIDIA is pushing a more flexible approach that integrates better with 3D geometry: NV_path_rendering.

Because of the lack of an industry standard, we created our own: the X RENDER extension. It supplies a set of high-level 2D rendering operations to applications. Video drivers often implement these with hardware fast paths. Whenever you hear talk about EXA, UXA or SNA, this is all they’re talking about: complex, sophisticated implementations of RENDER.

As we get newer and newer hardware up and running under Linux, and as CPUs are getting faster and faster, it’s getting less important to write a fast RENDER implementation in your custom video driver.

We also do have an industry standard for generic hardware-accelerated rendering: OpenGL. Wouldn’t it be nice if we could take the hardware-accelerated OpenGL stack we created, and use that to create a credible RENDER implementation? And that’s exactly what the glamor project is about: an accelerated RENDER implementation that works for any piece of hardware, simply by hoisting it on top of OpenGL.

So now, the two responsibilities of an X video driver have been moved to other places in the stack. Modesetting has moved into the kernel with KMS. Accelerated 2D rendering has been pushed onto the 3D stack we already have in place anyway. Both of these are reusable components that don’t need custom drivers, and that we can reuse in Wayland and Xwayland. And that’s exactly what we’re going to do.

So, let’s add another DDX to our list. We arrive at hw/xwayland. This acts like Xwin or Xquartz by proxying all windows through the Wayland protocol. It’s almost impressive how small the code is now. Seriously, compare that to hw/xfree86.

It’s also faster and leaner. A large part of Xorg is code related to running as the native display server on bare hardware: reading events from raw input devices, modesetting, VT switching and handling. The old Xwayland played tricks with the Xorg codebase to try and get it to stop doing those things. Now we have a simple, clean way to get it to stop doing those things: never run that code in the first place!

Under the old model, things like modesetting were unfortunately done in the video driver, so we patched Xorg with a special magical mode that told video drivers not to do anything too tricky. For instance, the xf86-video-intel driver had a special branch for Xwayland support. For generic hardware support, we wrote a generic, unaccelerated driver that stubbed out most of the functions we needed. With the new approach, we don’t need to patch anything at all.

Unfortunately, there are some gaps in this plan. James Jones from NVIDIA recently let us know that they were expecting to use their video driver in Xwayland for backwards-compatibility with legacy applications. A few of us had a private chat afterwards about how we can move forward here. We’re still forming a plan, and I promise I’ll tell you about it when it’s more solidified. It’s exciting to hear that NVIDIA is on board!

And while I can’t imagine that custom xf86-video-* drivers will ever go away completely, I think it’s plausible that the xf86-video-modesetting driver could grow glamor support, and the rest of the FOSS DDXes could die out in its favor.

OK, so what does this mean for me as a user?

The new version of Xwayland is hardware-independent. Previously, in Fedora, we only built xf86-video-intel with Wayland support. While there was a generic video driver, xf86-video-wayland, we never built it in Fedora, which meant that you couldn’t try out Wayland on non-Intel GPUs. This was a Fedora packaging bug, not a fundamental issue with Wayland or a GNOME bug, as I’ve seen some try to claim.

It is true, however, that we mostly test on Intel graphics. Most engineers I know of that develop GNOME run on Lenovo ThinkPads, and those tend to have Intel chips inside.

Now, this is all fixed, and Xwayland can work on all hardware regardless of which video drivers are built or installed. And for the record, the xf86-video-wayland driver is now considered legacy. We hope to ship these as updates in the F20 COPR, but we’re still working out the logistics of packaging all this.

I’m still working hard on Wayland support everywhere I go, and I’m not going to slow down. Questions, comments, everything welcome. I hope the next update can come as quickly!