Xwayland

Last week I wrote about Wayland in 3.12 and promised that I’d be writing again soon. I honestly didn’t expect it to be so soon!

But first, a quick notice. Some people let me know that they were having issues with running Wayland on top of F20 with the GNOME 3.12 COPR. I’ve been testing on rawhide, and since it worked fine for me, I thought the same would be true for the GNOME 3.12 COPR. It seems this isn’t the case. I tried last night to get a system to test with and failed. I’m going to continue to investigate, but I first have to get a system up and running to test with. That may take some time.

Sorry that this happened. I know about it, though, and I’ll get to the bottom of this one way or another. And hey, maybe it will be magically solved by…

A new Xwayland

Last night, something very, very exciting happened: Xwayland landed in the X server. I’m super thrilled to see this land; I honestly thought it would be at least another year before we’d see it upstream. Keep in mind, it’s been in the works for three years now.

So, why did it succeed so fast? To put it simply, Xwayland has been completely rearchitected to be leaner, cleaner, faster, and better than ever before. It’s not done yet; direct rendering (e.g. games using OpenGL) and, by extension, 2D acceleration aren’t supported yet, but support is in the pipeline.

I also talked about this somewhat in the last blog post, and in The Linux Graphics Stack, but since it’s the result of a fairly recent development, let’s dive in.

The new architecture

Traditionally, even within the FOSS graphics stack, the Xorg stack has a number of different moving parts.

The X server codebase is large, but it’s somewhat decently structured. It houses several different X servers for different purposes. The one you’re used to, the one you log into, is Xorg, and it lives in the hw/xfree86 directory; it’s named like that for legacy reasons. There’s also Xnest and Xephyr, which implement a nested testing environment. Then there are the platform adaptations like hw/xwin and hw/xquartz, which are servers for Win32 and OS X designed to be seamless: the X11 windows they pop up look and behave like any other window on your system.

There’s plenty of code that can be shared across all the different servers. If somebody presses a key on their keyboard, the code to calculate the key press event, do keysym translation, and then send it to the right application should be shared between all the different servers. And it is. This shared code lives in a part of the source tree called Device-Independent X, or “DIX” for short. A lot of the common functionality related to implementing the protocol is done in here.

The different servers, conversely, are named “Device-Dependent X”s, or “DDX”en, and that’s what the hw/ directory path means. They hook into the DIX layer by installing various function pointers in different structs and by exporting various public functions. The architecture isn’t 100% clean; there are mistakes here and there, but for a codebase that’s over 30 years old, it’s fairly modern.

Since the Xorg server is what most users have been running on, it’s the biggest and most active DDX codebase by far. It has a large module system for loading hardware-specific video and input drivers into it. Input is a whole other topic, so let’s just talk about video drivers today. These video drivers have names like xf86-video-intel, and they plug into Xorg the same way: they install function pointers in various structs that override default functionality with something hardware-specific.
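
To make that hooking pattern concrete, here’s a minimal sketch of the “wrapping” idiom, with illustrative names rather than the real X server types:

```c
/* A minimal sketch of the function-pointer "wrapping" pattern DDXen and
 * video drivers use to hook generic code. Names are illustrative. */
typedef struct Screen Screen;
typedef int (*CreateWindowProc)(Screen *screen);

struct Screen {
    CreateWindowProc CreateWindow; /* generic code calls through this */
    void *ddxPrivate;              /* per-DDX state hangs off here */
};

struct DdxState {
    CreateWindowProc savedCreateWindow; /* the hook we replaced */
};

static int
ddxCreateWindow(Screen *screen)
{
    struct DdxState *ddx = screen->ddxPrivate;

    /* Do the device-specific part here... */
    /* ...then chain to the implementation we wrapped. */
    return ddx->savedCreateWindow(screen);
}

static void
ddxScreenInit(Screen *screen, struct DdxState *ddx)
{
    /* Save the old pointer and install our own. */
    ddx->savedCreateWindow = screen->CreateWindow;
    screen->CreateWindow = ddxCreateWindow;
    screen->ddxPrivate = ddx;
}
```

The real server does this for dozens of hooks (window creation, drawing, cursor handling, and so on), which is how both the DDXen and the video drivers layer themselves over the generic code.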

(Sometimes we call the xf86- drivers themselves the “DDX”en. Technically, these are the parts of the Xorg codebase that actually deal with device-dependent things. But really, the nomenclature is just there as a shorthand. Most of us work on the Xorg server, not in Xwin, so we say “DDX” instead of “xf86 video driver”, because we’re lazy. To be correct, though, the DDX is the server binary, e.g. Xorg, and its corresponding directory, e.g. hw/xfree86.)

What do these video drivers actually do? They have two main responsibilities: managing modesetting and doing accelerated rendering.

Modesetting is the responsibility of configuring each monitor and setting the buffer it displays. This is one of those things that you would think would have been simple and standardized a long time ago, but for a few reasons that never happened. The only two standards here are the VESA BIOS Extensions and its replacement, the UEFI Graphics Output Protocol. Unfortunately, neither is powerful enough for the features we need to build a competitive display server, like an event for when the monitor has vblanked, or flexible support for hardware overlays. Instead, we have a set of hardware-specific implementations in the kernel, along with a common userspace API. This is known as kernel modesetting, or “KMS”.
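
To give a flavor of what that userspace API looks like, here’s a rough sketch using libdrm. Error handling and buffer allocation are omitted, and fb_id stands in for a framebuffer you’d have created earlier with drmModeAddFB():

```c
#include <fcntl.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Light up the first connected monitor with an existing framebuffer. */
void set_first_mode(uint32_t fb_id)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    drmModeRes *res = drmModeGetResources(fd);

    for (int i = 0; i < res->count_connectors; i++) {
        /* A connector is a physical output port; look for a monitor. */
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);

        if (conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0) {
            /* Scan out fb_id on the first CRTC, using the monitor's
             * preferred (first) mode. */
            drmModeSetCrtc(fd, res->crtcs[0], fb_id, 0, 0,
                           &conn->connector_id, 1, &conn->modes[0]);
            drmModeFreeConnector(conn);
            break;
        }
        drmModeFreeConnector(conn);
    }
    drmModeFreeResources(res);
}
```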

The first responsibility has now been killed off. We can simply use KMS as a hardware-independent modesetting API. It isn’t perfect, of course, but it’s usable. This is what the xf86-video-modesetting driver does, for instance, and you can get a somewhat-credible X server up and running that way.

So now we have a pixel buffer being displayed on the monitor. How do we get the pixels into the pixel buffer? While we could do this in software with a library like pixman or cairo, it’s a lot better if we can use the GPU to its fullest extent.
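
For reference, the software path really is that direct. A tiny pixman example, just to show the shape of CPU-side rendering (the calls are real pixman API; the scenario is made up):

```c
#include <pixman.h>

/* Fill a 640x480 ARGB image with opaque blue, entirely on the CPU.
 * pixman is also what the X server's software fallbacks are built on. */
void fill_blue(void)
{
    /* Passing NULL for the bits makes pixman allocate the pixels. */
    pixman_image_t *img =
        pixman_image_create_bits(PIXMAN_a8r8g8b8, 640, 480, NULL, 0);

    pixman_color_t blue = { 0x0000, 0x0000, 0xffff, 0xffff }; /* 16-bit channels */
    pixman_box32_t box = { 0, 0, 640, 480 };
    pixman_image_fill_boxes(PIXMAN_OP_SRC, img, &blue, 1, &box);

    pixman_image_unref(img);
}
```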

Unfortunately, there’s no industry-standard API for accelerated 2D graphics, and there likely never will be. There are plenty of options: in the web stack we have Flash, CSS, SVG, VML, <canvas>, PostScript, and PDF. On the desktop side we have GDI, Direct2D, Quartz 2D, cairo, Skia, AGG, and plenty more. The one attempt to build a hardware-accelerated 2D rendering standard, OpenVG, ended in disaster. NVIDIA is pushing a more flexible approach that integrates better with 3D geometry: NV_path_rendering.

Because of the lack of an industry standard, we created our own: the X RENDER extension. It supplies a set of high-level 2D rendering operations to applications. Video drivers often implement these with hardware fast paths. Whenever you hear talk about EXA, UXA or SNA, this is all they’re talking about: complex, sophisticated implementations of RENDER.
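
From an application’s point of view, a RENDER request looks something like this. A minimal sketch using libXrender, assuming dpy and win already exist:

```c
#include <X11/Xlib.h>
#include <X11/extensions/Xrender.h>

/* Composite a half-opaque red square onto a window. Requests like the
 * fill below are exactly what EXA/UXA/SNA implement with hardware fast
 * paths on the server side. */
void draw_translucent_square(Display *dpy, Window win)
{
    XRenderPictFormat *fmt =
        XRenderFindVisualFormat(dpy, DefaultVisual(dpy, DefaultScreen(dpy)));
    Picture dst = XRenderCreatePicture(dpy, win, fmt, 0, NULL);

    /* RENDER colors are 16 bits per channel, premultiplied by alpha. */
    XRenderColor red = { 0x8000, 0x0000, 0x0000, 0x8000 };
    XRenderFillRectangle(dpy, PictOpOver, dst, &red, 20, 20, 100, 100);

    XRenderFreePicture(dpy, dst);
    XFlush(dpy);
}
```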

As we get newer and newer hardware up and running under Linux, and as CPUs are getting faster and faster, it’s getting less important to write a fast RENDER implementation in your custom video driver.

We do have an industry standard for generic hardware-accelerated rendering, though: OpenGL. Wouldn’t it be nice if we could take the hardware-accelerated OpenGL stack we created, and use that to build a credible RENDER implementation? And that’s exactly what the glamor project is about: an accelerated RENDER implementation that works on any piece of hardware, simply by hoisting it on top of OpenGL.
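
From a driver’s perspective, adopting glamor is meant to be cheap. Roughly, it looks like the sketch below; the exact flags and surrounding init sequence vary by server version, so treat this as the shape of the thing, not a recipe:

```c
#include "glamor.h" /* from the X server tree */

/* Hand a screen's 2D rendering over to glamor. glamor_init() replaces
 * the screen's RENDER and core drawing hooks with GL-backed versions.
 * Sketch only: the flag and the surrounding setup depend on the
 * server version. */
static Bool
my_screen_init(ScreenPtr screen)
{
    /* ... bring up an EGL/GL context for this screen first ... */

    if (!glamor_init(screen, GLAMOR_USE_EGL_SCREEN))
        return FALSE;

    /* ... the rest of the DDX's screen initialization ... */
    return TRUE;
}
```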

So now the two responsibilities of an X video driver have been moved to other places in the stack. Modesetting has moved into the kernel with KMS. Accelerated 2D rendering has been pushed onto the 3D stack we already have in place anyway. Both of these are reusable components that don’t need custom drivers and that we can reuse in Wayland and Xwayland. And that’s exactly what we’re going to do.

So, let’s add another DDX to our list. We arrive at hw/xwayland. This acts like Xwin or Xquartz by proxying all windows through the Wayland protocol. It’s almost impressive how small the code is now. Seriously, compare that to hw/xfree86.

It’s also faster and leaner. A large part of Xorg is code for being a display server on raw hardware: reading events from raw input devices, modesetting, VT switching and handling. The old Xwayland played tricks with the Xorg codebase to try to get it to stop doing those things. Now we have a simple, clean way to get it to stop doing those things: never run that code in the first place!

Unfortunately, in the old model, things like modesetting were done in the video driver, so we simply patched Xorg with a special magical mode that told video drivers not to do anything too tricky. For instance, the xf86-video-intel driver had a special branch for Xwayland support. For generic hardware support, we wrote a generic, unaccelerated driver that stubbed out most of the functions we needed. With the new approach, we don’t need to patch anything at all.

Unfortunately, there are some gaps in this plan. James Jones from NVIDIA recently let us know that they were expecting to use their video driver in Xwayland for backwards compatibility with legacy applications. A few of us had a private chat afterwards about how to move forward here. We’re still forming a plan, and I promise I’ll tell you about it when it’s more solidified. It’s exciting to hear that NVIDIA is on board!

And while I can’t imagine that custom xf86-video-* drivers will ever go away completely, I think it’s plausible that the xf86-video-modesetting driver could add glamor support, and the rest of the FOSS DDXen would die out in its favor.

OK, so what does this mean for me as a user?

The new version of Xwayland is hardware-independent. Previously, in Fedora, we only built xf86-video-intel with Wayland support. While there was a generic video driver, xf86-video-wayland, we never built it in Fedora, and that meant you couldn’t try out Wayland on non-Intel GPUs. This was a Fedora bug, not a fundamental issue with Wayland or a GNOME bug, as I’ve seen some try to claim.

It is true, however, that we mostly test on Intel graphics. Most engineers I know of that develop GNOME run on Lenovo ThinkPads, and those tend to have Intel chips inside.

Now, this is all fixed, and Xwayland can work on all hardware regardless of which video drivers are built or installed. And for the record, the xf86-video-wayland driver is now considered legacy. We hope to ship these as updates in the F20 COPR, but we’re still working out the logistics of packaging all this.

I’m still working hard on Wayland support everywhere I go, and I’m not going to slow down. Questions, comments, everything welcome. I hope the next update can come as quickly!

49 thoughts on “Xwayland”

  1. Hi, is there any progress on providing a universal mode setting API across open source and proprietary graphics driver?
    AFAIK, the DRM/KMS API is exported only by open source drivers.
    How do proprietary graphics drivers handle this?

    I’ve seen an XDC2013 video on EGLDevice by an NVIDIA guy. He talked about its potential as a new mode-setting API.
    Could you share your opinion on the future evolution of Linux graphics driver model and its userspace API (for both open source and proprietary drivers)?

    • The guy who gave that presentation was James Jones from NVIDIA. I don’t particularly care what modesetting API we end up having to support, they’re all pretty much the same. They’re all equally weird or bad in some ways. KMS really isn’t applicable unless NVIDIA wants to use nouveau or an open kernel driver upstream. We’d be excited to see that, but it’s probably never going to happen. Additionally, KMS doesn’t support all the features they need to expose to their customers, like stereo support.

      I doubt we’ll ever see a standard API for modesetting. Maybe the EGLStreams stuff from NVIDIA will get implemented in mesa, who knows.

      • KMS has supported stereo for a while now … at least we do for i915, and thanks to some common helpers in the core, adding support (at least if you don’t have separate framebuffers for the left/right fields in hw) was trivial.

        • Ah. This was a few years ago, when I heard through the grapevine why they couldn’t use KMS. It’s likely all their concerns have been fixed. Looking forward to nuclear modeset!

  2. The paragraph starting with “Unfortunately, there’s no industry standard API …” is incomplete because Shitpress interpreted the html canvas tag you wrote for real and therefore hid the rest of that paragraph in it.

    • Thanks, fixed. I actually tell WordPress to use my raw HTML, and author in raw HTML, so I can’t blame it for interpreting the canvas tag as raw HTML.

  3. So correct me if I’m wrong, but from this article and the mailing list it seems that for example we won’t be using the intel SNA code path when running under XWayland? You would be using GLAMOR instead. Is that right? If so, do you expect the performance delta can/will be closed? As far as I know, even with the latest round of (awesome) optimizations, GLAMOR doesn’t even come close to beating SNA.

    Very much looking forward to more of your work (and Wayland and XWayland), thanks for the update!

    • Correct. While SNA has been claimed to be fast, it often renders the wrong thing. Fixing these bugs and edge cases has significantly docked its performance. SNA is also primarily written by one person, Chris Wilson, and the rest of the Intel Graphics team prefer to use glamor as well.

      Glamor isn’t even anywhere near fully optimized. There’s so much low-hanging fruit we can be improving here, and we have an impressive round of GSoC students who are going to work on that this summer.

      Keep in mind that it’s already outperforming the quite fast software fallbacks and UXA, which is what most distros already use.

      • I’ve been reading about the improvements to GLAMOR as well (I believe Keith Packard wrote about them but I might be mistaken). I was actually asking about the low-hanging-ness of the fruit ;). SNA is working perfectly fine here for me (UXA didn’t cut it for my fleet of tiny boxes), with just one common glitch which has been fixed in git HEAD. But if GLAMOR comes near to it then I will gladly switch.

        I was just under the impression that it was a bit difficult for an OpenGL passthrough to equal a raw 2D driver, because the latter has much more implicit information about what has to happen.

  4. What level of multi-pointer (MPX) extension support will Xwayland have?… And what level of multi-pointer support does Wayland have?

    • Both Xwayland and Wayland support multiple seats. It’s unsupported in GNOME, though; our user interface is designed for one seat, and we’re not really open to changing that at the moment. We don’t know of many use cases for MPX. Do you have any in mind?

  5. I know this is completely unrelated, but I couldn’t find a generic “GNOME forums”, so here we go. What hypervisor does the GNOME Boxes program use? Or is it pluggable? I know libvirt supports every hypervisor in the galaxy, including Hyper-V and even non-hypervisors like OpenVZ and LXC!

    • libvirt is technically pluggable, but most of our development on a virt stack goes towards qemu/kvm. Devoting all our resources towards one strong virt product is much better than spreading thin across twelve.

  6. “And for the record, the xf86-video-wayland driver is now considered legacy.”

    Should that line be xf86-video-intel driver?

  7. This is a superb write-up for those of us who aren’t intimately familiar with X. Thanks so much for your clear explanations and more importantly for your efforts to modernize the free software graphics stack!

  8. “And while I can’t imagine that custom xf86-video-* drivers will ever go away completely, I think it’s plausible that the xf86-video-modesetting driver could add glamor support, and the rest of the FOSS DDXen would die out in its favor.”

    Is it difficult to add glamor support to xf86-video-modesetting? I guess a good starting point would be xf86-video-ati. Grep’ing for glamor in it doesn’t reveal much.

  9. Thanks a whole lot for this article. For many years the architecture of the Linux graphics stack has baffled me. Now I see what the whole DDX thing (and all the others in the TLA soup) means.

  10. Thanks a lot for this article. I wasn’t aware the new XWayland is *that* nice – the design is really pleasing :) . Now I’m looking forward to using it even more :D

    I’ve got one question though, if you permit: regarding NVIDIA having trouble with this new design, shouldn’t GLAMOR work on NVIDIA once they support EGL? (And actually, they already do, so I’d expect that’s just a matter of adding the right window system binding or however these are called.)
    Or is this about GLX applications running under XWayland? How does/will that part even work for the FLOSS drivers?

    • If NVIDIA wants to replace their custom RENDER implementation with Glamor in their Xorg video driver, they’re welcome to do that. It’s just a few calls to glamor_init / glamor_init_screen. It doesn’t require EGL at all, I don’t think.

      However, that won’t help Wayland integration. We still need two things from NVIDIA about Wayland: a way to do modesetting, and a way to bring up direct rendering without X. NVIDIA doesn’t give us either of those things right now, but they have said they’re working on it.

      Right now, Xwayland hardcodes DRI3 support, and doesn’t allow room for anything else. NVIDIA doesn’t have a DRI3 implementation, and they actually have stated they can’t use it as-is, as DRI3’s cleaner design breaks several customer applications. They haven’t told us which applications these are, or given us any testcases for this yet. The long-term plan is to fix DRI3 so that it’s compatible with these legacy customer applications.


  11. So may you shed some light on what will happen to older cards in the new Wayland world?

    I understand that they are used in very old PCs, but they are quite usable and it’s a shame to throw them away. What I have in mind are:

    Chrome (VIA Technologies, Inc. K8M890CE/K8N890CE [Chrome 9] (rev 01))
    Geforce2 (early nvidia)
    ATI RAGE XL (8MB)

    Will Wayland be able to use the existing Xorg drivers to draw things on these old displays? Or will we have to throw away our old infrastructure that doesn’t map to the modern feature set of nvidia, radeon, and intel graphics cards?

    Thanks

    • Do you know if these cards have KMS support? Quite a lot of older cards do. If they do, then things should work just fine. If not, then nope: unless somebody ports the Xorg driver over to KMS, it ain’t happening.

      • I think they have KMS support or it is in the works or at least unofficial patches exist. So that’s good news because there is a chance for our older hardware…

        So all Wayland needs is KMS support? That’s not the impression I got by following the news and forums. There is a lot of discussion about OpenGL, EGL, and GLAMOR, which these cards, with their rudimentary or nonexistent 3D, would be in serious trouble supporting.

        Thanks for answering, for the documentation, and for the work you put into this. I am looking forward to reading your next article.

        Vassilis

        • To be clear:

          In terms of outputting graphics, Wayland itself only really specifies how to pass buffers containing window contents between a compositor process and application processes.

          As long as the compositor has a way to set the screen mode (KMS in the open source world) and a way to provide pixels to the hardware, it can do the rest in pure software. If you’d like terms to Google, look up wl_shm (Wayland buffer passing using shared memory) and pixman (a fast software compositing backend); there’s a sketch of the wl_shm side at the end of this comment.

          Note, of course, that pure software will be slower than using the hardware accelerator – however, on cards that old, the hardware accelerator isn’t guaranteed to accelerate the useful bits of X11, either.
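
          (A concrete sketch of that wl_shm path, assuming you’ve already bound a struct wl_shm * from the registry; error handling omitted:)

          ```c
          #include <fcntl.h>
          #include <stdint.h>
          #include <sys/mman.h>
          #include <unistd.h>
          #include <wayland-client.h>

          /* Create a CPU-filled buffer the compositor can read: plain
           * shared memory, no GPU involved. */
          static struct wl_buffer *
          create_shm_buffer(struct wl_shm *shm, int width, int height)
          {
              int stride = width * 4;   /* ARGB8888: 4 bytes per pixel */
              int size = stride * height;

              /* An anonymous file both client and compositor can map. */
              int fd = shm_open("/wl-example", O_RDWR | O_CREAT | O_EXCL, 0600);
              shm_unlink("/wl-example");
              ftruncate(fd, size);

              /* Render with the CPU: a solid fill here, pixman in real life. */
              uint32_t *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);
              for (int i = 0; i < width * height; i++)
                  pixels[i] = 0xff336699;

              /* Wrap the memory in a wl_buffer we can attach to a surface. */
              struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
              struct wl_buffer *buffer = wl_shm_pool_create_buffer(
                  pool, 0, width, height, stride, WL_SHM_FORMAT_ARGB8888);
              wl_shm_pool_destroy(pool);
              close(fd);
              return buffer;
          }
          ```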

        • Wayland does not specify rendering technology: that is left entirely to the client and the compositor. The Weston reference compositor, for instance, can use pixman and CPU-side blending; other compositors can use or require GL or GLES.

          I’d like to point out that application toolkits are all moving towards a single drawing API, and that drawing API is GL. At some point, you’ll have to accept that older hardware simply cannot run newer software if the minimal requirements have changed.


  12. I recently built xserver 1.15.99.902 with the Xwayland DDX, and a mutter which supports this new DDX, from git on a VMware virtual machine. Now GNOME Shell works on Wayland, but it seems to fall back to software rendering mode (“Graphics” in “System Details” shows “Gallium 0.4 on llvmpipe”).

    Does the mainlined Xwayland support hardware acceleration, or does the problem lie somewhere else in the graphics stack?

    • That’s strange. We disable the GLX extension in Xwayland for now, so the renderer string should be empty. Are you sure you’re logging into a Wayland session and not an X11 session?

      We don’t even have software rendering working on regular Wayland yet.

      • I launched the session with `gnome-session --session=gnome-wayland` and I’m pretty sure it is a Wayland session because I can launch applications with GDK_BACKEND=wayland set. (Here’s the screenshot: http://postimg.org/image/9vkf5iejp/)

        Now, with the new build, the original Xorg stack is broken and I can’t confirm what that string is under an X session…


  13. “Both Xwayland and Wayland support multiple seats. It’s unsupported in GNOME, though; our user interface is designed for one seat, and we’re not really open to changing that at the moment. We don’t know of many use cases for MPX. Do you have any in mind?”

    E_v_e_r_y s_i_n_g_l_e t_i_m_e I sit at the computer, together with another person, a coworker or a friend, or especially a child, to have a “mini-conference”, to discuss and explain something, with the computer screen acting as the “white board”, and the mouse acting as “pointer”, and the keyboard allowing input for revisions … then passing the keyboard and mouse back and forth is very annoying. And then, plugging-in another mouse and keyboard, and having the inputs “fight” with each other – which is what happens when two people are expressing different points of view in a discussion – is also annoying.

    So, GNOME not supporting MPX – kind of a let-down. This is a “chicken and egg” issue. It is not a common “use case” perhaps largely because you simply can’t do it, have two people input to the same screen at the same time. Take a chance. Think Bigger – not just about the software, but the way it is used. “Computing” need not be a solitary activity; it can be a social context – _if_ you support that.

    James

    • What sort of behavior would you like to see in that case? Apps like word-processors getting two cursors?

      • That’s a very fair question; it illuminates how “different” or odd this use case would be, because the applications we have are not really designed for simultaneous multiple input streams. But yes, if the word-processor supported two simultaneous input streams, for instance, two keyboards editing two different areas of text, that would be an example.

        And that is why I would envision more of a “white board” example, since I imagine this “white board” would be _intended_ to support multiple input streams, as for instance, two people drawing simultaneously. Unfortunately, I’m at a loss for a working example.

        The simplest practical implementation I could imagine would be two separate application windows. So, say you are explaining algebra concepts to a high-school student. You could have two graphing applications running simultaneously, say two KAlgebra windows. Your student can be exploring equations to plot in one window, while you are showing examples in another window. Of course, there might be a third window open, a web browser window, describing some concept of algebra. And here, your student might move the mouse to scroll through this article while, at the same time, you are still entering some equation in your plotting window.

        Now, you might suppose that you and your student might instead sidle up next to each other with two separate laptop computers, with two different display screens. But then, you could not move your cursor to _their_ KAlgebra equation window to make some simple correction. If your student was simply not comprehending your words “No – like this…” you would have to move over to _their_ keyboard, push them out of the way, and type away. That would be awkward and socially inappropriate, being a kind of subtle discounting of the student.

        A lot of this potential “multi-seat” support would be about addressing new social interactions.

        Of course, you could imagine similar interaction scenarios if the topic were literature instead of mathematics, perhaps using two word processor windows and a browser window showing some literary work. Or two people working together on some molecular modeling, where it might be easier to simply type a chemical expression than to describe an idea verbally.

        The idea here would be, “How do you create a real-time in-person collaboration environment?” – using your computer, and shy of access to a holodeck. That might have been a silly question with only a 13 inch display screen at hand and no multi-seat support, but a 27 inch screen and multi-seat support makes this idea interesting and practical, I think.

        James

  14. Hi.

    I have some questions about Wayland.

    1. Is Wayland going to support other *nix systems than Linux, like the BSDs (Free-, Net-, OpenBSD)?
    2. Will Wayland support SVGA mode on older graphics cards (I will use it on old PCs such as Pentium MMX/Pentium II-class machines)?
    3. Is it possible in Wayland to use remote desktops and single graphical applications via VNC or SSH (with the -X option or similar)?

    I’ll be glad if you answer to my questions!

    Evilus

    • I will not do any portability work to the BSDs myself (I’m not familiar enough with them, and my job is working on Linux full-time), but I’m told that OpenBSD and FreeBSD community members are working on implementing DRM and KMS. There’s nothing in it that’s inherently unportable; it’s just that somebody has to do the porting work.

      mutter only supports KMS, so it heavily depends on whether the device has a KMS driver, or whether it can use simplefb. You’d have to give me more information about the graphics card and driver currently in use on the system.

      Yes, there have been prototypes of a remote mode. I can’t promise anything as to whether GNOME will fully support it, but I am working on integration prototypes myself, actually.
