Introduction to HTML Components

HTML Components (HTC), introduced in Internet Explorer 5, offer a powerful way to author interactive Web pages. With nothing more than standard DHTML, JScript and CSS knowledge, you can define custom behaviors on elements via the “behavior” CSS property. Let’s create a behavior for a simple kind of “image roll-over” effect. For instance, save the following as “roll.htc”:

<PUBLIC:ATTACH EVENT="onmouseover" ONEVENT="rollon()" />
<PUBLIC:ATTACH EVENT="onmouseout" ONEVENT="rollout()" />
<SCRIPT LANGUAGE="JScript">
tmpsrc = element.src;
function rollon() {
    element.src = tmpsrc + "_rollon.gif"
}
function rollout() {
    element.src = tmpsrc + ".gif";
}
rollout();
</SCRIPT>

This creates a simple HTML Component Behavior that swaps the image’s source when the user’s cursor rolls onto and off of the image. You can “attach” such a behavior to any element using the CSS property “behavior”.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<BODY>
<IMG STYLE="behavior: url(roll.htc)" SRC="logo">
</BODY>
</HTML>

The benefit of HTML Components is that we can apply them to any element through simple CSS selectors. For instance:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<HEAD>
<STYLE>
.RollImg {
  behavior: url(roll.htc);
}
</STYLE>
</HEAD>
<BODY>
<IMG CLASS="RollImg" SRC="logo">
<IMG CLASS="RollImg" SRC="home">
<IMG CLASS="RollImg" SRC="about">
<IMG CLASS="RollImg" SRC="contact">
</BODY>
</HTML>

This allows us to reuse the behavior without having to copy/paste code. Wonderful! This is known as an Attached Behavior, since it is attached directly to an element. Once you’ve mastered these basic Attached Behaviors, we can move on to something a bit fancier: Element Behaviors. With Element Behaviors, you can define custom element types with their own programmable interfaces, allowing us to build a library of custom components, reusable between pages and projects. Like before, an Element Behavior consists of an HTML Component, but now we have to declare our component in <PUBLIC:COMPONENT>.

<PUBLIC:COMPONENT TAGNAME="ROLLIMG">
<PUBLIC:ATTACH EVENT="onmouseover" ONEVENT="rollon()" />
<PUBLIC:ATTACH EVENT="onmouseout" ONEVENT="rollout()" />
<PUBLIC:PROPERTY NAME="basesrc" />
</PUBLIC:COMPONENT>
<img id="imgtag" />
<SCRIPT>
img = document.all['imgtag'];
element.appendChild(img);
function rollon() {
    img.src = element.basesrc + "_rollon.gif";
}
function rollout() {
    img.src = element.basesrc + ".gif";
}
rollout();
</SCRIPT>

I’ll get to the implementation of ROLLIMG in a bit, but first, to use a custom element, we use the special <?IMPORT> tag which allows us to import a custom element into an XML namespace.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML XMLNS:CUSTOM>
<HEAD>
<?IMPORT NAMESPACE="CUSTOM" IMPLEMENTATION="RollImgComponent.htc">
</HEAD>
<BODY>
<CUSTOM:ROLLIMG BASESRC="logo">
<CUSTOM:ROLLIMG BASESRC="home">
<CUSTOM:ROLLIMG BASESRC="about">
<CUSTOM:ROLLIMG BASESRC="contact">
</BODY>
</HTML>

The ROLLIMG element fully encapsulates the behavior, freeing the page author from having to know which kind of element the Attached Behavior belongs on! The implementation of a Custom Element Behavior might seem a bit complex, but it’s quite simple. When Internet Explorer parses a Custom Element, it synchronously creates a new HTML Component from this “template” and binds it to the instance. We also have two “magic” global variables here: “element” and “document”. Each instance of this HTML Component gets its own document, the children of which are reflowed to go inside the custom element. “element” refers to the custom element tag in the outer document which embeds the component. Additionally, since each custom element has its own document root, it also has its own script context and its own set of global variables.

We can also set up properties as an API for the document author to use when they use our custom element.

Here, we use an img tag as a “template” of sorts, adding it to our custom element’s document root.

After IE puts it together, the combined DOM sort of looks like this:

<CUSTOM:ROLLIMG BASESRC="logo">
    <IMG ID="imgtag" SRC="logo.gif">
</CUSTOM:ROLLIMG>

<CUSTOM:ROLLIMG BASESRC="home">
    <IMG ID="imgtag" SRC="home.gif">
</CUSTOM:ROLLIMG>

...

Unfortunately, this has one final flaw. Due to the cascading nature of CSS stylesheets, such “implementation details” leak through. For instance, if someone adds a <STYLE>IMG { background-color: red; }</STYLE>, it will affect our content as well. While that can sometimes be a good thing if you want to develop a styleable component, it often results in undesirable effects. Thankfully, Internet Explorer 5.5 adds a new feature, named “Viewlink”, which encapsulates not just the implementation of your HTML Component, but its document as well. A “Viewlink” differs from a regular component in that, instead of adding things as children of our element, we provide a document fragment which the browser “attaches” to our custom element in a private, encapsulated manner. The simplest way to do this is to use our HTML Component’s own document root.

<PUBLIC:COMPONENT TAGNAME="ROLLIMG">
<PUBLIC:ATTACH EVENT="onmouseover" ONEVENT="rollon()" />
<PUBLIC:ATTACH EVENT="onmouseout" ONEVENT="rollout()" />
<PUBLIC:PROPERTY NAME="basesrc" />
</PUBLIC:COMPONENT>
<img id="imgtag" />
<SCRIPT>
defaults.viewLink = document;
var img = document.all['imgtag'];
function rollon() {
    img.src = element.basesrc + "_rollon.gif";
}
function rollout() {
    img.src = element.basesrc + ".gif";
}
rollout();
</SCRIPT>

Using the “defaults.viewLink” property, we can set our HTML Component’s private document fragment as our viewLink, rendering its contents without adding them as children of our element. Perfect encapsulation.

*cough* OK, obviously it’s 2017 and Internet Explorer 5.5 isn’t relevant anymore. But if you’re a Web developer, this should have given you some pause for thought. The pillars of modern Web Components (Templates, Custom Elements, Shadow DOM, and Imports) were all features originally found in IE5, released in 1999.

Now, it “looks outdated”: uppercase instead of lowercase tags, the “on”s everywhere in the event names, but that’s really just a slight change of accent. Shake off the initial feeling that it’s cruft, and the actual meat is all there, and it’s mostly the same. Sure, there’s magic XML tags instead of JavaScript APIs, and magic globals instead of callback functions, but that’s nothing more than a slight change of dialect. IE says tomato, Chrome says tomato.

Now, it’s likely you’ve never heard of HTML Components at all. And, perhaps shockingly, a quick search at the time of this article’s publishing suggests hardly anybody else has, either.

Why did IE5’s HTML Components never quite catch on? Despite what you might think, it’s not because of a lack of open standards. Remember, a decent amount of today’s web API started from Internet Explorer’s DHTML initiative: contenteditable, XMLHttpRequest, and innerHTML were all carefully, meticulously reverse-engineered from Internet Explorer. Internet Explorer was the dominant platform for websites; practically nobody designed or even tested websites for Opera or Netscape. I can remember designing websites that used IE-specific features like DirectX filters to flip images horizontally, or the VML

And it’s not because of a lack of evangelism or documentation. Microsoft was trying to push DHTML and HTML Components hard. Despite the content being nearly 20 years old at this point, documentation on HTML Components and viewLink is surprisingly well-kept, with diagrams and images, sample links and all, archived without any broken links. Microsoft’s librarians deserve fantastic credit on that one.

For any browser or web developer, please go read the DHTML Dude columns. Take a look at the breadth of APIs on display, and go look at some of the example components. Take a look at the persistence API, or dynamic expression properties. Besides the much-hyped-but-dated-in-retrospect XML data binding tech, it all seems relatively modern. Web fonts? IE4. CSS gradients? IE5.5. Vector graphics? VML (which, in my opinion, is a more sensible standard than SVG, but that’s for another day.)

So, again I ask, why did this never catch on? I’m sure there are a variety of complex factors, probably none of which are technical reasons. Despite our lists of “engineering best practices” and “blub paradoxes”, computer engineering has been, and always will be, dominated by fads and marketing and corporate politics.

The more important question is a bigger one: Why am I the first one to point this out? Searching for “HTML Components” and “Viewlink” leads to very little discussion about them online, past roughly 2004. Microsoft surely must have been involved in the Web Components Working Group. Was this discussed at all?

Pop culture and fads come and go over the years. Just a few years ago, web communities were excited about Object.observe, before React proved it unnecessary. Before node.js’s take on “isomorphic JavaScript” was solidified, heck, even before v8cgi / teajs, an early JavaScript-as-a-server project, another bizarre web framework known as Aptana Jaxer was doing it in a much more direct way.

History is important. It’s easier to point and laugh and ignore outdated technology like Internet Explorer. But tech, so far, has an uncanny ability to keep repeating itself. How can we do a better job paying attention to things that happened before us, rather than assuming it was all bad?

New Xplain: Basic 2D Rasterization

Hi. I just published a new Xplain article about basic 2D rasterization. Since I left Endless and the Linux world behind, I haven’t felt as motivated to document the details of the X11 Window System, but I still feel very motivated to teach the basics and foundations of graphics and other systems. Xplain seems to be my place for interactive demo explanations, so on there it goes.

Take care.

Take care.

Today was my last day at Endless.

Most of you know me for my Linux, GNOME, and free software work. It might be shocking or surprising for you to know that I’m choosing (willingly!) to go on and be one of the nameless faces working on commercial software.

Facts:

  • At Endless, my last year was almost exclusively spent working on proprietary software. And I was happier.
  • I’m typing this in Visual Studio Code, running on Windows 10. I haven’t run any variant of Linux on my main desktop computer for almost 5 years.
  • I took a pay cut for the new position.

I’ll post about my experiences working on Linux and open-source software professionally soon. After that, this blog will die. I’ll still keep it up and running, but I won’t be posting any more.

Take care.

“DRI”

I spend a lot of time explaining the Linux Graphics Stack to various people online. One of the biggest things I’ve come across is that people have a hard time differentiating between certain acronyms like “DRI”, “DRM” and “KMS”, and where they fit in the Linux kernel, in Xorg, and in Wayland. We’re not the best at naming things, and sometimes we choose the wrong name. But still, let’s go over what these mean, and where they are (or should be) used.

You see, a long time ago, Linux developers had a bunch of shiny new GPUs and wanted to render 3D graphics on them. We already had an OpenGL implementation that could do software rendering, called mesa. We had some limited drivers that could do hardware rendering in the X server. We just needed to glue it all together: implement hardware support in Mesa, and then put the two together with some duct tape.

So a group of developers much, much older than I am started the “Direct Rendering Infrastructure” project, or “DRI” for short. This project would add functionality and glue it all together. So, the obvious choice when naming a piece of glue technology like this is to give it the name “DRI”, right?

Well, we ended up with a large number of unrelated things all effectively named “DRI”. It’s double the fun when new versions of these components come around, e.g. “DRI2” can either refer to a driver model inside mesa, or an extension to the X server.

Yikes. So let’s try to untangle this a bit. Code was added to primarily three places in the DRI project: the mesa OpenGL implementation, the Xorg server, and the Linux kernel. The code does these three things: In order to get graphics on-screen, mesa needs to allocate a buffer, tell the kernel to render into it, and then pass that buffer over to the X Server, which will then display that buffer on the screen.

The code that was added to the kernel was in the form of a module called the “Direct Rendering Manager” subsystem, or “DRM”. The “DRM” subsystem takes care of controlling the GPU hardware, since userspace does not have the permissions to poke at the raw driver directly. Userspace uses these kernel devices by opening them through a path in “/dev/dri”, like “/dev/dri/card0”. Unfortunately, through historical accident, the device nodes had “DRI” in them, but we cannot change it for backwards-compatibility reasons.

The code that was added to mesa, to allocate and then submit commands to render inside those buffers, was a new driver model. As mentioned, there are two versions of this mesa-internal driver model. The differences aren’t too important. If you’ve ever looked inside /usr/lib/dri/ to see /usr/lib/dri/i915_dri.so and such, this is the DRI that’s being named here. It’s telling you that these libraries are mesa drivers that support the DRI driver model.

The third bit, the code that was added to the X server, which was code to allocate, swap, and render to these buffers, is a protocol extension known as DRI. There are multiple versions of it: DRI1, DRI2 and DRI3. Basically, mesa uses these protocol extensions to supply its buffers to the X server so it can show them on screen when it wants to.

It can be extraordinarily confusing when both meanings of DRI show up in a single piece of code, as they do in mesa. Here, helper code for the DRI2 driver model API is used to implement part of the support for the DRI3 protocol extension, so we end up with both “DRI2” and “DRI3” in our code.

Additionally, to cut down on the shared amount of code between our X server and our mesa driver when dealing with buffer management, we implemented a simple userspace library to help us out, and we called it “libdrm”. It is mostly a set of wrappers around the kernel’s DRM API, but it can have more complex behavior for more complex kinds of buffer management.
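To make that concrete, here’s a minimal sketch of my own (not code from any of these projects) of userspace talking to the kernel through one of those misnamed “/dev/dri” nodes, using libdrm’s thin wrappers:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    /* "DRI" in the path, but it's the DRM subsystem on the other side. */
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    /* libdrm wraps the kernel's version ioctl for us. */
    drmVersionPtr ver = drmGetVersion(fd);
    if (ver) {
        printf("DRM driver: %s (%d.%d.%d)\n", ver->name,
               ver->version_major, ver->version_minor, ver->version_patchlevel);
        drmFreeVersion(ver);
    }

    close(fd);
    return 0;
}

Build it against libdrm, for example with cc drm_info.c $(pkg-config --cflags --libs libdrm), and it will print which kernel DRM driver owns the GPU.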

The DRM kernel API also has another, separate API inside it, sometimes known as “DRM mode”, and sometimes known as “KMS”, in order to configure and control display controllers. Display controllers don’t render things, they just take a buffer and show it on an output like an HDMI TV or a laptop panel. Perhaps we should have given it a different name and split it out even further. But the DRM mode API is another name for the KMS API. There is some work ongoing to split out the KMS API from the generic DRM API, so that we have two separate device nodes for them: “render nodes” and “KMS nodes”.
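As a quick illustration of that split in responsibilities, here’s another rough sketch (again mine, and simplified) that uses only the KMS side of the API to ask the display controller which outputs are plugged in:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0)
        return 1;

    /* CRTCs, encoders and connectors: display plumbing, no rendering involved. */
    drmModeRes *res = drmModeGetResources(fd);
    if (!res)
        return 1;

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        if (!conn)
            continue;
        printf("connector %u: %s, %d modes\n", conn->connector_id,
               conn->connection == DRM_MODE_CONNECTED ? "connected" : "disconnected",
               conn->count_modes);
        drmModeFreeConnector(conn);
    }

    drmModeFreeResources(res);
    close(fd);
    return 0;
}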

You can also sometimes see the word “DRM” used in other contexts in userspace APIs as well, usually referring to buffer sharing. As a simple example, in order to pass buffers between Wayland clients and Wayland compositors, the mesa implementation of this uses a secret internal Wayland protocol known as wl_drm. This protocol is eerily similar to DRI3, actually, which goes to show that sometimes we can’t decide on what something should be named ourselves.

Why I’m excited for Vulkan

I’ve stopped posting here because, in some sense, I felt I had to be professional. I have a lot of half-written drafts I never felt were good enough to publish. Since a lot of eyes were on me, I only posted when I had something I was really proud to share. Anyone who has met me in real life knows I can talk a lot about a lot of things, and more than anything else, I’m excited to teach and share. Having a platform and only feeling able to use it for something really complete and polished felt stifling, even though I have a lot I want to say.

So expect half-written thoughts on things from here on out, a lot more frequently. I’ll still try to keep it technical and interesting to my audience.

What’s Vulkan

In order to program GPUs, we have a few APIs: Direct3D and OpenGL are currently the most popular ones. OpenGL has the advantage of being implemented independently by most vendors, and is generally platform-agnostic. The OpenGL API and specification are managed by the standards organization Khronos. Note that in closed environments, you can find many others. Apple has Metal for their own set of PVR-based GPUs. In the game console space, Sony had libgcm on the PS3 and GNM on the PS4, and Nintendo has the GX API for the GameCube and Wii, and GX2 for the Wii U. Since consumers weren’t expected to swap GPUs the way they can on the PC platform, these APIs could be extremely low-level.

OpenGL was originally started back in the mid-80s as a library called “GL” (short for Graphics Library), for SGI’s internal use on their own hardware and systems. They then released it as a product, “IRIS GL”, allowing customers to render graphics on SGI workstations. In a strategic move, SGI then allowed third parties to implement the API and opened up the specifications, turning “IRIS GL” into “OpenGL”.

In the 30+ years since GL was started, computing has changed a lot, and OpenGL’s model has grown outdated. Vulkan is the first attempt at a cross-platform, vendor-neutral, low-level graphics API. Low-level APIs like this are similar to what the console space has had for close to a decade, offering higher levels of performance, but instead of being tied to one GPU vendor’s hardware, Vulkan allows any vendor to implement it for their own hardware.

Dishonesty

People have already written a lot about why Vulkan is exciting. It has a lower overhead on the CPU, leading to much improved performance, especially on CPU-constrained platforms like mobile. Instead of being a global implicit state machine, it’s very explicit, allowing for better multithreaded performance.

These are all true, and they’re all good things that people should be excited for. But I’m not going to write about any of these. Instead, I’m going to talk about a more important point which I don’t think has been written about much: the GPU vendor cannot cheat.

You see, there’s been an awkward development in high-level graphics APIs over the last few years. During the early 2000s, the two major GPU vendors, ATI and NVIDIA, effectively had an arms race. They noticed that certain programs and games were behaving “foolishly”.

The code for a game might look like this:


// Clear to black.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

// Start drawing triangles.
glBegin(GL_TRIANGLES);
glVertex3f(-1, -1, 0);
glVertex3f(-1, 1, 0);
glVertex3f( 1, 1, 0);
// ...
glEnd();

(I’m writing in OpenGL, because that’s the API I know, but Direct3D has a very similar API, and a similar problem.)

The vendors noticed that games were clearing the entire screen to black when they really didn’t need to. So they started figuring out whether the game “really” needed the clear: the driver would simply set a flag noting that a clear had been requested, and then skip the actual clear if the triangles drawn afterwards painted over the whole screen anyway.

Vendors shipped these updated drivers which had better performance. In a perfect world, these tricks would simply improve performance. But competition is a nasty thing, and once one competitor starts playing dirty, you have to follow along to compete.

As another example, the driver vendors noticed that games uploaded textures they didn’t always use. So the drivers started to only upload textures when games actually drew them.

But uploading textures isn’t cheap. When a new texture first appears in a game, it would stall a little bit. And customers got mad at the game developers for having “unoptimized” games, when it was really the vendor’s fault for not implementing the API correctly! Gamers praised the driver vendor for making everything fast, without realizing that performance is a trade-off.

So game developers found another trick: they would draw rectangles with each texture once while the level loaded, to trick the driver into actually uploading the texture. This is the sort of “folklore knowledge” that tends to be passed around from game development company to game development company, that just sort of exists within the industry. This isn’t really documented anywhere, since it’s not a feature of the API, it’s just secret knowledge about how OpenGL really works in practice.
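In legacy OpenGL, that folklore trick looks roughly like the sketch below. This is my own reconstruction; the function name and the exact geometry are made up, and the textures are assumed to have already been created with glTexImage2D elsewhere:

#include <GL/gl.h>

/* During level load: draw one throwaway quad with each texture so the driver
 * is forced to upload it now, instead of stalling mid-game on first use. */
static void prewarm_textures(const GLuint *textures, int count)
{
    glEnable(GL_TEXTURE_2D);
    for (int i = 0; i < count; i++) {
        glBindTexture(GL_TEXTURE_2D, textures[i]);

        /* A tiny quad tucked into a corner; we only care about the side
         * effect of making the driver upload the bound texture. */
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex3f(-1.00f, -1.00f, 0.0f);
        glTexCoord2f(1, 0); glVertex3f(-0.99f, -1.00f, 0.0f);
        glTexCoord2f(1, 1); glVertex3f(-0.99f, -0.99f, 0.0f);
        glTexCoord2f(0, 1); glVertex3f(-1.00f, -0.99f, 0.0f);
        glEnd();
    }
    glFinish();  /* make sure the work actually reaches the GPU before gameplay starts */
}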

Bigger game developers know all of these tricks, and they tend to have support contracts with the driver vendors who help them solve issues. I’ve heard several examples from game developers who were told to draw 67 triangles at a time instead of 64, because that happens to be faster on NVIDIA, while the magic number might be 62 on AMD. Most game engines that I know of, when using “OpenGL in practice”, actually have different paths depending on the OpenGL vendor in use.

I could go on. NVIDIA has broken Chromium because it patched out the “localtime” function. The Dolphin project has hit bugs simply because its executable is named “Dolphin.exe”. We were told by an NVIDIA employee that there was a similar internal testing tool that used the API wrong, and they simply patched around it themselves. A very popular post briefly touched on “how much game developers get wrong” from an NVIDIA-biased perspective, but having talked to these developers, they’re often told to remove such calls for performance, or because they trigger strange behavior from driver heuristics. It’s common industry knowledge that most drivers ship with hand-compiled or optimized forms of shaders used in popular games as well.

You might have heard of tricks like “AZDO”, or “approaching zero driver overhead”. Basically, since game developers were asking for a slimmer, simpler OpenGL, NVIDIA added a number of extensions to their driver to support more modern GPU usage. The general consensus across the industry was a resounding sigh.

A major issue in shipping GLSL shaders in games is that, since there is no conformance test suite for GLSL, different drivers accept different variants of GLSL. For a simple example, see page 85 of the Glyphy slides for examples of complex shaders in action.

NVIDIA has cemented themselves as the “king of video games” simply by having the most tricks. Since game developers optimize for NVIDIA first, they have an entire empire built around being dishonest. The general impression among most gamers is that Intel and AMD drivers are written by buffoons who don’t know how to program their way out of a paper bag. OpenGL is hard to get right, and NVIDIA has millions of lines of code invested in that. The Dolphin Project even concludes that NVIDIA’s OpenGL implementation is the only one to really work.

How does one get out of that?

Honesty

In 2013, AMD announced the Mantle API, a cross-platform, low-overhead API for programming GPUs. They then donated this specification to the Khronos OpenGL committee, and waited. At the same time, AMD worked with Microsoft engineers to design a low-overhead Direct3D 12 API, primarily for the next version of the Xbox, in response to Sony’s success with libgcm.

A year later, the “gl-next” effort was announced and started. The committee, composed of game developers and mobile vendors, quickly hacked through the specification, rounding off the corners. Everyone was excited, but more than anything else, game developers were happy to have a comfortable API that didn’t feel like they were wrestling with the driver. Mobile developers were happy that they had a model that mapped very well to their hardware.

Microsoft got word about gl-next, and quickly followed with Direct3D 12. Another year passed, and the gl-next API was renamed to “Vulkan”.

I have been told through the grapevine that NVIDIA was not very happy with this: they didn’t want to lose the millions they had invested in their driver, or their marketing and technical edge, but they couldn’t go against the momentum.

Pulling off a political coup like this wasn’t easy; it had been tried in the mid-2000s as “OpenGL 3.0”, but since there were fewer graphics vendors in those days, and since game developers were not allowed as Khronos members, NVIDIA was able to wield enough power to maintain the status quo.

Accountability

Those of you who have seen the Vulkan API (and there are plenty of details on the open web, even if the specs are currently behind NDA) know that there isn’t any equivalent to glClear or similar. The design of Vulkan is that you control a modern GPU from start to finish. You control all of these steps, you control what gets scheduled and when.

The games industry has had a term called “dev-to-triangle time” when describing API complexity and difficulty: take an experienced programmer, put him in a room with a brand new SDK he’s never used before, and wait until he gets a single triangle up on the screen. How long does it take?

I’d always heard the PS2 described as having two weeks to a month of dev-to-triangle time, but according to a recent Sony engineer, it was around 3 to 6 months (I think that’s exaggerated, personally). The PS2 made you wrestle with two vector coprocessors, VU0 and VU1, the Graphics Synthesizer, which ran the equivalent of today’s pixel shaders, along with a dedicated floating-point unit. Getting an engine up on the PS2 required writing code for these four devices, and then writing a process to pass data from one to the other and plug them all together. It’s sort of like you’re writing a driver!

The upside, of course, was that once you put in this required effort, expanding the engine is fairly easy, and you have a fairly good understanding of how everything works and where the boundaries are.

Direct3D and OpenGL, once you wrestle out a few driver issues, consistently have a dev-to-triangle time of one to two days. The downside, of course, is that complex workloads require complex techniques like draw call batching and texture atlases to avoid texture switches, or the more complex AZDO techniques mentioned above. Some of these can involve a major restructuring of engine code, so the subtleties of high-level APIs are only discovered late in development.
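To give a flavor of what “draw call batching” means in practice, here’s a tiny sketch of my own, with made-up names, in old fixed-function GL for brevity: instead of issuing one draw call per object, vertices are accumulated into a single array and flushed with one glDrawArrays.

#include <GL/gl.h>

#define MAX_VERTS 65536

static GLfloat batch[MAX_VERTS * 3];
static int batch_count;  /* number of vertices currently queued */

static void batch_flush(void)
{
    if (batch_count == 0)
        return;
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, batch);
    glDrawArrays(GL_TRIANGLES, 0, batch_count);  /* one call for the whole batch */
    batch_count = 0;
}

static void batch_triangle(const GLfloat verts[9])
{
    if (batch_count + 3 > MAX_VERTS)
        batch_flush();
    for (int i = 0; i < 9; i++)
        batch[batch_count * 3 + i] = verts[i];
    batch_count += 3;
}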

Vulkan chooses to opt for the PS2-like approach: game developers are in charge of building command buffers, submitting them to the GPU, waiting on fences, and swapping the front and back buffers and submitting them to the window system themselves.
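For the curious, here is roughly what that explicit flow looks like in Vulkan itself; a bare-bones sketch that assumes the device, queue, command buffer, fence and swapchain have already been created by the application:

#include <stdint.h>
#include <vulkan/vulkan.h>

static void submit_and_present(VkDevice device, VkQueue queue,
                               VkCommandBuffer cmd_buf, VkFence fence,
                               VkSwapchainKHR swapchain, uint32_t image_index)
{
    /* Hand the recorded command buffer to the GPU. */
    VkSubmitInfo submit_info = {
        .sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = 1,
        .pCommandBuffers    = &cmd_buf,
    };
    vkQueueSubmit(queue, 1, &submit_info, fence);

    /* The application, not the driver, decides when to wait. */
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
    vkResetFences(device, 1, &fence);

    /* Handing the finished image to the window system is an explicit step, too. */
    VkPresentInfoKHR present_info = {
        .sType          = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR,
        .swapchainCount = 1,
        .pSwapchains    = &swapchain,
        .pImageIndices  = &image_index,
    };
    vkQueuePresentKHR(queue, &present_info);
}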

This means that the driver layer is fairly thin. An ImgTec engineer mentioned that dev-to-triangle time on Vulkan was likely two weeks to a month.

But what you get in return is everything you got on the PS2, and in particular, something that hasn’t been possible on the PC so far: accountability. Since the layer is so thin, there’s no room for the driver vendor to cheat. The graphics performance of a game is now a reflection of what the developer puts into it. For once, the people gamers often blame, the game developers, will actually be at fault.

Xplain: Regional Geometry

*cough* *cough* Is this thing still on?

I don’t write much here anymore, partly because I don’t see it as a platform where I have much voice or volume, and also because the things I most want to write about don’t fit in this blog thematically.

But a few years ago when I first released Xplain, I promised everyone that when I updated my Xplain series, since it didn’t naturally have an RSS feed, I’d write something here instead. I have released a new article on Xplain, and as such, I’m here to fill up your feed reader with a link telling you to go look elsewhere.

I’m particularly happy with the way this article came out, and for those of you still watching this space, I’d really appreciate it if you read it. Thank you.

Xplain: Regional Geometry

Endless

Six months ago, I left Red Hat to join a small little company on the other side of the country to help them launch a product based on GNOME. I haven’t had much to say in that time, but rest assured, I’ve been very busy.

Today, it has all become real. The small team here has built something amazing. For the next 30 days, you have the opportunity to own one. To help seed sales and build awareness, we’ve launched a Kickstarter for our product.

Endless

We have much more planned for release, including a site for developers, but we’re swamped with responding to the Kickstarter today. Our source code is available on GitHub.

If you have any questions, feel free to leave a comment, or contact us through Kickstarter. I’m one of the people responding to Kickstarter directly.

Thank you.

Why Package Managers are not my Ideal Software Distribution Mechanism

Those who have spoken to me know that I’m not a big fan of packages for shipping software. Once upon a time, I was wowed that I could simply emerge blender and have a full 3D modelling suite running in a few minutes, without the fuss of wizards, boxes to uncheck, or READMEs to dig through. But today, iOS and Android have redefined the app installation experience, and packages seem like a step backwards.

I’m not alone in this. If you’ve seen recent conversations about the systemd team’s proposal for shipping Linux software differently, they grew out of the same lunchtime conversations and gripes on IRC.

My goal here is to explain the problems we’ve seen, map out some goals for a new solution to supersede packages, and open up an avenue for discussion about this.

As a user

Dealing with packages as a normal user can be really frustrating. Just last week, I was trying to upgrade my system when Debian decided to stop in the middle and ask me which of two sshd configuration files I wanted to keep. I left it like that and went to lunch, and when I got back I accidentally hit the power strip with my feet. After much cursing, I eventually had to reinstall the OS from scratch.

It should never be possible to completely hose your OS by turning it off during normal operation, and I should be able to upgrade my OS without having the computer ask me incomprehensible questions I don’t understand.

And on my Fedora laptop, I can’t upgrade my system because Blender was built against an older libjpeg than the rest of my system. It gave me some error about packages conflicting and then aborted. And today, as I’m writing this, I’m on an old, insecure Fedora installation because upgrading it takes too much manual effort.

Today’s package managers do not see the OS independently from the applications that make it up: all packages are just combined to create one giant filesystem tree. This scheme works great when you have a bunch of open-source apps you can rebuild at every ABI break, but it’s not great when trying to build a world-class OS.

It’s also partially because package installations aren’t reproducible. Installing package A and then package B does not guarantee the same filesystem tree as installing package B, then A.

Packages are effectively composed of three parts: metadata about the package (its name, version, dependencies, and plenty of other information), a bunch of files to place in the filesystem tree (known as the “payload”), and a set of scripts to run when installing, uninstalling and upgrading the package (known as the “triggers”). It’s because of these scripts that packages are dangerous.

It would be great if developers could ship their apps directly to users. But, unfortunately, packaging gets in the way. The typical way to do things is to package up the source code, and then let community members who are interested make their own package for their favorite “distribution”. Each distribution usually has its own package format and build system, with different payloads and triggers, leading to a frustrating fragmentation problem for both users and developers.

The developers of Chromium, for instance, don’t allow any bugs to be reported for any builds but their official version, since they can’t be sure what patches the community has made. And in some cases, the community has patched a lot. (Side note: I find it personally disappointing that a great app, Chromium, isn’t shipped in Fedora because of disagreements in how their app is developed. Fedora should stand for freedom and choice for the user to use whatever apps they want, and not try to force their engineering practices on the world.)

As a developer

That said, packages are amazing when doing development. Want to read PNGs? apt-get install libpng-dev. Want a database? Instead of hunting around for SQLite binaries, just yum install 'pkgconfig(sqlite3)'.

Paired with pkg-config, I think the usability and ease of use have made development packages quite possibly the most attractive development environment out there today. In fact, other projects like node’s npm, Ruby’s gems, and Python’s pip have stolen the idea of packages and made it their own. Even Microsoft has endorsed NuGet as the easiest way of developing great apps on top of their .NET platform.
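To show what I mean, here’s a tiny, hypothetical example of that workflow. With the SQLite development package installed, pkg-config hands the compiler everything it needs; something like cc hello_sqlite.c $(pkg-config --cflags --libs sqlite3) is the whole build:

#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    /* Prove we found the headers and library the dev package installed. */
    printf("Built against SQLite %s\n", sqlite3_libversion());
    return 0;
}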

Development packages solve a lot of the typical problems. These libraries are uploaded directly by developers, and are typically installed per-project, not globally across the entire system, meaning I can have one app built against an older SQLite, and another built against something more modern. Upgrading these packages doesn’t run arbitrary scripts as root; it just unpacks new files in a certain location.

I’ve also been doing a lot of recent development on my ThinkPad and my home computer, both equipped with SSDs without a lot of disk space. While I’d happily welcome HP’s memristors hitting shelves and providing data storage in sizes and speeds beyond today’s SSDs, I think it’s worth thinking about how to provide a great experience for those not fortunate enough to be able to waste another gig on duplicated libraries.

Working towards a solution

With all of this in mind, we can start working on a solution that solves these problems and meets these goals. As such, you might have seen different things trickle out of the community here. The amazing Colin Walters was the first to actually do anything about it when he built OSTree, which allows fully atomic system upgrades. You can never get your system into a hosed state with it.

At Endless Mobile, we want to ship a great OS that upgrades automatically, without ever breaking if the power gets cut or if the user unplugs it from the wall. We’ve been using OSTree successfully in production, and we’ve never seen a failed upgrade in the wild. It would be great to see the same applied to applications.

As mentioned, we’ve also seen some work starting on the app experience. Lennart Poettering started working on Sandboxed Applications for GNOME back in 2013, and work has been steadily progressing, both on building kdbus for sandboxed IPC, and on a more concrete proposal for how this experience will look and fit together.

Reading closely, you might pick up that I, personally, am not entirely happy with this approach, since there are no development packages, along with a number of other minor technical criticisms, but I haven’t really talked to Lennart or the rest of the team building it yet.

Disclaimer

I also know that this is controversial. Wars have been fought over package management systems and distributions, and it’s very off-putting for someone who just wants to develop software for our platform and our OS.

Package managers aren’t magic, they’re a set of well-understood technical tools, with tradeoffs and limitations like every other system out there. I hope we can move past our differences, recognize issues in existing technology, and build something great together.

As always, these opinions are my own. I do not speak for anybody mentioned in this article, anybody else in the GNOME community, the opinion of GNOME in general, and I certainly don’t speak for either my current employer or my former employer.

Please feel free to express opinions in the comments, for or against, however strong, as I’m honestly trying to open an avenue of discussion. However, I will not tolerate comments that make personal attacks on anybody. My blog is not the place for that.

Xplain: Adding Transparency

The next article in my “Xplain” series is now complete and has been published: “Adding Transparency”. It explains how exactly we added transparent windows to the X server, covering the COMPOSITE X extension along with other pieces like RENDER and TFP, together with live demos.

Any and all feedback welcome. I’m having a lot of fun doing these, and I recently got some downtime at work, so the next one might come even quicker than expected.

XNG: GIFs, but better, and also magical

It might seem like the GIF format is the best we’ll ever see in terms of simple animations. It’s an interesting format, but it doesn’t come without its downsides: dated LZW-based compression, a limited color palette, and no support for reusing old image data in new locations.

Two competing specifications for animations were developed: APNG and MNG. The two camps have fought wildly and we’ve never gotten a resolution, and different browsers support different formats. So, for the widest range of compatibility, we have just been using GIF… until now.

I have developed a new image format which I’m calling “XNG”, which doesn’t have any of these restrictions, can support more complex features, and works in existing browsers today. It doesn’t require any new features like <canvas> or <video>, or any JavaScript libraries. In fact, it works without JavaScript enabled at all. I’ve tested it in both Firefox and Chrome, and it works quite well in either. Just embed it like any other image, e.g. <img src="myanimation.xng">.

It’s magic.

Have a few examples:

I’ve been looking for other examples as well. If you have any cool videos you’d like to see made into XNGs, write a comment and I’ll try to convert them. I wrote all of these XNG files out by hand.

Over the next few days, I’ll talk a bit more about XNG. I hope all you hackers out there look into it and notice what I’m doing: I think there’s certainly a lot of unexplored ideas in what I’ve developed. We can push this envelope further.

EDIT: Yes, guys, I see all your comments. Sorry, I’ve been busy with other stuff, and haven’t gotten a chance to moderate all of them. I wasn’t ever able to reproduce the bug in Firefox about the image hanging, but Mario Klingemann found a neat trick to get Firefox to behave, and I’ve applied it to all three XNGs above.