Thursday, January 24, 2013

How to move a cursor

So you thought moving a pointer/cursor on-screen was simple? Well... no.

Having recently spent a day fixing up freedesktop Bug 31636, I figured maybe I should take you on a journey that starts with naïveté and ends in insanity. Just like Twilight.

What is actually needed to move a pointer on the screen? Move your mouse. The driver submits two relative coordinates to the server and expects the pointer to move. So we take those coordinates, add them to the last known position, and we're done.

A 1:1 movement of course only works for slow mouse movements. For larger deltas, we need to accelerate the pointer movement so we can easily cover larger distances on the screen. So we look at the timestamps of the events and their delta movements, and use those to calculate an acceleration factor. That factor is applied to the latest deltas and determines the actual movement.
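To make the idea concrete, here is a toy sketch of delta-based acceleration. This is not the server's actual algorithm; the names and the formula are made up for illustration: once the motion speed crosses a threshold, the deltas are multiplied by a factor that grows with the speed, up to a maximum.

#include <math.h>

struct motion {
    double dx, dy;   /* relative deltas from the driver */
    double dt_ms;    /* time since the previous event, in milliseconds */
};

static void accelerate(struct motion *m, double threshold, double max_factor)
{
    double speed, factor = 1.0;

    if (m->dt_ms <= 0)
        return;

    speed = sqrt(m->dx * m->dx + m->dy * m->dy) / m->dt_ms;
    if (speed > threshold)
        factor = fmin(max_factor, speed / threshold);

    m->dx *= factor;
    m->dy *= factor;
}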

Now we've pretty much covered traditional mice. Many devices, however, are absolute input devices: they don't send delta coordinates, they just give us the absolute position of the pointer. That position is not in pixels but in some device-specific coordinate system, so we need to remember the axis ranges and scale from the device coordinate system into the screen coordinate system.
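As a rough illustration (the helper name is mine, not the server's), that scaling is a simple linear mapping from the announced axis range into screen pixels:

static double scale_to_screen(double value, double axis_min, double axis_max,
                              double screen_size)
{
    /* map [axis_min, axis_max] onto [0, screen_size) */
    return (value - axis_min) / (axis_max - axis_min) * screen_size;
}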

Many users have more than one screen. Two or more screens, when not in mirrored mode, create a desktop that is larger than each single screen. For absolute devices, we map the device to the desktop so that each edge of the device corresponds to the matching edge of the desktop. Absolute events from such a device must first be mapped to desktop coordinates. From those coordinates we can work out which screen the pointer should be on and clip the coordinates back to per-screen coordinates to draw the visible cursor. For relative devices the process is similar: we add the movement to the desktop coordinates, then clip back to per-screen coordinates for the update.
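A simplified sketch of that desktop-to-screen step, using a hypothetical screen struct; the real server also clamps the pointer to the desktop and handles arbitrary screen layouts:

struct screen {
    int x, y;            /* position of this screen within the desktop */
    int width, height;
};

/* Find the screen a desktop coordinate falls on and convert it to
 * per-screen coordinates for drawing the cursor. */
static int desktop_to_screen(const struct screen *screens, int nscreens,
                             double desktop_x, double desktop_y,
                             double *screen_x, double *screen_y)
{
    for (int i = 0; i < nscreens; i++) {
        const struct screen *s = &screens[i];
        if (desktop_x >= s->x && desktop_x < s->x + s->width &&
            desktop_y >= s->y && desktop_y < s->y + s->height) {
            *screen_x = desktop_x - s->x;
            *screen_y = desktop_y - s->y;
            return i;   /* index of the screen the pointer is on */
        }
    }
    return -1;  /* outside every screen; the caller would clamp first */
}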

All of the above is pretty standard and doesn't require any X specifics. Let's get into what the X11 protocol requires.

evdev has a calibration feature that allows a device to be adjusted for differences between its actual and its announced coordinates. This is needed when a device's real axis ranges differ from what the device announces. For example, a device may claim that an axis starts at 0, but really the first value you get out of it is e.g. 50. For historical reasons we cannot change the device's axes once they are set up, though. So evdev's calibration scales from the calibrated device range (e.g. 50-950) into the announced device range (0-1000). That scaled coordinate is then posted to the server. The wacom driver has a similar feature (called Area).
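The calibration itself is just a linear re-mapping. A sketch of the step described above (the names are mine, not the driver's):

static double apply_calibration(double value,
                                double calib_min, double calib_max,
                                double axis_min, double axis_max)
{
    /* e.g. map [50, 950] back onto the announced range [0, 1000] */
    return axis_min + (value - calib_min) / (calib_max - calib_min)
                    * (axis_max - axis_min);
}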

The X Input Extension (XI) provides so-called "valuators" (== axes) to clients as part of the various input events. Valuators 0 and 1 are x and y. XI requires valuator data to be in absolute device coordinates, but those are per protocol screen. In old-style multi-monitor setups with two "Device" sections in the xorg.conf, you have more than one protocol screen. The device itself, however, is still mapped to the whole desktop. So we convert device coordinates to desktop coordinates, then to screen coordinates on the current screen, and then that position is converted back into device coordinates. Bonus points for considering what happens in a setup with three monitors but only two protocol screens.
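For illustration only, that last step is the inverse of the earlier device-to-screen scaling: the per-screen position is scaled back into the device's axis range so the event can carry valuators in device coordinates.

static double screen_to_device(double screen_value, double screen_size,
                               double axis_min, double axis_max)
{
    /* map [0, screen_size) back onto [axis_min, axis_max] */
    return axis_min + screen_value / screen_size * (axis_max - axis_min);
}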

If you kept counting, you should be up to 5 coordinate systems now:

  1. device coordinate system
  2. adjusted device coordinate system after calibration is applied
  3. desktop-wide coordinate system
  4. per-screen coordinate system
  5. per-screen device coordinate system

Yep, that's right. A coordinate from an absolute input device passes through all 5 before the pointer position is defined and the data can be appended to the event. And that happens on every single pointer event. Compare this to a relative event, which has four steps:
  1. relative coordinates
  2. device-specific acceleration
  3. desktop-wide coordinate system
  4. per-screen coordinate system

The bug that triggered this blog post was an actual use-case: if an absolute device is used in relative mode, the coordinates were still applied according to the device coordinate range. Thus, relative motion on the device depended on the desktop dimensions, and attaching a second monitor would increase movement along one axis but not the other. To avoid this, we have an extra layer of scaling where we pre-scale the coordinates first. That scaling is then undone by the second conversion into desktop coordinates. Whoopee.
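A sketch of the pre-scaling idea, not the server's actual code: the intended pixel delta is converted into device units, so that the later device-to-desktop scaling turns it back into the same pixel delta regardless of how large the desktop is.

static double prescale_delta(double delta_pixels,
                             double device_range, double desktop_range)
{
    /* the subsequent device-to-desktop scaling multiplies by
       desktop_range / device_range, undoing this step */
    return delta_pixels * device_range / desktop_range;
}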

Proposed XI2.3 addition: XIGetSupportedVersion

Update March 7 2013: This addition was not merged into XI 2.3, largely because there is no real need for it. XI 1.x's XGetExtensionVersion() returns the server version without locking in a client version, and at that point there was no perceived need for getting the already-requested client version back. I'll leave this here for archival purposes, but again: this request was not merged into XI 2.3.

Original post below

Posting this here too to get a bit more exposure.

XIQueryVersion(3) is the first XI2 request clients should send to the server. The client announces the version it supports and in return receives the server's version (which is always less than or equal to the client's, never higher).
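For reference, a minimal client-side handshake with libXi looks roughly like this (link with -lX11 -lXi; error handling kept short):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int opcode, event, error;
    int major = 2, minor = 3;   /* the version this client supports */

    if (!dpy || !XQueryExtension(dpy, "XInputExtension", &opcode, &event, &error)) {
        fprintf(stderr, "X Input extension not available\n");
        return 1;
    }

    /* The server replies with min(client, server); an error means no XI2. */
    if (XIQueryVersion(dpy, &major, &minor) != Success) {
        fprintf(stderr, "server does not support XI2\n");
        return 1;
    }

    printf("negotiated XI2 version %d.%d\n", major, minor);
    XCloseDisplay(dpy);
    return 0;
}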

As XI 2.1 - 2.3 progressed, we started using this information in the server. Clients are treated slightly differently depending on their announced version. The current differences are:

  • XIQueryPointer will not set the button 1 mask for pointer-emulated events if the client supports XI 2.2 or newer.
  • XIAllowEvents will allow XIRejectTouch and XIAcceptTouch for clients supporting XI 2.2 or newer.
  • Raw event delivery changes if a client supports XI 2.1 or newer.

The client can issue multiple XIQueryVersion requests, but they need to have the same version numbers to provide for consistent server behaviour.

So far, so good. This works fine as long as the client supports one specific version. However, as toolkits like GTK have come to support XI2, the requirements have changed a bit. An application and its toolkit usually look like a single client to the server, but the application may support XI 2.0 while the toolkit supports XI 2.3, and neither knows about the other's version support. If the application calls XIQueryVersion before the toolkit, the toolkit is locked into the application's version. But if the toolkit calls XIQueryVersion first, the application is locked into the version supported by the toolkit. Worst case, the application gets a BadValue error and quits because it was never built to handle that case.

Jasper St. Pierre and Owen Taylor brought this up on #xorg-devel today, and I've sent a proposed solution to the mailing list.

A new XIGetSupportedVersion request simply returns the server's major/minor version number. Uncapped, so really what the server supports. And the same request also returns the client version previously announced with XIQueryVersion. Or zero, if the client hasn't called it yet.

This request enables toolkits to query what the client has already set, and of course what the server supports without modifying the client state. The request is currently an RFC, but I do hope we may get this into XI 2.3.

If you're working on XI2-aware clients or toolkits and you have a use-case that requires this or would break by this addition, please speak up now.

Wednesday, January 2, 2013

Getting rid of the GNOME "Oh No! Something has gone wrong." dialog

In some error cases, GNOME will display a full-screen window with only a single button. The window claims that "Oh no! Something has gone wrong." and "A problem has occurred and the system can't recover. Please log out and try again." The button merely allows the user to log out and thus quit the current session. Killing that window with xkill also quits the session.

Most of the crashes I get are from experimental code crashing gnome-settings-daemon. Certainly not something fatal, certainly not something that should prevent me from continuing to work in my current session. After all, the menu key still works, the hot corner works, everything works, but closing the dialog will throw me out of my session. And because that pesky dialog is always on top, I'm down to one monitor. Luckily, the dialog can be disabled.

Update Jan 3: As Jasper points out in the comments, Alt+F4 will close the window. Though I tried Ctrl+W and Ctrl+Q, I haven't used Alt+F4 in ages. Sometimes the right solution is so much simpler :)

The dialog is displayed by gnome-session and it's named the fail whale (code). It's triggered only for required apps and those can be configured.

$ cat /usr/share/gnome-session/sessions/gnome.session | grep Required
RequiredComponents=gnome-shell;gnome-settings-daemon;
Drop g-s-d from the required components, restart the session, and you won't see the error anymore. Try it by sending SIGABRT to g-s-d.
$ kill -ABRT `pidof gnome-settings-daemon`
Doing so twice (g-s-d restarts once) will trigger the error unless g-s-d is dropped from the required components.

It should go without saying, but the above will only display the error message, it won't fix the actual error that causes the message to be displayed.