On Thu, 25 Feb 2016, Mouse wrote:
>> X.org has gone modular at some point and that does help -- compared
>> to monolithic X servers as they used to be -- with computers which
>> are not the richest in resources, [...]

> I don't quite see how; I'd rather have a non-modular server for my
> hardware than a modular server plus modules for my hardware. I really
> dislike the current trend to dynamically loading everything in sight;

Agreed in principle; however, IIUC X.org only loads what is explicitly
called for in xorg.conf rather than everything in sight, so this should
be relatively sound.
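
Something along these lines is what I have in mind -- a made-up
xorg.conf fragment, just for illustration; the module names are the
usual suspects rather than anything required:

    Section "Module"
        Load  "dbe"    # Double Buffer Extension
        Load  "fb"     # generic frame buffer rendering layer
    EndSection

Drivers named with a `Driver' entry in a `Device' section likewise get
their modules loaded on demand.
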
Also, traditional SVR4 MIPS ELF binaries do not benefit from static
linking performance-wise. The MIPS psABI has been defined such that
code produced for executables is PIC and no different from code for
shared libraries. All function calls and static data references are
indirect, even those which end up local to the executable.

Non-PIC ELF support was defined and implemented for the MIPS target a
few years ago, making executables use PLT entries and copy relocations
for function calls and static data references respectively in 32-bit
binaries. I don't think I'm going to switch though, if only to make
sure the old SVR4 support doesn't bit-rot in the tools; besides, 64-bit
(n64) binaries continue to use the original SVR4 ABI.
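
To illustrate what that indirection means in practice -- a rough sketch
only, with the code sequences quoted from memory rather than from any
particular compiler's output:

    /* bar() is defined in the very same executable, yet under the
     * traditional SVR4 abicalls model the call in foo() still goes
     * through the GOT, roughly:
     *
     *     lw   $t9, %call16(bar)($gp)   # fetch bar's address from the GOT
     *     jalr $t9                      # indirect call, address in $t9
     *                                   # (the callee then derives its own
     *                                   #  $gp from $t9)
     *
     * With non-PIC support the compiler can emit a direct `jal bar'
     * here instead, leaving truly external calls to PLT stubs and
     * external static data references to copy relocations.
     */
    int bar(void) { return 42; }

    int foo(void)
    {
        return bar();
    }

    int main(void)
    {
        return foo();
    }
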
> it is a security disaster waiting to happen and it is bloat. And the
> current trend to expect the X server to run as root, even on hardware
> without the peecee video disaster to excuse it, is just insane. (For
> example, on SPARCs I found the server trying to do its own sbus
> enumeration, something it has no business going anywhere near,
> presumably because someone thought it was a sane way to "port" code
> that did PCI enumeration.) On hardware where it works, I still use
> X11R6.4p3.

Running as root is unfortunate; however, I think the framebuffer server
used with DEC hardware doesn't require it -- or at least it shouldn't.
It uses the /dev/fb* devices to access the hardware, so it's up to
these devices' permissions to set the access policy; you don't have to
be root to open the device. If I find there's something wrong with
this, then I'll see if it can be fixed up on either the kernel or the X
server side; naturally, for patches to go anywhere, running the most
recent development version is a must.
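
As a minimal sketch of what I mean -- made up here, assuming the Linux
fbdev interface and a /dev/fb0 whose permissions have been opened up to
the user concerned -- an unprivileged process can get at the display
memory like this:

    /* Sketch: non-root frame buffer access via the Linux fbdev interface.
     * Assumes /dev/fb0 exists and its permissions let the caller in. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);  /* no root needed if perms allow */
        if (fd < 0) {
            perror("open /dev/fb0");
            return 1;
        }

        struct fb_fix_screeninfo fix;
        if (ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
            perror("FBIOGET_FSCREENINFO");
            close(fd);
            return 1;
        }

        unsigned char *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }

        fb[0] = 0xff;                       /* poke the first byte of the display */

        munmap(fb, fix.smem_len);
        close(fd);
        return 0;
    }
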
FWIW, I believe the cause of X servers requiring `root' access is the
inadequate software interfaces (or really the lack of them) for x86
hardware, where poking at the port I/O space directly from userland, or
using system calls such as mmap(2) on /dev/mem (which naturally cannot
be made accessible to non-root users, or everyone could peek at other
processes' memory), is consequently required to get at graphics
hardware. Or, worse yet, calling into graphics adapter firmware --
which has been the trend over the years, used as an excuse for not
documenting hardware. Now that is a security disaster, isn't it?
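
For contrast, a sketch (again made up for illustration) of the sort of
thing a user-space x86 server has traditionally had to resort to; note
that the open(2) alone already requires root:

    /* Sketch: mapping the legacy VGA aperture (0xA0000, 64 KiB) through
     * /dev/mem -- the need for /dev/mem is what forces root privileges. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR);   /* root only */
        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }

        volatile unsigned char *vga =
            mmap(NULL, 0x10000, PROT_READ | PROT_WRITE, MAP_SHARED,
                 fd, 0xA0000);               /* legacy VGA memory window */
        if (vga == MAP_FAILED) {
            perror("mmap /dev/mem");
            close(fd);
            return 1;
        }

        vga[0] = 0xff;                       /* poke video memory directly */

        munmap((void *)vga, 0x10000);
        close(fd);
        return 0;
    }
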
> But, well, if you find it helps you, whatever works. I hold my nose
> and use X.org servers on a few peecees that X11R6.4p3 doesn't support.
> (I do insist on fixing the cursor extension bug, though; it's annoying.
> Not that older X servers are perfect either; I found a crasher bug in
> wide line drawing the hard way....)

Well, I've been sort of stuck for years with XFree86 3.3.6/X11R6.3 and
dozens of local patches applied on my DEC hardware. And it crashed
recently anyway when I tried it with the PMAG-A aka MX mono
framebuffer. So I need to move on, though maybe for the time being I'll
just take the path of least resistance and make yet another patch to
get it working here.

> But, yes, consider it a warning to look into it before just assuming
> that the support will (a) be there and (b) be non-bitrotted. [...]

>> Honestly I'd expect dumb frame buffer support to just work, as there
>> isn't much there to break or maintain.

> You'd think so, but in a day when "everything" has a (by our standards)
> high-end 3D rendering engine in front of it, it would not surprise me
> if dumb memory-mapped framebuffer access had bitrotted. Indeed, one of
> the things I fuzzily recall is that X.org requires the device-dependent
> layers to support some kind of acceleration framework (the alphabet
> soup that comes to mind is "XAA", but I could be misremembering).

I think these days Linux is booted more often in frame-buffer than in
text-console mode, so I'd expect people at least sometimes to run a
dumb frame-buffer X server, in particular with graphics hardware for
which native support hasn't been implemented yet. Since the turn of the
century it has always been the case for me that new graphics hardware
found in x86 systems was not supported natively by X. There weren't
actually that many pieces I dealt with, maybe three -- as I avoid
hardware upgrades like the plague -- but I still think this says
something (and it is actually one of the reasons why I avoid them).
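
In such cases something like this made-up xorg.conf fragment (the
device path being whatever the kernel has registered) is usually all it
takes to get a picture out of the dumb frame buffer:

    Section "Device"
        Identifier "Framebuffer0"
        Driver     "fbdev"             # generic dumb frame buffer driver
        Option     "fbdev" "/dev/fb0"  # kernel frame buffer device to use
    EndSection
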
>> A pixel array and a RAMDAC handled entirely by the kernel via generic
>> calls isn't rocket science after all.

> No, it isn't. But apparently modern X has advanced enough that it can
> no longer do some things X Consortium sample servers from three decades
> ago could. It's a bit like monitors: apparently flatscreen technology
> has improved to the point where monitors can no longer do what CRTs
> from multiple decades ago did routinely.

Hmm, the only feature of CRTs I miss is the greater flexibility in
resolution selection. And even that only a little bit, as it was only
monochrome monitors that could truly adapt in an analogue way and
display any resolution requested with no loss of quality. Colour ones
had the shadow mask, which made some resolutions look better than
others -- no different from an LCD's pixel approximation, which is
about the only issue I have with flat panels.

And then there was the moiré effect, colour convergence problems and
various geometry defects, all of which could in theory be corrected on
higher-end devices, but the process was painful, time-consuming and had
to be repeated separately for every resolution selected, sometimes
hitting the limit on the number of individual resolution settings a
device could store. To say nothing of their weight, space and energy
consumption, and the heat produced. And phosphor burn-in. And vertical
refresh rates causing flicker. And hard X-rays at the back. No, on
second thoughts I'm not missing them at all.

I've been running my favourite 80x37 text mode (720x592 pixels) on 4:3
flat panels happily for years now and I've had no issues with graphics
modes either. As long as analogue signalling was used, that is (DVI-D
is another story) -- well, with the unfortunate exception of one panel
whose text-mode pixel approximation was plain horrible, but I take it
it was that model which was broken, not the technology: the same text
mode rendered by a graphics card at the 1600x1200 resolution (native to
said display) and sent digitally to the same monitor produced excellent
output.
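
(For the record, 720x592 is just that text grid times the usual 9x16
character cell: 80 x 9 = 720 and 37 x 16 = 592.)
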
I've been more disappointed by the demise of IPS panels -- they were
(and still are) so much better than everything else I've dealt with.
And by the disappearance of 4:3 screens -- the wide thingies are only
really good for watching movies, not for doing work.

So what is it that CRTs did routinely that you're missing in LCDs?

>> So if they broke some generic parts (DIX) by the lack of due
>> attention, then I'm really concerned.

> I don't think they did, but AIUI their DIX code expects the DDX code to
> support a bunch of cr...er, stuff, that has no relevance if you don't
> have a 3D rendering engine in your hardware, which may well mean that
> the dumb-framebuffer DDX implementations have been thrown away because
> they no longer work with the modern DIX code and nobody stepped up to
> continue twisting them into nastier and nastier pretzels to accommodate
> more and more DIX-layer peecee-world assumptions....

OK, so the DDX layer might be missing software replacements for what
some hardware does. Oh well, if things are broken beyond fixing within
reasonable time and with reasonable effort, then, well, as you say --
there's always old code available out there. BTW, I still have two ISA
graphics cards in use, one each in a pair of x86 systems. I even
upgraded one of them a few years ago, by plugging more DRAM into its
DIP sockets, to let it use memory interleaving and run at a higher
refresh rate. ;)

> Fortunately, old X still works as well as it ever did.

Well, if it indeed does in the first place. ;)
Thanks for your insights!
Maciej