Wednesday, November 5, 2008

It's the Applications, Stupid!

Here is my first user-submitted rant. It actually appeared as a comment on ESR's blog long ago, but I only just got permission to post it. Here it is:

It doesn’t matter. None of this matters. Platforms don’t matter. It’s what’s on the platforms that matters.

This is a lesson I forgot when I went on a gaming sabbatical (coinciding with my exploration of Linux and OSS), but since I got back into the ol’ pastime it leapt back into my brain with the force of an epiphany. In fact, this is something any gamer knows, though perhaps just implicitly. And any Sega fan, such as myself, has had their face rubbed in the fact to an extent which is painful. The story goes something like this:

A Prelude to War

When the Sega Genesis first came out in 1988, it faced quite an uphill battle against the entrenched NES, which had managed to become practically synonymous with the word “videogame” after its 1985 release. Indeed, to this day, many people say “nintendo” when they mean “videogame,” just as many people say “xerox” when they mean “photocopy.” The Genesis was certainly more powerful (its primary processor was 7 times as powerful as the NES’s), but power does not conjure good games out of thin air. And without good games, a console is no more than a paperweight. Sega’s previous console, the Master System, was 3 times as powerful as the NES, but after its 1986 release it sold only 13 million units to Nintendo’s 60 million, simply because it didn’t offer a compelling library of games. Sega learned from this mistake, if only once.

When they launched the Genesis, their courtship of third parties was intense. They were willing to offer developers better licensing terms than Nintendo, which was enjoying monopoly status at the time, and they managed to do what was then unthinkable: they fought Nintendo, and in the US market at least, they won. In large part this was due to Sega taking a chance on an unknown startup that was desperate for a platform for its football game. Nintendo simply wouldn’t offer the little corporation terms it could survive on, and besides, the NES was ill-suited to doing sports games justice. That little company was EA, and the game was Madden. Both became smashing successes.

With the help of this and other games, including some in-house smash titles such as the Sonic the Hedgehog franchise, Sega exploded onto the scene to history-altering effect. To put it into perspective, the success Sega experienced would be like Apple gaining 50% market share upon the release of OSX. Even more mind-blowing, this growth was coming at the *expense* of Nintendo’s installed base. By this I mean that old Nintendo users were abandoning the NES platform and buying Sega systems in droves. Though Sega’s hyper-clever marketing probably didn’t hurt (slogans such as “Sega does what Nintendon’t” still make the ears of any elder gamer perk up), it was the plethora of games that were only playable on the Genesis which produced this success.

It’s On like Donkey Kong

After three years of hemorrhaging market share, Nintendo fought back with the technically superior (save for processing speed) SNES in 1991. And while the SNES did absolutely everything correctly, and has rightfully earned its place of high regard in the annals of gaming, it completely and utterly failed to unseat the Genesis. In Japan its market share ended up exceeding Sega’s, but in the US it lagged, and Sega enjoyed reigning-champion status in other parts of the world.

This was the dawn of the “console wars” as we know them today, and the 16-bit era is still regarded by some (likely through nostalgia-tinted glasses, but hey, we’re only human) as the halcyon era of gaming. For every top-notch exclusive game that the SNES had, the Genesis had one as well. And so long as the game libraries of both platforms looked equally compelling in the eyes of the consumer, the two were mostly locked in a dead heat. But time always marches on.

A Taste of Things to Come

It had been half a decade since a new system was released, and consumers were ready for the next generation. The arcades were taking business from the console market, offering an innovative and immersive gaming experience that the now-underpowered 16-bit consoles couldn’t match. (Incidentally, Sega has been and still is a leader in the arcade market.) The time was ripe for Something New. Sadly, both Sega and Nintendo seemed to have forgotten the lessons they had learned from their battles with each other, a mistake which ultimately proved fatal to the former.

It all started in 1988, the year of the Genesis’ release. At that time, games were provided on a solid-state medium known as a cartridge, which offered fast access but provided very limited capacity and cost quite a bit to manufacture. Nintendo had been looking for a way to address these shortcomings by moving to a cheap, high-capacity disk-based medium. However, Nintendo was never able to satisfactorily surmount the instability of magnetic media, nor the concomitant ease of piracy. But Sony had just the ticket: they were working on a then-revolutionary technology which would allow them to store data on CDs, which at the time were restricted to audio only.

So it was that Nintendo contracted Sony to develop a CD-based add-on system for them. In 1991, they were expected to announce the new design at the yearly CES expo, but when Nintendo president Yamauchi discovered that the contract with Sony would give the latter 25% of all profits off the system, he broke off the arrangement in a fury. Instead, Nintendo contracted with Philips to perform the same task, under a contract that gave Nintendo full control of the system. It was this partnership that was announced at CES, much to Sony’s chagrin.

Ultimately, the Philips peripheral never materialized. But Sony refused to throw out their work. They spent years retooling the foundation into a 32-bit console called the Playstation, and, determined to swallow Nintendo’s market share whole (hell hath no fury like a multi-billion dollar Japanese corporation spurned), they aggressively pursued third-party developers and launched an ad campaign that was arguably more Sega than Sega in its edginess.

But I’m getting ahead of myself.

No Cigar, Not Even Close

Back in 1991, Sega was releasing its own CD-based add-on to the Genesis, aptly named the Sega CD. It was quite the technological breakthrough, but it didn’t come cheap. And as has been established previously, a platform is only as good as the games on it: in the case of the Sega CD, this amounted to a big pile of suck. They even managed to create a Sonic game for the console that was, in effect if not intent, a turd with peanuts. Only 17% of Genesis owners ever bought a Sega CD, and not a one of them doesn’t regret it.

Then, in 1994, Sega blundered again with the release of the 32X, a $170 add-on which would turn the Genesis into a fully fledged 32-bit system. With the 32-bit era imminent, the idea of gaining access to the future on the (relative) cheap was immensely appealing to many gamers. The console was pre-ordered on a scale of millions, but Sega completely dropped the ball. In a dash to make the holiday season, games developed for the platform were rushed, and many of them were curtailed (the version of Doom found on the 32X has half the levels of its PC version). The system was one of the biggest letdowns in gaming history (next to the completely unremarkable Nintendo Virtual Boy, a portable gaming system which failed to be either portable or provide entertaining games). This was the beginning of what would become an insurmountably bad rep for Sega hardware.

Don’t Tell me You’re Pissed, Man

In 1995, Sega released its true 32-bit console, the Saturn. They released it a few months ahead of Sony’s Playstation, and actually enjoyed the upper hand in the marketplace at first. Sony did not fight against Sega the way they did against Nintendo, having no vendetta to settle. But unfortunately, Sega begat its own undoing. The release of the Saturn, with its quality games and good 3rd-party support, was seen as a sign of abandonment of the 32X, largely because it was, in fact, an abandonment of the 32X. Almost overnight, legions of Sega fans became distrustful of the company.

Completely unwittingly, Sony managed to swallow up Sega’s market share simply by not being Sega, and therefore appearing less likely to screw the gamer. The Playstation pulled far ahead of the Saturn, and Sega never made any real effort to combat this very real threat to their dominance; the hubristic assumption was that Sony was not a gaming company, and therefore couldn’t win. However, the larger market share made the Playstation (or PSX) more appealing to third-party developers. And although the Saturn was a little bit more powerful, the Playstation was vastly easier to develop for.

The result was that third-party support for the PSX outstripped that of the Saturn by an order of magnitude. A lack of quality games results in a dead system, and in practice, a lack of third-party developers is the same thing. The death blow for the Saturn came when EA, a monolith in the world of gaming which owed its existence to Sega (and vice versa), jumped ship and declared the PSX its primary platform. Quite ironically, the Saturn was now doomed. And although Sega’s next console, the Dreamcast, was perfection in nearly every sense of the word, and the first console to provide online gaming, Sega never garnered the third-party support necessary to survive. In March 2001, Sega exited the console market.

I See you Baby

Flash back to 1996, and Nintendo is bypassing the 32-bit generation entirely to release its N64, technically superior to anything of its time (although some people were, and are, turned off by its distinctively aggressive hardware anti-aliasing). Coming out behind the PSX, and still being cartridge-based, it couldn’t quite capture third-party support the way the PSX did, but it managed to snag a market share equivalent to a third of Sony’s.

While Sony failed to slay Nintendo, the combined blows dealt to it by Sega and Sony demolished its monopoly position. There’s a lesson here that anti-capitalists could learn about the nature of free markets, if they happened to actually be interested in the truth — but that is neither here nor there.

What kept Nintendo alive was its stable of quality in-house games. Super Mario 64 is still regarded by many as the best 3D platforming game of all time, and Goldeneye stands unrivaled as the most playable and enjoyable adaptation of a movie ever. By contrast, Sega never had a proper Sonic game for the Saturn (apart from the lame isometric platformer Sonic 3D Blast and the sucky racer Sonic R). Once again, the lesson is that quality games are the secret to a gaming platform’s success.

And so it is with the modern era. The Playstation 2 (PS2), Sony’s successor to the immensely successful PSX, rode the coattails of its predecessor to its currently unrivaled installed base of more than 100 million systems, giving it around 60% market share. The remaining 40% is split between Microsoft’s Xbox console (surviving because of exclusive titles such as the Halo franchise) and Nintendo’s Gamecube (once again surviving off of excellent in-house games, although now at the bottom of the totem pole in terms of market share).

So has it always been. And so shall it always be.

They’re Like Mopeds…

A lot of you have probably read this paper, called Worse is Better:

http://www.jwz.org/doc/worse-is-better.html

(If you haven’t, consider doing so.) Equally likely, you’re seeing a connection. Indeed, the ramifications of Worse is Better would seem to be incredibly far-reaching, although I think the more general and correct statement is the following:

Technical merits are usually a lot less important than you might think.

Or, as I’ve said previously, a platform is only as good as what’s on it. A console is only as good as its games, just as a data medium is only as good as its ubiquity, just as an operating system is only as good as its applications. Empirically speaking, the technical merits of a platform seem to be a marginal factor (at best) in determining how it gets to a position of application dominance.

What this means is that when debating the merits and demerits of OSS vis-a-vis closed source in terms of potential for success, where success is defined as market share, it is generally pointless to bring up technical points. Windows is not popular because of Windows; it is popular because of everything that runs on Windows. Contrary to the original article’s opinion, Microsoft is absolutely correct to maintain backwards compatibility, because the totality of what runs on Windows is the “secret” to its success. Apple’s policy may be technically superior, but it hasn’t brought Apple anywhere near posing a challenge to MS.

So Linux and Apple have faster releases than Microsoft? Big whompin’ deal. The debate over which system is better, or progressing more rapidly, simply does not matter. What matters is what people can do with the system, and for the desktop things most people want to do, Windows crushes all. In fact, if you look at OSS itself as a platform, then it’s an objective failure in the desktop market if the goal is replacing proprietary software. How good OSS is at producing quality software matters a lot less than how good it is at attracting software producers, and in that regard, it would seem to suck. There is a large range of computer-oriented tasks that you simply *cannot* perform on Linux. And until OSS produces a game better than BZflag, it should be a self-evident fact that not only is it not a silver bullet, it might barely be an arrow.

I Don’t Have the Answer, but I Know who Doesn’t

I use Windows, Linux, and Mac on a regular basis. I like Linux the system the most, followed by Windows, followed by the Mac (sorry, but I think the GUI is a weapon of mass gayness). But I actually spend most of my time in Windows, simply because of the things I can do in it that I can’t do with the alternatives, or that I can’t do as cheaply, or that I can’t do as well, or some combination of all three. Microsoft has done an extremely good job of attracting the people who actually make a system worth using to their platform, and as a result, it fits practically every user’s needs. Hence its market share.

Of course, things change when you go to the backend, and sure, that’s partly because the requirements are different. But regardless, people don’t just put Linux on the web; they put Apache on the web. Or vsftpd. Or whatever. The fact that Linux has these highly sought things is what really makes it a success. The fact that these things offer the most generally popular price/performance ratio is why they are highly sought. The fact that OSS seems to be good at attracting developers of such things is why they are OSS. But it *doesn’t* mean that, even if OSS is an inherently technically superior development model (and in the future I’ll make the case that that’s bullshit), it is destined for dominance. Reality is much, much, much more complicated than that.

Postscript

On an unrelated note, the GNU people can suck my cock. I don’t even want to think about the time I wasted drinking your koolaid. I hope Emacs becomes a sentient entity and bites every single one of you on your GNU/scrotum. And fuck VI too.

17 comments:

Anonymous said...

Priceless

I don't know if you guys have seen this entry in Mr. Seigo's blog

How the hell someone who committed the KDE4 crapfest dares even talk about Vista's failure is beyond me. My god, and I thought I had seen it all, but freetards never cease to amaze me.

What little respect I had left for him after KDE4 is gone now.

Anonymous said...

nice one. Especially the last lines. Whenever someone asks you "vi or emacs?" there is only one answer: I hate them both! With passion!

Anonymous said...

I'll bet the author was denounced as a troll in the original forum.

It has always been about the applications.

The OS is a distant second, and morality doesn't even get a seat at the table, despite the ravings of the foss messiahs.

Can't see how Vista is a failure.
Just because some 'tard says it doesn't make it so. I had a bad impression of the beta I tried, then never looked at it again until this past summer, because I was happy with XP and fell for the FUD, since it echoed my experiences with the beta release.

But this past summer I was given a new system for work, and it has Vista Business, and I can see nothing at all to be disgusted by. It just works-for-me.

*shrug*

Plus, the work order plugin for First Class, our email system, doesn't work worth a shit under Linux or BSD, whether with the Linux First Class client or the Windows one under wine.

Works fine on the macbooks that the other guys have, however.

Go figure...

But that was the one reason why I'm not running Linux on my work system: It's all about the apps. Even if it's one small but crucial app.

I'm not about to run a virtual machine, or a terminal services session just to do my housekeeping. That would be daft.

Anonymous said...

Vista is a failure since IT ANNOYS the user/sysadmin more than its predecessor (XP) did in its time. They also changed many "cosmetic" things that made the user base somewhat upset, so that explains a lot about it. So sales are low in fields where Microsoft has won in the past (right now, Microsoft is selling Vista Basic via the OEM channel at very low prices far more than the full package, where the real money is).

But (in direct opposition to the freetards) Microsoft has learned its lesson, and it is expected that Windows 7 will make the lives of many people happy again.

I personally hate Vista much more than I hate Linux, but at least I know that Microsoft can fix the problems... in due time, of course...

That's something freetards will never do, since for them "Everything is Fabulous"...

(Come on... if you think that a command line is better than a GUI, then you've got some serious mental illness, to say the least.)

Anonymous said...

Don't update to the latest gcc because it provides some obscure function, don't make life hard for driver developers by having a driver framework that changes by the month, don't change from libass.so to libderriere.so on a whim, etc. FLOSS tards put more cocks in their mouths every day. How do you guys manage it?

Anonymous said...

http://news.bbc.co.uk/2/hi/technology/7711211.stm

Anonymous said...

Getting them hooked early, eh?

Next thing you know it'll be all over that MS is competing unfairly with other free softwares :P

I didn't mention: I used vlite to remove the most annoying parts of Vista. Yeah they bury the controls under yet another layer of fluff, but I can deal with that.

I had similar gripes about XP when it spoiled the elegant simplicity of 2000. 2000 still is, in my mind, exactly what an OS should be and nothing more. It's been a downward slope on the GUI front since then, though the gubbins have gotten better.

oiaohm said...

Thank you for this. Yes, it's about the applications. Yes, the Linux world knows this.

Funny enough, KDE4 is about exactly the same issue. If users cannot use exactly the same applications on both platforms, they are going to have a hard time converting. KDE4 is having a really hard time coming to life. Major changes in design are never painless.

Linux servers can pull off instant migration of services from one machine to another. So what is the greatest feature you could possibly give to the desktop? Without question, the means to do exactly the same thing on the desktop.

X11 kinda stuffs that up completely at the moment. You cannot run one X11 server per container. Linux 2.6.28-rc already has the means to suspend applications to disk and restore them on another machine if all the applications are running inside a container. Yep, the one application you cannot yet run in there is the key one: the X11 server. Call it the major road block.
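
For reference, the piece of this that did land around that kernel, the cgroup freezer, covers the freeze/thaw half of the story. Here is a minimal C sketch, assuming the freezer controller is mounted at /sys/fs/cgroup/freezer (any mount point works) and a group named "desktop" already holds the target PIDs:

    /* Sketch: freeze and thaw a group of processes via the cgroup freezer
     * (cgroup v1 interface). Assumes the freezer controller is mounted at
     * /sys/fs/cgroup/freezer and a group named "desktop" already holds the
     * target PIDs (written into its "tasks" file). */
    #include <stdio.h>
    #include <string.h>

    static int write_state(const char *path, const char *state)
    {
        FILE *f = fopen(path, "w");
        if (!f) {
            perror(path);
            return -1;
        }
        /* "FROZEN" stops every task in the group; "THAWED" resumes them. */
        if (fputs(state, f) == EOF) {
            perror("write");
            fclose(f);
            return -1;
        }
        return fclose(f);
    }

    int main(int argc, char **argv)
    {
        const char *state =
            (argc > 1 && strcmp(argv[1], "thaw") == 0) ? "THAWED" : "FROZEN";
        return write_state("/sys/fs/cgroup/freezer/desktop/freezer.state",
                           state) ? 1 : 0;
    }

Checkpointing that frozen state to disk and restoring it on another machine, the half oiaohm describes, was still out-of-tree work at that point.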

Yep, in future expect to see the day when a Linux desktop or laptop can just transfer their running applications straight to each other. The same could even apply to Windows and Mac, since Linux can run inside those OSs.

Please don't say "what about the CPU". That solution already exists for Linux, at a speed cost. Yes, you can migrate instantly between CPU types if you are willing to tolerate a 50 percent performance hit.

This is next-generation computing. Applications follow the user wherever the user goes. It also makes pirating applications simpler, so expect to see more commercial vendors go into cloud computing, where they can prevent pirating.

What would be the next most powerful feature you could give users? Also already in development: http://insitu.lri.fr/metisse/facades/

Yep, the means to redesign the interface of any application just through cut and paste.

Applications are key. Even more key is applications suiting the way the user wants them to work.

The current issue is that a lot of open source developers are useless at designing GUIs, and a lot of people who are good at designing GUIs are useless at coding. So the way forward is to close that gap. Please be aware that even in Microsoft and Apple development those two parties are split. Smaller OSS projects don't have that luck. It is made worse by the fact that users who know what they want cannot explain or show the developer exactly what they want.

By the way, there are many open source games way better than BZFlag.

BZFlag's graphics suck badly. Try out Apricot some time.

Hatred of vi and Emacs is truly common in the Linux world. I am one of the ones who never uses either.

The command line has been really hard to do away with. Lots of system admins use web interfaces like Webmin and so don't go near the command line on Linux. Yes, there are a lot of people out there who try to avoid it.

A major reason why it has stuck around is the limitation on running more than one X11 server stably. Even worse, the X11 server can crash and take the complete interface with it. So running a server command-line only was stable and safe.

Now here is where the lack of graphical interfaces comes from. If your biggest market, i.e. servers, is not using X11, why would you bother wasting time developing a GUI?

The correction of the X11 server is one of the key requirements for Linux to start changing in a major way. Please understand that some distribution developers are talking about doing away with the command line once and for all once X11 is fixed.

It's also the reason why there are a lot of great web interfaces for configuring Linux. HTTP servers are stable compared to the X11 they have been living with.

On the Linux desktop, the lack of GUIs to configure stuff is a key defect. Once system admins can trust Linux running X11, there will be more reason to develop graphical configuration tools.

Yes, Linux is fully featured enough to be configured via the web and the command line.

Anti-Tux said...

Funny enough, KDE4 is about exactly the same issue. If users cannot use exactly the same applications on both platforms, they are going to have a hard time converting.

Yes, LH thoroughly covered this topic in his K-Pride Week posts.

KDE4 is having a really hard time coming to life. Major changes in design are never painless.

So, why make the major change in the first place? Why did they not bring in the functionality incrementally?

Linux servers can pull off instant migration of services from one machine to another. So what is the greatest feature you could possibly give to the desktop? Without question, the means to do exactly the same thing on the desktop.

How the fuck would this be useful for desktop use? The only use cases I can think of involve fault-tolerant servers and program state backups. If you are running some high-level physics simulation, and it crashes four months in, it would be nice to have a recent backup of the program state. For desktop programs, I cannot determine a use for this feature.

Yep, in future expect to see the day when a Linux desktop or laptop can just transfer their running applications straight to each other.

Once again, how the fuck is this useful for the average desktop user?

The same could even apply to Windows and Mac, since Linux can run inside those OSs.

Considering the wealth of killer Linux-only apps, every office dweeb and art fag will HAVE to use this excellent feature!

Please don't say "what about the CPU".

I wasn't going to.

That solution already exists for Linux, at a speed cost. Yes, you can migrate instantly between CPU types if you are willing to tolerate a 50 percent performance hit.

Wow! I can run the Linux version of Nethack on my Sun Ultra 5! This feature is so totally relevant to 99.999% of computer users!

This is next-generation computing. Applications follow the user wherever the user goes.

My applications already follow me wherever I go. You see, I have a laptop and a smartphone, and I take at least one of them wherever I go. What does this allow me to do that I could not do before?

The current issue is that a lot of open source developers are useless at designing GUIs, and a lot of people who are good at designing GUIs are useless at coding.

Yes, this is a common problem in software development. Usually it takes one set of coders and one set of designers cooperating. Usually this task is accomplished by paying them money.

Please be aware that even in Microsoft and Apple development those two parties are split. Smaller OSS projects don't have that luck. It is made worse by the fact that users who know what they want cannot explain or show the developer exactly what they want.

Excuses. Excuses. I think you misunderstand something about me. I don't care why it doesn't work; I care that it doesn't work.

BZFlag's graphics suck badly. Try out Apricot some time.

From the screenshots, it looks almost as good as Oblivion, a two-year-old game! Seriously, you guys have degraded since Tenebrae 2, which featured Doom 3-like graphics and was available several months before Doom 3 was released. Of course, like most OSS projects, it was abandoned halfway through and nobody cared to develop it further.

Hatred of vi and Emacs is truly common in the Linux world. I am one of the ones who never uses either.

Wow! You don't like text editors that were obsolete when I was born? I am so proud of all of you!

Now here is where the lack of graphical interfaces comes from. If your biggest market, i.e. servers, is not using X11, why would you bother wasting time developing a GUI?

Again, I don't care why it doesn't work; I care that it doesn't work.

Please understand that some distribution developers are talking about doing away with the command line once and for all once X11 is fixed.

Yes, because that has worked out so well for Windows and OS X. Seriously, do you guys have to copy everything they do, even when they did it twenty years ago and have already changed their minds?

Yes, Linux is fully featured enough to be configured via the web and the command line.

Yes, you can fully configure a *nix system if you do not mind committing several 800-page manuals to memory first.

oiaohm said...

The KDE 4 alteration was planned to be a slow conversion. That did not happen.

I.e., releasing 4.0 to developers so they could start converting their apps across was the plan. If that plan had not been disturbed, the conversion would not have turned into a huge mess. 4.0, being targeted at developers only, had no config file migration functionality either. Yep, a good way to kick users hard. It was never intended for end users.

The issue is that a lot of the old KDE subsystems are not cross-platform. There is no way to migrate straight from KDE 3.x to KDE 4.x. Now, if KDE 4.0 had not bothered about cross-platform support and had stuck to POSIX- and X11-only platforms, porting of the old interfaces could have been done.

Incrementally? Nice idea. It was tried in the early stages of KDE 4.0 development. The level of changes needed to move KDE 4.0 to cross-platform is just impossible to do while keeping the API stable. OK, not impossible, but it would have required large emulation libraries.

For example, all the areas using X11 information to reference windows from the KDE core libs had to be changed to a platform-neutral format. The issue here is that a lot of KDE 3.x applications use that. So keeping the API compatible would have meant a huge mother of an X11 emulation layer.

A lot of the 3.x features missing from the same-named applications in 4.x are missing because of these issues. Most of those features will return after they are converted to cross-platform operation. Most people never saw KDE 3.5 on Windows; yes, it was more stuffed up than KDE 4.0.

The KDE 4.0 split was forced by technical issues and turned into a big problem by end users taking it before it was ready.

How is instant transfer useful? Laptop low on battery: transfer across to the desktop and keep on working. The same feature will allow suspending groups of applications to a USB key, inserting it into another machine, and starting up exactly where you left off. Or even better, suspend the groups of applications you are currently running to RAM or disk so they don't eat CPU time when you have something more important to do, then resume them later.

Completely changes how you work with a computer.

Tenebrae did not exactly disappear; its developers ended up in Nexuiz. One of the big issues of open source is not handling merging well.

Yes, because that has worked out so well for Windows and OS X. Seriously, do you guys have to copy everything they do, even when they did it twenty years ago and have already changed their minds?

LOL, the X11 alteration is not copying what Windows or OS X did. Distributions switching to GUI by default is all about desktop use.

It is about doing away with the need to use the command line. Unlike the Windows setup, the X11 server will be able to shut down its resource usage when no one is logged in. Also unlike the Windows setup, services cannot be dependent on the existence of an X11 server.

That is MS's bad design, which they are having to hack around in Vista so services can no longer snoop on what users are doing.

Linux/Unix is not copying their screw-ups. Linux/Unix tried to avoid their screw-ups, i.e. crashes caused by far more complex video card handling, by trying to keep everything X11 in userspace, following the microkernel idea. Yes, over time some things have proven to have no userspace solution. Changing the path of a large code base without breaking everything is a complex process. The complexity has made fixing this take over 6 years. In 2002 it was basically confirmed that the pure microkernel path for graphics was doomed. Sorry, the open source world is not made up of magical workers.

Of course, most users don't notice that they can log into most machines with X11 crashed, remotely or using the SAK combination, and recover the machine. The Linux kernel is still up: the microkernel solution. So the stability of Linux's services has been protected.

This is where the stability of Linux is funny. What they should be complaining about is the stability of X11.

The kernel mode switching alteration has bounced around the Linux kernel tree since 1992, before NT was ever released. It's an important thing. Server-targeted Linux saw no importance at all in merging that feature or even working on it, so no, it's not copying. It is a simple case of the desktop not being where the money was for Linux developers.

The major argument against including kernel mode switching has been the unstable states Windows and Mac have got from it.

It is one of those cases where people followed the wrong path.

Kernel mode switching also did not make it in when it was put up in the Unix world years before, either.

Also, unlike Windows or Mac, the X11 alteration truly allows more than one X11 server to run side by side, with no flickering when changing between them. They are even working on fancy transitions between servers.

So yes, you could even run multiple different versions side by side.

It's really a stupid idea to say Linux is copying bits from Windows and Mac. Solaris was the first to prototype kernel-controlled video switching.

You really do lack an understanding of history. Features have been copied many times over. Linux does Compiz; Vista copies it. Yes, there are cases where the copies are done better than what they copied from. Feature copying is a natural part of the IT world.

Yes, you can fully configure a *nix system if you do not mind committing several 800-page manuals to memory first.
Wrong. System admins avoid that too. Many of the web interfaces provide completely guided setups.

Wow! You don't like text editors that were obsolete when I was born? I am so proud of all of you!
Guess what, same here. They were obsolete when I was born too.

Yes, this is a common problem in software development. Usually it takes one set of coders and one set of designers cooperating. Usually this task is accomplished by paying them money.

Funny enough, no. Paying them money still does not stop dogs from being made. The largest issue is the path from user to GUI designer to developer. Microsoft has never addressed this.

GNOME is hated by a lot of people as well, yet they have a fully paid GUI development team. So money is not the factor. MS and Apple have both produced dogs of interfaces at times.

Maya, one of the most heavily used 3D movie-making tools, has an interface that would be at home in the 1970s. This is commercial ware that people buy and learn to live with.

The issue that has to be dealt with is the disconnect between users, designers, and developers. Money is not a magic wand. Commercially coded software is also not a magic wand.

Now, you say you don't want to know what caused the failure. The problem is that it is key to understand why the failure exists and what is required for it to cease to exist.

If you truly look deep enough, a lot of these failures exist across all platforms. Sucky interfaces are not unique to the open source world.

oiaohm said...

By the way, the CPU point is important. In more and more of the places Linux is running, in netbooks and notebooks, it is not using x86 processors. It's using ARM and China's home-designed MIPS chips.

The reason is power efficiency. An ARM chip with about the same performance as a 3 GHz Intel chip can run on the same battery for 6 times longer.

Would you not want a laptop that could run for 24 hours straight, no problem?

Anonymous said...

I.e., releasing 4.0 to developers so they could start converting their apps across was the plan. If that plan had not been disturbed, the conversion would not have turned into a huge mess. 4.0, being targeted at developers...
For as long as I have known about software, the number after the dot has always been just that: the minor digit in a version number, and if it is zero, it means it's the first release in a new (usually re-engineered and/or given new or better features) major series.
This is a widely accepted convention in the software world; on the other hand, no convention says that an X.y version number alone (unless accompanied by acronyms such as DR, developer release, of course...) denotes a specific target audience for a release...

Changing the path of a large code base without breaking everything is a complex process. The complexity has made fixing this take over 6 years. In 2002 it was basically confirmed that the pure microkernel path for graphics was doomed. Sorry, the open source world is not made up of magical workers.
The X server actually does too many things. Some of them belong in a userspace process (e.g. moving the mouse pointer, performing pick correlation), but others belong logically in the kernel's hardware management layer (their presence in X traces back to an age in which graphics hardware didn't have as much power, and Unix kernels didn't deploy as much hardware-management-specific code, as today's). Some (e.g. input focus and redirection, without translation) could be split out and also put in the kernel (where they belong, if one remembers that the kernel is, at the least, a multiplexer/arbiter for devices, ALL of them, and not just a process scheduler), and lastly, some (like keymaps and keystroke translation) could safely be moved to the library the application uses.

See Zachary Smith's FBUI, which moved the whole GUI into the kernel. In doing so it was relatively unsuccessful, because of other developers' narrow-minded reaction and because the resulting graphics system's capabilities were too limited to support all X11 features (so toolkits have to be ported directly). But one thing it got right: it split the GUI server into separate compact drivers, one for the display, whose multiplexing abstraction is the window, and one for input (with focus management).

The problem is not that the open source world doesn't have "magical workers"; the problem is the structure of the X server itself, and the OSS world has wanted to keep it as it was up to today, doing just piecemeal updates without ever re-engineering the overall structure of the graphics stack as the years passed, which would have given Linux a sane GUI system.

The major argument against including kernel mode switching has been the unstable states Windows and Mac have got from it.
Pointing to the failure of others (with different architectures, and not making open source systems, by the way) is a weak argument... Illustrating the problems inherent in implementing something analogous in one's own project, and admitting to not being able to cope with them, would be more honest ;-)

Also, unlike Windows or Mac, the X11 alteration truly allows more than one X11 server to run side by side, with no flickering when changing between them. They are even working on fancy transitions between servers.
Is there a point in having multiple X servers running side by side? For multiple users, wouldn't it be better to have a single multithreaded server with different users served by different threads (at the session layer; the graphics context and ownership layer is out of the question altogether, since it belongs in the kernel)?

Maya, one of the most heavily used 3D movie-making tools, has an interface that would be at home in the 1970s. This is commercial ware that people buy and learn to live with.
I know Maya myself, as do many other people who have come in contact with it (and yes, use it proficiently for a living), and we agree that the UI is vast but elegant and coherent, i.e. anything but "70s" ("90s" at worst).

oiaohm said...

sillx, for KDE's complete history, .0 has always been a developer-only release. KDE 3.0 and 2.0 and 1.0 were all the same.

With .1 or later being for end users.

Version number patterns are not hard and fast rules. KDE has been consistent about it. KDE 4.0 has new stable development APIs, but not all of the old features were ported to it at that stage. That is status normal for KDE. Now, the issue here is: why did the distribution maintainers not know the KDE numbering system? It is not like it was any big secret.

I know the FBUI project; it was not a lack of vision. It was a lack of performance, security, and drivers that did that project in.

People like to overlook that, until recent versions, X.org had its own PCI stack and the like, independent of the rest of the OS. Sections that should have been shared were not.

The X11 alterations for DRI2 took time because a lot of the designs ended up with worse performance than what X.org was already doing.

Funny enough, what you are saying about independent sections for doing all those tasks is in the X11 design.

The idea that the X11 design is defective is not exactly right. Its implementation was defective. Many parts that should have been kernel mode were not. Many parts were duplicated in user mode when the kernel already provided them, and so on. Worst of all, X11 was using its own internal process management: yep, a single thread spread between every segment inside itself. The sad bit is that FBUI, when benchmarked, still worked out slower on average than X11 when multiple windows were in use. X11 is basically being cleaned up and made to work as per its design, not duplicating what the kernel is doing.

Sorry to say, a single multithreaded X11 server is a lot harder to secure. If each X11 server runs as the user using it, without needing root (i.e. the parts that used to need root are now in the kernel), then if there is a defect in the X11 server your access stops at that user. Even security tracking is simpler, so SELinux and the like can block the X11 server from accessing any file in the user's account that it should not. Threads are a lot harder to apply mandatory access control to.

The next issue is memory management: it gets extremely hard to be sure you have freed all the memory that a user's applications allocated when one multithreaded server is handling many users. Allowing multiple servers permits a new stunt: got an application that leaks memory in the X11 server? Give it its own X11 server and kill that server when you are done with it, to make sure all the memory is freed. The same can be done for applications you don't want snooping on your desktop. The Chromium browser from Google uses the same stunt for the same reasons.
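
For the curious, the stunt needs nothing exotic. Here is a minimal C sketch, assuming the nested Xephyr server is installed, display :7 is free, and "leakyapp" stands in for whatever program you want to isolate:

    /* Sketch: give one untrusted or leaky application its own throwaway
     * X server, then kill that server (and everything it leaked) afterwards.
     * Assumes Xephyr is installed and display :7 is unused; "leakyapp" is
     * a placeholder for the real program. */
    #include <signal.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Start a nested X server on its own display number. */
        pid_t xserver = fork();
        if (xserver == 0) {
            execlp("Xephyr", "Xephyr", ":7", "-screen", "800x600", (char *)NULL);
            _exit(127);
        }
        sleep(1); /* crude: give the server a moment to come up */

        /* Run the application against that private display only. */
        pid_t app = fork();
        if (app == 0) {
            setenv("DISPLAY", ":7", 1);
            execlp("leakyapp", "leakyapp", (char *)NULL);
            _exit(127);
        }
        waitpid(app, NULL, 0);

        /* Tear down the private server; whatever it leaked dies with it. */
        kill(xserver, SIGTERM);
        waitpid(xserver, NULL, 0);
        return 0;
    }

The same shape covers the snooping case: the application's X connection ends at the throwaway server, so it never sees the windows on your real desktop.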

The final issue: if a multithreaded X11 server goes splat and all users are using it, all users are affected. If each user has their own, the damage is limited. We are after a stable system, not another one that fails completely.

Windows suffers from this: all OK or all stuffed.

Please be aware that running multiple X11 servers side by side can, in time, appear exactly the same as running multiple virtual desktops side by side. Swapping between FBUI, X11, and any other future rendering system could all be seamless. These are major alterations to how Linux graphical systems work.

Yes, there is a point to allowing more than one X11 server: user switching, and screen savers running in their own X11 server as a restricted user.

Containers running different distributions of Linux on the same kernel. Even containers blocking an application from seeing anything more about your system than you want it to know. Yes, a true solution to all these spyware/untrusted-application problems.

The major issue with containers is that userspace programs can be blocked from sharing memory with applications outside the container. So an application running inside a container cannot do a direct memory map to an X11 server outside it, yet from inside the container it can talk to the kernel. With kernel mode switching and DRI2 together, you get around this problem.

Games being able to take the full screen without having to alter the true screen resolution of the window manager.



Most likely not even thought about: thin clients using on-server 3D rendering, and server applications using the GPU for processing acceleration. There is a major need for many things to access the video card at once. The alteration also allows FBUI to be used next to X11 without causing a splat either. Allowing multiple X11 servers provides a good test platform.

I was laughing at the weak argument that was used to delay it for years, along with promises that user-mode switching could be made to work. Over time, user-mode switching between X11 and the console, and X11 to X11, just became more and more complex code.

Too many people tried to write X11 off as a defective design, only to produce something that ran slower and lacked the security.

Follow Maya back through history: its GUI was built in the late 1970s and basically has not changed. There are many programs like this.

Elegant and coherent are in the eye of the beholder. There are many programs that people call elegant and coherent that to another user are a pain in the butt.

The true way forward is allowing users to alter the interface to suit the way they work. The current one-size-fits-all model really has to die.

Anonymous said...

"Windows suffers from this all ok or all stuffed"

Now I know for sure that you have no idea how Windows actually works. You're so wrong that there's no point in righting your wrongs.
You're pathetic. The eloquence can't hide the fact that you know nothing.

oiaohm said...

Sorry, I am right about Windows being all or nothing. In XP, once you go into DirectX, there is no correct separation between applications.

If you send a command to DirectX in XP that performs an operation that brings down the interface, it is gone for everyone. Same as the current X11 issue.

Windows 2000 did not suffer from this because it audited what you sent to the DirectX drivers. X11 with GEM is doing the same.

Vista does correct some of these defects. It also adds some new ones.

There are many points in Windows that, if you know where they are, can nail its multithreaded solution to the wall in time. There is even a pattern that causes an internal memory leak that is not freed when the user logs out or the process closes. In time this leak, which cannot be cleaned up, will trigger an all-out event.

The very clear process separation all the way through avoids the memory issue. Everything in the kernel that is allocated by the DRI2 setup is linked to its controlling process, so if the process is dead, all of that memory can be released.

With a single multithreaded server, you are tempted to share sections of data for performance reasons. The errors in the Windows design come from performance being put ahead of stability.

NT4 and Windows 2000 do not have the overlap issues that lead to XP and Vista being able to go all-fine-or-all-dead. The reason is that NT4 and 2000 kept the same kind of separation as X11 is adopting now.

I am sure you have no idea how Windows really works. The Windows design reads perfectly. The issue is in the implementation, same as with X11. The difference from X11 is that in Windows' case, critical never-to-break rules written into the Windows design have been broken.

The lesson has been learnt. Doing a single multithreaded solution that allows shared memory between threads, for anything that needs high stability, is asking for trouble. Developers get too tempted to improve performance at the cost of stability.

Besides, creating a process and creating a thread on Unix/Linux systems cost about the same amount of CPU time, unlike Windows, where creating a process costs more CPU time. So when it comes down to preventing rule breaking, using multiple processes makes perfect sense on Linux/Unix.
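
That claim is easy enough to eyeball with a rough micro-benchmark sketch like the one below (numbers depend on kernel, libc, and hardware; on Linux the two are close rather than literally identical):

    /* Rough micro-benchmark: N fork()+wait() pairs vs N pthread_create()+
     * join() pairs. Only meant to show both are cheap on Linux; absolute
     * numbers depend on kernel, libc, and hardware. */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    #define N 2000

    static void *noop(void *arg) { return arg; }

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        int i;
        double t0, t1, t2;

        t0 = now_sec();
        for (i = 0; i < N; i++) {
            pid_t pid = fork();
            if (pid == 0)
                _exit(0);              /* child exits immediately */
            waitpid(pid, NULL, 0);
        }
        t1 = now_sec();

        for (i = 0; i < N; i++) {
            pthread_t tid;
            pthread_create(&tid, NULL, noop, NULL);
            pthread_join(tid, NULL);
        }
        t2 = now_sec();

        printf("fork+wait:   %.1f us per pair\n", (t1 - t0) / N * 1e6);
        printf("thread+join: %.1f us per pair\n", (t2 - t1) / N * 1e6);
        return 0;
    }

Build it with something like cc -O2 bench.c -lpthread (older glibc also wants -lrt for clock_gettime) and judge the gap for yourself.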

Anonymous said...

"And until OSS produces a game better than BZflag, [...]"

So what? If you can run your "better OSS game" on Windows/OSX as well, how is an OSS platform going to eat its competitors' market share?

monk said...

Very nice article, highly insightful. Although 98% of desktops run a closed-source OS, if Linux is marketed well to both users and third-party developers, giving more importance to the latter, it can become a great success.

Linux inherently has the advantage that it is open source, which means, theoretically speaking, bugs and security issues will be found faster, and updates and patches are going to be released much more often.

It's 2011 now, and a lot of commercial software is available for Linux, which is a good sign.