Monday, November 17, 2008

Rants and Laughs 7

It is time once again to see what is going on in the freetard community.

  • PC Authority has an exclusive interview with Richard Stallman. Stallman's comments suggest that he is still off his rocker.
  • Freetards have finally mau-maued Adobe into releasing a 64-bit Flash plugin. Reading the release, I get the vague impression that Adobe only did this to shut them up. Um, Adobe, x86-64 is so last Tuesday! Where is my Linux ARM port? I love the reference to my predecessor BTW. 
  • Freetards cheer that the US Navy has embraced open source. First of all, the memo said that the Navy would adopt systems based on "open technologies and standards." This does not necessarily mean that they will adopt open SOURCE.
  • NEWSFLASH!! The Linux kernel has poor documentation! Kernel developers are unsure how to fix this. Linus wants better release notes; some suggest example code for new features (that would be helpful); some suggest scrapping most of the in-tree documentation. If kernel developers want better documentation, they will have to FULLY document what is already there and have the diligence to keep the API stable for longer periods of time. When you can develop a kernel module with your 2-3 year old copy of Linux Device Drivers or Understanding the Linux Kernel, then I will say the situation has improved. Until then, have fun writing camera drivers with the Video4Linux1 API documentation, aspiring kernel hackers! (See the sketch just after this list for what the current API expects instead.)
  • Happy 25th Birthday GNU! Here are your birthday greetings from washed-up actor Stephen Fry! It has been 25 years (well, it will be on January 5, 2009), and you still have not produced a complete operating system! How are the HURD and that Lisp window manager you wrote about coming along, BTW?
  • Here is a list of all the games natively supported by Linux. Wow, 373 titles! That is like 1/100th of the number of games available on Windows. Also, it kind of fudges the number a little, since it includes every Linux emulator you can run a game on and every single Linux-compatible Doom or Quake engine. Linux is truly the next generation gaming platform!
  • Another luser writes that Theora will replace Flash. Yeah, right! If you can get YouTube to even support Theora playback, then we will talk. Oh, what's that? There are some issues with Theora that need to be 'ironed out' before it can present a credible threat to Flash? "Despite being supported by Opera and Firefox, Theora has a number of challenges ahead. The first lies in its performance -- both the encoding time and the video quality trail behind the common XviD/DivX-style MPEG-4 ASP codecs, let alone next-generation HD codecs like H.264 and VC-1." Well, maybe you can get Nvidia to help you out?

  • Here is an article that lists the problems migrating from Exchange to OSS solutions. "One reason is that none of the open-source programs are really ready to serve as drop-in Exchange replacements. There's also some additional work that needs to be done, and it's not work that Windows administrators are used to doing. Even a veteran Linux administrator, though, might find setting up a full-powered Exchange replacement for a good-sized company a challenge. For example, Scalix 11.4 requires Apache, PostgreSQL, Tomcat, and either Sendmail or Postfix to be installed before it can work. That's not hard, but when you factor in the need for managing disk performance it becomes more of a problem. E-mail server applications have trouble scaling because of disk performance bottlenecks. To run a groupware server for more than a small business really requires shared disk arrays. Put it all together and you have a serious Linux system administrator's job, and it's not one that a former Exchange administrator is likely to be able to handle." TCO, the bane of lusers everywhere, has struck again!
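
As promised in the documentation item above, here is what the API churn means in practice for the camera example: the V4L1 calls the old books teach (VIDIOCGCAP and friends) are simply gone, and below is a rough sketch of the V4L2 replacement for just the first step, querying the device. A hedged illustration only; /dev/video0 is an assumed device node, and this is not taken from any of the books mentioned.

    /* Minimal V4L2 capability probe. The V4L1 ioctls the old docs
       describe no longer apply; VIDIOC_QUERYCAP is the V4L2 way in. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDONLY);   /* assumed device node */
        struct v4l2_capability cap;

        if (fd < 0 || ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
            perror("VIDIOC_QUERYCAP");
            return 1;
        }
        printf("driver: %s, card: %s\n", cap.driver, cap.card);
        return 0;
    }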

27 comments:

Anonymous said...

Oi, Stephen Fry is awesome.

oiaohm said...

Linux Hater Redux really needs to learn to stay up to date. XvMC will be extended; it already has been. libva will also go forward; it requires DRI2 to operate correctly.

You did incomplete homework on Theora; any idea where it fits in? Look at http://en.wikipedia.org/wiki/HTML_5. Firefox 3.1 is already shipping with the support built in. Question: do you depend on a plugin, or on a native feature of browsers?

Also, do your homework on the US Navy; they already run a lot of open source projects. Apparently Linux Hater Redux cannot read or doesn't understand the US Navy's "open technologies and standards."

"Open technologies" means they will want to have access to the source code. So yes, higher preference will be given to open source.

LOL http://www.icculus.org/lgfaq/gamelist.php?license=free is not the list of all games for Linux. It's a list of games that icculus.org coders have been in contact with that are free or open source. The fastest way to see the problem is to search that page for Second Life: it is not there, yet it has a native Linux version. Any game ported without the help of icculus does not show up on that list.

It also leaves out a key group: some makers don't port their games to Linux because they make Wine run them instead. Remember that some companies port by fixing Wine.

Linux kernel driver documentation is not that simple to fix at the moment. A major reason user-space drivers were made to work was that the upcoming problems were foreseeable.

The Read-Copy-Update system inside Linux is being replaced because it's defective. Locking across the complete kernel is being redone.

Things get a lot more tricky when you are talking about systems with 4096 CPU cores. Current alterations are about going past that limit, to 16000+ CPU cores for 32-bit machines and 255000+ CPU cores for 64-bit machines, in a single-motherboard setup. 4096-core systems exist now, so the limit had to be expanded. Windows 7 is only talking about 255-core systems, and they are having to alter key things to get there. Prepare for drivers from Vista not working in Windows 7 because of it.

Currently they hope the 16000 and 255000 core limits will last a few years. Given that the change from 1024 to 4096 only lasted 3 months before machines built to the limit existed, that might be wishful thinking. PS: 4096 cores fit in something the size of a normal single-door fridge, and the largest case on the market for a computer is a shipping container. The 64-bit limit might or might not last long.
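
To put rough numbers on why the core count is a data-structure question at all: a kernel tracks possible CPUs in bitmaps of one bit per CPU, so every raise of the limit grows every such mask. A back-of-envelope sketch only (the counts mirror the figures above, rounded to powers of two; this is not actual kernel code):

    /* Back-of-envelope: one bit per possible CPU per mask, so raising
       the supported core count grows every CPU bitmap in the kernel. */
    #include <stdio.h>

    #define BITS_PER_LONG (8 * sizeof(unsigned long))
    #define MASK_WORDS(ncpus) (((ncpus) + BITS_PER_LONG - 1) / BITS_PER_LONG)

    int main(void)
    {
        printf("4096 CPUs   -> %zu words per CPU mask\n", MASK_WORDS(4096));
        printf("16384 CPUs  -> %zu words per CPU mask\n", MASK_WORDS(16384));
        printf("262144 CPUs -> %zu words per CPU mask\n", MASK_WORDS(262144));
        return 0;
    }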

That is the problem with being a supercomputer kernel: things are always being pushed to their limits.

The price of a major internal API shake-up: documentation that worked for years doesn't work any more.

Anonymous said...

God RMS makes me laugh. Where is the customary beard photo in the article? Too much grey?

Bet they'll swiftboat NV as usual either way!

Running games via Wine is a mug's game. Your graphics card has to be much better quality for Linux than for the same game in Windows XP. I know, I've tried it.

oiaohm said...

kerensky: it depends on the game and the drivers.

For some games, the same or even lower-end cards do better on Linux.

The major limits come from X11 itself; having no 2D acceleration hurts. DRI2 sees that disappear for a lot of drivers, and it gets even better when Kernel Mode Setting lands.

Take a close look at the diagram at http://www.freedesktop.org/wiki/Software/vaapi.

You should notice something important: a straight path avoiding X11 and the DRM subsystem completely. vaapi interface connections don't have to be limited to just video.
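
For the curious, client code against libva looks roughly like this; a minimal sketch using the X11 binding, error handling mostly trimmed, and it obviously assumes a vaapi-capable driver is installed:

    /* Minimal libva bring-up against an X11 display: just enough to
       ask the driver which profiles it advertises. Sketch only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <X11/Xlib.h>
    #include <va/va.h>
    #include <va/va_x11.h>

    int main(void)
    {
        Display *x11 = XOpenDisplay(NULL);
        if (!x11)
            return 1;

        VADisplay va = vaGetDisplay(x11);
        int major, minor;
        if (vaInitialize(va, &major, &minor) != VA_STATUS_SUCCESS)
            return 1;
        printf("VA-API %d.%d\n", major, minor);

        int n = vaMaxNumProfiles(va);
        VAProfile *profiles = malloc(n * sizeof(*profiles));
        if (profiles && vaQueryConfigProfiles(va, profiles, &n) == VA_STATUS_SUCCESS)
            printf("%d profiles advertised\n", n);

        free(profiles);
        vaTerminate(va);
        XCloseDisplay(x11);
        return 0;
    }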

This is very much in the spirit of http://www.tungstengraphics.com/wiki/index.php/Gallium3D. Currently Wine is forced to wrap everything to OpenGL and feed it down through X11. In future that limitation will be gone.

DRI2 drivers for Intel cards avoid having to do multiple instruction translations, as was required in DRI1. The mesa3d.org git version is also taking shortcuts like libva does.

Nvidia will no longer be alone in the great X11 bypass stunt.

There is quite a performance difference in a lot of places due to the alterations. Also, the way memory is transferred from kernel mode to user mode and back has been altered, more than quadrupling the transfer rate.

Nvidia's feature release for video acceleration would have been playing catch-up if DRI2 had been released when planned.

The problem Linux Hater Redux has is a love of taking things out of context.

The change to DRI2 has been a hard road. The alteration changes Linux video card interfacing in more ways than most people understand.

Anti-Tux said...

Linux Hater Redux really needs to learn to stay up to date.

That article was written four days ago. I highly doubt much has changed since then. Yes, apparently MPEG-4 and H.264 support are present on VIA cards, but who the fuck still uses VIA anyway?

"open technologies and standards."

You might want to check to see if you have a reading comprehension problem. Here is a selection from the article, "However, it's important to understand the difference between "open source" and "open standards." The two are not identical, even though they are related. . . . Open standards, by contrast, typically refer to the ways in which computer hardware and software communicates with other pieces of hardware and software. The fact that you can use your WiFi laptop anywhere, without having to worry about whether you're using the Toshiba, Dell, HP, or Apple version of WiFi, demonstrates the usefulness of such common standards. These standards are often developed and approved by international committees and standards organizations, and the process can often get political or difficult. Moreover, companies often try to fudge their adherence to such standards, either by failing to comply with them completely, or by extending the standard in a way that only their equipment can use. But overall, open standards promote interoperability, which gives customers the freedom to choose from a variety of vendors."

I took this to mean that they would be deploying technologies that do not have built-in vendor lock-in. That does NOT necessarily mean they will deploy fully open source systems. The Opera web browser fully embraces open standards, but try getting access to its source code. WiFi is an open standard, but that does not mean that WiFi cards must have open specs or open source drivers. TCP/IP is an open standard, but that does not mean you can gain access to the source code for the TCP/IP stack of Microsoft Windows (last I heard, at least one of the services had deployed their own proprietary networking protocol, but I do not remember if it was the Navy).

LOL http://www.icculus.org/lgfaq/gamelist.php?license=free is not the list of all games for Linux. It's a list of games that icculus.org coders have been in contact with that are free or open source.

That list also lists Doom 3 and Quake 4. Sweet! Where can I download the source code to their engines?

The fastest way to see the problem is to search that page for Second Life: it is not there, yet it has a native Linux version. Any game ported without the help of icculus does not show up on that list.

How much help did icculus give porting Angband and Nethack to Linux?

Some makers don't port their games to Linux because they make Wine run them instead.

Yeah, because that totally works. The problem with depending on an incomplete, reverse-engineered emulation layer is that you never know when it is going to break.

Linux kernel driver documentation is not that simple to fix at the moment. A major reason user-space drivers were made to work was that the upcoming problems were foreseeable.

I don't care how hard it is. FIX IT!

Things get a lot more tricky when you are talking about systems with 4096 CPU cores. Current alterations are about going past that limit, to 16000+ CPU cores for 32-bit machines and 255000+ CPU cores for 64-bit machines, in a single-motherboard setup.

16000+ cores on a single die? The best offering from either Intel or AMD is a 4-core processor. Sure, there are the Cell, the T1 & T2, and the picoChip, which is the only really impressive one, but it seems highly domain-specific (DSP cores).

That is the problem with being a supercomputer kernel: things are always being pushed to their limits.

The price of a major internal API shake-up: documentation that worked for years doesn't work any more.


So why do I always hear about Linux having problems scaling compared to Solaris and AIX? Plus, why the fuck should I care how good a supercomputer OS it is? I only want to use Linux on my desktop! Maybe what is finished is the retarded idea that one kernel can be all things to all people.

oiaohm said...

"Open technologies" to the US Navy means at least source code inspection. So MS can still get in there if they let the Navy audit their source code. Yes, MS provides this to a lot of governments to win contracts. The NSA is even allowed to directly alter its copies of the MS Windows source code and build new versions just for itself.

This is the meaning of open technology.

You cannot create security, and trust it, on top of secrets you don't know.

It's not the "open standards" bit that implies open source; it's the "open technologies" bit. Preference is given to items they could, if needed, build themselves.

Lead developers at icculus worked on a lot of projects before they joined, so no, not all their work is porting. Go and check the developers who submitted code to Angband and Nethack, then cross-reference against icculus staff: you will find matches. That list is nothing more than a resume, and you failed to tell the difference. OK, the same goes for a lot of other people.

Funnily enough, the ones that use Wine for porting and are not included on icculus either ship their own version of Wine or tell you in the documentation exactly what version works. So your "it falls out the window" answer is crap in this case. If people cannot follow instructions, no OS will do what they want.

I did not say a single chip. Cores are what you worry about with locks: more cores, more trouble.

The largest number of cores in a single chip used in supers is 8 in most cases. Over 8, you start to have memory manager problems.

So it is 8 CPUs per segment board of the motherboard and 8 cores per CPU, with 64 boards total to give you 4096. Not that many.

That simply fits in a fridge-sized rack case, with space for hard drives. OK, I should have allowed for the fact that people here don't deal with multi-board motherboards.

Supers can use custom-ordered CPUs. For cores per unit of space, x86 is a bad pick: x86 has an instruction translation engine that basically takes up the same space as a core. So yes, a 4-core die for x86 really could be 8 cores if it were not x86.

Solaris and AIX don't scale to 4096 cores; Solaris starts having trouble around 1024, let alone where Linux is going now. Both AIX and Solaris swap over to clustering before getting close to 4096.

Linux's scaling issues are what is being taken on. They got swept under a big myth for a long time: everyone kept blaming the wrong thing. Yep, "it's the scheduler, it's the scheduler," over and over again, the myth being that the scheduler was 100 percent responsible for performance. Not the problem. It's locks. Playing with the scheduler did not cause major API upsets.

Sorting out the locks also results in blocks of code being deleted for good.

It's a little hard to document: you write documentation saying you have to call Y before doing X, then Y gets deleted in the process of lock removal, and now you can do X straight up without caring about locks.
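
The shape of the change being described looks roughly like this; a sketch in kernel style, where struct cfg and both read paths are made up for illustration, not real kernel code:

    /* Sketch: a lock-protected read path replaced by an RCU one. */
    #include <linux/spinlock.h>
    #include <linux/rcupdate.h>

    struct cfg { int value; };

    static struct cfg *live_plain;        /* protected by cfg_lock */
    static struct cfg __rcu *live_rcu;    /* protected by RCU */
    static DEFINE_SPINLOCK(cfg_lock);

    /* Old way: every reader takes the lock ("call Y before doing X"),
       stalling other cores that want the same lock. */
    static int read_value_locked(void)
    {
        int v;

        spin_lock(&cfg_lock);
        v = live_plain ? live_plain->value : 0;
        spin_unlock(&cfg_lock);
        return v;
    }

    /* New way: readers never block; a writer would publish a fresh
       copy with rcu_assign_pointer() instead of mutating in place. */
    static int read_value_rcu(void)
    {
        struct cfg *c;
        int v;

        rcu_read_lock();
        c = rcu_dereference(live_rcu);
        v = c ? c->value : 0;
        rcu_read_unlock();
        return v;
    }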

There are complete restructures as part of the lock removal. Basically, at the moment, documentation writers don't stand a chance. I do agree that deleting the old documentation and working from source for a while might be a valid solution until this mess plays itself out.

New documentation can basically be written when the API goes back into a stable state.

The same locking issues are linked to X11 freezes and other disk IO issues; basically, lots of the IO issues that plague desktop users.

The idea that desktop users are different to supercomputer users is kinda wrong. It's another myth. An IO issue that affects a desktop user a minor amount, ie less than 0.01 of a percent, can result in when fixed on a 4096 way machine a 200% boost in throughput.

Each of those minor fixes does add up.

The Linux kernel goes through stable and unstable states for its internal API.

We are currently in a major unstable state. In the past, this used to cause a version change to protect users from it.

Now you most likely want to know why it has not this time. Distributions back-porting patches is the reason why this time it's not a new version.

Basically, do you want the performance issues fixed or not? If you want them fixed, putting up with incorrect documentation for a while is the price.

shevy said...

oiaohm, are you trolling?

On the one hand you attack LHR here; on the other hand you throw out arbitrary statements like "Nvidia will no longer be alone in the great X11 bypass stunt".

WTF are you smoking in order to troll so exclusively? Also, please name how many of the current US top ten games have a native Linux version.

kthxbye.

"New documentation basically can be written when the API goes back into a stable state."

I really do not know what the heck you are smoking, but YOU CAN WRITE DOCUMENTATION EVEN WHEN AN API CHANGES.

You are trying to make a pathetic excuse for incomplete documentation. Please, oiaohm, if you don't enjoy it here, simply find another spot to troll.

I also wonder if English is your native language, for I have a hard time understanding what you write, especially in your last post.
WTF is this:

"If you want them fixed putting up with incorrect documentation for a while is the price."

Are you trying to sell us on the idea that incorrect documentation is a price we have to pay?

You make me angry.

oiaohm said...

The Nvidia driver, for large sections of the OpenGL commands, avoids the X11 interface. The method used is only partly documented, coming from the open source project the Nvidia drivers are partly derived from. That is the bypass stunt.

What Linux Hater Redux missed is that Nvidia had to get features out first this time.

ATI and Intel have both been working through the X11 stack. This means that for some OpenGL commands, their OpenGL takes a command, translates it to something that can be passed through X11, and it has to be decoded on the other side.

The altered DRI2 stack sees this difference disappear. Does that explain why there is such a large performance difference? You point to libva and ask why it is not here yet. The simple reason: it's a DRI2 interface. libva is also completely userspace; nothing kernel-side other than DRI2. Yes, the hardware interface driver they are talking about on libva is a user-space driver. Yes, it's a simple case of what it needs to work not being merged yet.

The major documentation that is out of date is the guides for people starting out, like the basic "how to write drivers" material.

A lot of what is wrong is functions that have been removed because they are not needed anymore.

Before the end of this huge lock cleanup, even if you created correct documentation now, it would most likely have to be rewritten another 10 times.

Sections are changing from being controlled by the Big Kernel Lock to their own section locks; even some of those section locks are getting broken down into smaller locks. Then others are changing from having locks to Read-Copy-Update interfaces.

Playing with the complete locking system of an OS is the fastest way to make its driver API 100 percent chaos, since almost every driver API function will be affected either directly or indirectly.

The big headache here is that fixing the locking system must happen, or the bad disk IO and other performance issues will remain.

That makes trying to write a basic introductory guide on how to create a driver more painful than bashing your head against a wall. As soon as you have written it, you basically have to go back and correct it.

That is the reason Linus has said, at the moment, to make sure commit comments are good. They are kinda required right now.

There are times when documentation in the commit comments and the source code itself is the best way forward. The one advantage of the lock cleanup: writing drivers, when it is done, should be 1000 times simpler, cleaner and safer. No more will driver developers have to manually take locks and cause deadlocks by forgetting to release them. Even better, there will be less need for them to know about locks at all. The number of performance problems caused over time by driver developers taking the wrong lock is not funny.

Yes, it's a little bit of a Pandora's box, and Linux Hater Redux wonders why developers don't have a nice simple answer. Basically, there is not one.

People have complained about the Linux driver API and locking being a mess. Start cleaning it up, which breaks the documentation, and then someone thinks, OK, let's complain about the documentation.

It's a simple case of not understanding the size of the Pandora's box Linux developers have opened. The Pandora's box of locking has destroyed the documentation for new developers.

oiaohm said...

OK, "merged" was the wrong word for DRI2; I should have said it has not been released yet.

Anti-Tux said...

The idea that desktop users are different to supercomputer users is kinda wrong. It's another myth.

Wow! If they are so similar, then they must have similar optimization requirements, right?

An IO issue that affects a desktop user a minor amount, ie less than 0.01 of a percent, can result in when fixed on a 4096 way machine a 200% boost in throughput.

WTF?! Did you even read your last sentence?

Basically, do you want the performance issues fixed or not? If you want them fixed, putting up with incorrect documentation for a while is the price.

Considering I DO NOT run a supercomputer, I will take the stable documentation, kthxbye. If I wanted a 0.01% speedup, I would use Gentoo and set my CFLAGS to "-O3 -fomit-frame-pointer -funroll-all-loops -ffast-math -funsafe-math-optimizations". This is yet another case of lusers optimizing for 0.001% of Linux installations while fucking over 99.999% of users.

Anonymous said...

@oiaohm
If I understand correctly, you mean that the X server problem will be solved because each major hardware vendor will write its own version of X? Or at least a significant portion of one? Is that the way a sane operating system works?

You keep referring to specifications which do not exist yet, whereas previous versions have failed to achieve anything. For what it's worth, Ubuntu isn't fully LSB compliant, since it uses debs rather than RPMs.

Lack of documentation is not only a barrier to entry for new developers (and you should know quite well that OSS projects have a very small number of developers, contrary to popular belief); it also doesn't allow the existing FOSS cult elite to maintain their knowledge base, which in turn doesn't allow them to improve their products, since they can't actually tell what the old parts of the code are doing. You may claim that seasoned developers can test those portions of the code and remove them by trial and error, but that is both time-consuming and idiotic.


In short, stupid engineering is not the price one has to pay for progress. It is the price one pays for being stupid.

In short, free software in general and Linux in particular are going nowhere, have never been anywhere, and the only reason they got any attention was that they were a good story for a while; even that finally subsided when a new, cooler product (Apple) entered the arena.

Get on with your life; everyone else did.

Anonymous said...

I'll skip the obvious nonsense from the anonymous right above.

In short, free software in general and Linux in particular are going nowhere, have never been anywhere, and the only reason they got any attention was that they were a good story for a while; even that finally subsided when a new, cooler product (Apple) entered the arena.

This looks like a conclusion in search of a question to me. Why are you and some of the original LHs so keen on telling people half-truths or outright lies in an effort to make them stop contributing? What gives you the right to judge how people should spend their time?

You're obviously not a developer, judging by your lame assessment of the problems of X.

What is your motivation here?

Anonymous said...

Well, the Linux crowd can't force their pathetic marketing techniques down my throat anymore; I've thrown their software away.

I don't need to be told what I should use, I hate that and so does every other Joe user.

People can do whatever the fuck they want with their time, and that includes writing software, but to me Linux is nothing more than a gimmick.

I don't care if I am on the end of Microsoft's rock-hard 12" erection; at least I get decent quality software AND (holy shit, wait for it...) support out of it (and that does not mean being told to "fuck off and RTFM" or waiting 10 years for bugs I filed to be fixed).

I'm going to go bathe in the glorious 3D that is Fallout 3 and Far Cry 2. Maybe you tards will get those two games working in 10 years; let's say 15 to be safe.

Anonymous said...

The idea that desktop users are different to supercomputer users is kinda wrong. It's another myth.

Wow! If they are so similar, then they must have similar optimization requirements, right?


Doesn't the fact that the Linux kernel scales from modems to 4096-way supercomputers tell you something? The only setting I changed on my desktop is vm.swappiness; there's a reason it should be larger on a server. My desktop kernel is used in SLES after it has matured enough.
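
(For reference, vm.swappiness is just a /proc/sys knob. A throwaway sketch of reading and, as root, lowering it from C; the value 10 below is an arbitrary example, not a recommendation:)

    /* Read, then (as root) rewrite vm.swappiness via its /proc knob. */
    #include <stdio.h>

    int main(void)
    {
        int cur = -1;
        FILE *f = fopen("/proc/sys/vm/swappiness", "r");
        if (f) { fscanf(f, "%d", &cur); fclose(f); }
        printf("vm.swappiness = %d\n", cur);

        f = fopen("/proc/sys/vm/swappiness", "w");
        if (f) { fprintf(f, "10\n"); fclose(f); }   /* arbitrary example */
        return 0;
    }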

An IO issue that affects a desktop user a minor amount, ie less than 0.01 of a percent, can result in when fixed on a 4096 way machine a 200% boost in throughput.

WTF?! Did you even read your last sentence?


I hope oiaohm doesn't mind me rewriting this: An IO issue that affects a desktop user in a minor way, i.e. less than 0.01 percent, can result in a 200% boost in throughput on a 4096-way machine when fixed.

Basically, do you want the performance issues fixed or not? If you want them fixed, putting up with incorrect documentation for a while is the price.

Considering I DO NOT run a supercomputer, I will take the stable documentation, kthxbye. If I wanted a 0.01% speedup, I would use Gentoo and set my CFLAGS to "-O3 -fomit-frame-pointer -funroll-all-loops -ffast-math -funsafe-math-optimizations". This is yet another case of lusers optimizing for 0.001% of Linux installations while fucking over 99.999% of users.


If you're a user, why do you care about kernel documentation? If you're a kernel developer, you can always ask for help on the LKML. Do 99.999% of Linux users even know what the kernel documentation is about?

Oiaohm, I think that both of us are wasting our time here.

Anonymous said...

Oiaohm, I think that both of us are wasting our time here.

Well, judging from the number and the length of Oiaohm's posts here, he doesn't feel like he's wasting his time. But, unlike you, he has some arguments.

thepld said...

"If you're a kernel developer, you may always ask for help in the LKML."

Brilliant idea! Now instead of just pulling up a handy reference and answering my own question, I get to supplicate myself in front of the kernel elite, who will most likely get tired of answering the same damn questions over and over (see: Any Freenode channel ever). Better yet, there's no guarantee I will even get help with my problem, as there's a good chance there may be functions known and understood only to Torvalds and Morton.

"Does 99.999% of linux users even know what the kernel documentation is about?"

You're right: They don't care. But they do care about the complete dearth of device support because the rug keeps getting pulled out from under the feet of the developers every time some random function is deprecated.
Under that same logic, since most users don't care about source code, having open source is useless.

oiaohm said...

People don't understand how minor performance errors add up. A few thousand 0.01 errors add up to quite a large performance loss. To fix them, you must find them.

Now, if you run a program that hits them all repeatedly: yes, ouch.

All a 0.01 is on a 2-core machine is a lock that should not be there, being taken and then released. OK, stopping the other core for a nanosecond on a 2-core machine barely hurts. Stopping all cores on a 4096-core machine is ouchy. Large numbers of cores make finding some problems simpler: basically, "what was that huge performance dip right there?" The more cores desktop machines get, the worse these minor locking errors will be.

Basically, the more cores you add, the bigger the spotlight on the minor performance errors that are hard to find and that are upsetting users.

Tuning vm.swappiness is avoiding a locking bug. Thank you for sweeping a bug under the carpet.

Funnily enough, Ubuntu is LSB certified. The only LSB packaging requirement is that it must be able to install an LSB RPM package, which it can. It's a common myth that to be LSB certified a distribution has to use RPM everywhere. People forget Debian has been LSB certified for years.

Stupid engineering got Linux into its current mess. The locking system of Linux was never clearly documented; it operated on faith and the false belief that it was not that important.

All the out-of-date new-user documents had a lot of errors in them even when they were current; some of their instructions made the current-day locking mess. That is why I said to basically delete them and go back to in-source-code documentation and commit comments. For sections that have been fully cleaned up, where developers are no longer looking at whether X should be a lock, multiple locks or an RCU, recreate new documentation, clean and valid. Past particular points there is no other valid option for documentation.

Basically, the complete kernel source has to be audited lock by lock. It would have been so much nicer if someone had written a good tutorial on how to use locks correctly. Oh wait, they did, and no one read it; instead they read the "how to write drivers" material that was wrong.

Yes, in some cases documentation is more of a curse than a help; this is one of those times. It's the other major problem with programmers: they have the bad habit of not reading it. Turns out that is why MS had so much trouble with the EU; they had documentation for their protocols, but their developers had not used it in years. Documentation ending up worthless is nothing unique in the software world. Past particular points you just accept it as dead and move on.

The DRI2 redesign alters how everything works. Massively alters it. I will explain how it kinda kicks the stuffing out of Nvidia.

With Nvidia you have to replace the driver to use a new version of its OpenGL libs, or it fails. DRI2 was designed specifically to avoid this issue: the basic driver in kernel space really knows nothing other than how to manage GPU access, GPU memory and enough 2D to start up the system. The userspace OpenGL can send raw GPU commands through to the video card. Even better, they go directly to the kernel and then straight to the GPU, avoiding X11, DirectFB or whatever other window management system you are using. So no more speed bottlenecks from having to get answers from X11.

So yes, DRI2 allows updating your OpenGL independently of your drivers. It allows features like video acceleration to be done without any extra kernel support, along with any other creative or future use of the GPU.

So yes, DRI2 opens up the option, if someone wants to code it, of a native DirectX on Linux.

Now this cures your common complaints of "we don't want to have to restart X11 or touch the command line to deal with video drivers."

Even better, you update OpenGL and the next program you run gets the new version. No more restarts; it truly just works. Even more fun, the memory control allows running 2 different versions of OpenGL side by side without them conflicting with each other. That is totally not possible with Nvidia; ever notice how it hunts down any other OpenGLs on the system and replaces them? Because if it doesn't, things can go badly wrong.

DRI2 includes DRI1 emulation that still needs speed improvements, because it's currently slower than the DRI1 it emulates. So yes, you can still use an old, out-of-date X11 on a DRI2 system until you update.

Next, hibernation. Nvidia still does not support that. So yes, it really sucks at the moment: you have to have ATI or Intel so you can hibernate your laptop, and put up with the performance problems of DRI1. So I am more than looking forward to DRI2.

Nvidia did the right thing at first by bypassing X11, but sitting on their ass for the last however many years and not redesigning their driver to be more user-friendly cannot be overlooked.

Now Nvidia releases something flashy: GPU video decoding. Sorry, that is not even new; some projects have not waited around and instead do it through CUDA.

A major one: Nvidia runs a lot of code in kernel space. An Intel developer found out why when reworking DRI: copying from user space to kernel space was massively slow on 32-bit machines, yet on 64-bit machines it was quite fast.

Say hello to a bug everywhere: Windows, Mac and Linux all used the same design for 32-bit; the next Linux kernel will not. A zone of memory is allocated for userspace and one for kernel space. Macs do a 2G/2G split, so 2G goes to user space and 2G goes to the kernel; Windows also has a split like that. This is another case of people following the same "how to design an OS" document. Problem: it was wrong.

The problem is that this forces copying between the memory spaces. Since the days of the 286, x86 page tables (and their equivalents on other processors) have supported a non-contiguous kernel space. So instead of copying, all you do is change the memory pages' permissions from userspace to kernel space, or even just make the pages read-only in userspace or kernel space, depending on which way you are sending, and have the memory manager do copy-on-write so they do not appear locked. That happens to be a much faster operation. It also removes the split issue of running out of usable RAM when there is still RAM to use, and cuts the RAM used between user space and kernel space.
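
The conventional copying path being criticized is the everyday driver pattern sketched below; demo_read() is a hypothetical character-device read written for illustration, not code from any real driver. The remapping alternative described above replaces this byte-for-byte copy with page-table permission changes:

    /* The copy path: data crosses the user/kernel boundary byte for
       byte via copy_to_user(). demo_read() is a made-up example. */
    #include <linux/fs.h>
    #include <linux/kernel.h>
    #include <linux/uaccess.h>

    static ssize_t demo_read(struct file *file, char __user *buf,
                             size_t len, loff_t *off)
    {
        static const char msg[] = "hello from kernel space\n";
        size_t n = min_t(size_t, len, sizeof(msg));

        if (copy_to_user(buf, msg, n))   /* the copy being replaced */
            return -EFAULT;
        return n;
    }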

Now, this is the major problem with going solo when designing something like a video card interface. It is really handy to have CPU designers on hand to tell you "hang on, that method is a bug; fix that and the problem goes bye-bye," thus avoiding the mother of all hack code.

Of course, everything is still not perfect; Kernel Mode Setting is still missing. That is a major stability thing: when it is added, as long as the kernel is still running, the interface will still respond to a console change and work. So if X11 locks up, you are able to change to another X11, directfb or a console to correct the issue. I do expect to see some smart distribution hide a task manager equivalent under Ctrl-Alt-Del; you can connect a console to it.

It also makes Linux kernel panics work like Windows XP blue screens, i.e. you will always see them, not be sitting there wondering whether it's busy or dead.

A fun bit: kernel mode setting opens up the way for things like an X11 server being run as a window inside a framebuffer, or vice versa.

The Linux graphical environment is never going to be the same again.

If you can see how the tech links up: DRI2 also opens the path for virtual machines to pass access to a full, real video card to a contained OS and have the VM-contained applications integrate cleanly into your desktop.

Anonymous said...

But, unlike you, he has some arguments.

I'm not a developer; I cannot argue at his level of expertise. However, I can see when others claim to know things they know nothing about. This includes LHR and many others here. In order to be able to say whether "Linux sucks" or it's a "failure", etc., you need to know first what you're talking about. Saying that "Linux sucks" because it doesn't work like Windows, because X has problems, because it hasn't made the mistakes others did, or because MS market share is larger, is not a very clever thing to say, to put it mildly.

@oiaohm

Tuning vm.swappiness is avoiding a locking bug. Thank you for sweeping a bug under the carpet.

Excuse my ignorance with regard to kernel matters, but I really don't get this: how else can the kernel know whether large files should be stored in the caches? For some workloads, using half the RAM to cache large files makes sense; for other workloads it doesn't. How can the kernel know this?

Anonymous said...

@ Russell,

I don't need to be told what I should use, I hate that and so does every other Joe user.

Agreed: since Linux doesn't fit your needs, you shouldn't use it. There's nothing wrong with that. Linux is just a tool.

Anonymous said...

Doesn't the fact that the Linux kernel scales from modems to 4096-way supercomputers tell you something?

Yes. It tells me that a desktop system able to provide a decent user experience is much, much more than a fucking kernel. That's what freetards will never get.

Anonymous said...

Saying that "linux sucks" because it doesn't work like Windows, because X has problems

So if X crashes all the time, I can't say that Linux sucks? As a user, I'm allowed to say that. Deal with it.

thepld said...

"Agreed, since linux doesn't fit your needs, you shouldn't use it. There's nothing wrong with that. Linux is just a tool."

That's a mature and reasonable statement. The problem, and what evokes rage in the hearts of Linux haters everywhere, is that there are legions of immature and irrational Linux zealots accusing you of stupidity and, in some cases, irredeemable evil for using a tool that works for you.

And they have so infected the tech media of Slashdot and elsewhere that it is nearly impossible to get accurate criticism of Linux. In the worst cases, Linux "evangelists" (which, appropriately, is also a term for religious fanatics) will outright lie to you in the name of software freedom. You may as well ask Pravda to write an article about problems with Marxism.

Anonymous said...

http://www.kaourantin.net/2008/11/64-bits.html
• The first 64-bit plugin for a browser we ship is this Linux version. Windows and Mac will come later.

In other words, the unwashed luser hordes made Adobe's site so smelly that they released 64-bit just to get rid of them!

Anonymous said...

The first 64-bit plugin for a browser we ship is this Linux version. Windows and Mac will come later.

Great. Also, do you know it is an alpha release, unstable and full of bugs?

It's obvious such a thing would be released first for a bunch of loons who don't care about stable software, since almost all software that runs on Linux is alpha or beta.

That would never work in the Windows or Mac OS X world. People who use those OSes like stable, fully working software.

oiaohm said...

I love the "freetards don't get that the desktop is more than a kernel" line. So badly wrong it's not funny.

Sorry, we do get it; but some of it is the kernel. A few issues here: the Linux kernel was not optimized for desktop use. Funnily enough, that is not 100 percent different from a supercomputer kernel.

The differences: a desktop kernel must provide good video card support and not leave users hanging; a supercomputer kernel doesn't care that much.

A desktop kernel must provide broad hardware support; a supercomputer can get away with limited, targeted hardware.

That is the complete difference at the kernel level. The DRI2 and KMS patches going into the Linux kernel close the gap between desktop and super use. Yep, in 2009 the Linux kernel at long last becomes what you can call a true desktop-supporting kernel.

This shows how stupid people are. Windows Vista and Windows Server 2008 both use exactly the same kernel. Windows XP 64-bit and Windows Server 2003 use exactly the same kernel.

Windows 2000 Server and Windows 2000 Workstation were the same too; MS was kinda truthful back then. Ever since, MS has basically been lying to people just to make more money.

The only things different are settings and the version tag. So why can't Linux use the same kernel everywhere? MS does.

Please go read about the Freedesktop.org project; it's about sorting out the end-user experience. It does not worry itself with the kernel side much at all; it's mostly the user-space side. X11 started off as a huge mess from the Unix days. The hard bit for you to get is that it's more organized than it's ever been. OK, are the open source desktops perfect? No, they are not. It's over 12 years of effort to clean up a mess over 20 years in the making. The Freedesktop dbus and PolicyKit projects are about sorting out configuration of the computer from the graphical side in a secure yet distribution-independent way.

The vm.swappiness default is wrong for what would be ideal for supers as well, due to a bug: a section of the swap processing code used to lock all cores while it completed. So the default value is a workaround.

The correct value of vm.swappiness alters with the number of cores you have, due to the price of that bug. So if you want to call that value a hack, it is.

The issue with X11 crashes, and why you cannot blame all of Linux, is that a lot of those crashes are distribution-dependent, from them putting unstable applications in your path. Start learning to be a little more truthful: i.e., if Ubuntu is crashing all the time on you, then yes, Ubuntu sucks, not the whole Linux world. The only way we can start sorting the good from the bad is if people stop using generic answers.

What is annoying a lot of open source people more is this: we listen to your problems, design systems to fix them, start building them, yet still keep on getting complained at, with no feedback into the designs that will be put in place as fixes. Then we have overlooked something and get yelled at again. A never-ending no-win. If you want better, step up and be constructive.

Many days open source people wish they could just wave their wand and have it fixed. It would be nice if, instead of just hating, a person did some positive research.

"OK, we have problem X; group Y is working on it." This kind of advertisement has positive effects. If users know about good upcoming features, they know to ask distributions about them, which normally sees those distributions add or work on them.

Now of course, being critical of distributions for including stuff before it's ready also has to come to the floor, and they should be beaten up for being that stupid. What happens now is that you just beat them all up, so the good ones don't get the backing they should.

Anonymous said...

oiaohm, you keep missing the whole point here. Let me try again, I hope it will do some good.

"This shows how stupid people are. Windows Vista and Windows 2008 server both use exactly the same kernel. Windows XP 64 bit and Windows 2003 server use exactly the same kernel."
Yep, they use one and the same kernel. But hey, Windows kernel is designed with that idea in mind, and it is designed well. And it very well documented.
Two things that Linux kernel obviously lacks.

"Issue with X11 crashes why you cannot blame all Linux."
If course we can, and we will. If X11 is so poor, then why the hell Linux adopted it on first place, and why the fuck it keeps building on top of it? You keep patching it over and over again, only to wreak further havoc.
For 17 years you could have come up with something much better, for God's sake!

And now you just keep pouring out excuses and putting the blame elsewhere... Why don't you face the failure and come up with a solid plan to overcome it?

Anonymous said...

"What is annoying lot of Open Source people more is we listen to your problems. Design systems to fix them. Start building them yet still keep on getting complained at and no feedback into the designs that will be put in place to fix it. Then we have overlooked something and get yelled at again. A never ending no win. You want better step up and be contructive."

RTFM does not help, nor does GIYF. Writing clear documentation would help.
Yes, as LH said, "writing documentation sucks donkey balls." But the luser community is composed of anti-social freetards, so....