Friday, November 14, 2008

Gentlemen, we can compile him. We have the technology.

Today, esr posted his thoughts about the Linux Hater's Blog, which, among other things, led to a discussion of binary distribution versus source distribution. I will now share my thoughts on the matter.

Gentoo Linux was my second Linux distro (my first was Mandrake). I know source-based distribution can have many benefits, especially when the distribution is designed to cater to it, as Gentoo is. Before the Ricers descend on us, let me say that I know optimization is NOT one of the benefits; you can -funroll-all-sanity all you want, but it is likely to do nothing, crash the app, or make it even slower. USE flags, when they work properly and the options are actually supported, can be a great way to customize a system to suit your particular needs.

When I was new, I thought this kind of customization was awesome because it let me install only what I wanted to use. Since I used plain Fluxbox (what fun would Gnome or KDE have been with Gentoo? Plus it compiled quicker), I would disable all support for KDE and Gnome; leaving it enabled would drag in a whole bunch of libraries and add 10-15 hours of compiling. I do not want to think about the amount of time I spent messing with various USE variables to reduce the number of 'unnecessary' dependencies for an app I wanted to install. Of course, I would have to remember to add the USE variable modification to /etc/portage/package.use for that particular application, or it might screw things up the next time I updated the system.
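
For anyone who never ran Gentoo, this is roughly what that file looked like; the package atoms and flags below are illustrative, not a copy of my actual config:

    # /etc/portage/package.use
    media-video/mplayer dvd xv -gtk
    app-editors/vim ncurses -gnome
    net-www/mozilla-firefox -gnome

The global defaults (USE="-gnome -kde alsa X" and the like) lived in /etc/make.conf.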

Ahh, updating! I remember that well. Gentoo was a bitch to update! It always appeared to be simple: "emerge --sync; emerge --update --deep --newuse world". However, it took many hours, and the computer was rendered unusable for most of that time. Since it was such a pain, I would go months without updating the system; then I would have to go through hell because the developers had changed a whole lot, and I had to do the emerge world dance two or three times! Of course, this was the best-case scenario. If something failed to compile . . .

Apart from the pain of installing and updating the system, I remember the rest of the time being a breeze. At its peak, around 2004-2005, Gentoo gave me the best user experience with *nix I have ever had. Most applications just worked. I had very good things to say about its 32-bit chroot. I could compile lots of programs, and my desktop was still responsive. Multimedia support was second to none! Xine + MPlayer + XMMS could handle anything you threw at them. Gentoo's versions made it easy to add support for MP3, DVD, WMV, etc. I still have not found a better or more versatile pair of media players than Gentoo's Xine (for DVDs) and MPlayer (for everything else). MPlayer could play practically any file you threw at it, no matter how corrupted it was (sure, sometimes there would be no sound, or some sound, or skipping, or linear viewing only, or random freezes, but it WOULD play). No matter how much experience I have with freetardism, I will never understand why the distros seem to be dumping these two great players for the utterly brain-damaged GStreamer (which is another rant for another time).

The packages seemed to work so well that, now that I think about it, I am not sure if I completely agree with Linux Hater that distro-maintained packages (or ebuilds in Gentoo's case) are NECESSARILY a bad thing. Maybe the package maintainers have more influence over the quality of the application than most people realize.  Maybe whatever Debian/RedHat and their followers do to produce binaries is the really fucked up thing. Of course, this site lists certain, uh, problems people have had with Gentoo, so maybe LHB's point still stands.

However, not all packages worked well. If I ever used ANY masked packages, I was preparing myself for a world of hurt. Since masked packages usually featured a lot of masked dependencies, I would usually have to install a bunch of unstable applications just to run the one I wanted. Often, I managed to install all or most of the dependencies, but the app I wanted, or one of its last dependencies, would not install or run properly, so I was then stuck with a bunch of unstable libraries with no obvious (to me) way to go back. This was actually a general problem with Gentoo. By emerging only my essentials, I thought I was getting a lean, mean, optimized machine, but as I added new applications, I would have to add a bunch of new dependencies. When I unmerged an app, its dependencies would still be there, which soon made my Gentoo system just as full of cruft as any other distro. Sure, I could "emerge --depclean; revdep-rebuild", but I would first have to update the entire system (which I hated doing because it took forever), and sometimes the orphaned dependencies would cause trouble during the process. Now, I know that some of this mess was my fault for installing unstable but 'shiny' applications. I wonder how many of the Linux annoyances are caused by the lusers themselves, who scream for the latest ub3rc001 but highly unstable application? If most of Linux's usability issues arise from the demands of the users themselves, then FLOSS has a major systemic problem with mass-market adoption.
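
For reference, the whole ritual went roughly like this (from memory, so treat the exact flags as approximate rather than gospel):

    emerge --sync
    emerge --update --deep --newuse world
    emerge --depclean
    revdep-rebuild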

Now that I have reminisced enough, let's get back to my original point regarding binary distribution. Let's take the (admittedly anecdotal) information above and apply it to source-based distribution as a whole. In this model, the maintainers of all the upstream projects collectively act as a single source-based distribution, and the package maintainers of binary distributions act as its users. Now, first off, we can see that the package maintainer's task is a bit harder, since he does not have automatic dependency resolution. Sure, the project documentation usually lists its dependencies, and most heavily-used libraries are already packaged in most distributions, but it still does not beat good ol' emerge app. Now, the packager is in the same boat as the Gentoo user. He knows the specific needs of the distro better than upstream knows them, but upstream knows the software better than the packager knows it. Like Gentoo users and their USE variables, the packager can add patches to the code and configure it with various options to better integrate it into the distro, but sometimes the modifications will break certain assumptions upstream has made, and all hell will break loose.

Which one should we trust: upstream/source_distro or the packager/source_distro_user? I think we should trust upstream more, since they know the code better and can better avoid doing stupid things. Now, the best solution is when the maintainer and the distro packager are the same people, since they can then develop their app with integration in mind. This is probably why FreeBSD, despite orders of magnitude less funding, always felt more coherent and polished than any Linux distribution.

Of course esr does list some relevant problems with binary distribution:
I actually used to build my own RPMs for distribution; I moved away from that because even within that one package format there's enough variation in where various system directories are placed to be a problem. Possibly LSB will solve this some year, but it hasn't yet.
First, why were you building RPMs? What advantages do RPMs have over plain old TGZs except signature support and various metadata that could be included in the filename? By only building RPMs, you were excluding all the other non-RPM distros for no appreciable gain.

Second, wasn't the Linux Filesystem Hierarchy Standard supposed to solve this by standardizing the system directories? Wasn't it released 15 years ago? If that didn't work, the Linux Standard Base should definitely have fixed it, but it seems to have failed. If, after 15 years, you still cannot determine the location of a mail spooler or logfile, then OSS has a MAJOR problem!

Third, what exactly did you need in those system directories anyway? In your book, The Art of Unix Programming, you wrote about this:
Often, you can avoid this sort of dependency by stepping back and reframing the problem. Why are you opening a file in the mail spool directory, anyway? If you're writing to it, wouldn't it be better to simply invoke the local mail transport agent to do it for you so the file-locking gets done right? If you're reading from it, might it be better to query it through a POP or IMAP server?
If you were looking for applications, couldn't you just use /usr/bin/env? I am sure that is present on any Linux distro worth mentioning. If you were looking for libraries, then maybe you should statically compile your program.
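
By that I mean the standard shebang trick; for example, a script that starts with

    #!/usr/bin/env python

will find python wherever it lives on $PATH, instead of hard-coding /usr/bin/python or /usr/local/bin/python.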

I know there are some downsides to distributing statically compiled binaries. They take up more RAM and hard drive space, but both are really cheap nowadays. Even low-end notebooks feature 2-3 GB of RAM and 300-500 GB hard drives; the typical user has RAM, swap, and disk space to burn. The other problem with statically compiled binaries is that updating a library with a serious bug or security hole becomes a bigger job. However, the major proprietary software applications for Linux also have this problem, and they seem to have done okay. If the software developer is halfway competent, he will be tracking the development lists of all the libraries his app depends on, and he can then issue an update as soon as a patch for the affected library is released. Plus, open source has the advantage that, if the developer is being lazy or whatever, anyone who cares can (theoretically) download the source code for the app and its dependencies and produce a fixed binary. However, all of these downsides melt away next to the feeling of navigating to a project's home page, downloading the Linux binary, installing it, and running it, just like on Windows and OS X!!!
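
(To be concrete about what I mean by 'statically compiled': roughly a build along the lines of

    gcc -static -o myapp main.c -lfoo

where libfoo and friends get baked into the one executable, instead of the usual dynamic build that picks up whatever libfoo.so the distro ships. The names here are made up for illustration.)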

In short, binary distribution has many advantages over source-based distribution, and Linux crusaders would do well not to dismiss them.

30 comments:

oiaohm said...

The more important question is why have duplicate maintainers at all. The developers of an application more often know what is needed than the many maintainers out there. When developers can release programs straight to end users, a lot of problems will disappear.

Double handling always equals problems.

That is why the LSB has to get to the point of allowing a single binary for all distributions, through one means or another.

Gentoo is one of the rare distributions that does not alter the source heavily. As you have to admit, quality issues that other distributions have, and Gentoo does not, show up.

Most distributions alter applications to remove things like MP3 and DVD support and so on. These alterations have knock-on effects. They can cause an application to detect that its required dependencies are there, but when the application uses them they don't work. The reason for going with GStreamer is that it is simpler to repair after the distributions have been at it than MPlayer or Xine.

Funnily enough, RPM was picked as kind of a middle ground. http://kitenet.net/~joey/code/alien/

Plain tgz does not provide enough information to convert into most package formats out there. RPM was selected for its convertibility. The problem is that it lacks functionality, which is the reason for the LSB 4.1 rework.

Static linking has way more downsides than you imagine. LSB 4.0 solves the .so side correctly.

The same goes for the commercial solution http://www.magicermine.com/.

The big oversight is address randomization. Static applications are not built to cope with all forms of it; .so files and the applications that use them are.

So static is not a solution, since it will fail on some distributions due to address randomization being done by the kernel. A static application has to be built compatible with the address randomization in use.

Binary distributions are not a solution. Source-based distributions are not a solution. The long-term solution is smaller distributions and more applications from third parties, i.e. independent software vendors (ISVs).

Yes, there are a lot of cases of shiny new features doing in end users.

Even worse, maintainers and packagers of distributions, who should know better, ship them as the default.

Sorry to have to say it, but some distributions don't ship env as /usr/bin/env but as /bin/env, and others don't ship it at all.

The env executable is very simple for what it does.

That is why, for things like env, the LSB defines /opt/application|makername/bin to hold the programs an application depends on, so that if the distribution is missing them the application still works.

Arguments over file locations are more disputed when you look at Webmin. Yes, there is minor variation between Linux and Unix, but most follow the FHS quite well. The FHS does allow some variation, but it is defined variation.

A lot of these faults are still current.

The solution is ISVs providing binaries, so that debugging becomes simpler and the idea of the distribution can die.

ISVs providing a large block of applications for Linux is something I have always marked down as a requirement for the year of the Linux desktop.

Not just closed-source but open-source ISVs as well.

Open source projects release binaries for Windows, yet they don't for Linux. That is a mark of a problem.

Looking to distributions for a solution when they are causing the problem is foolish. The LSB has offered them ten chances to sort this out. If they don't sort it out between now and LSB 4.1, it will happen anyhow.

Novell attacking Red Hat: these cross-fights need to happen as well to sort out this mess.

The idea of peace between distributions has to end. Let the real fight for the last distribution standing begin.

Anti-Tux said...

oiaohm, when I mentioned 'binary distribution', I was specifically referring to upstream distributing binaries.

Plain tgz does not provide enough information to convert into most package formats out there.

Who cares? tar and gzip are included in pretty much every distribution out there. Why do you need to convert to a different package format?

So static is not a solution, since it will fail on some distributions due to address randomization being done by the kernel. A static application has to be built compatible with the address randomization in use.

Oh wonderful! Lusers have come up with a brain dead 'solution' to the buffer overflow problem that breaks binary compatibility with a good number of important applications. ASLR is not the solution. If you are so scared of buffer overflows, then don't use C (or anything lower than or similar to it). In fact, if you are not writing a supercomputer application, a high-end graphics engine, or an OS kernel, DON'T USE C!

Anonymous said...

>First, why were you building RPMs?

I was very RedHat-centric at the time (this was pre-Ubuntu) and I thought providing RPMs in addition to source tarballs would be a convenience for my users.

>Second, wasn't the Linux Filesystem Hierarchy Standard supposed to solve this by standardizing the system directories?

As you correctly point out, it hasn't yet. And as you correctly point out, this sucks.

>Third, what exactly did you need in those system directories anyway?

Because the worst trouble spot, in my experience, wasn't solvable by /usr/bin/env -- it was actually differing locations of doc directories, rather than of executables, that I kept tripping over.

And as for static binaries -- thanks, I'd rather not lock in whatever bugs might be in the version I built with. (This would especially worry me about security bugs.)

Yes, there's a trade-off here. But I'm fairly sure I came down on the right side of it, because I can't recall being bitten by a version-skew bug at that level.

oiaohm said...

Linux is not alone with the problem of address randomization. There are many Windows applications that fail completely when you activate address randomization on them as well.

Please learn that most of these problems are not unique to the Linux world. Address randomization killing applications is common to all platforms that have implemented it.

Linux dynamic applications, i.e. those built with -fPIC, are not affected. Linux was nice enough to keep the affected applications to something you could detect. Static applications make up a really small number of Linux applications.

PIC, the position-independent code setting in the compiler, is needed for address randomization to work. The C language is not the only language affected; funnily enough, static AOT Java was affected as well. Another case of being foolish. C is a high-level language, so lots of low-level things like addresses are handled for you.

Most people building static binaries build without PIC, since it makes their application smaller, a lot smaller. The price is trading compatibility for size and speed.

With the stubable dynamic loader from LSB 4.0 there is basically no point in building static, since to be compatible you will have to pay the PIC price anyhow.
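
Roughly, the build difference in question is between position-independent code, something like

    gcc -fPIC -c app.c
    gcc -pie -o app app.o

and a plain static link such as gcc -static -o app app.c, which lays code out at fixed addresses. (A sketch only; the file names are made up.)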

To work, your static executable will be huge. It will use lots of RAM. It will not support plug-ins. It will be slow to start. Another trick of dynamic linking is to load parts into memory as you need them, whereas with static you have to load the whole box and dice. With a complete program you can be looking at loading a 200 to 300 MB file into RAM if you go static. Yes, hard drives kind of cannot do that fast.

The lack of plugins means you cannot segment your application so users can decide not to download the lot but only this or that bit.

The final one is security updates: with static linking you have to update the thing as one large block or use binary patching. Errors in binary patching cause nasty failures. The major advantage of a dynamic runtime is that you can release a single update that updates every application you have sent out that uses that .so file.

Reduced file storage requirements are one of the other side effects. Taking full advantage of that does require LSB 4.1.

On the LSB's selection of RPM: package conversion was so the application could be cleanly uninstalled. What is the point of installing if you cannot uninstall cleanly?

The selection of RPM has backfired on the LSB as well.

If needed, an rpm can be opened a lot like a tar.gz archive. It's a cpio archive with a different name.

Historically this is one of those warped evils. Cpio and tar were the same thing at some points in their history: http://cdrecord.berlios.de/old/private/star.html Yes, it depends whose tar you have installed, POSIX's or GNU's.

GNU's kind of lacks a few functions. Even funnier, http://www.gnu.org/software/tar/ ships as a cpio.gz, yet it cannot extract them.

Documentation is one area that the FHS does not completely cover. http://www.pathname.com/fhs/pub/fhs-2.3.html#USRLOCALSHARE1

Note the optionals.

Please note the publishing date. http://www.ibm.com/developerworks/library/l-lsb.html

The instructions for releasing on all LSB distributions have basically not changed majorly since then.

LSB 4.0 expands that same kind of operation to all. LSB 4.1 is to provide a better, more user-friendly installer than rpm or tgz could ever be.

Simple fact: you can have all the advantages of being static, i.e. it works everywhere, with all the advantages of being dynamic. Why should a developer pick a weaker solution?

oiaohm said...

By the way, address randomization is not about making buffer overflows harder. It also makes direct memory injection into applications from defects in hardware, like the Firewire or GPU direct memory write defects, harder to pull off.

It is a security alteration no matter what language you are coding in. NX alterations are about buffer overflows.

It really would pay to do your homework before commenting on things you don't understand yet, Linux Hater's Redux: http://en.wikipedia.org/wiki/ASLR

Stack injection attacks are not just buffer overflow attacks. Stack injection attacks can be used against all programming languages. It's a form of universal attack.

Using some of the whole-program bug-finding software out there, most buffer overflow flaws can be removed from applications.

http://www.coverity.com/ provides a service, free of charge for open source developers, that finds all known forms of buffer overflow errors you can create, plus a lot more security flaws.

Basically, if your program is suffering from buffer overflow errors, you are lazy, incompetent, or a cheap prick not protecting your customers.

Anti-Tux said...

There are many Windows applications that fail completely when you activate address randomization on them as well.

Yes, you will likely face problems with Windows static binaries when you explicitly activate it. You have to edit a registry variable to do this; otherwise, ASLR will only work with applications that specifically ask for it.

The C language is not the only language affected; funnily enough, static AOT Java was affected as well.

I was not saying that ASLR could not fuck up programs written in other languages; I was just saying that this protection is not necessary with languages that take pains to prevent code injection.

The final one is security updates: with static linking you have to update the thing as one large block or use binary patching. Errors in binary patching cause nasty failures. The major advantage of a dynamic runtime is that you can release a single update that updates every application you have sent out that uses that .so file.

Yes, but bad binary patching is a double-edged sword. If a patch breaks your .so, then everything that depends on it will break as well. You have traded multiple points of failure for one critical point of failure; that is not a good tradeoff.

To work, your static executable will be huge. It will use lots of RAM.

If you use LSB dynamic linking, and you want your app to work across ALL distros and any conceivable future versions, you will have to include every library your application depends on anyway. You never know how some future library update will break backwards compatibility, or how some distro will include some retarded patch to the library that breaks your app. Sure, I guess you could save some RAM by loading the .so's on a distribution-specific basis, but to ensure that your app works, you would have to install EVERY distro and test your app on ALL of them. How does this make the distributions irrelevant? Finally, the speed advantages of dynamic linking are questionable, since you need to manually manage the linking and unlinking of .so's. Static vs. dynamic linking seems very similar to automatic vs. manual memory management. In that case, automatic won.
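
(For the record, the "include every library" approach I mean here looks roughly like shipping your own lib/ directory next to the binary and linking with something like

    gcc -o app app.o -Llib -lfoo -Wl,-rpath,'$ORIGIN/lib'

so the app prefers the .so files you shipped over whatever the distro provides. Treat the flags as a sketch with made-up names; the point is that you are carrying the libraries around either way.)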

By the way, address randomization is not about making buffer overflows harder.

From the Wikipedia page:
Address space randomization hinders some types of security attack by preventing an attacker being able to easily predict target addresses. For example attackers trying to execute return-to-libc attacks must locate the code to be executed; while other attackers trying to execute shellcode injected on the stack have to first find the stack. In both cases, the related memory addresses are obscured from the attackers; these values have to be guessed, and a mistaken guess is not usually recoverable due to the application crashing.

That seems like a buffer-overflow deterrent to me, especially against attacks that try to defeat non-executable stacks by returning to libc.

NX alterations are about buffer overflows.

The problem is that you can defeat non-executable stacks by returning to libc. ASLR can help by making the attack depend on chance, but it is no panacea.

It really would pay to do your homework before commenting on things you don't understand yet

I think you are the one who needs to do his homework.

Stack injection attacks are not just buffer overflow attacks. Stack injection attacks can be used against all programming languages. It's a form of universal attack.

Okay, I had never heard this specific phrase before, and, apparently, neither had Phrack. After a quick Google search, I found this (PowerPoint).

Stack Injection
* Stack is used for execution housekeeping as well as buffer storage.
* Stack-based buffer must be filled in direction of housekeeping data.
* Must overwrite the housekeeping data.


Hmm... This 'stack injection' sounds like a plain old stack-based buffer overflow attack that overwrites the return address of a function with the address of some shellcode. Remember, buffer overflows can be prevented in languages with automatic bounds checking. Don't believe me? Well, maybe Jon Erickson, the author of Hacking: The Art of Exploitation, can change your mind:

While C's simplicity increases the programmer's control and the efficiency of the resulting program, it can also result in programs that are vulnerable to buffer overflows and memory leaks if the programmer isn't careful. This means that once a variable is allocated memory, there are no built-in safeguards to ensure that the contents of a variable fit into the allocated memory space. If a programmer wants to put ten bytes of data into a buffer that had only been allocated eight bytes of space, that type of action is allowed, even though it will most likely cause the program to crash. This is known as a buffer overrun or overflow, since the extra two bytes of data will overflow and spill out the end of the allocated memory, overwriting whatever happens to come next.
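
To make that concrete, here is the textbook case in miniature (the names are invented for illustration):

    #include <string.h>

    /* copies an attacker-controlled string into 8 bytes of stack storage */
    void greet(const char *name) {
        char buf[8];
        strcpy(buf, name);   /* no length check: a long name overruns buf */
    }                        /* ...and tramples the saved return address  */

    int main(int argc, char **argv) {
        if (argc > 1)
            greet(argv[1]);
        return 0;
    }

A language with automatic bounds checking refuses the oversized copy; C just keeps writing.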

Using some of the whole-program bug-finding software out there, most buffer overflow flaws can be removed from applications.

http://www.coverity.com/ provides a service, free of charge for open source developers, that finds all known forms of buffer overflow errors you can create, plus a lot more security flaws.


Sure, Coverity can help, but I bet it can't find everything. Ultimately, programmers will have to rely on SOUND DESIGN PRINCIPLES to write bug-free software. Unfortunately, many of these sound-design principles conflict with the 'open-source ethos.'

Basically, if your program is suffering from buffer overflow errors, you are lazy, incompetent, or a cheap prick not protecting your customers.

Yet they still happen, and they are the second most common security flaw. Why does software suffer from security flaws? Why does Wine suffer from regressions? Because real life doesn't work like that. Those lazy, incompetent programmers could just use a higher-level language and avoid the problems while significantly increasing their productivity. That sounds like the best way to me.

oiaohm said...

If you use the LSB dynamic linker and you ship all the libs your application uses, your application will not interact with the system .so files.

The LSB dynamic linker was created exactly for the problem you are talking about: a distribution breaking something.

As for the future-versions bit, that comes down to the selection of dependencies. This is another place where the LSB is useful; the hard work is mostly done for you. They have a list covering a large set of cross-Linux stable function calls.

With static linking you are still shipping a large section of the .so files anyhow. If you are using gcc, the file will not be small, due to the fact that it doesn't do link-time optimization.

The problem here is that you are dealing with a person who audits security. Stacks are part of CPU design, not just something for an attacker with no local information about the hardware to poke at. Phrack does not cover the forms I do.

Stack injection can be done many ways. Some are hardware attacks, some are debugging attacks, and so on. A buffer overflow is not the only way to do it. Vista and XP security can both be overridden from a limited user account using GPU instructions for a direct memory write, because critical structs are in fixed locations in memory. No buffer overflow required. Never underestimate the threat of a hardware attack.

To keep hardware attacks against the OS from being dead simple, ASLR is required for all key sections of the OS. Microsoft's idea of only activating it piecemeal leaves their OS a sitting duck. Any DMA-capable device with a flaw that lets the attacker write anywhere can be used in a hardware attack. In the case of Firewire this was: link to an external machine and fire off the attack from there. Against some of these flaws, virus scanners are basically paperweights; they never get to scan the code doing the attack. In theory an attacker could create a custom card to do it. Address randomization makes these attacks more complex.

Of course, the ideal is encrypted and checksummed memory on top of ASLR to shut down hardware attacks.

You are working with a flawed model of the attacker's world. They don't just attack through software flaws. Hardware flaws are also fair game.

http://www.coverity.com/ does bounds checking.

Gcc does have build-time bounds checking; it fails due to a flaw in gcc's design, i.e. no link-time optimization and no examining of .so contents alongside link-time optimization. LLVM, a blood relation of gcc, does a way better job. Gcc also has runtime bounds checking for C that can be enabled.
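
(As a sketch of the kind of switches meant here, the usual hardening options of that era were along the lines of

    gcc -O2 -fstack-protector-all -D_FORTIFY_SOURCE=2 -o app app.c

which is not full bounds checking, but does turn many overflows into a clean abort at runtime.)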

Please take a closer look: in the last 12 months the number of buffer overflow exploits has been a lot lower in the open source world.

The projects with the lowest numbers of buffer overflows and other security flaws have maintainers using the correct tools.

Some Linux distributions have also had no buffer overflow security exploits in the last 10 years, because they have been using compilers correctly. Yes, they responded to attackers' methods. For them, every buffer overflow event equals the program getting terminated, with no chance to exploit.

Basically, if your program is built correctly, buffer overflow exploits should never work. If you have done quality control on your program, buffer overflows should not be there. That is two chances to stop them.

The C language itself does not define safeguards against flaws, but a lot of C compilers do provide them. Using a higher-level language than C really does not make a difference if you use the compiler correctly. Yes, a lot of compiler makers have read the art-of-hacking books and altered their compilers to fix the issue.

Now, this is one critical reason why binary applications need to get out there. A lot of people are incompetent with their compiler, so they don't protect themselves against flaws.

Memory allocation is another funny one. You really need to find the Portland Group compiler comparisons. Not only does it weed out security flaws in C, but it can also take a single-threaded C program and turn it into a multi-threaded one. .NET and a lot of the other so-called safe languages cannot do that effectively either. Memory effectiveness is also extremely high. It is funny when OOP done in C, which should be memory-bloated compared to C++, ends up a smaller executable in memory and on disk, due to the Portland Group compiler solving out everything it could.

The quality of your build tools is key. Defects in the processing stages of a compiler equal defects in its security flaw detection. Even runtime compilers like Java and .NET are not above this.

You are aware that most Wine regressions are not caused by coding errors. Stub functions being replaced with real working implementations are the major cause, since once the function works, the application goes and tries to use other functions that don't work, and it goes splat. Wine does have a very complete test suite for implemented functions.

Sound design principles, and the lack thereof, are not limited to the open source world.

Any clue how a person in the open source world gets the title "Benevolent Dictator for Life"? These are people who are known for their sound design.

MS left a network back door in their network stack for 12 years, only closing it this year. The flaw was documented 7 years ago. That flaw allowed complete control of the OS. Guess what: it was not a buffer overflow. Some of the most dangerous flaws have nothing to do with buffer overflows.

If you had read the Coverity white paper on application security: closed source does not protect you. The flaw rate, comparing closed source and open source, turned out on average to be worse in closed source. Even the time to fix was slower in closed source.

It also pays to read the licenses of your closed-source software. You have no legal comeback against them and no way to fix it yourself.

Exploits based on code design have also been created against .NET applications and other languages that are so-called buffer overflow free. Note the words 'so-called'. They are only as buffer overflow free as their runtime.

Even worse, .NET and PHP applications have the highest number of security flaws per application on average. Yes, maybe they are buffer overflow free, but the coders are sloppier because they think they can get away with it.

Apparently you have been hearing myths. There is no magical language that solves all security flaws. It's even worse if the coder thinks they are safe when they are not.

Really, the first thing you need to do is write a list of everything you think is an open source problem, then double-check it.

Scary as it sounds, very few things are unique to the open source world. Most so-called defects people point at in open source are whole-software-industry defects. In lots of cases they are way worse in closed source, due to the lack of risk of the programmer being publicly blackballed for incompetence.

Anonymous said...

http://www.reghardware.co.uk/2008/11/13/arm_ubuntu_netbook_tie_in/

Someone should get ARM a clue. Nobody wants Linux.


http://www.theregister.co.uk/2008/11/14/sun_pimpin_openoffice/

I can hear the lusers scream! Comments are required reading.

Anonymous said...

http://www.theregister.co.uk/2008/10/20/linuxsb_4_beta/

WTF!? Still beta!? When does this famous solve-all-problems 4.1 come out? Another 10 yrs?

Anti-Tux said...

Of course, the ideal is encrypted and checksummed memory on top of ASLR to shut down hardware attacks.

According to Wikipedia, there are several ways to deal with the Firewire issue: map virtual memory to firewire physical memory, disable the OHCI interface, disable or remove Firewire.

You are working with a flawed model of the attacker's world. They don't just attack through software flaws. Hardware flaws are also fair game.

To hackers, everything is fair game. If an attacker can gain physical access to hardware, then NO operating system will help you. The problem with hardware attacks is that they require physical access to the device. If you have to gain physical access to a device, it is probably easier to Socially Engineer information out of someone.

You really need to find the Portland Group compiler comparisons. Not only does it weed out security flaws in C, but it can also take a single-threaded C program and turn it into a multi-threaded one.

Sweet. Is there an open source equivalent? I do not have $300 to blow on a compiler just to test it.

oiaohm said...

LSB 4.1 is 12 months off at most. If everything goes well, 6 months.

4.0 is due at the end of this month. So far everything is on the timetable. If you look at LSB releases, 6 to 12 months is the normal cycle.

The Firewire one: disabling the driver is nice and painless. Having to do the same with the GPU is painful with affected cards.

It gets a little more complex than that: some BIOSes activate the Firewire so they can boot from it. If your machine has the defect in there, fixing the BIOS or a new machine are the only options, since once the memory access is activated it can block turning it off. Disabling the OHCI interface and the Firewire interface, if you can, should be done. Something to be aware of when handling machines with the defective Firewire.

Not all hardware attacks require physical access. Some, like the GPU attack, only require the ability to talk to the GPU of affected cards. The same goes for Firewire: some devices can have their firmware altered to do the attack. Yep, defective Firewire plus a device that can have its firmware altered to do the attack, and if an attacker finds it, you are done without some other level of protection.

Another myth is that a hardware exploit always needs direct access. It all depends on what is exploitable. I hear this myth taught over and over again. When I do a demo of a hardware exploit performed remotely, it kind of wakes them up. A chain of flawed devices leading to complete control of the machine, taken remotely, without ever using an existing security flaw in the running OS; instead you inject your own.

Hardware exploits based on CPU flaws can also be major headaches.

The Portland Group compiler is one of the best of the breed of compilers.

They only get away with charging for it because there is nothing out there equal to it. LLVM is about the best in the open source world; multi-threading has to be done manually with it.

This is my point: I am prepared to pay for security and performance where required. There are a lot of companies out there that don't.

Before you start complaining about gcc: MSVC is not anywhere near the Portland Group compiler either. MS gives it away for free because it's basically second-rate trash compared to the good commercial compilers out there.

Yes, attacking gcc for being a poor excuse for a compiler is kind of fair, if you attack the correct points.

It is only partly true that you get what you pay for. Some things are worth paying for.

From a security point of view, Windows is worth -2000 dollars at best.

Some of the hardened Linuxes, BSDs, and Solarises are worth a few thousand from a security point of view.

Be aware there are crap distributions of Linux and BSD out there as well from a security point of view. Totally Insecure Linux (yes, a special distribution created to show how far a person could screw up Linux security if they tried) is about -100,000; it even allows remote login as root without a password. But you would have to be nuts to run that one on a production system.

People really don't judge the quality of what they are getting very well. Users reward interface over security, then wonder why their OS's security sucks.

PS: ARM processors are found inside Dell laptops. The reason: 4 times the battery life running an ARM CPU compared to an x86 of the same speed.

Anonymous said...

Having done multi-day upgrades and hours-long program installs a couple of times, I have come to the conclusion that installing from source is a fool's game.

Unless you're a geek who can't get enough of that sort of thing, or you can't find a binary and absolutely must have the program, I just don't see the benefits outweighing the cost in time and aggravation.

I have other things to do in life rather than wasting time lewdly inverting the master/slave relationship between administrator and computer.

Can't comment on the rest. I'm a user, not a code walloper.

thepld said...

" Having done multi-day upgrades and hours-long program installs a couple times I have come to the conclusion that installing from source is a fools game. "

Amen. Gentoo is the distro that really disillusioned me with Linux to begin with. Nothing like dealing with broken packages that can hose your entire system because a maintainer decided to mark something as stable without testing it first.

Anonymous said...

"WTF!? Still beta!? When does this famous solve-all-problems 4.1 come out? Another 10 yrs?"

They would have started from scratch 3 years later, because it would have been "deprecated".

Their mommies would have kicked them out of home because they need more room in the basement :P

Anti-Tux said...

The Firewire one: disabling the driver is nice and painless. Having to do the same with the GPU is painful with affected cards.

Not on a server it's not.

Not all hardware attacks require physical access. Some, like the GPU attack, only require the ability to talk to the GPU of affected cards.

Sorry, I forgot about your little GPU exploits. What I should have said was that hardware attacks require local access (i.e. you have to be able to run a program on a machine). This means that you either have to access the machine physically or you have to convince someone (through SEing) to run your exploit on the machine. In the first case, it is no different than any software privilege escalation exploit. In the latter case, it is no different than tricking them into running Sub7 or BO2K (or any of the commercial Remote Management Software that does not get flagged as malware for legal reasons). Sure, you can still exploit flaws in network cards, but how many of those are there? They do not have to be programmable (with the exception of those graphics-network combo cards for running remote 3D X11 applications), so that cuts down a major source of complexity.

Ultimately, a lot of these problems could be solved by requiring better drivers. Microsoft has done so by requiring signed drivers in Vista.

oiaohm said...

Running a Java or .NET application from a web site that uses the GPU is enough to do the GPU exploit. On affected systems the user doesn't even have to install anything.

Local access is too simple to get for exploiting hardware flaws. Hardware exploits make stuff like Java and .NET dangerous.

Signing is not a solution when MS signs rootkits because they provide game copy protection. For high-security systems, a means to do a code audit is required. So really the true solution is open source drivers if you want security. Black boxes that cannot be audited, so that you have to operate on trust, are not acceptable.

I really should describe how to prevent buffer overflows, so you can see how stupid this is.

As part of their memory management, most OSes use copy-on-write to save RAM. A minor extension to this kills buffer overflow exploits off.

First you change it from copy-on-write to a read-copy-update setup. So when you go to write to a block of memory, it gets copied, your write is applied to the copy, then it gets updated back into RAM.

The final code alteration is to add an audit stage between the copy and update stages, using information the compiler tagged in.

No more buffer overflows forming exploitable events. I cannot patent this because it was first documented in 1978.

C and C++ allow operating without rules, which is handy for embedded work, yet people forget the OS can define the rules. The same systems are used in typed asm: http://www.cs.cornell.edu/talc/ .

So yes, you can even write code in a slightly altered form of asm and not suffer from buffer overflows either.

Call it the myth of buffer overflows. As long as people believe it, OSes don't have to remove them. Removing them does come at a minor performance cost.

The NX extension added the one missing bit the x86 protected memory system lacked. It already had read-only, write-only, and blocked-from-access options, plus raising exceptions, which is really enough to kill off all buffer overflows.

Working buffer overflows on any x86 CPU from the 286 onward are a sign of poor design.

ASLR really has nothing to do with fixing buffer overflows. You don't need it to fix buffer overflows.

ASLR is injection prevention. Injection presumes you more often than not have access to the machine.

Injections are a completely different class of attack from buffer overflows. Injections are more often than not privilege escalation exploits and direct code insertion attacks, like using debugging in software mode to alter the code of an application. Of course you can prevent users from doing the debugging alteration, but if hardware allows them to do the same thing, ASLR slows them down. That is what ASLR is trying to make hard to inject.

As was documented, ASLR on 32-bit machines is in most cases completely ineffective against buffer overflow exploits. On top of that, it was only minorly effective against buffer overflows on 64-bit machines. There are far better solutions.

For software-based injection, ASLR makes no difference if the user is using the OS debugging systems.

When it comes to what ASLR was built for, hardware attacks, it increases the complexity of the attack. This includes attacks from the GPU, Firewire, CPU mode breaking... There is a long list of exploitable hardware combinations.

Truly preventing this class really requires checksums and encryption on memory. Without CPU support these will kill machine performance.

So next time you hear someone saying that ASLR is about preventing buffer overflow attacks, they are fatally wrong.

Yes, there are exploitable network cards out there, and they are not that rare. One fault in network cards has not been fixed in years: enable boot-from-network, and most motherboards' network boot systems have no way of validating the image they just downloaded. The first dhcpd to answer basically gets to set the software it runs. Also be aware lots of motherboards ship with that enabled by default.

There are a lot of really bad security exploits out there hidden in hardware, some of which we cannot do much against. ASLR is about the ones we can. So yes, the computer sitting in front of most people does have hardware exploits.

shevy said...

Actually, GoboLinux solves this problem much better than the LSB/FHS ever will.

"If you were looking for applications, couldn't you just use /usr/bin/env? I am sure that is present on any Linux distro worth mentioning. If you were looking for libraries, then maybe you should statically compile your program."

/usr/bin/env on GoboLinux does not "exist" as such (well, actually it does, but bear with me for now); rather, it is a symlink to the real location of env, which lives inside an AppDir-like directory: /Programs/CoreUtils/6.7/bin/env

This model works for all the software, it works a lot better than the old FHS way, and NO BIG DISTRIBUTION HAS EVEN ATTEMPTED TO USE IT.

One should ask why. It is a technically better way to handle your computer, and it gives many advantages compared to the default fuck-up model of the FHS, but it will not be considered by the big distributions.

Personally I only chuckle whenever I read people who think that the LSB will solve things.

It seems they are all trapped in an ancient mindset.

oiaohm said...

GoboLinux's /Programs is basically the same design as the LSB /opt directory.

For some reason GoboLinux users think the idea they copied is such a new idea. The LSB /opt directory existed before their /Programs directory.

So don't make me laugh. Symbolic/hard linking to form the FHS is permitted by the LSB, since nowhere does the LSB say the FHS has to be real files.

The FHS is required in some form as part of POSIX compatibility, as GoboLinux found out the hard way. There are many programs out there that depend on its existence. One of the early forms of GoboLinux tried not having an FHS. If they had talked to the LSB first, they would never have had to find out the hard way.

Ancient mindset? Sorry, the LSB is ancient compared to the GoboLinux copy. Yet with age comes wisdom.

Early on, the LSB tried to force ideas on distributions and learned a hard lesson: basically, you cannot. The forcing was trying to get all distributions to use RPM, which effectively lost it over half of its members. Damage control has been done ever since; it is the biggest mistake in the complete history of the Linux Standard Base.

The LSB cannot directly force distributions to do anything; that was what was learnt. So since then it has been working with the source projects to work toward a merge between the distributions without them knowing.

Now the final day of over 8 years of work is almost here: distributions being close enough, even without knowing it, to allow one binary to run on all distributions.

If GoboLinux wanted to be LSB certified, it could be.

GoboLinux is failing to sell its design; that is not the LSB's problem. It is a simple lack of good salespeople. Complaining about the LSB when you don't understand it kind of does turn other distributions off.

Of course, LSB-signed-up distributions are not going to touch your design if you don't show it passing LSB certification, when there is nothing in the GoboLinux design that is really stopping that. It's really all in your court.

Anonymous said...

"LSB cannot directly force distrobutions to do anything was what was learnt"

That's the key point. Don't you get that it is the lack of vision and control that is killing Linux?

oiaohm said...

Note the words directly force. I said nothing about indirect force.

Forms of indirect force include working with KDE and Gnome to implement the features the LSB wants as long-term standards.

Yes, there is vision. Yes, there is control. Neither is truly lacking. A lack of both would have seen the LSB dead before now.

The Linux Standard Base made a lot of mistakes but has truly learnt how to do it.

Just due to the methods that have to be used, it kind of cannot turn on a dime and is not exactly fast.

Yes, it would have been faster without the overhead of having to build momentum. It also would have been faster if it did not have to use progressive stacking.

Some of it is that you don't want to scare distributions off either. So if, let's say, 7 years ago the LSB had just jumped out with a stand-alone system, we would have got no support out of distributions and would have basically died.

Now we set up a stand-alone system in stages, with less spooking of the distributions that support the LSB.

Slow progress is still progress. That is the hard part for most of you to get.

LSB 1.0 used system-provided .so files.
LSB 2.0 got its first own dynamic loader in, under the rationale that distributions could provide correct .so files, independent of their main distribution, so LSB applications worked.
LSB 3.0 got applications independent of the system .so files.
LSB 4.0 gets a linked-in dynamic loader.
LSB 4.1 adds an installer to match.

Was this outcome all planned from the start? No, because we hoped supporting distributions would always ship the LSB dynamic loader. They did not, so LSB 4.0 is kind of revenge. Indirect pressure again: if distributions don't do what is required, the LSB gives it to the ISVs.

LSB 3.2 with printer drivers was also underhanded. Since the parts needed could be shipped independently, the printer driver development kit was developed to do exactly that. So LSB 3.2 printer drivers work on LSB 3.0 distributions without printer driver support.

Now, if between now and 4.1 the distributions produce something that meets ISV needs (i.e. where the LSB wants to go), they can change the path.

The same has been going on with the graphical environment.

LSB 1.0: console-only API. Looks fairly harmless.
LSB 2.0: basic X11 support.
LSB 3.0: the GTK toolkit and Qt appear.
LSB 3.2: a means to add start menu entries and a few other things.
LSB 4.0: improvements.

Yes, faster would have been better, yet the same kind of progress is going to keep going. Some of these APIs lacked good enough test cases and so on.

Most of the hard work is done.

The LSB's means to change things faster comes with the upcoming alterations.

Yes, it's been the tortoise and the hare, except in this case trying to be the hare means you end up as roadkill for not looking both ways and not seeing the tunnel under the road that avoids the traffic.

There is really no need to directly force the distributions to do anything in order to control the future of Linux.

The LSB being neutral also has the advantage that if a distribution bloodbath breaks out, it can just sit back, eat popcorn, and wait for the winners.

That was the other important lesson learnt: you cannot control the future of Linux if you give any bias to a particular distribution or group of distributions over any others.

Most non-distribution open source projects, like KDE, Gnome, OpenOffice, X.org, the Linux kernel..., really don't care about distributions either. Those projects are truly setting the future of Linux.

Control of the true leaders controls the true outcomes.

Anonymous said...

Yes, there is vision. Yes, there is control.

You just want to believe that. Open your eyes. Be smarter than the rest of the flock and don't waste your life.

Anonymous said...

don't waste your life.

I think that you take linux too seriously. It's just a tool, nothing more.

I mostly agree with oiaohm: it would be ideal if every distribution were based on a standard one, so that no effort is wasted. This is the main issue holding Linux back, in my opinion. I just hope that the LSB will fix things in the long term. In the meantime, the openSUSE Build Service looks like a good idea.

Anonymous said...

I think that you take linux too seriously. It's just a tool, nothing more.

Of course. That's precisely the reason why I don't waste time trying to fix a broken tool, when tools that are not broken already exist.

Try to understand: many of us have been around for many years. We've all heard the promises. We've all read the hype. The fact is, change is not coming. The Linux community knows how to blame others, but doesn't know how to change. It doesn't even know it should change.

Anonymous said...

That's precisely the reason why I don't waste time trying to fix a broken tool, when tools that are not broken already exist.

I initially chose linux because I like tinkering with it. I still do. I also know what parts are under development and what parts are stable, so as to avoid any nasty surprises.

Try to understand: many of us have been around for many years. We've all heard the promises. We've all read the hype.

I don't see what this has to do with anything. I like linux as it is at the moment, I expect some things regarding driver and ISV support to be fixed shortly to make other people's life easier, and linux keeps gaining marketshare. What's the problem?

oiaohm said...

The problem is I am smarter than a lot of the people looking at what is going on.

The best way of thinking about Linux/open source:

Distributions are like penguins sitting on a large iceberg of open source.

The iceberg is controlling where they are truly going. Yet a lot of the penguins have come to believe that they are important and dress themselves differently from the rest.

Only the penguins working on making the iceberg better for all, by altering the iceberg, are having any real effect on where it is going.

The LSB targets the iceberg more than the distributions. But in the process of working on the iceberg, it has to be careful not to be killed by the penguins.

Soon the days of the balancing act between the will of the distributions and what the LSB wants to do can end.

The next problem is that the Linux distribution model was planned to be a war: survival of the fittest. The problem is we have had peace. The LSB has been required because that did not happen.

Novell attacking Red Hat recently is what we need more of.

We've all heard the promises. We've all read the hype. The fact is, change is not coming. The Linux community knows how to blame others, but doesn't know how to change. It doesn't even know it should change.

You have not been working at the front edge. Most of those hype events are where the lessons of the LSB come from.

Like in 2002, when it tried to force the use of RPM on everyone. The media took that and ran with it, saying the Linux world was going to change massively. Yes, an action that was meant to save Linux almost completely destroyed the LSB. A lot of the promises you talk of were times of pain for the LSB.

A lack of knowledge of how to do it was the problem with the LSB. No one had done it before, so yes, screw-ups happened. People got promised stuff because people kept thinking this would work like Microsoft changing something and the distributions just following. Simple fact: they don't.

I still hear people saying you should just apply force. That will not work at all.

Distributions don't know how they should change, or are unwilling to change. This is not true for the projects behind them. KDE 4.0 being pushed out to users too soon by distributions was partly to undermine what it is really up to. A lot of Linux distributions want the KDE 4.x line to fail. Universal applications with no platform dependence kind of undermine their control.

The media are sheep. Users have basically been used as sledgehammers against the KDE 4.x line because it's not what the distributions want.

OK, KDE 4.0 is not feature-rich yet. So what? It's not like the KDE 4.x developers said it should be used in general usage. Is where KDE 4.0 is going required? Yes. What should have been a celebration of an open source project doing the right thing has become a never-ending attack. It's not like doing the right thing is painless all the time.

Every time a group does something to improve the future of open source, they have to watch their backs against the Linux distributions. Open source projects don't have that much of a problem with the distributions of other OSes; if something is marked developers-only on them, that is where it stays.

There is a failure to understand that distributions are sometimes the best friend of open source projects and sometimes the worst enemy. The guidance of distributions is rarely about the greatest good for all.

Dbus is the next core tech after packaging. I will lay out the next tech tree.

Dbus leads to:
PolicyKit, a security system independent of distributions that allows system configuration alterations in a highly secure way from normal user applications. The end of the need for the root account; it also gets rid of all those annoying sudo boxes.

Akonadi, universal PIM information storage and an interface with server plugins. Yes, the means to change between Evolution and Kontact and any other supported client in real time. Even better, use both side by side without causing a stuff-up.

MPRIS, universal media player control through dbus.

Telepathy, a shared IM and VoIP interface, so two programs trying to use them don't end up logging in twice and both programs still function.
OK, there is a lot more on the second level.

A lot of application conflicts get killed off. Distributions will no longer be the only thing in town providing a system configuration system. Yes, the world is kind of going to go upside down.

The good thing about Dbus is that once it's there, a good section of its second level of alterations doesn't need distribution support. If the LSB could use brute force, Dbus version 1.0 would be in all distributions in 4.0. But the LSB does not have that power. So we have to do what we have done in the past: get KDE, Gnome, and others using it, so that if the distributions want to use the latest forms they have to include it.

Indirect force. Slow, but effective long term.

The issue here is the foolish idea that there is not a plan, that the LSB and the Freedesktop project don't know where they are heading. They do. There is a plan. It ends in distributions not being important. Then they will have to live with their true status.

oiaohm said...

PS: The LSB/Freedesktop plan B for DBUS is to embed it in the Linux kernel. We currently hope plan A works.

Anti-Tux said...

This is not true for the projects behind them. KDE 4.0 being pushed out to users too soon by distributions was partly to undermine what it is really up to. A lot of Linux distributions want the KDE 4.x line to fail. Universal applications with no platform dependence kind of undermine their control.

Ooooh! It's all a conspiracy against KDE! Of course, all the apps that cared about platform independence probably switched to Java long ago, since it's "Write Once, Run Anywhere".

OK, KDE 4.0 is not feature-rich yet. So what? It's not like the KDE 4.x developers said it should be used in general usage.

They gave it a 4.0, didn't they? Did they really think writing a bunch of posts on blogs nobody reads would make up for that? If they had really wanted to issue a developer preview release, they should have released something with a name that made their intentions clear: KDE 4 Developer Preview or something. KDE is not a library; it is a desktop environment. People do not use it because it implements function foo() for use in application bar. People use it to do desktop things: launch graphical programs, browse the computer's files graphically, look pretty, etc. When a major release of a desktop system comes along, people WILL want it unless it is made explicitly clear it is an alpha or a beta. The fact that you can't seem to understand this makes me think that the whole freetard movement is doomed.

Is where KDE 4.0 is going required? Yes.

Yes, because it is not like the GUI has changed much in 20 years, so obviously we need a new implementation every 3 or 4.

What should have been a celebration of an open source project doing the right thing has become a never-ending attack.

Because rotating icons and a new widget engine are so necessary that we have to sacrifice binary/backwards compatibility. I mean, it is not like they could have just implemented a new widget engine on top of KDE 3.5, nosiree!

Anonymous said...

I mean, it is not like they could have just implemented a new widget engine on top of KDE 3.5, nosiree!

Qt 3 lacked the widgets-on-canvas ability, which is implemented in Qt 4.x. A lot of the KDE4 criticism is unwarranted. The fact is that KDE3 had lots of problems that were solved with the transition to Qt4; it was also an opportunity to rewrite some of the KDE core to fix long-standing issues. The main problem with KDE4 is that it was pushed by the distributions without proper warnings to the end users. I understand that this was done to avoid supporting both KDE3 and 4 for a long time; however, it caused lots of bitterness and aggravation among users, who were basically beta-testing it without their knowledge.

oiaohm said...

KDE 4.0 Developer Preview

was the full name of KDE 4.0.

Anonymous said...

http://www.kde.org/announcements/4.0/

Now be so kind and tell me where the official announcement states that KDE 4.0 is a developer preview.