GC solves portability issues because programs written in garbage-collected languages such as Java, C#, and Python are no longer compiled for any specific processor. By comparison, a C/C++ executable is just a blob of processor-specific code containing no information about the functions and other metadata inside it.
If all the code written for the Macintosh had been written in a GC programming language, it would have been zero work for Apple to switch to Intel processors because every program would just work!
Apple’s second kernel wasn’t built from scratch, but is based on Berkeley Software Distribution (BSD) Unix code. This code is a lot like Linux, but with a smaller development community and a non-copyleft license agreement. That Apple is depending on a smaller free kernel community, and yet doing just fine, does say something about free software’s ability to deliver quality products. The BSD kernel is certainly much better than the one Apple threw away after 20 years of investment!
Unfortunately, in choosing this software, Apple gave life support to a group who should have folded their code and resources into Linux. The free software community can withstand such inefficiency because it is an army of millions, but, from a global perspective, this choice by Apple slowed progress.
When I visit coffee shops, I increasingly notice students and computer geeks purchasing Macs. Students have limited budgets and so should gravitate towards free software. If Apple doesn't support free software, their position in the educational market is threatened.
Many computer geeks buy a Mac because of its Unix foundation.
In a terminal window on both the Mac and Linux, you type “ps -a” to see the list of processes.
(Windows doesn't support the Unix command-line tools.)
Apple has good Unix compatibility only because their programmers never took it out while doing their work. It was never a goal of Mac OS X to appeal to geeks — Apple just got lucky.
After having been a long-time Windows user, and then a 100% Linux user for three years, I tried out Mac OS X for a couple of days. Here are some impressions:
● Mac OS X has more code than ever before, and a lot of it is based on free code, but it doesn't have a repository with thousands of applications like Linux does. There are several third-party efforts to provide this feature, but none are blessed or supported by Apple. The Mac comes with iPhoto for free, but they really want me to buy Aperture for $159, which they tell me just added 100 new features! Apple ships a new OS every year, but you don't get free upgrades — it costs $140 even to upgrade from OS X 10.4 to 10.5.
● Many of the Mac's UI details, like how to maximize windows and the shortcut keys, are dissimilar to Windows. Linux, by contrast, feels much more natural to a Windows user. Every time you double-click on a picture, it loads the Preview application, which stays around even after the window displaying the picture is closed. How about just creating a window, putting the picture in that window, and having it all disappear when I close the window? I tried to change the shortcuts to match the Windows keystrokes, but the change didn't take effect in all applications.
● The Mac feels like a lot of disparate pieces bolted together. The desktop widgets code has its own UI, and it doesn't integrate well into the OS desktop. Spaces is a clone of an old Unix feature, and the Mac doesn't implement it as well as Linux does. (For example, there is no easily discoverable way to move applications between spaces.)
● As mentioned above, the Mac doesn't support as many of the Microsoft standards as Linux does. One of the most obvious is WMA, but it also doesn't ship with any software that reads DOC files, even though OpenOffice.org and other free software are out there.
● It is less customizable. I cannot find a way to keep the computer from going to sleep when the laptop lid is closed. The mouse speed seems too slow, and you can only adjust the amount of acceleration, not the sensitivity. You cannot resize the system menu bar, nor add applets as you can with Linux's Gnome.
So I’m unimpressed. Ubuntu already has the majority of those features (or a close-enough analogue), that guy failed miserably in doing his homework before posting that, and even the things that Ubuntu doesn’t have are Linux/GNOME/KDE/Nautilus/Dolphin deficiencies, not Ubuntu problems. Yes, that loser totally did not do the proper fifty hours of research to make Linux do what he wanted. He just sat down and expected it to function properly, the moron! Plus, all those problems are the fault of the ISVs, not the fault of the distro, whose job it is to take all the various pieces of software and integrate them into a polished, cohesive whole. The freetard is strong in this one!
If you've ever used GNU/Linux, chances are good that you've used bash. Some people hold the belief that using a GUI is faster than using a CLI. These people have obviously never seen someone who uses a shell proficiently. Yes, I have had to use Bash. Yes, I also regret all the time I spent learning to put up with its bullshit.
Or you can write a program in C that traverses the RAM, writes it to a file, and then use a hex editor (Emacs in hex mode, for example M-x hexl-mode) to look through that.
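As a rough illustration of that idea, here is a minimal C sketch that copies the beginning of physical memory into a file. It assumes root privileges and a kernel that exposes /dev/mem; modern kernels built with STRICT_DEVMEM only allow reads from a small whitelisted range (roughly the first megabyte on x86), so treat this as a sketch rather than a general-purpose RAM dumper.

/* Minimal sketch: copy the start of physical memory to a file so it can
 * be browsed in a hex editor.  Assumes root and a kernel that exposes
 * /dev/mem; with STRICT_DEVMEM only a small whitelisted range is readable. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *mem = fopen("/dev/mem", "rb");
    FILE *out = fopen("ramdump.bin", "wb");
    if (mem == NULL || out == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    unsigned char buf[4096];
    size_t n;
    long copied = 0;
    /* Stop after 1 MiB; reads beyond that typically fail with EPERM anyway. */
    while (copied < 1024 * 1024 && (n = fread(buf, 1, sizeof buf, mem)) > 0) {
        fwrite(buf, 1, n, out);
        copied += (long)n;
    }

    fclose(out);
    fclose(mem);
    return EXIT_SUCCESS;
}

Build it with gcc, run it as root, and then open the resulting ramdump.bin with M-x hexl-find-file in Emacs, or with any other hex editor.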
Myth 1: Linux is too difficult for ordinary people to use because it uses only text and requires programming.
The truth: Although Linux was originally designed for those with computer expertise, the situation has changed dramatically in the past several years. Today it has a highly intuitive GUI (graphical user interface) similar to those on the Macintosh and Microsoft Windows and it is as easy to use as those operating systems.
No knowledge of programming is required.
Moreover, once people become familiar with Linux, they rarely want to revert to their previous operating system.
In some ways Linux is actually easier to use than Microsoft Windows.
This is in large part because it is little affected by viruses and other malicious code, and system crashes are rare.
Myth 2: Linux is less secure than Microsoft Windows because the source code is available to anybody.
The truth: Actually, Linux is far more secure (i.e., resistant to viruses, worms and other types of malicious code) than Microsoft Windows. And this is, in large part, a result of the fact that the source code (i.e., the version as originally written by humans using a programming language) is freely available. By allowing everyone access to the source code, programmers and security experts all over the world are able to frequently inspect it to find possible security holes, and patches for any holes are then created as quickly as possible (often within hours).
Myth 3: It is not worth bothering to learn Linux because most companies use Microsoft Windows and thus a knowledge of Windows is desired for most jobs.
The truth: It is true that most companies still use the various Microsoft Windows operating systems. However, it is also true that Linux is being used by more and more businesses, government agencies and other organizations. In fact, the main thing that is preventing its use from growing even faster is the shortage of people who are trained in setting it up and administering it (e.g., system engineers and administrators).
Moreover, people with Linux skills typically get paid substantially more than people with Windows skills.
Myth 4: Linux cannot have much of a future because it is free and thus there is no way for businesses to make money from it.
The truth: This is one of those arguments that sounds good superficially but which is not borne out by the evidence. The reality is that not only are more and more businesses and other organizations finding out that Linux can help reduce the costs of using computers, but also that more and more companies are likewise discovering that Linux can also be a great way to make money. For example, Linux is often bundled together with other software, hardware and consulting services.
Myth 5: Linux and other free software are a form of software piracy because much of the code was copied from other operating systems.
The truth: Linux contains all original source code and definitely does not represent any kind of software piracy.
Rather it is the other way around: much of the most popular commercial software is based on software that was originally developed at the public expense, including at universities such as the University of California at Berkeley (UCB).
Myth 7: There are few application programs available for Linux.
The truth: Actually, there are thousands of application programs already available for Linux, and the number continues to increase.
Myth 8: Linux has poor support because there is no single company behind it, but rather just a bunch of hackers and amateurs.
The truth: Quite the opposite: Linux has excellent support, often much better and faster than that for commercial software.
There is a great deal of information available on the Internet and questions posted to newsgroups are typically answered within a few hours.
Moreover, this support is free and there are no costly service contracts required.
Also to be kept in mind is the fact that many users find that less support is required than for other operating systems
because Linux has relatively few bugs (i.e., errors in the way it was written) and is highly resistant to viruses and other malicious code.
Myth 9: Linux is obsolete because it is mainly just a clone of an operating system that was developed more than 30 years ago.
The truth: It is true that Linux is based on UNIX, which was developed in 1969. However, UNIX and its descendants (referred to as Unix-like operating systems) are regarded by many computer experts as the best (e.g., the most robust and the most flexible) operating systems ever developed.
They have survived more than 30 years of rigorous testing and incremental improvement by the world's foremost computer scientists, whereas other operating systems do not survive for more than a few years, usually because of some combination of technical inferiority and planned obsolescence.
Myth 10: Linux will have a hard time surviving in the long run because it has become fragmented into too many different versions.
The truth: It is a fact that there are numerous distributions (i.e., versions) of Linux that have been developed by various companies, organizations and individuals. However, there is little true fragmentation of Linux into incompatible systems, in large part because all of these versions use the same basic kernels, commands and application programs.
Rather, Linux is just an extremely flexible operating system that can be configured as desired by vendors and users according to the intended applications, users' preferences, etc.
In fact, the various Microsoft Windows operating systems (e.g., Windows 95, ME, NT, CE, 2000, XP and Longhorn), although they superficially resemble each other, are more fragmented than Linux.
Moreover, each of these systems is fragmented into various versions and then further changed by various service packs (i.e., patches which are supplied to users to correct various bugs and security holes).
Myth 11: Linux and other free software cannot compete with commercial software in terms of quality because it is developed by an assorted collection of hackers and amateurs rather than the professional programmers employed by large corporations.
The truth: Linux and other free software have been created and refined by some of the most talented programmers in the world.
Moreover, programmers from some of the largest corporations, including IBM and HP, have contributed, and continue to contribute, to it.
Myth 12: Linux is free at the start, but the total cost of ownership (TCO) is higher than for Microsoft Windows. This has been demonstrated by various studies.
The truth: A major reason (but not the only one) for Linux's rapid growth around the world is that its TCO is substantially lower than that for commercial software. This lower TCO is the result of several factors:
(1) it is free
(2) it is more reliable and robust (i.e., it rarely crashes or causes data loss)
(3) support can be very inexpensive (although costly service contracts are available)
(4) it can operate on older hardware, reducing the need to buy new hardware
(5) there are no forced upgrades
(6) no tedious and costly license-compliance monitoring is required.
A major reason provided for the supposedly higher TCO of Linux is that Linux system administrators are more expensive to hire than persons with expertise in Microsoft products.
Here's the idea: All PC Games should first be built to work with the GNU/Linux Universal Operating System.
The game would simply have an installer that would install GNU/Linux on the host platform and to enable the gamer (sic) to be played on the host. An example of this ... is ... called wubi (Windows-based Ubuntu Installer). The wubi enables users to install GNU/Linux as a program into the Windows OS.
Since GNU/Linux is Universal, this could open up the game to just about any platform because the user would simply use the game installer to install GNU/Linux along with the game to their system.
Running games in this fashion would put an end to the need for PC game makers having to port their games to different host Operating Systems because all games would be built to work in the GNU/Linux Universal Operating System.
Using this type of system would revolutionize the PC gaming industry and broaden the market for the game because it could run on many different types of platforms. Increasing the availability of the games would equate to increased sales of the games.
It's sort of like the example of RAMBUS RAM vs. SDRAM. Since SDRAM was a more open standard than RAMBUS, more hardware manufacturers were able to make SDRAM, and so it became cheaper and more widely used, to the point that it snuffed out RAMBUS altogether.
Another example would be Henry Ford's mentality of making cars more affordable and selling many more cars than when they were only available to the rich.
This method of making games would also help to protect gaming systems from becoming obsolete, which would be beneficial for both the gamer and the game maker.
I like what you are doing. We Linux geeks need a dose of honesty and reality in order to improve. Public humiliation is sometimes effective, but we Linux geeks are good at disregarding the opinions of the ignorant masses.
My favorite Linux issue: the secret hidden error messages that many Linux apps produce (or not).
When some Linux app suddenly disappears from the screen, I normally just utter "fucking Linux" and start it up again. But sometimes I have the temerity to actually go looking for the problem in the numerous error log files. This is usually a waste of time, because one of the following is true:
- There is no message, at least not in any of the places I know where to look, or findable within the time I am willing to spend.
- Something that may be relevant can be found, but the message is incomprehensible (probably a leftover debug trace from a programmer).
- There are hundreds of messages in the log file (with no time stamps) and I give up trying to find anything relevant to my problem.
A window manager like Gnome should pop up a message box anytime some app writes to stderr or gets a segmentation fault. This is apparently too much of a bother for the Gnome geeks to implement. After all, they already know where to look when things go wrong, or they always run their apps from a terminal window so they can see stderr outputs and other debug traces.
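For illustration only, here is a hypothetical sketch of such a wrapper in C. It runs a single program, captures its stderr, and pops up a message through the stock X11 xmessage tool when the program writes to stderr or dies from a signal like SIGSEGV. This is not how Gnome actually launches applications, and xmessage is just a stand-in for a proper dialog.

/* Hypothetical wrapper: run a program, capture its stderr, and pop up a
 * message (using the stock X11 "xmessage" tool as a stand-in for a real
 * dialog) if it printed to stderr or died from a signal such as SIGSEGV.
 * This is only an illustration, not how Gnome launches applications. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return EXIT_FAILURE;
    }

    int pipefd[2];
    if (pipe(pipefd) == -1) {
        perror("pipe");
        return EXIT_FAILURE;
    }

    pid_t pid = fork();
    if (pid == 0) {                       /* child: run the real program */
        close(pipefd[0]);
        dup2(pipefd[1], STDERR_FILENO);   /* route its stderr into the pipe */
        close(pipefd[1]);
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }

    close(pipefd[1]);
    char log[8192] = "";
    size_t used = 0;
    ssize_t n;
    while ((n = read(pipefd[0], log + used, sizeof log - used - 1)) > 0)
        used += (size_t)n;                /* collect whatever it printed */
    close(pipefd[0]);

    int status = 0;
    waitpid(pid, &status, 0);

    if (used > 0 || WIFSIGNALED(status)) {
        char msg[sizeof log + 256];
        snprintf(msg, sizeof msg, "%s: %s\n\n%.*s", argv[1],
                 WIFSIGNALED(status) ? strsignal(WTERMSIG(status))
                                     : "wrote to stderr",
                 (int)used, log);
        /* Put the error in the user's face instead of a hidden log file. */
        execlp("xmessage", "xmessage", "-center", msg, (char *)NULL);
        fprintf(stderr, "%s\n", msg);     /* fallback if xmessage is missing */
    }
    return WIFEXITED(status) ? WEXITSTATUS(status) : EXIT_FAILURE;
}

Compiled to some hypothetical stderrwatch binary, you would launch applications through it and get a pop-up instead of lines silently appended to a hidden log file.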
Here is the (hidden) .xsession-errors file from my current session. I can see why it is hidden and why the Gnome geeks do not want this shit popping up in my face: it is too embarrassing.
/etc/gdm/Xsession: Beginning session setup...
Setting IM through im-switch for locale=en_US.
Start IM through /etc/X11/xinit/xinput.d/all_ALL linked to /etc/X11/xinit/xinput.d/default.
Window manager warning: Failed to read saved session file /home/mico/.config/metacity/sessions/10cd57d1a46f90bd05123554431253037400000057900018.ms: Failed to open file '/home/mico/.config/metacity/sessions/10cd57d1a46f90bd05123554431253037400000057900018.ms': No such file or directory
Failure: Module initalization failed
** (nm-applet:5933): WARNING **: No connections defined
seahorse nautilus module initialized
Initializing nautilus-share extension
Initializing diff-ext
(gnome-panel:5926): Gdk-WARNING **: /build/buildd/gtk+2.0-2.14.4/gdk/x11/gdkdrawable-x11.c:878 drawable is not a pixmap or window
** (nautilus:5927): WARNING **: Unable to add monitor: Not supported
javaldx: Could not find a Java Runtime Environment!
(soffice:6243): Gtk-WARNING **: GtkSpinButton: setting an adjustment with non-zero page size is deprecated
Nautilus-Share-Message: Called "net usershare info" but it failed: 'net usershare' returned error 255: net usershare: cannot open usershare directory /var/lib/samba/usershares. Error No such file or directory
Please ask your system administrator to enable user sharing.
The following subsystems have not been implemented yet or show some limitations:
- 3D acceleration is only implemented on R5xx and RS6xx up to now. Also no XVideo on newer chips (needs the 3D engine for scaling). Still, fullscreen video is working fluently with shadowfb for many users. An experimental 3D bringup tool is now available for testing.
- No TV and Component connector support so far.
- Suspend & Resume isn't completely tested, but works on a variety of hardware. Your mileage may vary. Note that typically you need some BIOS workarounds on the kernel command line, ask your distribution for that.
- No power management yet. Depending on your hardware, the fan might run at full speed. This turned out to be really tricky.
The following paragraphs are from a rant an anonymous contributor sent to me. To comply with his request, I have edited it where I deemed appropriate.
There are two major problems that Linux faces concerning its spread on the desktop:
1.) Applications
2.) Drivers
Neither problem will change in the near future, so 2009 will not be the year of Linux on the desktop. Neither will 2010. But what is the real problem? The real problem is ignored by those who are "in charge". What do I mean by that?
1.) Applications
Desktop users need commercial applications. That's just the way it is. The extra ten percent of features that make an app usable for your average desktop user are the ten percent that every developer hates; those features are hard and boring to develop, and implementing them is just no fun. You need to pay developers to implement them.
Do you really think that something like Photoshop Elements is going to be created by the community? My father, whose hobby is photography, shelled out 70€ for PE. He does not regret it, even though the activation is a PITA. Why? It just works: it works with his camera; he gets results fast; there are a bunch of tutorials and books available, etc.
In Linux we are stuck with Gimp. Sorry, but no cigar! PE-calibre software will NOT be created by the community. It just takes TOO much manpower, TOO much work. No one is coding that in his spare time.
This kind of software will also not be created by an open-source company. There is no business model that would support the effort. Just imagine if Adobe released PE as open source - do you think that people would still shell out 70€ for a boxed version? Nope, people would just copy it. There are some exceptions. For example, Mozilla receives its money from Google, not from its users.
Sun supports OpenOffice, but it still has issues. The spell-checker sucks even in the 3.0 version (at least for German). BTW, you can buy an add-on for OpenOffice: the Duden-Spellchecker. It is closed source and costs approximately 25€. Apparently, Sun bundles it with StarOffice, so if you buy a boxed version of StarOffice, you will have a proper spell-checker.
See? That is another typical example of the difference between open and closed source software. Will a great spell-checker be created by hobbyists in their spare time? No! It is a repetitive, boring task, and coding skill alone does not suffice. You also need language experts. Will they work for free? No! Then who will pay them?
Other examples are nice fonts, video-editing software, audio-production software (Cubase or Logic created as open source by the community? Come on!), handwriting recognition, OCR, home-banking software and so on. For each of the software programs mentioned, there is a half-assed open source clone. None of them are taken seriously by those who really work in the respective field. Can Gimp replace Photoshop/PE? What about Ardour as a Logic/Cubase replacement? Is there an alternative to Adobe Acrobat? I don't think so.
Anyway, there are two conclusions you can draw:
1.) It is simply not true. Gimp rocks, and I have to relearn everything I know, and I am not willing to change, and it is all my fault that I have problems, and FLOSS basically rocks. Anyway, the makers of the distribution have provided me, from their repositories, with every software program I will ever need.
2.) You should try to make it _easy_ for ISVs to target Linux as a platform.
Apparently lots of Linux users choose #1 and write long, screeching blog posts about the benefits of apt-get. Unfortunately, the major players are not listening. If you are an ISV, shipping software for Linux is not worth your time and resources. Either you test your binary against zillions of distributions, like Opera does, or you do not ship a Linux version at all.
Even for open-source developers, this state of affairs sucks. Take for example Anki, which is one of my favorite tools for learning a foreign language (I never said that open-source apps will never work). Anki is basically the effort of one developer. It is a nice application, but it is also not as "big" as a full-blown commercial application (i.e. those that you must shell out 70€ for). Basically, Anki is donationware. Apps like Anki, which were shareware ten years ago, are now usually developed as open-source+donations, and it halfway works. However, these applications are usually neat yet small tools: 7zip, Anki, Miranda, etc.
The developer of Anki does not have the time to test his cross-platform application against zillions of distributions. For Linux, there is only an Ubuntu package. However, this guy develops fast, and there is usually a new version every month (or sometimes every couple of weeks). The _one_ Windows binary works on all versions of Windows. The _one_ MacOS binary mostly works on all major versions of MacOS.
There is no package for the distribution I am currently running, OpenSuse. It is not in the repository, and if a software package is not in the repository, the user is _lost_. What are his options? Should he recompile every time a new version is released (sometimes every two weeks), because make uninstall is known to just work? I tried to alien the Ubuntu package, but it does not work. I also tried to compile the program manually, but the compilation failed because of some weird dependency problems with Qt which I could not understand. In fact, it is easier to ship a piece of software for a Hackintosh than it is to ship it for Linux. Think about that.
This problem will not change. Lusers do not like to shell out money for applications, and they do not like commercial applications in general. Software makers do not ship anything for Linux because they have no clue what they need to ship. Common users do not use Linux because the commercial applications are not present. Therefore, the situation will not change. Regarding LSB, I think we have covered that already.
2.) Drivers
If you are some independent maker of hardware and want your device to run under Linux, you are basically required to open-source your drivers. Any other option will not work for your users. There is no way you could do something (yeah, I know it sounds like a completely weird idea) like shipping a driver CD with your product. You could also go the NVidia route. That might work if someone really wants your hardware to work under Linux and you are well compensated for your efforts. NVidia is an exception.
Pushing people in this manner to open-source their drivers actually works in the server world. If some Fortune 500 is using Linux on their servers, and they buy 1000 servers with Intel mobos, those boards are required to work. The company shells out a lot of money, so there is a real financial incentive for Intel to open-source their drivers if they want to sell their hardware. This works because Linux has a significant market-share in the server-world.
On the desktop, Linux is not even remotely in the position to make these demands. All the freetards, however, act like the unknown maker of your webcam absolutely _needs_ Linux-compatibility for their device to be sold. Do you really think they will open-source their drivers _ever_? The freetards are saying, "I am insignificant, yet the world should adapt to me. I will never adapt to the realities of the world!" That attitude just does not work.
There is still no stable driver-ABI, so the driver situation on the desktop won't change. Linux on the desktop? A joke.
Unfortunately, neither of these problems is a technical issue. They are instead dogmatic issues. This is why Linux will not take off on the desktop. These two problems have existed for 15 years, and if you install Linux on your desktop today, you still face the same problems.
Fifteen years ago winmodems were the problem. Today it is wireless LANs. Fifteen years ago your GDI printer did not work. Today your printer/fax combo does not work. Fifteen years ago you wanted to install the new Netscape 4.x. Nowadays you want to install Firefox 3, but your distro is not shipping new packages for the next six months.
Linux on the desktop is a joke. Nothing more.