# Collection of Interesting Articles on OSS



## Rahim (Jun 23, 2009)

With dwindling threads in the Open Source section, I have decided to post articles from my feeds that I find interesting. I will be adding more articles to this thread.

5 Ways to Decide on a Linux Distribution
Ken Hess's Linux Blog

Prejudices and opinions aside, at some point in your career you'll be asked to select a viable Linux distribution for your corporate network. How will you choose? Will you use the same distribution that you use at home or will you do some research and find something that's corporate-ready? Are you up to the task? Do you know what to look for in a distribution to support a corporate environment?

Here are 5 ways to decide on a Linux distribution for your corporate network.

*1. Commercial Support* - This is a sore subject among some Linux types, since most believe they can solve any foreseeable problem or glitch that happens. When you're dealing with multiple (possibly hundreds of) server systems, sometimes you need help, you need it fast, and you need to have it set up and ready before you need it. Your distribution should be backed by a stable company; a community just won't do when you're faced with a major outage, the clock is ticking, and you don't have time to trawl forums or "google" for an answer.

*2. Multiple Repositories* - A repository is how apt-get, yum, smart and other package tools reach out and grab updates and new software for your distribution. Most distributions do have multiple repositories; however, if one were to count up the number of repositories available worldwide, I think Debian and its derivatives would have the edge. There's also the possibility of creating your own software repository, and I highly recommend the practice. Use wget or some other automated recursive download tool to keep your repository in sync with one of the remote ones.
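A minimal sketch of that wget approach might look like the following. The mirror hostname and local path are hypothetical placeholders, and the script only prints the command it builds (a dry run) rather than hitting the network; in practice many admins prefer dedicated tools like rsync or debmirror for this job.

```shell
#!/bin/sh
# Sketch of keeping a local repository in sync with a remote one via wget.
# The mirror URL and local directory below are hypothetical placeholders.
MIRROR_URL="http://example-mirror.org/debian/dists/stable/"
LOCAL_DIR="/srv/mirror/debian"

# --mirror turns on recursion and timestamping (only changed files re-download);
# --no-parent keeps the crawl inside the chosen directory tree.
CMD="wget --mirror --no-parent --directory-prefix=$LOCAL_DIR $MIRROR_URL"

# Print the command instead of running it; drop the echo to actually sync.
echo "$CMD"
```

Run from cron nightly, a script like this keeps the local mirror current so clients can point their sources at your server instead of the internet.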

*3. Security* - The distribution you choose must also have a dedication to security. It must be backed by a vigilant security team that updates as frequently as necessary to mitigate OS-level or application-level security flaws. Being open source has its disadvantages as well as many advantages. One significant disadvantage is that black-hat hackers have access to the source code as well and may exploit any weaknesses in the software. Ask a lot of questions about security to make sure you're protected.

*4. Usability* - There is really only one way to determine usability for you and your users: download and install. You can't depend on conjecture, marketing or emotion to make your decision. Work with the product. Have your users work with the product. Look at the available administrative tools and put them through their paces. Your distribution should have equally accessible and usable tools in KDE/GNOME and at the command line. Yes, the dreaded command line, since most server systems won't, or shouldn't, have a GUI installed.

*5. Price* - Let's be realistic here; price is important. In fact, it might be the most important aspect when choosing a distribution for corporate adoption. Yes, many distributions are free, but remember the first item in this list: commercial support. Free is great, but unless you have a group of extremely talented people supporting your infrastructure, you'll need that support. Compare prices for your narrowed-down list of distributions and determine whether you can live without some of a more expensive distribution's perks. Don't forget to ask for a volume discount if you're purchasing multiple copies or supporting a large installation. Often discounts are offered up front, but it never hurts to negotiate a better deal, and everything's negotiable.

You might feel strongly about <Insert obscure Linux distribution here>, but is it a good fit for a corporate environment? What about support? What about price? What about security and software updates? Would you be comfortable running your business on it?

My best advice is to go with a distribution that makes everyone happy: You, other administrators, management and the accountants.

Write back and tell me about your personal experiences with selecting a Linux distribution for your corporate environment. Did you choose a single distribution or multiple? Are you happy with your selection?


----------



## Rahim (Jun 24, 2009)

*Does the Linux Desktop Innovate Too Much?*

Bruce Byfield

For the last eighteen months, the GNU/Linux desktop has been in a period of radical innovation. KDE 4 introduced new features and workflows. Mark Shuttleworth launched Ubuntu on a unilateral redesign campaign, starting with notifications. GNOME announced a new desktop that, so far as anyone can tell, will profoundly change the user experience. 

These innovations are likely to continue for at least another couple of release cycles, with upcoming versions of KDE scheduled to put social networking into applications and remote windows on to the desktops of passing computers. 

Yet in the middle of all these experiments, nobody seems to be asking a basic question: Does the average user want any of these things? 

Personally, I love these innovations, every one of them. I'm a tinkerer who likes to play with new things and write about them. Some of these experiments may succeed more than others, and some I consider outright failures, but I don't tire of any of them.  

Their number suggests that the free desktop is in a healthy state and has surpassed proprietary ones, and I'm proud of that. 

However, people who share my enthusiasm for innovation seem to be the minority. Whenever KDE 4 is mentioned in an article online, the comments are sure to include complaints that KDE 3.5 was better. 

Similarly, an article I recently published on GNOME Shell, the basis for the new GNOME desktop, inspired only condemnations of the program, even though its final form at this stage is anybody's guess. 

Admittedly, commenters may not represent general attitudes. We have no way of knowing whether they do. Yet the fact that most of the praises for these innovations come from people who participate in the projects involved seems suggestive. 

Under these circumstances, the free software community needs to consider the pros and cons of these innovations -- not one at a time, but as a whole. 

Is there a compelling argument for innovation? Or has the free desktop reached a point where it satisfies most users and any attempt to change its current state is going to be regarded as an unwarranted intrusion on the average person's activities? 

And, if so, what can be done to improve the situation? 

*The case for change*

On an abstract level, few free software users are likely to find much that is objectionable in the arguments in favor of innovation. 

For example, a year ago, Shuttleworth was reported as saying, "The great task in front of us over the next two years is to lift the experience of the Linux desktop from something that is stable and robust and not so pretty, into something that is art . . . . I see this [need] for free software – beautiful, elegant software. We have to invest in making this desktop beautiful and useful." 

More recently, KDE developer Aaron Seigo defended KDE's upcoming social desktop with similar rhetoric. 

Many online services, Seigo points out, are not free software, and cannot ensure protection of data or privacy -- the implication being that, in contrast, KDE's social desktop suffers from none of these problems. 

He notes, too, that "The innovation essentially stopped at 'things I used to do on paper'. I want to do more than just have an easy place to dump my embarrassing photos of others from last night, keep up a public journal, read an annotated map or exchange small blocks of text with others. I want the network to make my computing life more interesting, more immersive and more useful. The innovation has all but dried up in social networking, however, and what we have is an electronic version of the library and post office. A really freaking cool library and post-office, but that's about it. We can do better than that, can't we?" 

Seigo goes on to say that free software is uniquely positioned to improve on the current standard. As a community, it already understands the concepts of community behind social networking. Nor is it constrained by financial considerations in its quest for innovation. 

Then Seigo paints a utopian vision of the possible future: "I see our computers becoming helpers rather than mildly frustrating tools; I see services becoming a true web of interacting greatness rather than silos with the occasional rickety handmade (and often one-way) rope bridge between them; I see 'social networking' and 'personal rights and freedoms' being mutually supporting at every level." 

Like Shuttleworth, Seigo is invoking the motherhood issues of the community. Both are talking about concerns close to every free software advocate's heart. Reading their rhetoric, you can easily be swept away by its visionary scope, and find yourself nodding excitedly. 

Not only are they talking about realizing your dreams, but they are talking about doing so in the very near future. 

Who could resist? 

*The desktop is not a destination*

The trouble with the rhetoric of innovators is that it exists on an abstract plane, not a practical one. Many of the same people whose hearts beat faster at the rhetoric's promises are likely to behave very differently when they turn from reading to focusing on what they have to do.

2nd Part: Do users need or want anything more?
3rd Part: Time for a reality check


----------



## Rahim (Jul 19, 2009)

Linux Sucks!!!
Posted by tuxxie Friday, July 17, 2009

Linux is gaining momentum and people are starting to switch over to this computer operating system. I have been using GNU/Linux for years and would like to warn you about it. My conscience wouldn't allow me not to speak out about the OS. Linux is a free operating system that anyone can download and use.

Imagine, you don't have to pay a penny to get Linux to run on your computer. I know what you are thinking: is this another of those recession inventions? It will trick you into using it for a while for free and then pop up a window and ask for your credit card... But no. Linux is not like the other operating systems you normally use. It is much more evil. Not only is it free, it is also completely legal to download and distribute to as many people as you want. Now, when was the last time you downloaded something legally? I know, it is impossible to even think about it.

Every time a security update pops up, you can actually download it without the fear that your illegal copy of a program will stop functioning afterwards. Where is the fun in that?! Do you remember the thrill of looking for cracks all over the internet? Many of them came in a combo with a nice virus that gave you something to do for the next few days. How dare Linux take that away from people?

Most of the stuff in Linux works right out of the box. The popular distributions are very easy to install, and you are ready to surf the web, edit pictures, burn CDs/DVDs and much more right away. It will take time until you get used to not dishing out your hard-earned cash for a simple upgrade or a mediocre application. Your free word processor will not expire after 3 months of use, nor save your files in a format that nothing else can open. I miss entering my credit card numbers into the system every other day. I know them by heart, and it pains me that I am unable to utilize this knowledge. All of this because of Linux.

It has thousands of free, quality applications that range from great games to audio/video editing and other professional software. This just makes you feel cheated. However, Linux has been listening to us and trying to fix this problem. With the application called Wine you can still use your expensive Windows software. Once you install it, you will be able to play all your favorite games and run other applications that you paid for so gladly in the past. But that is about the only thing that is fun about Linux. Everything else is so boring, legal and free. Hopefully one day I can get over it. I still haven't figured out what to do with the extra money I saved...

The worst thing of all is that I have lost most of my friends, namely Mike, Chris and Rob from the software store. They used to call me twice a week, ask how I was and tell me about the new software they just got. I shouldn't say that, but they would also offer me discounts if I bought several at the time. However, when they called last time, I told them that I got GNU/Linux. They were outraged. I haven't heard from them since. Linux destroys the most precious of friendships you build throughout the years. I miss you my best friends. Please, call me back sometimes.

GNU/Linux is also very customizable. That means you can change it any way you want. You are not locked in to proprietary software. It is about having freedom and no one company is dictating the rules to you. Install whatever you want, how you want and change things the way you want. No rules and no restrictions. I don't think I can handle so much freedom. I never had to make decisions for myself before.

Linux also looks really cool. If you thought Macs were great, then you will be amazed at Linux. Just install Compiz Fusion and you will get amazing eye candy. This application will give you a 3D cube, wobbly windows, expo, widget layer, 3D windows, shift switcher and many other cool effects that will make your Windows and Mac OS friends jealous. You can install any themes and icon sets you want. The only limit is your creativity. The good thing is that you can make Linux look like Windows or Mac OS, so your friends will still think you are as cool as them.

Please stay away from GNU/Linux because it is a computer revolution. It is slowly changing the computer industry in major ways. Even Google is creating its own operating system based on the Linux kernel. What are they thinking? Linux for the masses? Oh no! How are we going to get used to not paying hundreds of dollars for an OS? We would have to change our way of thinking, and I don't think the majority of people are ready for that. We cannot allow this to happen. I have been using GNU/Linux for years and I still can't get used to such a great operating system. I am happy with it and the great software that I am using. Please take my warning seriously! GNU/Linux is awful and I hope you stay away from it. It is better to pay high prices for an OS that you do not like, where the company dictates what you can or cannot do with it and what software you can install, and treats you like a criminal. If you do not take my warning seriously and install GNU/Linux, I hope you are ready for what is to follow. Don't say I haven't warned you. Think hard about switching to GNU/Linux.


----------



## Absolute0 (Sep 10, 2009)

good articles 

but the forum is suffering from inactivity


----------



## Rahim (Sep 10, 2009)

^I know


----------



## hellknight (Sep 11, 2009)

this forum or should i say this section died when Praka was banned..


----------



## Krow (Sep 11, 2009)

^Yeah agreed 100%. The die hard winboys have achieved their goal it seems. 
But Rahim is trying to save this section it seems. Good work Rahim! 

Well Rahim, since I am not getting much of help for my mag, I'll take these articles.


----------



## Rahim (Sep 11, 2009)

^Yes you can 
Come on guys, the rest of you should contribute something to this OSS section too.
-----------------------------------------
Posted again:
-----------------------------------------
Sexism: Open Source Software's Dirty Little Secret
Bruce Byfield

On September 19th, the GNOME Foundation and the Free Software Foundation will host a mini-summit on how to increase women's participation in the free and open source software (FOSS) communities. The summit is probably an effort to repair relationships between the two foundations after Richard Stallman was pilloried for sexism after his keynote in Gran Canaria a couple of months ago. 

However, regardless of its reasons, the summit represents one of the first official recognitions of an open secret: sexism is systemic in FOSS, and has been for years. 

Of course, that is not what the official mythology says. Officially, the FOSS community is a meritocracy, where characteristics like gender don't matter, and everybody is judged only on their contributions. 

For instance, in describing FOSS culture, Eric Raymond writes: 
"Hackerdom is still predominantly male. However, the percentage of women is clearly higher than the low-single-digit range typical for technical professions, and female hackers are generally respected and dealt with as equals. . . . When asked, hackers often ascribe their culture's gender- and color-blindness to a positive effect of text-only network channels, and this is doubtless a powerful influence. Also, the ties many hackers have to AI research and SF literature may have helped them to develop an idea of personhood that is inclusive rather than exclusive --- after all, if one's imagination readily grants full human rights to future AI programs, robots, dolphins, and extraterrestrial aliens, mere color and gender can't seem very important any more." 
Raymond is not read much any more, but many members of the FOSS community voice almost identical convictions. 

But the figures prove the conventional mythology wrong. True, women continue to be under-represented in computing science and high-tech business in general, not just FOSS. According to Angela Byron's keynote at the Open Web Vancouver conference earlier this year, women compose 28% of those involved in proprietary software, slightly more than half what you would expect from a random distribution. 

Asked to guess what percentage of FOSS developers are women, most people guess a number between 30-45%. A few, either more observant or anticipating a trick question after hearing the proprietary figure, guess 12-16%. The exact figure, though, is even lower: 1.5%. 

In other words, women's participation in FOSS development is over seventeen times lower than it is in proprietary software development. Proprietary software is inferior in so many ways to FOSS that the fact that it is more successful in recruiting women highlights, more than anything else, that FOSS has a problem -- even if you allow for the widest possible margins of error. The figures are a galling reversal of what those of us in the FOSS community prefer to believe. We're the idealistic and progressive ones, we like to believe. 

The situation is slightly better here and there. Byron suggested that Drupal, the project she is mainly involved in, consists of about 12% women. In conversation, Aaron Seigo suggested a similar percentage for KDE. You can also point out individual women who have made their mark in FOSS, such as Stormy Peters of the GNOME Foundation, or Carla Schroder, the editor of Linux Today. Yet such exceptions do not change the overall situation. 

Look at the boards of prominent FOSS projects. No women sit on the Free Software Foundation's board of directors, nor the Linux Foundation's board. KDE e.V's board has one woman, and GNOME's board one. If anything, the number of female maintainers in Debian is probably lower than 1.5%. The figures don't change much, no matter which FOSS organization you look at. 

Or, if you prefer, listen to the horror stories female developers tell about sexist remarks or being asked out for dates. Look at the constant trolls on the mailing lists for female developers. 

Or notice how raising the issue inevitably brings accusations of exaggeration and political correctness. Despite the obviousness of the evidence, many people would prefer to pretend that the problem doesn't exist except in the minds of a few feminist radicals. And, while everyone is in denial, the FOSS community loses female -- and male -- developers when it needs every possible volunteer. 
*A culture of denial*

The problem is not new. People have been discussing it at least since 1998, when Deb Richardson founded LinuxChix to combat it. Moreover, Val Henson's "Howto Encourage Women in Linux" pinpointed the problem, its causes, and the most common reactions to it as long ago as 2002.

However, since then, about the only thing that has changed is that every FOSS conference has a panel discussion on the topic. The community acts as though giving the subject limited recognition is sufficient; everyone can get on with ignoring it. 

Instead, when the subject comes up, everybody goes into a frenzy of rationalization. They may say that women are less technologically oriented than men, or lack the confidence or socialization to participate in the free-for-all discussions in which FOSS is developed. They may say that women lack role models, or are less likely to become obsessively devoted to an idealistic cause. A favorite comment is that FOSS is a meritocracy, and only a few women measure up to the standards needed for contributors (although more might if everyone is just patient). 

Some of these rationalizations may even have some truth to them. Yet the fact that the same problems do not stop proprietary development from having greater female participation shows that they are not enough to explain what is happening. Nor is explaining the problem a substitute for taking action. 

Yet, if anything, the situation is getting worse. The front page of LinuxChix is still active, but its mailing list and chapter pages show little activity in the past few years. Debian Women is much the same, while KDE Women appears entirely inactive. 

Typically, such groups are active for a while, encourage a few women to participate in FOSS projects, then settle down to a core of a few members who keep loosely in touch with each other. Occasionally, activity flares, but the groups are largely isolated in their communities, and do little to affect the overall culture. 

Meanwhile, the culture of denial continues. When obvious examples of sexism, such as Stallman's keynote -- or, much worse, Matt Aimonetti's "Perform like a pr0n star" presentation at a Ruby conference last spring -- provoke outrage, the complaints are too often dismissed. 

Aimonetti's presentation, for example, was defended as part of the edginess of Ruby culture. Similarly, defenders of Stallman claimed that those who took offense were attacking Stallman's character because of his anti-Mono comments. Although these examples are extreme, they show just how far some of the FOSS community will go to pretend that sexism doesn't exist. 
*Looking for solutions*

Part of the reason for the sexism in the culture may be that FOSS arose on the Internet. The anonymity that the Web allows has always encouraged flame bait, and no doubt some of the sexism is simply an extreme example. The aggressive language in FOSS development probably comes from the same source. Nor is there much doubt that online culture was originally male-centric, because historically men tended to get connected sooner than women, and that is what they created. 

Yet none of these origins are insurmountable. What would happen if the women's groups became part of the power structure of projects, instead of being primarily self-referencing sub-groups? Or if women's mentoring programs were established that both brought individual women along and actively recruited them to act in the mainstream of projects? 

Perhaps the most useful step might be a code of conduct with zero tolerance for sexism. A code of conduct ensures that the development of Ubuntu and several other projects is conducted politely and constructively, and seems to have no ill effects. If sexism were outlawed as strongly as rudeness or personal attacks, and people were encouraged to speak out against it, perhaps the atmosphere in FOSS projects would become friendlier to women. 

But all these measures depend on admitting that sexism exists in FOSS. While I don't expect miracles from the upcoming summit, if nothing else perhaps it can start to put an end to the denial. FOSS is such an idealistic movement in other ways, I suspect there must be thousands like me who prefer not to see it disfigured by sexism any longer -- especially when a united effort could cure the problem.


----------



## Rahim (Sep 16, 2009)

*Open Source Should Be Open To All*

By Beth Lynn Eicher and Moose
*Let's talk about elitism.*

Specifically, let's talk about *operating system elitism*. How many of you only run Linux and push everyone you meet to do the same? 

That's elitism. Elitism causes people to think they're better than others, and causes others to feel belittled. 

You don't convince someone that your way is right by telling them they're wrong or stupid. You show them what a difference can be made. 

Not everyone who uses a vendor-OS is doing so because they like it. Sometimes you're stuck with software for your job. And even if they do like their vendor-OS, people are resistant to change. 

Often the way is baby steps. If today you can convince someone to use some Open-Source software package – a document creator, a presentation tool, a web browser, or even a web server – you give yourself the wedge to start pushing for an Open Source operating system. 

Teach the users to learn to love open source by introducing the software, application by application, and they will come to see the light. When you can show a business how much they can save with Open Source, you've made a friend. But when you've shown a single person the difference Open Source can make in his or her life, you've made an ally. 

End the elitism. The vendors who refuse to release open source software are the enemy, not the users. Don't shun users of vendor-OSes. Don't sneer at them, don't talk down to them. Instead, learn what they need, and show them how OSS can help fill that need. 

*Let's talk about BSD.*

It is counter-productive to argue that Linux is better than BSD. BSD is not the enemy of Linux. Elitism is the enemy of Linux. 

The BSD people are fighting for the same thing as the Linux communities. BSD is the ally of Linux. When it comes down to it, Linux and BSD users like the same software. The communities should be working together for their shared cause – Free and Open computing for all. 

Everyone has their preferences. Everyone prefers a flavor of ice cream. It doesn't matter what the flavor is, it's still ice cream. You don't reject your friend because he prefers a flavor you can't stand. 

Instead look at FreeBSD, NetBSD, and OpenBSD as distributions with different goals. 

Let the BSD vs. Linux wars end. Let's see both sides supporting and helping each other toward the common goal. 

*Let's talk about people. *

A frequent subject these days is the lack of certain people within OSS projects. Typically this is about women, who are often disenfranchised in the Open Source communities. Without intending to minimize that, they're not the only people who get lost in the OSS world. 

The problem of the lack of minorities in computing as a whole reaches into the OSS world. While you can't tell someone's ethnicity from a mailing list or IRC channel, what about conferences, user groups, hackathons and other in-person events? Would you want to go to one of these events if you knew nobody else who looks like you would be there? 

Disability is a tricky problem. Are your websites and source code repositories usable by the visually impaired? Is your LUG meeting in a place that's wheelchair accessible? Could you find an ASL interpreter if needed? Often the answer to these questions is, "But nobody ever asked us for this stuff!" If you want disabled people in your community, you need to make it clear they are welcome. 

Elitism causes people not to think of others. It assumes that because YOU are comfortable with the way things are, so is everyone else. Elitism is the reason many women are told, "You can't understand this." But for women you can substitute lots of other people -- minorities, the disabled, and, yes, vendor-OS users, among them. 

*Let's talk about Open Source. *

Let's end the Elitism. It hurts feelings. It hurts the OSS cause. It slows things down. Every time you sneer at someone who isn't a Linux user you are losing the chance to convince them to come to Open Source. Every time you don't help others get involved you are losing the opportunity to have a valuable contributor. Let's let Open Source truly be Open. 

Beth Lynn Eicher is a professional system administrator and is the co-Chair of this year's Ohio LinuxFest. She runs around the country espousing the joys of Linux and other Open Source fun. 

Moose is a disabled unemployed system administrator who is the Speakers Chair and the Diversity in Open Source Workshop coordinator for this year's Ohio LinuxFest. When not playing conference organizer Moose goes to OSS conferences to Preach the Word of OpenAFS.


----------



## Cool G5 (Sep 16, 2009)

Those are some really great articles Rahim. Keep it up!
And don't worry I'm here to read your posts. 

===========================================
*How to Contribute to the Open Source World*
*Article Source*

Open Source is all about contributing one's skill, knowledge and even money if one can. If you have ever thought about contributing to open source but never started for lack of ideas, then reading this article will help you a long way. As usual with Open Source, the possibilities are endless, but here I will list some that I feel will suit most of you.

1) Open Source projects require programmers and bug patchers. If you know a programming language, you can participate in the coding team of a distro or an application. Bug patchers create fixes as and when bugs in an application are reported by users.

2) If coding is not your cup of tea, you can try your hand at writing. Most projects require a user-friendly set of instructions that users can refer to in case of difficulties. You can write how-tos, general documentation, etc., as directed by your project leader. You can even help translate the documentation if you know other languages.

3) If you have a designer's perspective, you can send in your artwork to be used as wallpapers, screensavers, icons, etc. Besides submitting your artwork to the distro's official graphics team, you can also help by posting it on sites like www.kde-look.org or www.gnome-look.org.

4) Most distros have an option for package maintainers. Package maintainers are the folks who compile a package from source code and package it into a binary which less experienced users can install easily. For example, an Ubuntu package maintainer will build packages in .deb format, while a Fedora package maintainer will do it in .rpm. As a package maintainer you will also need to keep updating your specific packages as and when they are updated upstream.
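To get a feel for what a .deb maintainer actually assembles, here is a minimal sketch of the package layout fed to dpkg-deb. The package name, version and payload are entirely hypothetical, and real distro packaging goes through richer tooling (debhelper, build systems, signing) than this bare-bones example shows.

```shell
#!/bin/sh
# Minimal, hypothetical .deb package skeleton: a control file plus a payload.
set -e
PKG=hello-oss_1.0
mkdir -p "$PKG/DEBIAN" "$PKG/usr/bin"

# Every binary package needs a DEBIAN/control file with at least these fields.
cat > "$PKG/DEBIAN/control" <<'EOF'
Package: hello-oss
Version: 1.0
Architecture: all
Maintainer: Example Maintainer <maint@example.org>
Description: Hypothetical example package for illustration
EOF

# The payload: a trivial script that will land in /usr/bin when installed.
printf '#!/bin/sh\necho hello\n' > "$PKG/usr/bin/hello-oss"
chmod 755 "$PKG/usr/bin/hello-oss"

# Build the .deb if dpkg-deb happens to be available on this system.
if command -v dpkg-deb >/dev/null 2>&1; then
    dpkg-deb --build "$PKG"
fi
```

The resulting hello-oss_1.0.deb could then be installed with `dpkg -i` or dropped into a repository for apt to find.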

5) If you have none of the above skills, you can share your knowledge by troubleshooting problems on Linux forums. Each distro has its official forum. Registering hardly takes a minute, and you can then go and post to your heart's content. Posting informative threads is enough to help the overall community.

6) In addition to forums, you can promote Open Source by means of your personal blog or website. You're free to post articles, reviews and bug reports on your blog, which can be read all over the world, including by official distro and package maintainers. If you point out bugs in a software or distro on your blog, there is a possibility it will be read and acted upon by the relevant team. Even sharing basic knowledge about Linux and Open Source can be of tremendous help to people who are new to the Linux world. If you are a veteran in the Open Source field, then you should seriously consider starting a website.

7) If you don’t have the time or the competency but still want to contribute to Open Source, you can still help. You can donate some of your money: most Open Source projects require funds in order to continue their development. In addition to donating money, you can even donate server space or blank media (CDs/DVDs) to needy Open Source projects.

8 ) If you feel you can do none of the above, don’t get disheartened. You can always promote open source through word of mouth, speaking at seminars, the media, etc. You might feel this is insignificant, but it’s not that insignificant either.

Whichever way you decide to go, remember to read the project guidelines before submitting your work lest you have to face the rejection stamp. Go ahead, do your bit in making the Open Source World grow bigger & better. Start early!


----------



## Rahim (Sep 16, 2009)

Thanks Gaurav........Let's put some active life in this otherwise Dead Section


----------



## Cool G5 (Sep 16, 2009)

a_rahim said:


> Thanks Gaurav........Let's put some active life in this otherwise Dead Section



Sure. Let's work in tandem & rock OSS. 8)


----------



## FilledVoid (Sep 17, 2009)

Looking forward to seeing many more interesting articles.


----------



## Rahim (Sep 17, 2009)

^Nice to see you back


----------



## Cool G5 (Sep 17, 2009)

*Linux Live Environment Explained*

*Article Source*

Wouldn’t it be nice if you were able to try out a Linux distro without installing it on your computer? You can do it via a virtual machine running under a host OS, but that calls for some steep system requirements. The process, though simple, still demands a lot of time to get the OS up & running. To remedy this situation, most Linux distros offer a Live mode, commonly referred to as a live environment.

Live mode/environment, in simpler terms, simply refers to running a Linux OS without installing it. The majority of well-known Linux distros bring out a Live version known as a “Live CD” which lets you run the OS without installing it. The Live OS mostly boots via CD, but it can also be written to pen drives, memory cards, etc. Once the Live media boots, the OS gets loaded into the system’s memory, i.e. RAM, & works from there.


*Booting into Live Environment :*

To boot into a Live CD, download the appropriate ISO & burn it to a CD. Start your computer & press the predefined key to change the boot sequence (mostly the F2, F10 or Del key). Set the computer to boot via CD-ROM if you are using a Live CD. In case you want to boot from a pen drive, set your computer to boot via USB. Follow the on-screen instructions & you have your live environment up & running.


*Live environment offers a host of advantages. Let’s have a look at them.*

1) The primary advantage of a live environment is being able to check out a newly released Linux OS before you decide whether or not to install it onto your hard drive. This is what most users use live media for.


2) The secondary reason to use a live environment is to check whether the Linux distro is compatible with your hardware. At times you install a Linux distro only to find that it doesn’t support your age-old printer. To avoid the headache, you can just boot via live media & check whether all your hardware functions just the way you expect it to.


3) When you work on a computer which is not yours, the owner of the system will most probably not let you install, remove or modify applications & settings. To free yourself from this tangle, a Linux live environment can be used. Since the live environment doesn’t make changes to the host system, the system owner would most probably not mind you booting into it.


4) You have been taught not to access banking or other sites of personal interest on public computers. Again, a Linux live environment comes to your rescue, since you can boot into it on public computers without the fear of leaving any trail of your activities on the host system. Since the live environment works from the computer’s memory, the application data & settings are lost once you shut down the PC or log out of the live environment.


5) In the event of a crisis, Linux live media can make your day. Imagine the OS installed on your computer fails to boot & you need to access a very important file. What will you do? Simple. Just boot via Linux live media & voila, you can now access that important file. If you desire, you can even try to repair your OS via the live media.


6) If you search around, there are many full-fledged Linux distros which are designed to run off a pen drive. You get all your favorite applications & the power of Linux at your disposal even if you hate the OS installed on a particular computer. A good example of a full-fledged Linux distro which can run via pen drives is Damn Small Linux.


----------



## Rahim (Sep 18, 2009)

*Please Reinstate the OS Wars*
Ken Hess

_All_ the glass clinking and cheers of late surrounding the apparent conversion of Microsoft to the open source fold need to stop. We need the Cold War. We need Communism. And, yes, we need the OS Wars. Like any war, the OS Wars stimulate creativity, spark religious battles and divide the wannabes from the true innovators.

Give me back the days of the Linux zealots who hate Microsoft so much that they remove Washington state from the US Map. Return me to those days of all Microsoft shops that threaten firing to anyone even uttering the word 'Linux' on company property. Send me back in time to the days of "Ken, why are you wasting your time with Linux?" I want to hear Microsoft bigots pronounce Linux with a long I.
Where are the days before every Windows desktop ran a Linux virtual machine? Where indeed. 

I want to wax nostalgic about the strange days of Microsoft's open source strategy to kill Linux and how it didn't work. The annals of history recorded that their attempt was a failure. Reminiscing about how every open source company shook hands with Microsoft and became a collective force against nothing is what I want.

We need an enemy. We need for Microsoft to be the Spain and Britain of colonial times when planting your flag on a land and oppressing its people meant something. It meant competition. It meant conquering new territory and claiming it for your own--natives be damned!

I want our victories to be victories of valor and of painful wounds--and most of all to be victories of a distinct belief system. I want real victories not Masada-esque ones. My dream is for Steve Ballmer to send a messenger to Linus Torvalds demanding that he and his merry band of 300 developers surrender their code to him and for Linus' response back to be: "Come get it."

We need Microsoft. We need for them to be our enemy--our sworn enemy. They and their kind are evil. They represent the evil empire. We need the OS Wars lest we fall prey to their evil-undoings and become part of their evil plot to destroy all that is good in the world.

May the best OS win.


----------



## Rahim (Sep 23, 2009)

Does the Linux desktop need to be popular?
By Austin Modine
How to win users and influence developers

LinuxCon 2009 Does the Linux desktop even need to be popular? There are, shall we say, differing opinions among the open source cognoscenti gathered in Portland, Oregon this week for the annual LinuxCon.

For the last eight years, we've been told it's the year of the Linux desktop. Yet penetration figures have remained somewhere in the region of 0 to 1 per cent.


The top brass at the Linux Foundation don't seem particularly interested in desktop uptake these days. They prefer to press towards successes in end-user device and mobile phone markets rather than worrying about turning hearts against Windows and OS X.

"The thing that is much more interesting to me is whether or not someone who chooses to use Linux as their desktop can do so in a relatively pain-free way," said Ted Ts'o, chief technology officer for the Linux Foundation.

"I don't know that it's important that everyone or some substantially large percentage of the user population is using Linux as a desktop. And it's not clear that it's ever going to happen," he added.

Bob Sutor, VP of open source and Linux for IBM, takes a middle road, saying "if the Linux desktop got in the double digits among a broad range of people, then it's time to declare victory."

Meanwhile, others are not nearly so resigned on the matter. Community manager for openSUSE Joe Brockmeier told LinuxCon on Tuesday that mainstream success on the desktop is crucial for what needs to be done.

"We're getting our asses kicked," he said. "We're not capitalizing on the opportunities we've had." Brockmeier's solution is to focus on what he sees as weak points in the Linux desktop community: marketing, app development, accessibility, and unity.

For marketing, he said the Linux community needs a better way to get distros out to the public. "Most people don't want to download their operating system," he said. "They don't want to have to burn it to a CD. They don't want to have to worry about rpm versus deb packages. We have a lot of work to do to get Linux into people's hands and to do it in a way that's comfortable for them rather than us."

Brockmeier called on PC vendors to help promote Linux beyond offering it as a check-box in an order form.

"We are outgunned. We need some help and so I'm asking not only Novell and Red Hat and other companies that are in the Linux space, but also the OEMs to put some of their marketing muscle behind Linux because we need that to succeed."

He also wants Linux backers to work more closely together in order to promote the operating system as a whole, regardless of the distribution.

In the application space, Brockmeier said there aren't enough developers working on end-user apps, and those that do too often lose interest before their work can be polished into something appealing to the average user.

"The iPhone has been in existence for two years, and they're already skunking us on applications. Imagine what's going to happen when Apple actually comes out with a tablet or something like that," he said. "We need to really focus and worry about development of new applications. And that's not easy because the main distros have cut back a little bit on developing new end-user apps — I know Novell has, because frankly, that's not where we make our money."

Brockmeier said too many applications today are abandoned, redone, or determined to be 'good enough' when they're only 80 per cent complete.

"How many network managers do we have to go through before we finally commit to fixing one? How many sound systems do we have to go through before fixing one?"

Accessibility is also a hurdle to desktop Linux adoption, according to Brockmeier.

"Usability has increased dramatically since 10 years ago, however it has not kept up with other platforms," he said. "This is something developers need to keep in mind. Good enough for Linus for an end-user application probably isn't good enough for my mom."

Finally, Brockmeier finds the often heated Linux development process on in public mailing lists discrediting to the public. "There are people that are very destructive in the way they approach problems. They jump from 'this might be a problem' to extraordinarily frothy and screaming about something. I don't think it helps Linux trying to achieve the mainstream. This isn't the face we want to present to the rest of the world."

Brockmeier said as a whole he believes the Linux community is in good shape, but it needs to be mindful of its weaknesses going forward.

What remains in the air is whether these troubles truly need to be cured or whether they're simply the unavoidable nature of the, er, penguin. ®


----------



## FilledVoid (Sep 23, 2009)

> "The thing that is much more interesting to me is whether or not someone who chooses to use Linux as their desktop can do so in a relatively pain-free way,"


This here is the key. Whenever this becomes an actual reality I'm 100% sure that Linux will start to see a larger population using it.


----------



## Cool G5 (Sep 23, 2009)

FilledVoid said:


> This here is the key. Whenever this becomes an actual reality I'm 100% sure that Linux will start to see a larger population using it.



It is a bit removed from reality, since I believe people are exposed to the Windows world right from when they get their very first computers. If a newbie had started out in a Linux environment instead, then perhaps he could adopt Linux very easily. The point here is that, due to familiarity with the Windows environment in the neighbourhood, office or among peers, an individual finds it difficult, or rather feels the pain, to learn Linux concepts.


----------



## FilledVoid (Sep 23, 2009)

> It is a bit removed from reality, since I believe people are exposed to the Windows world right from when they get their very first computers. If a newbie had started out in a Linux environment instead, then perhaps he could adopt Linux very easily. The point here is that, due to familiarity with the Windows environment in the neighbourhood, office or among peers, an individual finds it difficult, or rather feels the pain, to learn Linux concepts.



I guess to a certain extent that is true. However, again, I'd like to say that I've been using Linux - mainly Ubuntu and Arch - for the past 1-2 years, and I'd still say that some things can be a hassle. I'm not saying it's bad or anything. I'm just saying I wish the learning curve were a bit less steep.


----------



## Rahim (Sep 24, 2009)

The biggest problem I came across with Linux developers is their *attitude*. Whenever someone raises some negatives or problem areas of Linux, they become *over-aggressive*. I have even read some devs saying, "Linux doesn't need users."  So what does Linux need? Ego-centric developers and hackers? Normal users don't give a damn about features in the kernel. What they want is hassle-free day-to-day working.

I have shifted to Linux completely and, believe me, it's not the bed of roses the devs make it out to be. Small pesky things like drivers, resolution, audio problems with some Intel chips, etc. make the experience worse, and non-tech users are not gonna sit and fiddle with forums to solve a problem which shouldn't have occurred in the first place!!

Linux developers should start lending an ear to constructive criticism. Not all users are on MS's payroll 

This philosophy is so attractive and all-consuming that I have begun thinking of this Linux as MY OS  (it feels like it was made in my own home)


----------



## thewisecrab (Sep 24, 2009)

^^
The key factor IMO is that Linux is made (AFAIK) by developers without pay. If you slap in a paycheck, I'm pretty sure these wrinkles will be ironed out.

But charging a fee defeats the purpose of FOSS  . So I guess we'll either have to iron things out ourselves or be left to gripe.

What I still don't get is this: no OS, whether commercial or OSS, is completely fool-proof and without any problems whatsoever. So why do people expect Linux developers to be so efficient? I don't know.  

Not to mention the myths surrounding it among Windows users in India. The computer-maintenance guy was shocked to see me running Ubuntu+XP on my PC (he came as I wanted to RMA my printer). He was told by his seniors that Linux is CLI-based and spreads viruses. 

Anyways, glad to see FilledVoid back! (I think this is the first time I'm seeing him post after many months)


----------



## FilledVoid (Sep 24, 2009)

> I have shifted to Linux completely and believe me its not a bed of roses as the devs put it out. Small pesky things like drivers, resolution, some audio problem with some Intel chips,etc makes the experience worse and non-tech users are not gonna sit and fiddle with forums to solve the problem, which shouldn't have occurred in the first place!!


Exactly! This is what new folks do NOT need. How many regular people do you know who would actually spend loads of time just to get a system up? Please note I'm not saying that every Linux distro requires tons of troubleshooting. I'm just pointing out that only when the process of using Linux becomes pain-free for the end user can you expect people to actually give it a whirl. Considering the developments made in Linux, is it too much to ask for a small percentage of time devoted just to this? Which is why I admire Ubuntu, just for the fact that they have tried to make it relatively easy for most folks to just use it (can't vouch for others since I haven't used them as much). 


> So why do people expect Linux developers to be so efficient?


It is human to err. The question is to what extent end users are willing to go to fix that error. Although most hobbyists would love the opportunity to have a go at recompiling a kernel or editing rc.conf to add a new daemon, a regular person would scream bloody murder. 


> Not to mention the myths surrounding it among the Windows users in India. The computer-maintenance guy was shock to see me running Ubuntu+XP on my PC (he came as I wanted to RMA my printer). He was told by his seniors that Linux is CLI based and spreads viruses.


Yes, I've had quite a lot of those, since I've put Ubuntu on pretty much all my relatives' computers at this point. 


> Anyways, glad to see FilledVoid back! (I think this is the first time I'm seeing him post after many months)


Thanks bud.


----------



## Ratnadeep (Oct 12, 2009)

I think one more problem with Linux adoption in India is the (near-)compulsory requirement of an Internet connection. My first introduction to Linux was 1.5 years ago. I tried Ubuntu and openSUSE then, but installing a single piece of software was a nightmare for me. Ubuntu's .deb packages are of some help for people with no Internet connection, but they still can't cover a lot. Now, with an Internet connection on my PC, Linux is the normal choice.

Many of my friends are interested in using Linux, and Ubuntu is a perfect fit for non-geeks using simple day-to-day applications. But again, if one requires an extra app, it is much easier to install one on Windows (easy to get, given Windows' majority) than on Linux.

Then again, Wine can't support all Windows applications that well. Let's see - Ubuntu 9.10 is coming with built-in support for Wine; I hope it works very well with many Windows apps. Ubuntu's .deb installation is also as user-friendly as Windows' (of course, without the license agreement). Then it will be very easy to migrate from Windows to Ubuntu.


----------






## Rahim (Oct 12, 2009)

^Agreed. Popular Linux distros require an active net connection to show their real magic. One can agree that there are many distros which come pre-installed with _the kitchen sink_, but most new users pick the most popular ones like Ubuntu or openSUSE. It is only after some jabs that they learn about Sabayon, Mint, etc.

APTonCD is "perfect" for those without a net connection, but it is more of a lesser evil. No one wants to mess their hands with downloading apps somewhere, then making a CD, and then installing them somewhere else. Only a Linux fanatic who refuses to know (read: ignores) the obvious hard truth about Linux would go to such lengths and toil with installing apps without a net connection.
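The offline workflow being described boils down to two steps: download the .deb files on a connected machine, then install them on the offline one with dpkg. A toy sketch of the second step, using only the real Debian filename convention (name_version_architecture.deb); the package names and folder below are made up, and real tools like APTonCD also handle dependency ordering, which this deliberately ignores:

```python
import os

def offline_install_command(folder):
    """Build the dpkg command to install every .deb found in `folder`.

    Debian package files follow the name_version_architecture.deb
    naming convention; APTonCD-style workflows download them on a
    connected machine and install them offline with dpkg.
    """
    debs = sorted(f for f in os.listdir(folder) if f.endswith(".deb"))
    if not debs:
        return None
    return "sudo dpkg -i " + " ".join(os.path.join(folder, f) for f in debs)

# Placeholder folder with placeholder (empty) package files for illustration.
os.makedirs("debs", exist_ok=True)
for name in ["vlc_1.0.2-1_i386.deb", "mplayer_1.0-rc2_i386.deb"]:
    open(os.path.join("debs", name), "w").close()

print(offline_install_command("debs"))
```

Passing all the .deb files to dpkg in one invocation lets dpkg sort out inter-package ordering among the files it is given, which is why the command lists them together rather than installing one at a time.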


----------



## Cool G5 (Oct 12, 2009)

*[FONT=&quot]Definitive Guide to Viruses & Antivirus softwares under GNU/Linux[/FONT]*​​*Article Source*

_____________________________________________________________________________________________________________________

 [FONT=&quot]After having travelled through the virus infested land of Windows, new Linux converts often have queries about viruses on their Linux systems. Here I will demystify all those queries related to Viruses, Antivirus software’s etc once & for all.[/FONT]

*[FONT=&quot]Which Antivirus software should I install under my Linux system? / Does Linux really doesn’t needs an Antivirus software? 

[/FONT]*      [FONT=&quot]Many veteran Linux users often get pissed off at novices asking this very question. But don’t worry let me answer this question for you if you have been bullied on forums by those veterans. I would like to say, “Linux really doesn’t needs antivirus software”. Yes that’s right. No antivirus is needed to be installed under a Linux system. The reason behind this is there are far more less viruses written for Linux than for Windows. The virus writers in search of fame often target Windows systems which is the reason behind the low number of viruses written for Linux. If I would to say the ratio is 1:10(Linux: Windows). Out of this, the real-time viruses i.e. the viruses which are currently active are very few. While Linux is gaining popularity day by day, but still the need to implement antivirus software in your Linux system is totally unwarranted as of now[/FONT]
   [FONT=&quot]. [/FONT]
*[FONT=&quot]After people get satisfied with the above answer, they ask “Will the Windows viruses cause any havoc on my Linux system?” 

[/FONT]*      [FONT=&quot]Here again the answer is No. Viruses are nothing but programs which are designed to cause damage or affect the normal functioning of a system or its software’s. A virus written for Windows will not work under Linux system simply because they are two different OS’s which work totally differently ways, have different security structure & perform every task in their own unique way. For e.g most of the windows viruses like to infect the system folders like System 32, Windows etc but these folders are not present in a Linux file system.  The virus will try its best to infect but it will fail due to unavailability of favourable conditions under Linux system. So you can be assured no windows virus, malware or spyware can infect your linux box. Also the viruses are not so smart that they can run on any platform like windows or linux. Hybrid viruses are still not available, so you need not to worry about them.[/FONT]

*[FONT=&quot]Does this means my Linux system is completely safe & sound from viruses?

[/FONT]*         [FONT=&quot]Viruses are designed by their writers to exploit a particular vulnerability in the operating system. Every operating system has its fair share of vulnerabilities which also includes Linux. But since the source code is publically available, fixing vulnerability under Linux & other open source software is fairly easy & fast. Thanks to the software programmers who are constantly monitoring the OS, software’s & even the Linux kernel. This means even if a virus outbreaks under Linux, it will be diffused sooner as the independent programmers can themselves work out on a patch without waiting for instructions from official distro patch team.[/FONT]

*[FONT=&quot]Still I feel unsafe. What should I do?

[/FONT]*         [FONT=&quot]Paranoids will be paranoids! For those out there, you can install a copy of Clam antivirus which is a free & open source antivirus designed to protect you from Linux viruses. Even Avast! Linux Home Edition is another free antivirus package for your Linux systems. If you’re willing to go paid (You should not as per my opinion) then you can find commercial antivirus in form of Panda Antivirus & Kaspersky Antivirus for Linux workstations.[/FONT]


----------



## Rahim (Oct 12, 2009)

^Funny one 

Paranoids will always be paranoid.


----------



## Rahim (Oct 13, 2009)

*Who is a Candidate for Desktop Linux?*
Jason Perlow
_I personally do not fall into the group of people that can easily migrate away from Windows, but that doesn’t mean you or someone you know can’t make the switch._

As I said in an earlier post last month, the work that I do in my professional life requires that I still use Windows and various Microsoft and 3rd-party Win32 applications, even though I also use Linux. I also use various applications in my personal life that have no true functional Windows equivalents, so I have both Windows and Linux computers at home.

However, my situation is somewhat out of the ordinary. I don’t expect that most regular end-users have or need more than one personal computer at home or at work. Additionally, as an Information Technology professional and as a writer who covers the industry that I work in,  I choose to use multiple systems with different operating systems at home for educational purposes and also because I have a genuine curiosity about what is out there in both the Open Source and Microsoft-centric worlds. That’s not necessarily a realistic usage scenario for everyone.

There are certainly ideal groups of people who are capable of moving towards a 100 percent Open Source or Linux environment in both their professional and personal lives. I’m not really interested in discussing the political and ideological aspects or why someone would want to make that choice. That path has been rehashed over and over again and supplies far too much fodder for flame bait. Please take that into consideration when you submit a Talkback on this piece.

The greater and more important question is, who CAN switch to Linux? It should be noted that when I refer to groups of people here, I am for the most part excluding Information Technology professionals, Techies, digital content creation professionals, UNIX/Linux sysadmins and scientific academia who have much more sophisticated or specialized needs and may even be using Linux, the Mac and Windows, and/or a combination of these already.

As to WHICH Linux distribution any of these target users should be looking at, I am going to treat all of them equally and say that every single one of them will meet the basic usage requirements for the set of folks detailed below. For more information on Linux distributions, check out my Surviving the Recession with Free Linux Distributions roundup.

*The Lower Range:  The “Super-Casual Web Surfer”*

These users are 98% Web and email use, with very little else.  If they use anything outside of a web browser at all, it’s probably a rare case (they might use a game, if someone installed it for them).  And if they know what a “file” is, it’s more in the abstract.  To them a “file” is like an “inch” - it’s a unit of measurement and a description a lot of people use which doesn’t really apply to them.  It might as well be made of manila poster board for all they know, or care.

Senior Citizens are an especially big component of this group — although a surprising percentage of Seniors have become quite savvy in terms of social networking, bulletin boards, and messaging (although note that if in that sub-group, they’d likely be manipulating digital photos or videos, so they’d be in another “grouping” entirely).

These people for the most part don’t understand how computers work, and don’t really have to.  They’re probably perfect Linux candidates, without them even necessarily knowing WHAT Linux even is.

The Super-casual Web Surfer is also probably the same type of person who might be perfectly happy with a netbook or entry-level computer as his or her primary computing environment. They also do not use external devices with proprietary device driver and support software that doesn’t run on Linux. They have zero dependence on specialized, vertical-market applications. Virtually everything they do is web-oriented.


*The Lower-Mid-Range: The “Type a Letter” User*

These people do a TINY bit more than the Casual Web Users.

This group may occasionally use basic MS Office apps, but if they do, it’s in a fairly unambitious manner, mostly for viewing purposes, and they probably often have to be “babied” through any explanation more complex than “click on this”.  Many of these people wouldn’t know an Excel spreadsheet from a hole in their head.  They might dash out the rare letter to someone, but that’s about it.  They write it and print it.  They probably know how to save the file, but rarely if ever do they need to send that file AS electronic data to someone.

These users are Web-wise, but are NOT using services which require “file manipulation”, like Facebook (digital photos), iTunes (music files) or Blackberry (special hardware and software configuration).

These are the people who even this many years into the computer revolution still probably won’t understand how a hard drive works, what a “directory” is and how they are laid out and accessed, etc.  They use and manipulate files, but if those files weren’t conveniently stored in areas labeled “My Music”, “My Pictures” etc. they’d never be able to locate them.

Hardware-wise, these are probably the people who use whatever comes with a PC they buy and rarely if ever expand a system.

These would be good candidates for Linux, as long as they can manage to find where their documents are on a Linux system, and if they TRULY don’t exchange documents with anyone else (most people actually DON’T).


*The Middle of the Middle: The “Just Work, Dammit” User*

These people usually don’t want to be troubled.  They just want it to work.  Most of them could get away without any custom apps, although they DO want to be able to manipulate their photos, videos, etc.  So something easy has to exist to do that.

I know of a CEO of a major corporation valued in the 100-billion-dollar-and-above range who fits in this category and uses Linux exclusively on his desktop, with no Windows software whatsoever. He has five icons on it: Email, Web Browser, Word Processing, Spreadsheet, Presentation. He isn’t concerned about interoperability with Windows users.

In order of priority of what is critical to him, he sends email, he browses the web and he opens up and prints occasional productivity documents, which places him in this category of users that happens to be significant in size. There’s absolutely no reason why these types of users cannot use Linux instead of Windows.

End-users like this have pieces of understanding of what’s behind the computer they use. They’ll know what a file is and probably have a decent idea of how to navigate through directories and drives, but it’s kind of a “soft” understanding which shouldn’t be challenged.  This stuff gives them headaches for the most part, and the computer really IS just a tool for these folks.

They definitely use Office, and probably work in professions where they manipulate documents and might need them at home.  “True” Office probably still won’t be a concern for most of them (how many people REALLY work for Law Firms, Ad agencies or the Government?) but note that there would be a subset of them where that might apply.

Like my CEO example, these people may generate “soft” work products (word processing documents, spreadsheets, presentations, PDFs) that are going to be printed hardcopy, are usually not of a complex nature and/or do not need to be exchanged or originate from Windows/MS Office environments. The majority of home users fit into this category, as do high school and college students.

Note that to keep things easy to categorize, we’re not including iTunes users here.  We’re considering that a “custom app” and so putting people who use it into the next category.

If they buy additional hardware they’d tend to try and consult someone more technical first.

This group sometimes owns digital cameras and may need to edit or upload photos to services such as Flickr or Picasa Web Galleries. They also own digital media players and other mobile devices that are not dependent on proprietary software or specialized device drivers unsupported in Linux by the originating vendor or by the Linux community.

These people CAN be good Linux candidates, but only if whoever is setting it up for them does a lot of checking in advance to make sure their usage is as casual as it seems (and also that they aren’t in the sub-group needing “True” Office).


*The Upper-Mid-Range: The “Set It and Forget It” User*

A slightly bigger jump.

These people probably DO install their own apps, although most of these folks still aren’t technical.  They just want things to WORK.  In years past, they probably bought programs in software stores, but today they likely grab software off the web, and not always wisely.

The “True Office” divide probably still exists. Some will need it but most won’t.  The real issue with people like this would be something like iTunes.  They want it, they want it to work, and they don’t want to think about workarounds or alternatives. If they depend on iTunes to buy their music, it may be a deal-breaker for Linux.

At work, beyond mere Office, they might also use BlackBerries or another mobile device they expect to be able to integrate at home, and they might even be handed a Citrix Remote Access install disk or a VPN client by their IT department and be expected to install it and use it at home.  They’ll tear their hair out if this doesn’t all work for them.

They’d tend to buy printers right from Costco or Staples and expect them to consistently work right out of the box.

These are often bad candidates for Linux, because they can be casual AND demanding in different ways as users.  They want things to be easy, and yet always work. They should probably stick with Windows and Mac.


*The Upper Range: The “Power User”*

They aren’t the experts by any means, but they are usually clever enough (or stubborn enough) to figure things out.  They’re likely to use any number of software and hardware devices, but by the same token might accept compromises.

If iTunes doesn’t work, they might accept that if they can kludge another solution but have everything else work better.  If they occasionally need “True” Office, they might be able to puzzle through something like CrossOver or VirtualBox, although not being techies they might need assistance.  If a specific piece of hardware doesn’t work on Linux, they’d have the savvy (and patience) to ask what comparable hardware would instead.

These people are good candidates for Linux in many cases, but might do better with Windows or even the Mac in others.  The benefits of the Linux config would have to outweigh the ones of the Windows or Mac config, although they’d be flexible and adjust either way.

Do you or someone you know fit into any of these categories? Talk Back and Let Me Know.

_Jonathan Lurie (jonathan.lurie@gmail.com) contributed to this article._


----------



## Krow (Oct 13, 2009)

Cool article by Cool G5! rahim, nice find.


----------



## thewisecrab (Oct 14, 2009)

_The Upper-Mid-Range: The “Set It and Forget It” User_
That's where I belong


----------



## amitabhishek (Oct 14, 2009)

Nice initiative, Rahim Bhai. This will give much-needed CPR to this section. I have not read the articles but will surely do so at leisure. 

BTW, where is NucleusKore uncle? We are missing his tuts and news posts.


----------



## Rahim (Oct 14, 2009)

^Thanks 
But even I am feeling _disillusioned_ at the state of this section now. 
--------------------------------------------------------------------------------

*@wiseone:* I am trying to get into "The Power User" Category


----------



## Rahim (Oct 15, 2009)

*Five ways the Linux desktop shoots itself in the foot*
I don't just write about the Linux desktop; I use it every day. At my desk, I tend to use MEPIS and Mint, while on the road, it's Ubuntu on my Dell netbook and openSUSE on my Lenovo ThinkPad. I do this because they work well and they're as safe as a desktop operating system can get. So why aren't more people using them? 

Microsoft is the biggest reason. Microsoft is a jealous monopoly that doesn't want to share the desktop with anyone. Desktop Linux is just another target in a long list that has included OS/2, DR-DOS, and -- that eternal thorn in their side -- the Mac. It's no surprise, then, to see in the history of the Linux desktop that Microsoft has always tried to crush it. 

The very first attempt at a mass-market Linux desktop, 1999's Corel Linux Desktop, lasted less than a year. Why? In 2000, Microsoft paid off debt-ridden Corel to kill it. 

Much more recently, Microsoft, caught by surprise by the rise of Linux-powered netbooks, brought XP Home back from the dead and offered it to OEMs (original equipment manufacturers) for next to nothing to stem Linux's rise on low-end netbooks. 

It's hard to beat a monopoly that will do whatever it takes to make sure people don't see there's a better, cheaper alternative. I understand that. At the same time, Linux has shot itself in the foot quite often. How? 

*1) Lack of Linux vendor support *

Every Linux distribution has a desktop version. But how many of them actively try to sell them? Not many. Red Hat is the number one Linux vendor, but makes its hundreds of millions from the server, not from the desktop. Canonical, Ubuntu's parent company, has arguably the most popular Linux desktop, but if you look closely, you'll see its hopes for making significant profits lie in server and cloud-based services. 

Only Novell, with SLED (SUSE Linux Enterprise Desktop), tries to make a real business out of the desktop. For everyone else, the desktop gets a lot of lip service, but it's not really part of their core business plans. 

*2) Lack of Linux advertising and marketing *

Companies like IBM and Oracle have made billions from Linux. Along the way, they've spent some advertising and marketing dollars on Linux. But neither they nor anyone else has spent more than pocket change on promoting the Linux desktop. 

Think about it. If you use the Linux desktop, chances are you're a techie who deliberately sought it out. Even now, most people have never even heard of Ubuntu, never mind any of the rest. 

*3) Too much bad techie attitude *

In 2009, any reasonably smart person can use any major Linux distribution without much trouble. You can run Linux without ever seeing a shell or manually tuning a conf file. But what if someone new does run into a problem with installing Adobe Flash and asks for help online? 

If he or she is lucky, they'll get a considerate and informative answer from an Ubuntu forum or LinuxQuestions. But all too often, I've seen such questions answered with responses like "RTFM you noob! What are you doing running that trash distro anyway! It's GNU/Linux, not Linux!" 

Yeah, that's going to encourage new users. If you don't have anything nice and informative to say to new Linux users, then don't say anything. Far too many Linux users seem to confuse acting superior and being rude with how people should act online. It's not. 

*4) Too much infighting *

In a little over a week, Windows 7 is coming out. So, what are hardcore Linux users doing to get ready for the coming of the next major threat to the Linux desktop? A lot of them are fighting about whether Miguel de Icaza, founder of GNOME and of Mono, the .NET implementation on Linux, is "a traitor to the Free Software community." 

This is just the latest chapter in the ongoing fight between free-software purists and open-source pragmatists. It's an obnoxious little war that's been flaming up over one personality or issue or another for ages now. I am so tired of this bickering, and more to the point, no one outside of certain developer circles cares. What does matter is that anyone from the outside looking in sees not a group of rational people working to create great systems, but a bunch of loonies fighting over ideological issues. 

While otherwise bright people continue to squabble, Microsoft keeps quietly gaining more mind-share and users every day. Good work team! 

*5) Not enough developer co-operation *

Back in 2005, a miracle happened. Linux desktop developers from feuding camps came together in the Portland Project and found out that, when they talked to each other face to face instead of screaming at each other over IRC (Internet Relay Chat), they had more in common than they ever would have believed. The result was a lot of useful cooperation between KDE and GNOME Linux developers. 

That's the good news. The bad news is, after two years of working together well, the programmers began drifting away again to work on their own little development islands. There are still efforts afoot to keep Linux desktop programming coordination going, but it's nothing as concrete as it once was. 

If Linux is to attract more ISVs (independent software vendors) to make desktop programs, the desktop programmers must keep working on interoperability. No ISV wants to write one version of their program for Debian, another for Fedora, and yet another for openSUSE. If the Linux desktop developers keep wandering apart from each other, we'll lose those ISVs, like Adobe, that are willing to release some programs for Linux. That, in turn, will make desktop Linux less attractive to end-users. 

If Linux gets all these things right, will it stop the Windows desktop monopoly? Nope. But it will be a good start towards making desktop Linux more competitive. If nothing else, making sure that users always have a good, inexpensive alternative to Windows will always be a worthwhile goal.

Steven J. Vaughan-Nichols
Cyber Cynic


----------



## Cool G5 (Oct 15, 2009)

Bad days will come for M$ too. Windows 7 a threat to the Linux desktop? I don't think so!


----------



## thewisecrab (Oct 15, 2009)

Most Linux users use the various distros out of choice and of their own free will. It would be wrong to say that Win7 will affect this attitude. Sure, there are many who actually rip the kernel apart and code as well, but for the layman (like most of us, or more importantly me), it'll be the curiosity to check out something different and make it work that will drive their interest in Linux to a higher level. 

Win7 has nothing to do with it.


----------



## Rahim (Oct 16, 2009)

^Both of you should read the various reasons given for Linux's problems, not just the claim that Win 7 is a threat. Linux will never be a threat to Windows if simple things are not ironed out. Linux developers and users sometimes live in their own fish aquarium and don't want to accept their own catch-22 situation.

Linux distros should be backed by more powerful companies; otherwise the independence of the developers would actually harm the OS, as they lack a concrete focus and path. They give their services for free, agreed; but the devs are working too far apart from each other.

But yes, Linux is growing, albeit at a snail's pace, and it has a long way to go from being a secondary OS in one's GRUB menu to being the only OS on one's system.

Heck, good articles are so hard to find


----------



## Krow (Oct 16, 2009)

Windows 7 is not a threat to linux desktop. It is a threat to vista and xp. Most people will still prefer to boot linux as a secondary OS just to keep viruses at bay or some like me who use linux while studying so as to keep away from games. 

Windows 7 is a threat to mass adoption of linux yes, but linux won't lose any of the users it has already. So many who move to linux as a secondary system stick to it. I doubt if linux will lose users due to Windows 7.

On the other hand, you guys have been nice to me in my ongoing learning process, so point 3 is invalid for here at least.


----------



## Cool G5 (Oct 16, 2009)

Techalomaniac said:


> Windows 7 is not a threat to linux desktop. It is a threat to vista and xp. Most people will still prefer to boot linux as a secondary OS just to keep viruses at bay or some like me who use linux while studying so as to keep away from games.
> 
> Windows 7 is a threat to mass adoption of linux yes, but linux won't lose any of the users it has already. So many who move to linux as a secondary system stick to it. I doubt if linux will lose users due to Windows 7.
> 
> On the other hand, you guys have been nice to me in my ongoing learning process, so point 3 is invalid for here at least.



Not a threat for XP definitely, since MS is implementing it on netbooks & has even extended its support. So you can safely say it will be with us for a long time to come. Many organisations/firms have XP on their premises & overhauling all XP machines to Windows 7 is not needed, especially when all their work is being done efficiently under XP. Not to mention businesses don't need all those multimedia & snazzy AERO effects.

Vista will surely be affected & will phase out. Regarding Linux, they have the GNOME 3 desktop coming, KDE is now a matured product & then there's the ultra-fast 10-second bootup of Ubuntu, with Fedora 12 to follow. It can't get better than this for OSS.


----------



## Krow (Oct 16, 2009)

Plus, there are no attitude problems on some Indian forums too. Well, XP will go, I'm sure, but maybe 7 will become ordinary later on when no one uses x86 OSes anymore.


----------



## Ratnadeep (Oct 22, 2009)

I recently used the Windows 7 RC. The first week or two were nice (because of the looks). And then the typical Windows problems started: slow responses, viruses and so on. One more important thing: seeing my KDE desktop after watching Windows 7, one of my friends argued that it was Windows 7 and not Linux.
So after the Mac and Netscape, is it Linux (or rather KDE) being copied by M$?


----------



## Cool G5 (Oct 22, 2009)

Windows is always fast when one installs it without any software, but it becomes laggy with use. I'm not saying it's bad, but MS should improve on this front. Let's see, today is a big day for MS; hope they at least clean up the bad impression left by Windows Vista.


----------



## Rahim (Oct 22, 2009)

I have used Windows for so long and, believe me, I was never infected by a virus. So when people complain about viruses on Windows, I just wonder why that never happened to me.
Linux will always be in a nascent stage and that's not a bad thing


----------



## Krow (Oct 22, 2009)

Well, the W7 taskbar is a rip-off of KDE and the Mac Dock. Otherwise it is running well for me. Yeah, viruses have not plagued me either since October 2007. Lin is much more hassle-free, to me at least. Some may say that it is tough to get used to and buggy, but I disagree. When I first learnt to use Windows, I faced many problems, just as I did when I first used Linux.


----------



## Ratnadeep (Oct 22, 2009)

Ya, one can remain unaffected by viruses on Windows with a genuine copy of an antivirus which is frequently updated. Otherwise a Windows installation can't survive more than 4 months. For me it never worked well beyond 3 months: 3 months and then a complete reinstallation or repair, or it's irritatingly slow.


----------



## Krow (Oct 22, 2009)

^I use AVG Free 8.5, which is good enough for me. Also, user stupidity is mostly the cause for viruses entering the system. No need for paid antivirus software, but updates are necessary.


----------



## j1n M@tt (Oct 22, 2009)

^^AVG Free edition is good, but it will only quarantine viruses... in almost all cases it won't disinfect files which are already infected.

IMO use BitDefender or Kaspersky Internet Security. They are only 600/- for a 3-user licence.


----------



## Ratnadeep (Oct 22, 2009)

*Who Needs Windows 7 When You've Got KDE?*

As a devoted free software user, I'm almost as likely to stick my hand down a running garburator as buy a copy of Windows 7. In fact, so far, I haven't tried Windows 7. But if its features list is any indication, I'm missing little that I don't already have with the latest version of the KDE desktop.

Of course, exactly what Windows 7's new features are can be difficult to tell. The features list is as much a marketing document as a technical one. In places it's more apt to give you an overdose of adjectives than any specifics. Nor is every feature available in every edition of Windows 7.

Then, too, a few listed features, such as 64-bit support, are so far from new that I wonder why they are mentioned.

Another difficulty is the sheer scope of the comparison. A desktop is a big place, and you can easily miss features because on one desktop they are part of a default installation and on another they are an option squirreled away beneath several layers of menus. 

Still, when such matters are taken into account, in terms of features, Windows 7 appears a minor upgrade at best. Judging from the advertising, it has no killer apps that outperform KDE, and its few unique features may turn out to be oddities rather than genuinely useful features.

Windows 7 bests KDE mainly in administrative tools, and even here the advantage is counter-balanced by standard features that KDE has had for years.

*Desktop experiences*

The most important feature that Windows 7 has and KDE lacks appears to be BitLocker, a utility for drive encryption. By contrast, while a feature for directory encryption is just being introduced in an unpolished form in Ubuntu, so far no corresponding tool is standard with most distributions, let alone with KDE.

Otherwise, the difference on the desktop is slim. In fact in many cases, Windows 7 is just catching up to KDE. 
Translucencies? Animations? Thumbnail previews of applications on the taskbar? KDE already has them, although Windows 7 does add to the usability of previews by allowing you to view them full-screen.

The same goes for widgets -- or gadgets, as Windows 7 calls them. The features list boasts that these minor utilities are no longer confined to a taskbar and can now be placed anywhere on the desktop, but that's old news to KDE users.

Ditto for running applications from the taskbar. As for measurement conversions, the only difference is that KDE has been doing them in KRunner and Windows 7 does them in its calculator.

Move on to applications, and in many cases Windows 7 is still behind. Why would anyone consider the clutter of Windows Media Player when they could use the rich feature sets of Amarok or Digikam? Windows Media Player would have to be utterly transformed to compete seriously against applications that are the ultimate in their categories.

And use Internet Explorer instead of Firefox? Whether you are talking in terms of native features or the ecosystems of extensions built around them, Internet Explorer is barely in the running, especially if you want to do things exactly your way.

Admittedly, much is being made in Windows 7 reviews of Aero Peek, Aero Shake, and Snap.

Yet, despite all the attention they are receiving, these sound like small features: Peek turns all open windows translucent, so that you can see the desktop, while with Shake you can jiggle the mouse to make all except the active window disappear. Yet another solution for desktop chaos is embodied in Snap, which allows you to drag windows to the edges of the desktop to resize and position them.

While such features may astonish Windows users, for KDE users, these are only specific implementations of features that they already know -- translucencies, mouse gestures and hot spots on the edges.

The features may not be available for exactly the same purposes as in Windows 7, but they are recognizably the same technology -- for instance, you can make a window translucent as you move it to see what is beneath.

Nor are they the only ways to move and organize windows, as browsing through the wealth of settings in KDE's Window Behavior settings soon proves. You might very well be able to organize your desktop just as effectively without Peek, Shake, or Snap.

I suspect, too, that Shake and Snap in particular are going to alarm users who move the mouse in a careless moment and see their open windows disappear or change size. But even if that is not so, the point is that Windows 7 is advertising nothing on the desktop that KDE either does not have or could not easily add if anyone cared to make the effort.

*Administration Tools versus Still-Missing Features*

In some cases, such as wireless connections, Windows 7 and KDE offer almost identical tools. Still, there is no denying that, compared to earlier releases, the KDE 4.x series is still light on administration tools. Under these circumstances, it is not surprising that Windows 7 has parental controls and location-specific printing, while KDE does not seem to be even working on parental controls and is scheduled to introduce geolocation features over the next couple of releases. 

However, you can borrow administration tools from GNOME if KDE lacks them, and, in its next release in January 2010, KDE should allow you to change not only printers according to settings, but icon sets and other desktop features as well.

In other cases, Windows 7's main advantage is more in the interface than in actual features. In particular, Windows 7 has a habit of slapping a wizard on top of routine tasks, such as configuring a printer, or accessing remote desktops.

For example, Windows Easy Transfer is a wizard to assist switching from an old computer to a new one. Similarly, Windows 7 includes a Getting Started window for configuring a new computer. Other examples of this kind of repackaging include Jump Lists (separate favorite lists for applications) and Libraries (collections of links on the desktop). While these tools are trivial refinements, you might legitimately argue that KDE would benefit from more such features for first-time users.

However, many such interfaces -- especially the administrative ones -- are likely to be used occasionally at best. For most users, I suspect they matter far less than the number of features in KDE that are still not standard in Windows. 

I am thinking now of features like multiple desktops and activities, or a multiple-entry clipboard for the entire desktop (MS Word has had one to itself for some time), or an external device manager. I'm thinking, too, of customization options for everything you can imagine, including three separate menu designs and dozens of compositing effects.

While Windows 7 seems to have made some improvements in customization, such as configurable notifications, I see no signs that it has caught up to KDE. Although some of the areas in which KDE has the advantage are non-essential, I suspect they are far more important to many users than a lack of occasionally-used administrative tools.

*Rival Desktops*

I don't pretend that I am unbiased in this comparison. If nothing else, Windows 7's proprietary license would keep me away from it. But if that is all that you take from my comments, then you’ve missed the point.

The point is that, contrary to widespread belief, the free desktop is no longer struggling to equal its proprietary rivals. Instead, it is approximately equal and in some ways ahead. 

Yes, you can point to a genuine Windows advantage here and there. But you can also find examples where KDE had features first, or has superior ones. Nor, where Windows 7's advertised features are ahead of KDE's, do they have such a lead that KDE could not implement equivalent features almost immediately. I suspect that the same would be more or less true of GNOME, although I judge it slightly behind KDE in features.

Of course, I might change this opinion after actually using Windows 7. However, I don't think so. The point of advertising is to put the product in the best light, and, if the hype can't make Windows 7 enticing to a KDE user, I doubt that hands-on experience would do any better.

I don't know about anyone else, but I plan to celebrate Windows 7's release day acquainting myself with some of the lesser used features of KDE. Or maybe I'll mark the day by trying a completely different window manager, in recognition of the free desktop's diversity. Either way, I suspect I'll be making better use of my time than exploring Windows 7. 

By
Bruce Byfield
-----------------------------------------
Posted again:
-----------------------------------------
sry for that wrong title formatting..

do we need to take care of anything while copying reference articles here in this thread? I've already given the writer's name though...
(m new here..)


----------



## FilledVoid (Oct 22, 2009)

> Windows 7 is a threat to mass adoption of linux yes, but linux won't lose any of the users it has already. So many who move to linux as a secondary system stick to it. I doubt if linux will lose users due to Windows 7.


False, Windows isn't a threat whatsoever. It's Linux itself that is the threat. The reasons are quite clearly pointed out above.


----------



## Krow (Oct 23, 2009)

^Yes, agreed. But it's too little used today to be a threat as of now.


----------



## Cool G5 (Oct 23, 2009)

^Don't worry, it's progressing slowly but surely. 

People have just heard the word "Linux" but most don't know what it means. We need to spread awareness amongst people to make it grow at a rapid pace.


----------



## azzu (Oct 23, 2009)

^ Spreading awareness, yes, that's right,
but I see people often think that Linux is something only used by programmers, a console-oriented OS.
We have to distribute Linux (such as Puppy OS) to spread awareness and show how user-friendly and graphically good Linux can be.


----------



## Cool G5 (Oct 23, 2009)

azzu said:


> ^ Spreading awareness, yes, that's right,
> but I see people often think that Linux is something only used by programmers, a console-oriented OS.
> We have to distribute Linux (such as Puppy OS) to spread awareness and show how user-friendly and graphically good Linux can be.



It started its journey as a geek's OS, but it has now comfortably reached the level where any normal computer user can work with ease. You're right, we should make them use Linux & then it won't take too long for them to decide whether to continue using it or not.


----------



## Krow (Oct 23, 2009)

Most people use PCs for merely browsing the net and playing occasional flash games. They work on Word files and that's the toughest work they do on PCs. Such people are the primary targets for Linux usage as of now, due to the lack of major game support. For gamers, Lin is but a secondary OS.


----------



## Ratnadeep (Oct 23, 2009)

do we need to take care of anything while copying reference articles from blogs here in this thread? In the one above I've already given the writer's name though...
(m new here..)


----------



## Rahim (Oct 23, 2009)

Welcome Ratnadeep 
It is better to mention the source of the Article. A little bit of formatting makes the article presentable and easy to read.
Hoping for more interesting articles from you.


----------



## Ratnadeep (Oct 30, 2009)

*Yes, Ubuntu can absolutely be the default Windows alternative*

And I don’t just mean for geeks. I mean a real, viable alternative to Windows for many users despite the apparent quality of both Windows 7 and Server 2008.


About a year and a half ago, ZDNet’s Adrian Kingsley-Hughes asked, “Is Ubuntu becoming the generic Linux distro?” and concluded that “the evolution of Ubuntu into the generic Linux distro isn’t a bad thing”. Fair enough, but Canonical’s Mark Shuttleworth took this idea a bit farther during a press conference call yesterday:

> “We’ve already done a lot of work in the developer ecosystem and we’re now increasingly interested in the non-developer consumer ecosystem, so that’s what all the OEM work is about,” Shuttleworth said, declaring that his focus was on “making sure that Ubuntu gets pre-installed and Ubuntu is available from Dell.com and others and making sure that Ubuntu is the default alternative to Windows.”

He didn’t mention Apple, which, to many consumers, is the _only_ alternative to Windows. For all its buzz in the tech world, Linux (or Ubuntu) is hardly a household word. Competing with Apple, though, which already has an impressive ecosystem of hardware and is the reigning king of usability, doesn’t make sense anyway, and this ad from Novell would never fly outside of the tech community:

*www.youtube.com/watch?v=FDgEdcFTquM&feature=player_embedded

So how can I be so confident that Shuttleworth’s vision of becoming the “default alternative”, and not just the default Linux for those geeky enough to try it, will become a reality? Because he very clearly tied it to a vision of platform. If Ubuntu can work well on every device users encounter (including non-Intel smartbooks and other new classes of portable devices that will be emerging in the next couple of years, displacing notebooks for many consumers), then name recognition will follow.

Obviously, the PC space is dominated by Windows. Yet no matter how spiffy Windows 7 is (and even Shuttleworth acknowledged that it was a good OS, worthy of competing with Ubuntu), Vista taught us all a lesson (consumers and techies alike). There are alternatives to the latest and greatest from Microsoft, even if that’s Windows XP. We don’t have to upgrade.



This “PC space” is changing, though. Windows Mobile stinks. Microsoft has no plans to develop Windows on ARM platforms. The cloud is here, not because of the economy, but because of the value businesses perceive in it. Ubuntu is actively developing in all of these spaces, and its latest, highly polished OS (available Thursday) shows off many of these technologies.


What forced Microsoft to crank out its best OS in years (some might say its best ever, and certainly the most stable prior to a service pack or two)? Competition. Competition from Apple, certainly, but also a growing awareness of open-source concepts in general. Many artists are releasing DRM-free music (and still making money). Books are widely and freely available. Content is everywhere, much of it for free. Something that you pay for, then, like Windows, had better be a heck of a lot better than its free alternatives. Competition is our friend, whether we’re consumers, pro users, or CIOs.

Microsoft may very well continue to dominate the desktop PC space. However, a quick look around at the variety of ways people access online content and cloud-based resources suggests that the importance of the desktop PC as we know it is diminishing. Ubuntu is ready to capitalize on that in ways that the average consumer won’t recognize until he or she finds him- or herself using Ubuntu on a MID, a netbook, a kiosk, a phone, a virtualized OS, or a smartbook. Can Apple, Microsoft, or any other Linux distributor say that? Competition might be _our_ friend, but a ubiquitous platform is the friend of developers, who can start creating the next generation of killer apps, easily ported to whatever screen we might be using.

Article by: Christopher Dawson
Original article here
-----------------------------------------
Posted again:
-----------------------------------------
A free Open-Source Magazine *www.opensourc3.org/


----------



## Cool G5 (Oct 31, 2009)

Ubuntu indeed has the potential to be the killer OS that replaces Windows on home PCs.
Given its ridiculous ease of use & huge community support, home users need not worry about running into any problems.


----------



## drsubhadip (Nov 12, 2009)

ya .. ubuntu 9.10 is great..


----------



## Rahim (Dec 12, 2009)

*Yes, I Guess We Linux Fools Are Pretty Weird*
Dec 12, 2009, 00:04 UTC
Carla Schroder
Managing Editor
LinuxToday.com

I'm probably going to die poor because I don't believe in exploiting people, and I value a lot of things more than money. Of course I like making money, the more the better. It's great to be making enough to give me a bit of freedom, and to let me fulfill a lot of my dreams. I've been making my own way since I was a teenager, and I'm proud of being self-sufficient economically. When I was growing up girls were still being told to get married and have a man take care of them, which even as a youngun struck me as a bad deal for both parties. 

I believe that all actions need to flow from an ethical foundation. Mine is pretty simple: do unto others as you would be done by. I think that covers all the bases. (I'm not claiming 100% compliance! But I try.) The antithesis of that is "I won't do the right thing unless someone makes me." That seems to be the driving philosophy behind much of modern life. 

So that is why I rabble-rouse and do the things I do. I'm one person, but I still have to do whatever I can. "How many millions or billions can I acquire by any means" is a completely boring, meaningless question to me. It's nothing. Sure, it takes a lot of brains and ruthlessness to be a big time robber baron. So what? Robber barons are dull and unimaginative, following the same script in every generation: lie, cheat, steal, exploit, abuse, do whatever it takes to amass a great fortune and power. Then retire and practice pretend philanthropy, and carefully do not notice how other people are cleaning up the messes you made, at a greater cost than all of your contributions to your country or world economy. It's so predictable, and so useless. 

The tech boom has created a large number of millionaires and spawned any number of new businesses. It has also resulted in an astonishing number of bad things: wage and hour abuses (perma-temps, no overtime pay, shipping jobs overseas), the world economy hobbled by the costs of spam and malware (tens of billions of dollars per year by conservative estimates), a market dominated by crapware and little technical advancement, and worst of all, attacks on our civil rights and liberties. Progress! 

*Race to the Bottom*

It's not just the big-time robber barons, but everyone all the way down the food chain. I just know that someone is going to comment "But businesses care only about maximizing profits, otherwise shareholders will sue them and bad stuff like that." Please. Don't bother, because it's garbage. It's excusing unethical behavior. Businesses are run by people with plenty of values, though sometimes the wrong ones. It's akin to saying that businesspeople must lie, cheat, and exploit because that is the only path to success. Hey, everyone does it.

Rip off the artists, musicians and creators because they're too stupid and weak to protect their own interests. Gouge the freelancers, abuse employees, rip off your own customers, buy yourself favorable legislation. I don't call success that comes at the expense of damaging other people success. That is failure. 


*The Race to Generosity*

I wish more pundits, bloggers, analysts, and tech reporters would comment more on the astonishing generosity that is the basis of Linux and FOSS. They go on about free-of-cost, and take cheap shots at the "religious zealot fanatics." Thanks a lot, you're welcome. We need to take a break from arguing with each other to thank and honor all the thousands of hardworking talented contributors who give away their work. Sure, they receive compensation in the form of code, developer tools, documentation, artwork, polished Linux distributions that wrap it all up in a friendly, useful package, Linux-friendly vendors who put together pre-installs in all form factors. Many contributors get paid. But that doesn't make the act of generosity any less meaningful. It is worth noting, honoring, and celebrating. And hopefully proliferating. So-- thank you!


----------



## Rahim (Mar 29, 2010)

*Desktop Linux: An Average User Success Story*
Mar 26th, 2010 by Gene.
I often see the sentiment expressed that desktop Linux is “too hard” for the average PC user. Yet the qualification for “too hard” is usually that it is too hard to install Linux or too hard to fix problems on Linux for the average user. These arguments seem to completely overlook the fact that an average PC user will never install his own operating system. Also overlooked is the fact that the average PC user will never diagnose and fix her own system. An average PC user is taking a “sick” PC to a local computer repair shop, or to Geek Squad at Best Buy or calling a geek friend to come fix it. An average PC user is buying a PC with an operating system preinstalled and not changing it for something else. Those average PC users would have zero problems using desktop Linux. I have proof.

I am no average computer user. I run a computer consulting and sales business and I steep my brain in computer related news, technical documents and computer trivia on a daily basis. I am the guy that people call on when they do have computer problems or are looking to buy a new PC customized just for them. The fact that I use desktop Linux every day to run my business and for personal use is not remarkable.

On the other hand my friend Chuck is an average computer user. Chuck needs to send and receive e-mail, use Flash based web sites, connect and copy music to his MP3 player, create and print documents, use Instant Messaging to talk to friends and play a few games to pass the time. Chuck does all this on Mandriva Linux and has done so ever since I built him a PC with Mandrake Linux, now known as Mandriva, preinstalled in 2004. When Chuck needs to upgrade Mandriva he calls me and pays me to do it, he does not do it himself. When Chuck has hardware problems he calls me and pays me to fix the PC, he does not do that himself. This is what average PC users do.

Chuck is my average user desktop Linux success story. He has been so for about six years now. Chuck does not want to go back to Microsoft operating systems as he sees no benefit to that. He does see some negatives to going back though. He would have to go back to buying and installing anti-malware software and keeping that up to date. He would have to go back to worrying about malware infections through e-mail or cracked web sites. Certainly if Chuck were using a Microsoft operating system I would do all I could to secure his PC for him. But I could not guarantee Chuck would never get malware “owning” his PC in that case. I am not there to watch over Chuck every time he opens an e-mail or browses web sites. With desktop Linux Chuck and I both know that he does not have to worry about those problems. Chuck is happy to use Linux as an average PC user.

I asked Chuck today, after finishing upgrading his PC to Mandriva 2010, if he considers himself an average PC user. He did not understand the context so I explained what I meant. Chuck agreed that he would never attempt to install his own operating system nor would he attempt to solve problems on his PC himself. He would call an expert for those every time. Just like he calls on an expert when he needs his home sprayed to prevent infestations of termites. Just like he calls on an expert when his SUV needs an oil change, new tires or some repair done. Chuck is very much an average PC user. Yet, Chuck uses desktop Linux on his home PC every day to do the things he needs to do. I asked Chuck if using Linux is hard. The answer? “No”.

Source


----------



## Krow (Mar 29, 2010)

Amazing write-up. Linux is pretty easy, actually. The only hard part is to stop expecting it to work the Windows way. Other than that, there is just the mental block.





> Many people say, better live with viruses than use the hi-fi complex linux, which only hackers use



Respect it as an OS and it is very easy. Try to find Windows in it everywhere, it will be very hard. Many things are actually better in Linux, as compared to Windows.


----------



## celldweller1591 (Mar 31, 2010)

I think Linux has been constantly misunderstood as a shell-only OS, even though a lot has changed over the years with the UI in Linux DEs. The thing is, people don't want to try anything else: they are so fed up with their Windows XP/Vista crashes that now they are afraid to try anything new. Moreover, due to the lack of DirectX on Linux, Windows has the advantage in gaming, and a huge gaming community remains out of reach of the Linux community. However, the Linux platform too has some really good games, like UrT (Urban Terror), W:ET (Wolfenstein: Enemy Territory), TC:E (True Combat: Elite), Nexuiz and TORCS, but there is still a long way to go.


----------



## Rahim (Apr 5, 2010)

*12 More of the Best Free Linux Books*
Part 1

Many computer users have an insatiable appetite to deepen their understanding of computer operating systems and computer software. Linux users are no different in that respect. At the same time as developing a huge range of open source software, the Linux community fortunately has also written a vast range of documentation in the form of books, guides, tutorials, HOWTOs, man pages, and other help to aid the learning process. Some of this documentation is intended specifically for a newcomer to Linux, or those that are seeking to move away from a proprietary world and embrace freedom.

There are literally thousands of Linux books which are available to purchase from any good (online) book shop. However, the focus of this article is to highlight champion Linux books which make an invaluable contribution to learning about Linux, and which are also available to download without charge.

We have tried to select a fairly diverse selection of books in this article so that there should be something of interest here for any type of user whatever their level of computing knowledge. This article should be read in conjunction with our previous article on free Linux books, entitled 20 of the Best Free Linux Books.

*1. GNU/Linux Advanced Administration*
Website	ftacademy.org
Author	Remo Suppi Boldrito, Josep Jorba Esteve
Format	PDF
Pages	545


_We start off this article with a book that is exhaustive in its treatment of system administration. This book examines many different areas involved in administering Linux systems, with each subject being accompanied by a tutorial to act as an aid in the learning process._
*Topics covered include:*
Introduction to Linux
Migration and coexistence with non-Linux systems
Basic tools for the administrator
The kernel
Local administration
Network administration
Server administration
Data administration
Security administration
Configuration, tuning and optimisation
Clustering


*2. Using Samba*
Website	oreilly.com
Author	Robert Eckstein, David Collier-Brown, Peter Kelly
Format	PDF, HTML
Pages	416


_Samba is a suite of tools for sharing resources such as printers and files across a network. Samba uses the Server Message Block (SMB) protocol, which is endorsed by Microsoft and IBM, to communicate low-level data between Windows clients and Unix servers on a TCP/IP network. 
_
*It is one of the most important pieces of software for bridging the open source and closed source worlds.*

The book focuses on two different areas: 
Installation including configuring Windows clients
Configuration and optimization, exploring areas such as disk shares, browsing and advanced disk shares, setting up users, printer and Windows Internet Naming Service setup with Samba, and troubleshooting tips


*3. Slackware Linux Basics*
Website	www.slackbasics.org
Author	Daniël de Kok
Format	PDF, HTML, Single page HTML
Pages	233


_Slackware Linux Basics is a book that aims to provide an introduction to Slackware Linux. It targets people who have little or no GNU/Linux experience. It aims to cover the Slackware Linux installation, basic Linux commands and the configuration of Slackware Linux.

Slackware is one of the earliest Linux distributions with development commencing in 1993.
_
*Covers:* 
Installation including partitioning and custom installation
Basic essential information such as the shell, files and directories, text processing, process management, editing and typesetting, and electronic mail
System administration covering topics such as user management, printer configuration, X11, package management, building a kernel, system initialization, and security
Network administration focusing on network configuration, IPsec, the Internet super server, Apache, and BIND


*4. Advanced Bash Scripting Guide*
Website	www.tldp.org
Author	Mendel Cooper
Format	PDF, HTML
Pages	945


_Advanced Bash-Scripting Guide is an in-depth exploration of the art of scripting. Almost the complete set of commands, utilities, and tools is available for invocation by a shell script.
_
*The book explains: *

Basics such as special characters, quoting, exit and exit status
Beyond the Basics, including loops and branches, command substitution, arithmetic expansion, and recess time
Commands - Internal commands and builtins; External filters, programs and commands; System and Administrative Commands
Advanced topics: Regular Expressions, Here Documents, I/O Redirection, Subshells, Restricted Shells, Process Substitution, Functions, Aliases, List Constructs, Arrays, Indirect References, /dev and /proc, Of Zeros and Nulls, Debugging, Options, Gotchas, Scripting with Style

Part 2 
Part 3


----------



## celldweller1591 (Apr 5, 2010)

Thanks for the links. But the Slackware basics guide link is dead, I guess!


----------



## Rahim (Apr 5, 2010)

^No, it's not.
PDF


----------



## Rahim (Apr 8, 2010)

How it works: Linux audio explained
Tux Radar
There's a problem with the state of Linux audio, and it's not that it doesn't always work. The issue is that it's overcomplicated. This soon becomes evident if you sit down with a piece of paper and try to draw the relationships between the technologies involved with taking audio from a music file to your speakers: the diagram soon turns into a plate of knotted spaghetti. This is a failure because there's nothing intrinsically more complicated about audio than any other technology. It enters your Linux box at one point and leaves at another. 

If you've had enough of this mess and want to understand just how all the bits fit together, we're here to help - read on to learn exactly how Linux audio works!

If we were drawing the OSI model used to describe the networking framework that connects your machine to every other machine on the network, we'd find clear strata, each with its own domain of processes and functionality. There's very little overlap in layers, and you certainly don't find end-user processes in layer seven messing with the electrical impulses of the raw bitstreams in layer one.

Yet this is exactly what can happen with the Linux audio framework. There isn't even a clearly defined bottom level, with several audio technologies messing around with the kernel and your hardware independently. Linux's audio architecture is more like the layers of the Earth's crust than the network model, with lower levels occasionally erupting on to the surface, causing confusion and distress, and upper layers moving to displace the underlying technology that was originally hidden.

The Open Sound System (OSS), for example, used to be found at the kernel level, talking to your hardware directly, but it's now a compatibility layer that sits on top of ALSA. ALSA itself has a kernel-level stack and a higher API for programmers to use, mixing drivers and hardware properties with the ability to play back surround sound or an MP3 codec. When most distributions stick PulseAudio and GStreamer on top, you end up with a melting pot of instability with as much potential for destruction as the San Andreas fault.

*ALSA*

*INPUTS: PulseAudio, Jack, GStreamer, Xine, SDL, ESD
OUTPUTS: Hardware, OSS
*
As Maria von Trapp said, "Let's start at the very beginning." When it comes to modern Linux audio, the beginning is the Advanced Linux Sound Architecture, or ALSA. This connects to the Linux kernel and provides audio functionality to the rest of the system. But it's also far more ambitious than a normal kernel driver; it can mix, provide compatibility with other layers, create an API for programmers and work at such a low and stable latency that it can compete with the ASIO and CoreAudio equivalents on the Windows and OS X platforms. 

ALSA was designed to replace OSS. However, OSS isn't really dead, thanks to a compatibility layer in ALSA designed to enable older, OSS-only applications to run. It's easiest to think of ALSA as the device driver layer of the Linux sound system. Your audio hardware needs a corresponding kernel module, prefixed with snd_, and this needs to be loaded and running for anything to happen. This is why you need an ALSA kernel driver for any sound to be heard on your system, and why your laptop was mute for so long before someone thought of creating a driver for it. Fortunately, most distros will configure your devices and modules automatically. 

ALSA is responsible for translating your audio hardware's capabilities into a software API that the rest of your system uses to manipulate sound. It was designed to tackle many of the shortcomings of OSS (and most other sound drivers at the time), the most notable of which was that only one application could access the hardware at a time. This is why a software component in ALSA needs to manage audio requests and understand your hardware's capabilities.

If you want to play a game while listening to music from Amarok, for example, ALSA needs to be able to take both of these audio streams and mix them together in software, or use a hardware mixer on your soundcard to the same effect. ALSA can also manage up to eight audio devices and sometimes access the MIDI functionality on hardware, although this depends on the specifications of your hardware's audio driver and is becoming less important as computers get more powerful.
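The software mixing just described is easy to picture in miniature. The sketch below is purely illustrative, not ALSA code: it sums corresponding signed 16-bit samples from two streams and clamps the result to the legal range, which is conceptually the job ALSA's dmix plugin (or a hardware mixer) performs when two applications play sound at once. All names here are made up for the example.

```python
# Illustrative sketch of software mixing, the job ALSA's dmix plugin does:
# sum corresponding 16-bit PCM samples from two streams, clamping on overflow.
# (Real ALSA does this in C at the driver layer; these names are invented.)

def mix(stream_a, stream_b):
    """Mix two lists of signed 16-bit samples, clamping to the valid range."""
    lo, hi = -32768, 32767
    return [max(lo, min(hi, a + b)) for a, b in zip(stream_a, stream_b)]

game = [1000, 20000, -30000]   # samples from the game
music = [2000, 20000, -10000]  # samples from the music player
print(mix(game, music))        # [3000, 32767, -32768] - loud samples clamp
```

The clamping step is why two already-loud streams distort rather than wrap around when mixed in software.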

Where ALSA does differ from the typical kernel module/device driver is in the way it's partly user-configurable. This is where the complexity in Linux audio starts to appear, because you can alter almost anything about your ALSA configuration by creating your own config file - from how streams of audio are mixed together and which outputs they leave your system from, to the sample rate, bit-depth and real-time effects.
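As a concrete (and hedged) example of that configurability, a user-level ALSA configuration might look like the following minimal sketch of a ~/.asoundrc that routes the default PCM device through the dmix software mixer on the first card. Card numbers and the exact plugin layout vary between systems, so treat this as a starting point rather than a recipe.

```
# Hypothetical ~/.asoundrc: share card 0 between applications via dmix.
pcm.!default {
    type plug          # "plug" converts sample rates/formats as needed
    slave.pcm "dmix"   # hand the converted stream to the dmix software mixer
}
ctl.!default {
    type hw            # mixer controls still talk to the hardware directly
    card 0
}
```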

ALSA's relative transparency, efficiency and flexibility have helped to make it the standard for Linux audio, and the layer that almost every other audio framework has to go through in order to communicate with the audio hardware.


*PulseAudio*

*INPUTS: GStreamer, Xine, ALSA
OUTPUTS: ALSA, Jack, ESD, OSS
*
If you're thinking that things are going to get easier with ALSA safely behind us, you're sadly mistaken. ALSA covers most of the nuts and bolts of getting audio into and out of your machine, but you must navigate another layer of complexity. This is the domain of PulseAudio - an attempt to bridge the gap between hardware and software capabilities, local and remote machines, and the contents of audio streams. It does for networked audio what ALSA does for multiple soundcards, and has become something of a standard across many Linux distros because of its flexibility.

As with ALSA, this flexibility brings complexity, but the problem is compounded by PulseAudio because it's more user-facing. This means normal users are more likely to get tangled in its web. Most distros keep its configuration at arm's length; with the latest release of Ubuntu, for example, you might not even notice that PulseAudio is installed. If you click on the mixer applet to adjust your soundcard's audio level, you get the ALSA panel, but what you're really seeing is ALSA going to PulseAudio, then back to ALSA - a virtual device.

At first glance, PulseAudio doesn't appear to add anything new to Linux audio, which is why it faces so much hostility. It doesn't simplify what we have already or make audio more robust, but it does add several important features. It's also the catch-all layer for Linux audio applications, regardless of their individual capabilities or the specification of your hardware.

If all applications used PulseAudio, things would be simple. Developers wouldn't need to worry about the complexities of other systems, because PulseAudio brings cross-platform compatibility. But this is one of the main reasons why there are so many other audio solutions. Unlike ALSA, PulseAudio can run on multiple operating systems, including other POSIX platforms and Microsoft Windows. This means that if you build an application to use PulseAudio rather than ALSA, porting that application to a different platform should be easy.

But there's a symbiotic relationship between ALSA and PulseAudio because, on Linux systems, the latter needs the former to survive. PulseAudio configures itself as a virtual device connected to ALSA, like any other piece of hardware. This makes PulseAudio more like Jack, because it sits between ALSA and the desktop, piping data back and forth transparently. It also has its own terminology. Sinks, for instance, are the final destination. These could be another machine on the network or the audio outputs on your soundcard courtesy of the virtual ALSA. The parts of PulseAudio that fill these sinks are called 'sources' - typically audio-generating applications on your system, audio inputs from your soundcard, or a network audio stream being sent from another PulseAudio machine.

Unlike Jack, applications aren't directly responsible for adding and removing sources, and you get a finer degree of control over each stream. Using the PulseAudio mixer, for example, you can adjust the relative volume of every application passing through PulseAudio, regardless of whether that application features its own slider or not. This is a great way of curtailing noisy websites.


*GStreamer*

*INPUTS: Phonon
OUTPUTS: ALSA, PulseAudio, Jack, ESD*

With GStreamer, Linux audio starts to look even more confusing. This is because, like PulseAudio, GStreamer doesn't seem to add anything new to the mix. It's another multimedia framework and gained a reasonable following of developers in the years before PulseAudio, especially on the Gnome desktop. It's one of the few ways to install and use proprietary codecs easily on the Linux desktop. It's also the audio framework of choice for GTK developers, and you can even find a version handling audio on the Palm Pre. 

GStreamer slots into the audio layers above PulseAudio (which it uses for sound output on most distributions), but below the application level. GStreamer is unique because it's not designed solely for audio - it supports several formats of streaming media, including video, through the use of plugins.

MP3 playback, for example, is normally added to your system through an additional codec download that attaches itself as a GStreamer plugin. The commercial Fluendo MP3 decoder, one of the few officially licensed codecs available for Linux, is supplied as a GStreamer plugin, as are its other proprietary codecs, including MPEG-2 and H.264.


*Jack*

*INPUTS: PulseAudio, GStreamer, ALSA
OUTPUTS: OSS, FFADO, ALSA*

Despite the advantages of open configurations such as PulseAudio, they all pipe audio between applications with the assumption that it will proceed directly to the outputs. Jack is the middle layer - the audio equivalent of remote procedure calls in programming, enabling audio applications to be built from a variety of components.

The best example is a virtual recording studio, where one application is responsible for grabbing the audio data and another for processing the audio with effects, before finally sending the resulting stream through a mastering processor to be readied for release. A real recording studio might use a web of cables, sometimes known as jacks, to build these connections. Jack does the same in software.

Jack is an acronym for 'Jack Audio Connection Kit'. It's built to be low-latency, which means there's no undue processing performed on the audio that might impede its progress. But for Jack to be useful, an audio application has to be specifically designed to handle Jack connections. As a result, it's not a simple replacement for the likes of ALSA and PulseAudio, and needs to be run on top of another system that will generate the sound and provide the physical inputs.

With most Jack-compatible applications, you're free to route the audio and inputs in whichever way you please. You could take the output from VLC, for example, and pipe it directly into Audacity to record the stream as it plays back. 

Or you could send it through JackRack, an application that enables you to build a tower of real-time effects, including pinging delays, cavernous reverb and voluptuous vocoding. 

This versatility is fantastic for digital audio workstations. Ardour uses Jack for internal and external connections, for instance, and the Jamin mastering processor can only be used as part of a chain of Jack processes. It's the equivalent of having full control over how your studio is wired. Its implementation has been so successful on the Linux desktop that you can find Jack being put to similar use on OS X.

*FFADO*

*INPUTS: Jack
OUTPUTS: Audio hardware*

In the world of professional and semi-professional audio, many audio interfaces connect to their host machine using a FireWire port. This approach has many advantages. FireWire is fast and devices can be bus powered. Many laptop and desktop machines have FireWire ports without any further modification, and the standard is stable and mostly mature. You can also take FireWire devices on the road for remote recording with a laptop and plug them back into your desktop machine when you get back to the studio.

But unlike USB, where there's a standard for handling audio without additional drivers, FireWire audio interfaces need their own drivers. The complexities of the FireWire protocol mean these can't easily create an ALSA interface, so they need their own layer. Originally, this work fell to a project called FreeBOB. This took advantage of the fact that many FireWire audio devices were based on the same hardware. 

FFADO is the successor to FreeBOB, and opens the driver platform to many other types of FireWire audio interface. Version 2 was released at the end of 2009, and includes support for many units from the likes of Alesis, Apogee, ART, CME, Echo, Edirol, Focusrite, M-Audio, Mackie, Phonic and Terratec. Which devices do and don't work is rather random, so you need to check before investing in one, but many of these manufacturers have helped driver development by providing devices for the developers to use and test.

Another neat feature in FFADO is that some of the DSP mixing features of the hardware have been integrated into the driver, complete with a graphical mixer for controlling the balance of the various inputs and outputs. This is different from the ALSA mixer, because it means audio streams can be controlled on the hardware with zero latency, which is exactly what you need if you're recording a live performance.

Unlike other audio layers, FFADO will only shuffle audio between Jack and your audio hardware. There's no back door to PulseAudio or GStreamer, unless you run those against Jack. This means you can't use FFADO as a general audio layer for music playback or movies unless you're prepared to mess around with installation and Jack. But it also means that the driver isn't overwhelmed by support for various different protocols, especially because most serious audio applications include Jack support by default. This makes it one of the best choices for a studio environment. 

*Xine*

*INPUTS: Phonon
OUTPUTS: PulseAudio, ALSA, ESD*

We're starting to get into the niche geology of Linux audio. Xine is a little like the chalk downs; it's what's left after many other audio layers have been washed away. Most users will recognise the name from the very capable DVD movie and media player that most distributions still bundle, despite its age, and this is the key to Xine's longevity. 

When Xine was created, the developers split it into a back-end library to handle the media, and a front-end application for user interaction. It's the library that's persisted, thanks to its ability to play numerous containers, including AVI, Matroska and Ogg, and dozens of the formats they contain, such as AAC, Flac, MP3, Vorbis and WMA. It does this by harnessing the powers of many other libraries. As a result, Xine can act as a catch-all framework for developers who want to offer the best range of file compatibility without worrying about the legality of proprietary codecs and patents.

Xine can talk to ALSA and PulseAudio for its output, and there are still many applications that can talk to Xine directly. The most popular are the Gxine front-end and Totem, but Xine is also the default back-end for KDE's Phonon, so you can find it locked to everything from Amarok to Kaffeine.


*Phonon*

*INPUTS: Qt and KDE applications
OUTPUTS: GStreamer, Xine*

Phonon was designed to make life easier for developers and users by removing some of the system's increasing complexity. It started life as another level of audio abstraction for KDE 4 applications, but it was considered such a good idea that Qt developers made it their own, pulling it directly into the Qt framework that KDE itself is based on.

This had great advantages for developers of cross-platform applications. It made it possible to write a music player on Linux with Qt and simply recompile it for OS X and Windows without worrying about how the music would be played back, the capabilities of the sound hardware being used, or how the destination operating system would handle audio. This was all done automatically by Qt and Phonon, passing the audio to the CoreAudio API on OS X, for example, or DirectSound on Windows. On the Linux platform (and unlike the original KDE version of Phonon), Qt's Phonon passes the audio to GStreamer, mostly for its transparent codec support. 

Phonon support is being quietly dropped from the Qt framework. There have been many criticisms of the system, the most common being that it's too simplistic and offers nothing new, although it's likely that KDE will hold on to the framework for the duration of the KDE 4 lifecycle. 


*The rest of the bunch*

There are many other audio technologies, including ESD, SDL and PortAudio. ESD is the Enlightened Sound Daemon, and for a long time it was the default sound server for the Gnome desktop. Eventually, Gnome was ported to use libcanberra (which itself talks to ALSA, GStreamer, OSS and PulseAudio) and ESD was dropped as a requirement in April 2009. Then there's aRts, the KDE equivalent of ESD, although it wasn't as widely supported and seemed to cause more problems than it solved. Most people have now moved to KDE 4, so it's no longer an issue.

SDL, on the other hand, is still thriving as the audio output component in the SDL library, which is used to create hundreds of cross-platform games. It supports plenty of features, and is both mature and stable. 

PortAudio is another cross-platform audio library that adds SGI, Unix and Beos to the mix of possible destinations. The most notable application to use PortAudio is the Audacity audio editor, which may explain its sometimes unpredictable sound output and the quality of its Jack support. 

And then there's OSS, the Open Sound System. It hasn't been a core Linux audio technology since version 2.4 of the kernel, but there's just no shaking it. This is partly because so many older applications are dependent on it and, unlike ALSA, it works on systems other than Linux. There's even a FreeBSD version. It was a good system for 1992, but ALSA is nearly always recommended as a replacement. 

OSS defined how audio would work on Linux, and in particular the way audio devices are exposed as device files, such as /dev/dsp, and configured through ioctl calls. ALSA includes an OSS compatibility layer so that older applications written against OSS keep working alongside the current ALSA standard. 
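The /dev/dsp-plus-ioctl model described above is easy to show concretely. Here is a minimal sketch, in Python, of how an OSS-era application would configure and use the device; the request numbers are the standard ones from `<sys/soundcard.h>`, and the write only succeeds if the system (or ALSA's OSS emulation) actually provides /dev/dsp, so the sketch falls back gracefully when the device is absent:

```python
import fcntl
import math
import os
import struct

# ioctl request numbers and format constant from <sys/soundcard.h> (OSS API)
SNDCTL_DSP_SETFMT   = 0xC0045005  # set sample format
SNDCTL_DSP_CHANNELS = 0xC0045006  # set channel count
SNDCTL_DSP_SPEED    = 0xC0045002  # set sample rate
AFMT_U8 = 0x00000008              # unsigned 8-bit samples

RATE = 8000  # samples per second

# One second of a 440 Hz sine wave as unsigned 8-bit mono PCM.
samples = bytes(
    int(127.5 + 127.5 * math.sin(2 * math.pi * 440 * i / RATE))
    for i in range(RATE)
)

def play(pcm: bytes) -> bool:
    """Write PCM data to /dev/dsp the OSS way; False if it can't."""
    try:
        fd = os.open("/dev/dsp", os.O_WRONLY)
    except OSError:
        return False  # no OSS device (and no OSS emulation) present
    try:
        # Configure format, channels and rate through ioctl calls.
        for req, val in ((SNDCTL_DSP_SETFMT, AFMT_U8),
                         (SNDCTL_DSP_CHANNELS, 1),
                         (SNDCTL_DSP_SPEED, RATE)):
            fcntl.ioctl(fd, req, struct.pack("i", val))
        os.write(fd, pcm)
    except OSError:
        return False
    finally:
        os.close(fd)
    return True

print(len(samples), "samples generated")
```

Every OSS application did some variant of this dance by hand, which is exactly why ALSA's compatibility layer only has to intercept the open of /dev/dsp and a handful of ioctls to keep old programs running.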

The OSS project has experimented with open source and proprietary development, and is still being actively developed as a commercial endeavour by 4 Front Technologies. Build 2002 of OSS 4.2 was released in November 2009.


----------



## Rahim (Apr 10, 2010)

*Linux's worst enemies? Linux fans*
Steven J. Vaughan-Nichols
*blogs.computerworld.com/15906/linuxs_worst_enemies_linux_fans

Do you know why Unix failed to take off as a mainstream operating system? It wasn't because it was too hard to use. Mac OS X, the universally acclaimed 'easy' operating system, is built on top of BSD Unix. It's certainly not because Windows is better. It wasn't and it isn't. No, I put most of the blame for Unix's failure on its internal wars: Unix International vs. Open Software Foundation; BSD vs. System V, etc., etc. For the most part, Linux has avoided this.... for the most part. 

That's not to say that Linux doesn't have its share of internal battles that don't do anyone any good. Free software founder Richard M. Stallman's insistence that Linux should be called GNU/Linux puzzles more people than it brings to Linux, or GNU/Linux if you insist. In the last few days, though, another Linux family fight has erupted. 

This time around, it's open-source developer and anti-patent political lobbyist Florian Mueller accusing IBM of breaking its promise to the FOSS (free and open-source software) community not to use patents against it. Mueller is ticked off that IBM has threatened TurboHercules, an open-source z/OS emulator company, over its possible misuse of IBM patents, including two that are covered by IBM's pledge not to sue open-source companies or groups using those patents. 

I have several problems with this. First, as Pamela Jones of Groklaw points out, TurboHercules started the legal fight with IBM and the open-source software license it uses isn't compatible with the GPL--the license that covers Linux. Second, this is really just a standard-issue business fight that involves patents. It does not, as Mueller would have it, show that "After years of pretending to be a friend of Free and Open Source Software (FOSS), IBM now shows its true colors. IBM breaks the number one taboo of the FOSS community and shamelessly uses its patents against a well-respected FOSS project, the Hercules mainframe emulator." 

Come on! IBM has been one of Linux's biggest supporters for over a decade. Why do they support Linux and open source? Ah, would that be because they've invested billions in it and made even more billions from it? I think so. Does a minor legal clash with an obscure company mean IBM is now a traitor to the cause and, more importantly, that they've abandoned a wildly successful business plan? I can't see it. 

To me, this is all of a piece with Debian's moronic fight with Mozilla, which ended up with Debian renaming Firefox to IceWeasel in its Linux distribution; Red Hat's 'betrayal' of Linux by abandoning the consumer side of Linux for RHEL (Red Hat Enterprise Linux); and the never-ending Debian vs. Ubuntu fights. 

There's this love of ideological purity that burns in the heart of too many open-source fans that makes them require companies and groups to pass litmus tests before they can approve of them. No matter, for example, that Novell has carried the burden of fighting off SCO's anti-Linux claims, Novell partnered with Microsoft, therefore, Novell must be boycotted! 

I'm sorry, people. We don't live in a black and white world, nor, for that matter, one filled with shades of gray. It's one filled with a multitude of colors, and business and ethical choices aren't binary. 

I'm not the only one who sees it this way. I recently talked to analysts and executives about the IBM/TurboHercules patent mess and they agreed with me that these fights only end up hurting open source and Linux. 

You know, we've been here before. The one real winner when the Unix companies slugged it out was Microsoft. Why would anyone think that turning Linux into dueling fiefdoms, arguing over who last betrayed open source, is going to help anyone except Microsoft and other purely proprietary companies? 

Finally, don't you think it more than a little interesting that the other 'open-source' companies, which had attacked IBM on similar grounds in Europe, counted Microsoft among their stock-owners? Coincidence? I think not. Might I suggest those attacking IBM take a long, hard look at what they're really doing and which side of the open-source debates they're really on.

Last, but not least, might I suggest that anyone who thinks that extremism for one side or another in the various open-source debates is a virtue contemplate this classic John Cleese video.


----------



## Rahim (Apr 12, 2010)

*50 Places Linux is Running That You Might Not Expect*


----------



## Cool G5 (Apr 12, 2010)

a_rahim said:


> *50 Places Linux is Running That You Might Not Expect*



Very Nice. Posting it to my Facebook Linux Page.


----------



## Rahim (Apr 12, 2010)

^It had too many pictures, so I didn't post the article here, for Digit bandwidth reasons  and not to forget my novice level when it comes to formatting


----------



## Rahim (Apr 16, 2010)

*The trouble with Linux: it's just not Sexy*
*iPad painfully illustrates this massive divide*

Graham Morrison

*cdn.mos.techradar.com//classifications/computing/software/operating-systems/images/linux-penguin-218-85.jpg

If Linux is to gain some market share this year, then it will need to pull off some magic above and beyond the competition.

There are three reasons why Linux isn't succeeding on the desktop, and none of them are to do with missing functionality, using the command line or the politics of free software. 

The first is that there's too much momentum behind Microsoft Windows and too many preconceptions about the alternatives. Linux is perceived as having too much of a learning curve for relatively few advantages and an unknown heritage.

Migrating big business to a Linux desktop is akin to turning a TI-class supertanker around mid-Atlantic. The opposite direction may look brighter, but it's easier to chug onwards into the storm. You only have to look at the number of people clinging to Microsoft's venerable Office suite to see this point clearly. 

For the vast majority, most of its functional fecundity is wasted. Many people could arguably be just as (un)productive with Notepad, Calculator and Paint, let alone using an open-source alternative such as OpenOffice.org. Its use seems to have more to do with keeping face when attaching files to an email than a genuine operational advantage. 

Most people will only consider an alternative when there are bigger issues, larger icebergs or uncertain territories on the horizon. Away from the desktop, Linux is faring better. 

Smaller, more agile businesses quickly quantify the cost advantages to produce cheaper and more competitive products. This is why embedded Linux has been such a success on everything from Chinese mobile phones to almost every NAS box around. This may mean that success on the desktop is only a matter of time, or it may mean that the Linux desktop is too far removed from the Linux kernel. 

The second reason for failure is that Linux lacks centralised marketing. This is because there's no real Linux Central. It's just a trademark owned by its creator, Linus, and a term normally reserved for just the kernel of the operating system – hardly the easiest product to sell. 

There are plenty of people advertising their own Linux endeavours, all keen to push their own angle on its advantages. This divided effort compounds the problem. With the likes of Red Hat, Novell and Canonical all fighting for their own slice of the pie, there's no one left to push Linux as a distinctive brand. That's something Apple and Microsoft do extremely well, and something Linux leaves to Tux the penguin. 

Many would argue that standards are the answer to this conundrum, and that would mean a single base distribution. This could then be the only distribution called 'Linux' - everything else would become 'Linux based'. 

Mozilla manages this well with the use of the Firefox brand. It's freely distributable and modifiable, but it can only be called 'Firefox' in its untouched incarnation. Change anything and you need to change the name. 

For example, Debian calls its Firefox build 'IceWeasel' because it needs to reserve the right to make modifications, thus breaking Mozilla's standards. This may cause confusion if you look for Firefox on your Debian desktop, but it also sets a precedent for the kind of experience that Mozilla expects its users to have, and Debian hackers still have the code to mess around with if they need to. It's a compromise, but it might work in a world with hundreds of Linux distros. 

The third reason is easy to see but harder to solve. It's the reason why you're not using Linux now. The solution would make all other problems redundant. The reason why you're not using Linux now is because there isn't a good enough reason to. 

Sober advantages such as better security, improved performance, rock solid stability and low cost aren't going to win converts. These advantages aren't exciting enough; they're the equivalent of a spreadsheet of mortgage repayments. What we really want is a significant upgrade, something you'd normally pay for. 

Perhaps we should focus on value. Recent analysis of the kernel by Jon Corbet showed that 75 per cent of the 2.8 million lines of code in recent contributions were written by paid-for developers. That puts Linux freedom in context. 

But the biggest challenge is sexiness. There's very little of it in Linux unless you're an antisocial geek, and products like Apple's iPad illustrate this massive divide painfully. As Jim Zemlin, Executive Director of the Linux Foundation, puts it, "Linux can compete with the iPad on price, but where's the magic?" 

Linux has the programmers, the managers, the community, the innovation, the time and the skill. But to succeed in 2010 and the coming decade, what it really needs is a magician or two.

----------------------------------------------------------

Do you agree? Is Linux sexy enough for mainstream use, or does it still need more work? Perhaps a side issue is whether Linux needs to be sexy at all. Please post your views below for inclusion in our next podcast - don't forget to add a name other than Anonymous Penguin, and don't forget to provide some sort of explanation as to how you came about your answer. Pedants who happily answer that Linux is just a kernel might want to question whether they are indeed the "antisocial geeks" that Graham describes.


----------



## Cool G5 (Apr 16, 2010)

a_rahim said:


> ^No its not
> PDF



Link is dead.


----------



## Rahim (Apr 17, 2010)

*rlworkman.net/howtos/slackbasics.pdf


----------



## Rahim (Apr 24, 2010)

I hate computers: confessions of a sysadmin
Scott Merrill

*www.crunchgear.com/wp-content/uploads/2010/04/office-space-copier.jpg​
I often wonder if plumbers reach a point in their career, after cleaning clogged drain after clogged drain, that they begin to hate plumbing. They hate pipes. They hate plumber’s putty. They hate all the tricks they’ve learned over the years, and they hate the need to have to learn tricks. It’s plumbing, for goodness sake: pipes fitting together and substances flowing through them. How complicated can it be?

I hate computers. No, really, I hate them. I love the communications they facilitate, I love the conveniences they provide to my life, and I love the escapism they sometimes afford; but I actually hate the computers themselves. Computers are fragile, unintuitive things — a hodge-podge of brittle hardware and opaque, restrictive software. Why?

I provide computer support all day every day to “users”. I am not one of these snotty IT guys who looks with scorn and derision on people who don’t know what an IRQ is. I recognize that users don’t care about computers. The computer is a means to an end for them: a presentation to solicit more grant money, or a program to investigate a new computational method, or just simply sending a nice note to their family. They don’t want to “use the computer” so much as do something that the computer itself facilitates. I’m the same with cars: I don’t want to know how an internal combustion engine works or know how to change my oil or in any other way become an automotive expert — I just want to drive to the grocery store!

But the damned computers get in the way of all the things the computers help us do. There’s this whole artificial paradigm about administrator accounts, and security, and permissions, and all other manner of things that people don’t care about. A host of ancillary software is required just to keep your computer running, but that software introduces more complexity and more points of failure, and ends up causing as much grief as it’s intended to resolve.

*Computer error messages are worthless*

*www.crunchgear.com/wp-content/uploads/2010/04/windows_9x_bsod.png
What sparked this current round of ire was a user’s inability to check for Windows Updates. Windows Update, the program, starts up just fine. But clicking on “Check for Updates” results in an unhelpful message that Windows Update could not check for updates. A meaningless error code is presented to the user, as if he’ll know what to do with that. There’s even a helpful link that says “Learn more about common Windows Update problems”. The list of suggested problems includes a variety of other meaningless error codes, but not the one that this user received. The Windows Event Log, which I know how to access but the user does not, contains nothing instructive. For a normal user, this would be a dead-end with one of two options: ignore the problem and hope nothing bad happens in consequence; or try to repair the operating system using some half-baked recovery method provided by the computer manufacturer or the Windows install disk (assuming they have one).

Another user I support has had nothing but trouble with Adobe Acrobat. Trying to open PDFs from within his browser fails spectacularly. Either the links simply never open, or they open a completely blank page, or Internet Explorer renders an error page suggesting that there’s a network problem! The user can right-click and “Save As” the links to get the PDFs, and I’m thankful that this user understands how to right-click at all, such that he has a viable workaround to the problem until I can find the root cause. But many, many users do not know what the right mouse button is for.

*www.crunchgear.com/wp-content/uploads/2010/04/aw_snap.gif​

I pick on Microsoft a lot, because I think they do a lot of things fundamentally wrong. But plenty of other companies are just as guilty of bad design, bad implementation, and bad communication with their users. Google’s Chrome browser is cute when it says “Aw snap!”, but that leans the other way in terms of uselessness: it doesn’t give the user any better idea of what might be wrong, and users are left to feel helpless, powerless, and stupid.

Even when things go right, users are left to feel powerless and stupid. Installing almost any program on a Windows based system involves an inordinate number of clicks, all of them just saying “Okay” “Okay” “Okay”. No one reads the click-through EULAs, no one changes the default installation location, and no one selects specific installation options. They just keep clicking “Okay” because that’s what they’ve been trained to do. And then they end up with four extra toolbars in their browser and a bunch of “helper” programs that don’t actually help the user in any way and which the user doesn’t actually want. And they don’t know how to get rid of them.

*Computers don’t make sense*

*www.crunchgear.com/wp-content/uploads/2010/04/uninstall-fail.png

There’s an awful lot to be said about the simplicity and usefulness of installing software on Mac or Linux. In the former case, you simply drag a file to your Applications folder, and you’re done. In the latter, the package manager does all the heavy lifting without any user intervention: if a Linux program requires additional libraries, the package manager finds them and installs them automatically. In both instances, I can install new applications in a fraction of the time it takes to install something on Windows.

Removing software is another cause of much consternation for users. Again, Mac and Linux make it pretty easy most of the time. Heck, on any Linux system I can enumerate all of the packages installed in seconds with a single command from the package manager (or a click of the appropriate button in a GUI for the package manager). But on any Windows machine — even a brand new one with top-of-the-line hardware — it takes long minutes to enumerate and display the installed software; and to make things worse, the “Add or Remove Programs” control panel item doesn’t actually show you all the installed applications. And removing any particular piece of software is not always a clean operation: cruft is left behind in the filesystem and the registry (don’t even get me started on my loathing of the Windows registry!).

Speaking of filesystems, why is it that a SQL database can find a specific record among millions in a fraction of a second, but finding a specific file on your hard drive takes minutes? I’m sure there’s some very real reason why filesystems are so unfriendly to users, but I’ll be darned if I can explain it to any of my users.
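The speed gap the paragraph above grumbles about comes down to indexing: a database keeps a sorted index over the column it searches, so a lookup is a quick tree descent, while a naive filesystem search walks every directory entry. A small illustration using Python's built-in sqlite3 module (the table and file names are invented for the example):

```python
import sqlite3

# An in-memory database standing in for a big catalog of files.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (path TEXT, size INTEGER)")
db.executemany(
    "INSERT INTO files VALUES (?, ?)",
    ((f"/home/user/doc{i}.txt", i) for i in range(100_000)),
)

# The index is what makes the lookup fast: a B-tree over `path`
# lets SQLite jump straight to the row instead of scanning them all.
db.execute("CREATE INDEX idx_path ON files (path)")

row = db.execute(
    "SELECT size FROM files WHERE path = ?", ("/home/user/doc4242.txt",)
).fetchone()
print(row)  # -> (4242,)
```

Desktop search tools (Spotlight, Tracker, Windows Search) close the gap the same way, by maintaining exactly this kind of index over the filesystem behind the user's back.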

*Computers are too complex to use*

*www.crunchgear.com/wp-content/uploads/2010/04/arrrrgh.png

Average folk might take a “computer class” which instructs them on a few specific tasks — usually application specific (How to use Microsoft Word), as opposed to task specific (How to use a word processing program) — but when experiences diverge from those presented in the class, the user is not well equipped to deal with the situation. How does one interpret this new error message? How does one deal with a recurring application fault?

The pace of change in the computer industry works against users. The whole color-coded ports initiative was a great step toward end user convenience, but that’s not enough when users now need to know the difference between VGA, DVI, and DisplayPort. A lot of the computers that are coming into my office have all three video ports, and the monitors support multiple inputs, leaving users to wonder which one(s) they should use when setting up their PC. I’ve had multiple calls from really smart graduate students who couldn’t figure out how to connect the computer to the monitor. Sure, it’s an easy joke to make fun of these situations, but it’s a damning indictment of the computer industry as a whole, if you ask me.

Like Nicholas, I’ve never had a malware infection on any computer I own; but I’ve helped lots of people — users I support professionally, and family and friends — recover from malware infections. Can you imagine your mother-in-law being able to find and follow these instructions for removing malware? Or worse, knowing about and responding to a botched antivirus update from your AV software?

*Computers fail spectacularly, taking all our data with them*

*www.crunchgear.com/wp-content/uploads/2010/04/Sad_mac.png
Hardware and software companies know that we use our computers to store information that is important to us. And yet backing up data to keep it safe is still a gigantic pain in the ass. Lots of “enterprise” backup software exists to try to protect us from computer failures (hardware, software, and user errors), and a host of “consumer” solutions vie for our consumer dollars; but frankly they all suck. Why do we need third-party software to protect the investment we’ve made in our computers? Users don’t buy backup software because they don’t expect their computers to fail.
It’s so easy to amass a huge amount of data today — digital photo archives, MP3 collections, and video — that it’s a real pain to reliably back up. Not only is it a pain, it’s expensive. You shell out a couple hundred bucks for a fancy new camera, and you’ll need to shell out a couple hundred more for an external hard drive onto which you can duplicate all your photos for safekeeping. And then, of course, it takes a long time to actually copy your data from your computer to your external hard drive, and you just don’t have the time or patience to commit to that regularly, so you start to neglect it, and then *bam* your computer blows up — hard drive failure, malware infection, whatever — and you lose weeks and months worth of irreplaceable data.
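For what it's worth, the core of what backup tools automate is small; the value they add is scheduling, verification and versioning. A toy incremental-copy sketch in Python, re-copying only files that are new or changed (the paths in the usage comment are purely illustrative):

```python
import shutil
from pathlib import Path

def backup(src: Path, dst: Path) -> list[str]:
    """Copy files from src to dst, skipping ones that haven't changed.

    A toy incremental backup: a file is re-copied only when it is
    missing from the backup or its modification time is newer.
    """
    copied = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied.append(str(f.relative_to(src)))
    return copied

# Example: back up a photo folder to an external drive mount point
# (paths are illustrative).
# backup(Path.home() / "Photos", Path("/mnt/external/Photos"))
```

Because copy2 preserves timestamps, a second run over unchanged data copies nothing, which is exactly the property that makes regular backups tolerable; real tools add checksums and retention of old versions on top of this.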

Sure, some computers come with redundant disks, but most consumer-level RAID is a fragile mix of hardware and software, further complicating the setup. Why haven’t reliable, low-cost RAID solutions reached the mainstream yet? Why don’t end users have better access to useful things like snapshots, or ZFS yet?

And what about all the little failures that end users can’t possibly begin to detect or diagnose, like bulged capacitors on their mainboard, or a faulty video card, or wonky RAM?

*Computers are overwhelming*
*www.crunchgear.com/wp-content/uploads/2010/04/austin-goodwill.jpg​ 
 The mind-numbing number of computers available for purchase at any retail establishment right now is enough to cow even the most stalwart bargain shopper. How is a layperson to proceed in the face of row after row of meaningless statistics? Will that extra 0.2 GHz make a demonstrable difference in their use of the computer? Will it give them an extra six months, or even a year, of useful life? Why should a normal user even care about the number of bits in their operating system? 
The Laptop Hunters tried to help people find the right laptop, but Sheila’s $2,000 HP isn’t necessarily the best pick of the available options, is it? Sure, AMD is simplifying its brand. But is that enough to really help people find the best product for their needs? Will the branding refresh make any difference at all when there are still five or ten seemingly identical systems on the shelf at the big box retail computer store? 

*I hate computers.*

I know my little rant here is like shouting at the storm: there’s a huge, lethargic industry making gobs of cash on the complexity of the computer era, and there’s little capitalistic incentive to change the status quo. These complaints aren’t new. Many of them have been made for the past quarter century. We try, in our little way, to highlight some of the deficiencies we perceive in the industry as a whole, but that’s about all we can do from here. What are you doing about these problems?
Maybe I’ll become a plumber…

----------------------------------------

So, how many of us hate computers? 

PS: This is not an OSS-related article but what the heck!!!!!!!!!!


----------



## Rahim (Apr 25, 2010)

*Ubuntu, the family album* 
Source

A few days before the release of the new Ubuntu, here’s a guided tour through the Ubuntu family album with some annotations telling my story with the different versions.
*Ubuntu Warty WARTHOG (4.10)*


*blog.nizarus.org/wp-content/uploads/2010/04/Warty-warthog-ubuntu-4.10.jpg

The first version of the Ubuntu distribution. As for the history: it was during a discussion between Robert Collins and Mark Shuttleworth that Mark came up with the name, which would then be adopted and adapted for all future versions of Ubuntu. Thus each version has a code name composed of an adjective and an animal name.
*Ubuntu Hoary HEDGEHOG (5.04)*

 *blog.nizarus.org/wp-content/uploads/2010/04/hoary-hedgehog-ubuntu-5.04.jpeg
*Ubuntu Breezy BADGER (5.10)*

 *blog.nizarus.org/wp-content/uploads/2010/04/breezy-badget-ubuntu5.10.jpg
_March 2006 – The first time I heard about Ubuntu:_ One of my students brings me a bag with two original CDs, an installation CD and a live CD, and tells me he got them free: they are sent home by mail at no charge. I take the live CD and go home, but I cannot find an equivalent of my very useful “Drakecenter” to configure my PC’s settings. I give up and return to my sweet Mandriva.
*Ubuntu Dapper DRAKE (6.06)*

 *blog.nizarus.org/wp-content/uploads/2010/04/dapper-drake-ubuntu-6.06.jpg
Many firsts for this version. The first version with long-term support. The first version that lagged behind the planned release schedule (a release every six months). The first version to take the name of a bird. All the following versions would follow in alphabetical order.
_November 2006 – The first time I install Ubuntu:_ I am in training in France; I was given a PC and allowed to install my own OS. I take my Mandriva DVD and begin the installation, but cannot complete it: the system does not detect the PC’s SATA disk. A colleague hands me a CD labelled “Ubuntu 6.06” and asks me to test it. I install the system and presto, the SATA disk is detected and I find myself with my new GNOME desktop, all in brown and orange. My colleague tells me that tomorrow he will bring me the CD of the new version, released just a few days ago, which he says brings a huge number of improvements.
*Ubuntu Edgy EFT (6.10)*

 *blog.nizarus.org/wp-content/uploads/2010/04/edgy-eft-ubuntu-6.10.jpg
_November 2006 (continued):_ My colleague brought me back the CD, and presto, a new installation within 24 hours. During the installation my colleague introduced me to the spirit of this new distribution, its history, its advantages over others, etc. The installation complete, the hard work starts: how to configure my machine and install my applications without my famous “Drakecenter”? With an internet connection available, my first (good) reaction was to ask my best friend at the time (I mean Google) to help me, and I quickly found the website of the Francophone community. There, a very big surprise... the documentation is freely available, and forum registration is possible and free... (I say it was a surprise because at that time subscribing to the official Mandriva forum was not free). Long live the free spirit; I was fascinated by this community and joined it directly. Since that time I have not left my Ubuntu.
*Ubuntu Feisty FAWN (7.04)*

 *blog.nizarus.org/wp-content/uploads/2010/04/feisty-fawn-ubuntu-7.04.jpg
_April 2007 – Community:_ I’m getting my bearings in this new distribution after years spent with Mandrake and Mandriva, and I no longer miss my “Drakecenter” (yes, I repeat :p). Thanks to the forum and the documentation of the Francophone community, all my questions (or almost all) are answered. I discovered the concept of community, and of course my first question was: is there a Tunisian community? Yes, there is one, two months old, but with no apparent activity (for now).
*Ubuntu Gutsy GIBBON (7.10)*

 *blog.nizarus.org/wp-content/uploads/2010/04/gutsy-gibbon-ubuntu-7.10.jpg
_October 2007 – The community (bis):_ More and more e-mails circulate on our community’s mailing list, and IRC meetings are held from time to time. A few days after the release of Ubuntu 7.10, the ubuntu-tn community participates in the biggest FOSS event in Tunisia, “Software Freedom Day – SFD”. Our community enters the landscape of FOSS communities in Tunisia and becomes one of its main actors.
*Ubuntu Hardy HERON (8.04)*

 *blog.nizarus.org/wp-content/uploads/2010/04/hardy-heron-ubuntu-8.04.jpg
The second version with long-term support; like the first LTS, its code name is a bird’s name. For me it is the version with the most beautiful default wallpaper, but not only that.
_April 2008 – Goal, approval of the community:_ Our community becomes more and more active and we set a common goal: to become an approved local community. The goal was reached on July 22, 2008.
*Ubuntu Intrepid IBEX (8.10)*

 *blog.nizarus.org/wp-content/uploads/2010/04/intrepid-ibex-ubuntu-8.10.jpg
*Ubuntu Jaunty JACKALOPE (9.04)*

 *blog.nizarus.org/wp-content/uploads/2010/04/jaunty-jackalope-ubuntu-9.04.jpg
The Jackalope, an imaginary animal; you had to come up with that one.
*Ubuntu Karmic KOALA (9.10)*

 *blog.nizarus.org/wp-content/uploads/2010/04/karmic-koala-ubuntu-9.10.jpg
Ubuntu in the clouds.

And the story will continue with:
*Ubuntu Lucid LYNX (10.04)*

 *blog.nizarus.org/wp-content/uploads/2010/04/lucid-lynx-ubuntu-10.04.jpg


----------



## Rahim (May 3, 2010)

*The Hobbyists OS*
​ 
Submitted by ThistleWeb on Tue, 04/27/2010 - 23:51

Microsoft's army of apologists likes to spread the word that Linux is a "hobbyists' OS", so this post is a look at what that means and why it's a label better suited to Windows. The attack is meant to draw attention to the fact that anyone can write code which appears in Linux, implying that the quality of the code is dubious. Basically, it can't be good quality if people outside the corporation write it.


They try to paint the picture that while Windows just works, Linux needs a lot of tinkering to get anything done. It's pitched at both home users and businesses. For the home user it's "you have to learn all this stuff, and spend hours fixing it", while businesses get "you'll have your staff's PCs down for X hours so they can't be productive, while also spending extra wages on skilled IT people to fix and configure things." The implication is that Windows is a better investment in man hours, productivity and cost, while Linux costs you money.


How many man hours do you have to spend on Windows doing virus scans, spyware scans and so on? How many man hours do you have to spend Googling to find out how to remove an infection because your chosen protection tools can detect it but can't remove it? How can you assure your customers that their data is not compromised by some spyware your tools can't detect? How can you be assured that the site supposedly giving a solution to a particular virus is not itself a phishing scam waiting to sell you some software if you put your credit card details in, or a script-laden site ready to dump a whole new payload of malware on your plate?


The best malware (from the malware writer's point of view; the worst from yours) are the ones that sit undetected for a long time, giving their controllers as much information as possible before they're detected and removed. Just like any spy, once its cover is blown it's no longer of use. Your protection programs can only detect what they know about, and someone has to be first to get hit. You only hear about the big ones, while the little ones are written NOT to draw any attention to themselves. This means you'll never know if your network is clean or not, only what your software reports. Who knows how many spyware programs are in the wild, not yet detected by the companies selling the protection software because they've been well written to avoid detection. You'll never know who else has access to your private data, which includes Microsoft themselves.


How much overtime have you had to pay out for your IT department to try to hunt down yet another malware infection running wild on your network? How much do you have to pay for the same problems to be fixed over and over and over again? What do your employees do when their PCs are down? They still get paid to twiddle their thumbs, right? Do they get paid overtime for staying behind to catch up? If not, how does that reflect on their feelings towards you as an employer? They didn't choose Windows. You did.


 If your hobby is detecting and removing malware, then Windows is a great hobbyist's OS. Linux and OS X don't get a look-in on that score.


 While Linux is behind the curve in a few very specific areas, as far as malware downtime is concerned it's all but immune. While the Windows user in the next seat can't get anything done, because the machine is being scanned, or trying to clean some infection, or just has to reboot to finish installing a patch, the Linux user continues working without a hitch.


 To me, it seems pretty obvious which OS is built to be used for its intended purpose, and which one you have to spend a lot of man hours just keeping functional. Remember, a PC is a tool which enables you to do other things; the more time you have to spend keeping it clean, the less you can devote to its intended function. And it's not just man hours, it's also the products which always claim to solve a problem, for a price. How much is your time worth? Bosses have an instinctive understanding of this concept.


 So when someone tries to spin you the "Linux is just for hobbyists" line, ask how much they factor malware into the comparison. It adds significantly to the other Microsoft-written line often used by Microsoft apologists: the TCO (Total Cost of Ownership).


 Anything with a licence cost starts higher on that list, yet dealing with the constant fight against malware is NEVER mentioned as a cost, even though it costs in real terms. Training is a handy one: when Microsoft change their applications, then all but force people to upgrade by stopping support of older versions, they force people to retrain anyway; so retraining in Linux is much the same. When you make an OS with as many exploits built in, the idea of making money from support services is a joke.


 There's a whole industry built around fixing the holes Microsoft can't seem to fix; think about it. Does this sound like the work of a bunch of professional programmers? The fact that Windows admins have to lock down the systems they're in charge of, cutting off most of the useful functions of a PC just to save themselves some work cleaning infections, speaks volumes. USB ports are a great thing for data portability; are they blocked on your network in case someone puts an infected USB thumb drive in a PC and brings the network down, because Windows wants to auto-run everything blindly?


 Regular home users generally don't have that level of knowledge. They have little choice but to brave the wild west, and to expect to buy a new PC every couple of years because their current one is so badly infected with malware that it's barely able to boot. Which of course means more money for Microsoft; money which need never have been spent. To make matters even worse, Microsoft have a whole lobbying/bullying machine dedicated to ensuring customers have no choice but to buy a PC with Windows already installed.


 For businesses this means you have to splash out on new PCs every few years under the assumption that you need the latest hardware. The difference in performance between the old and new is startling, of course, because the old one is bogged down with malware and the new one is not; yet. Can you afford to throw away a large chunk of your capital when you don't need to? What benefit do you get from the newer versions? Are there features not available in the older versions that your business needs? I doubt it. Why should you be forced to upgrade when it only suits Microsoft?


 In short, if you want to spend time and money always fixing your PC, then Windows is the way to go; the rest of us can use Linux and actually use our PCs. Remember that Linux is free, both in terms of what you can do with it and (in the context of this post) cost. Installing Linux on 1 PC costs the same as installing it on 1,000,000 PCs. Windows has volume licensing, but it still costs money per PC, and that adds up very quickly. Add the same deal for Microsoft Office and you're wasting a fortune before you even get onto the training, support and malware issues.
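The per-seat scaling argument above can be made concrete with a back-of-the-envelope calculation. The prices below are purely hypothetical placeholders, not real Microsoft quotes; the point is only that any non-zero per-seat cost multiplies with the fleet size while a zero per-seat cost does not:

```python
# Back-of-the-envelope sketch of the per-seat licensing point above.
# All prices are hypothetical placeholders, not real vendor quotes.
def deployment_cost(seats, os_per_seat, office_per_seat):
    """Total up-front licence cost for a fleet of PCs."""
    return seats * (os_per_seat + office_per_seat)

linux_cost = deployment_cost(1000, 0, 0)        # free licences stay free at any scale
windows_cost = deployment_cost(1000, 150, 300)  # assumed volume prices per seat

print(linux_cost)    # 0
print(windows_cost)  # 450000
```

Double the fleet and the licence bill doubles with it, before any training, support or malware costs are counted.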


----------



## D4RK8L1TZ (May 3, 2010)

Well yes, I've seen people who think that the computer and Windows are one and the same... their paradigm of looking at computers is Windows, which I think needs to change fast.


----------



## Rahim (May 27, 2010)

*Why GNU+Linux is > GNU/Linux and > just Linux* 

Let's get the obvious out of the way: we advocate for computer software users' freedom. The Free Software Foundation and GNU Project are responsible for most of the software we use every day. At InaTux Computers, we of course use the GNU Compiler Collection (GCC) to compile custom Linux kernels (when customers want them), as well as other custom software modifications customers may want (such as hardware optimization). Our compiling is of course done in GNOME Terminal running GNU Bash, and most software we compile requires the GNU C Library, GTK+, gtkmm, etc.

 GNU is most of the software we use; without it we wouldn't have a working operating system (unless you define an operating system as just the kernel, but just a kernel is useless), or we would be using a proprietary operating system, as that would sadly be the norm -- but we would also not be the business we are if not for Free Software. So with that, we naturally don't think it is justified to call the operating system "Linux" (unless the operating system is not using GNU, in which case it might be justified).

 We give credit where credit is due, Linux and GNU (ha, sort of rhymes).  So just "Linux" is not acceptable, and I will now explain why we  [now] choose to use GNU+Linux instead of GNU/Linux.

 Let's look at it mathematically: "GNU / Linux" means GNU divided by Linux. How does that make any sense? At best it implies Linux is dividing GNU, making it harder to work and communicate with contributors, which is simply not true. Linux was what united the GNU operating system and gave us the functioning system we have today. So GNU divided by Linux makes no sense.

 "GNU + Linux", on the other hand, makes perfect mathematical sense: the GNU operating system plus the Linux kernel. Strictly, it should be more like "GNU - Hurd/Mach + Linux = GNU+Linux", but the point stands.

 Let's look at it linguistically. To say "GNU slash Linux" isn't the norm; typically, when a word uses a slash, the slash is not pronounced. Take "and/or": you would never say "and slash or", because it means one thing or the other at your choice; bacon and/or eggs.

 So to do the same with "GNU/Linux" is to imply a choice, GNU or Linux, when choosing only one either way results in a nonfunctional operating system. Just GNU results in nonfunctional system tools and an incomplete micro-kernel, and just Linux results in a fully functional monolithic kernel without any system tools, not even a shell to perform simple tasks.

 With "GNU + Linux" you would say "GNU plus Linux", which is a little shorter and doesn't sound as strange, as there are many things named "something plus something". It's not much better in terms of sounding "hip" and "cool", or even simple, but we see that as a small price to pay to give credit to those who made it possible to boot and run this powerful Free Software operating system: Richard Stallman's GNU Project and Linus Torvalds.

 Now let's look at it in terms of visibility. "GNU/Linux" looks like one word (like it's GNU|Linux or GNULinux); it doesn't separate the two, whereas "GNU+Linux" has a nice bit of space between GNU and Linux. Let's compare, with both highlighted.

GNU/Linux GNU/Linux GNU/Linux    GNU/Linux

 With "GNU/Linux" there is only a small bit of space between the highlighted words, showing that they will read as one word to those unfamiliar with the "GNU/Linux" terminology, especially at smaller font sizes.

GNU+Linux GNU+Linux GNU+Linux    GNU+Linux

 With "GNU+Linux" there is a noticeably larger space between the two, even at smaller font sizes. On a website it is important that readers who are skimming see the two separated, so a reader who is only familiar with "Linux" will easily see that Linux is involved, instead of skimming past the "GNU/Linux" blob and not recognizing the "Linux" part of the word. Separating them better gives credit to both GNU and Linux, because both are better recognized within sentences.

 That is why we now use the term GNU+Linux instead of GNU/Linux, and we ask people who already use the term GNU/Linux to consider doing the same. Feel free to discuss this in the comments.

Source


----------



## ico (May 28, 2010)

Simply call it Linux. No use of over-complicating the things.


----------



## Rahim (May 28, 2010)

Not even mentioning the word "Linux" should be the way ahead. Remember "invisible Linux"? Users should discover it on their own and be surprised.


----------



## celldweller1591 (Jun 3, 2010)

*Why Open Source Makes Sense: Scientifically Proven*
Check out the video below. It's basically an animation about an MIT social experiment in which sociologists found a bizarre pattern when it came to work and incentives. When the task at hand was mundane and repetitive, money was found to be a perfect incentive. However, when the task required "rudimentary cognitive" skills, money, it turns out, wasn't the best incentive. This makes perfect sense when we look at the amazing open source projects out there. From Linux to Wikipedia to OpenStreetMap, all these projects tap into this basic human behavior.

See the video here: YouTube video (great video).

Google has its 20% rule, which fostered many of the company's top products. Apple is largely a one-man show; this video can explain the company's success to some extent. Microsoft, meanwhile, has grown into a huge behemoth, so it's no wonder we don't see much innovation from them lately.


----------



## Rahim (Jun 5, 2010)

*Open Sourcing Politics*


 June 02, 2010
Posted by: *Glyn Moody*

 “Linux is subversive”: so begins “The Cathedral and the Bazaar,” Eric Raymond's analysis of the open source way.  The subversion there was mainly applied to the world of software, but how much more subversive are the ideas that lie behind open source when applied to politics. 

 That is precisely what the increasingly important open government movement aims to do, an area I've been covering in this blog partly because of its close kinship with open source, but also because of the major implications it has for the use of open source – not least because open government tends to promote its deployment. But what exactly is open government, and how does it flow from open source?

 Key characteristics of the latter are its open, modular code that allows Net-based collaboration driven by users' needs.   

 In the sphere of open government, I think the open, modular code maps onto open public data, available in open formats, through open APIs.  It's in this area where most of the open government initiatives have been announced, usually under the slightly more generic banner of “transparency”.  Here's the latest UK move: 

_*Central government spending transparency* _
_Historic COINS spending data to be published online in June 2010. _
_All new central government ICT contracts to be published online from July 2010. _
_All new central government tender documents for contracts over £10,000 to be published on a single website from September 2010, with this information to be made available to the public free of charge. _

_New items of central government spending over £25,000 to be published online from November 2010. _

_All new central government contracts to be published in full from January 2011. _
_Full information on all DFID international development projects over £500 to be published online from January 2011, including financial information and project documentation. _

_*Local government spending transparency* _
_New items of local government spending over £500 to be published on a council-by-council basis from January 2011. _
_New local government contracts and tender documents for expenditure over £500 to be published in full from January 2011. _

_*Other key government datasets* _
_Crime data to be published at a level that allows the public to see what is happening on their streets from January 2011. _

_Names, grades, job titles and annual pay rates for most Senior Civil Servants with salaries above £150,000 to be published in June 2010. _
_Names, grades, job titles and annual pay rates for most Senior Civil Servants and NDPB officials with salaries higher than the lowest permissible in Pay Band 1 of the Senior Civil Service pay scale to be published from September 2010. _
_Organograms for central government departments and agencies that include all staff positions to be published in a common format from October 2010._ 
   What's interesting here is the highly political nature of these moves: this turning inside-out of the governmental machine represents a sea-change from the previous UK administration, with its love of massive, centralised databases that allowed the centre to exercise power over the rest of us. 

Radical though this may be, opening up data in this way is relatively straightforward (not the same as easy); the second part of the open source equation -  Net-based collaboration driven by users' needs – is much harder to achieve.  That's not because the technology isn't there – obviously the Internet is pretty pervasive in Western societies (although not ubiquitous, which is important to remember.)  It's also clear that to promote collaboration you need to release data under open licences, and government-owned code under open source ones.  But perhaps more fundamental than either of these is the need for an open architecture – not so much in terms of computing, as in terms of culture. 

As the word suggests, “government” is about governing, and for millennia this has often translated into telling people what to do.  The democratic process may mean that the latter get to choose (at least nominally) who gives the orders, but generally only at election time.  This is not, obviously, how open source works: there, the whole project is guided by and responding to the needs of the “electorate” - the users – at all times.  For truly open government, we need to enable something similar to occur. 

Fortunately, this does not require a massive, all-or-nothing project: it can be achieved through myriad baby steps, each one of which helps engage the electorate on an ongoing, even daily basis. That's not easy, given the traditional cynicism that many people now have towards the democratic process; and it's made harder by the existing culture of command and control – what in software is the "proprietary" way – that has encouraged politicians and civil servants to cling on to power and to hide the inner workings of government as much as possible.

If you want to explore these ideas in more depth, I strongly recommend reading the Centre for Technology Policy's new report "Open Government: Some Next Steps for the UK". It is quite simply the best analysis I have come across of what open government means, how much has been achieved so far and how we can realise its full potential through practical actions. Everyone interested in open government should read it: it's pretty subversive stuff – and that's good.


----------



## ico (Oct 16, 2010)

bump!!!!!!


----------



## Rahim (Oct 16, 2010)

/me sigh of relief .....


----------



## Liverpool_fan (Oct 16, 2010)

^ lol

Fact or Fiction? Top 8 Linux Myths Debunked - PCWorld Business Center

Fact or Fiction? Top 8 Linux Myths Debunked

If the idea of using Linux in your business is one that makes you nervous, chances are you've fallen prey to one or more of the many myths out there that are frequently disseminated by competing vendors such as Microsoft. After all, each Linux user means one less sale for such companies, so they have a powerful motivation to spread such FUD.

In fact, the ranks of businesses and government organizations using Linux grow every day, and for good reason: it's simply a good business choice. Let's take a look, then, at some of the top anxiety-causing myths and dispel them once and for all.

1. "It's Hard to Install"

Today, installing Linux is actually easier than installing Windows. Of course, most people don't install Windows themselves--rather, it comes preinstalled on their hardware, and that's an option with Linux too, if you're in the market for a new machine anyway.

If not, however, the best thing to do is first try out the distribution you're interested in via a Live CD or Live USB. Then, once you decide you like it, you can either install it in dual-boot fashion, so that both Linux and Windows are available to you all the time, or you can install Linux instead of Windows.

Either way, installation has become extremely simple over the years, particularly on distributions such as Ubuntu, Fedora, Linux Mint and openSUSE. Most include a step-by-step wizard and very easy-to-understand graphical tools; they also typically offer a way to automate the process. A full installation will probably take no more than 30 minutes, including basic apps.
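The "way to automate the process" mentioned above is typically an answer file that the installer reads instead of prompting the user. As a rough sketch, Debian and Ubuntu call this mechanism "preseeding"; the fragment below is illustrative only (question names vary by release, and the hostname, username and package values are hypothetical):

```text
# Hypothetical preseed fragment for an unattended Debian/Ubuntu install.
# Exact d-i question names vary by release; treat this as a sketch, not a recipe.
d-i debian-installer/locale string en_US.UTF-8
d-i netcfg/get_hostname string office-desktop
d-i partman-auto/method string regular
d-i passwd/username string employee
d-i pkgsel/include string openssh-server
```

The installer is pointed at a file like this at boot time, so the same answers can be replayed identically across a whole fleet of PCs.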

2. "It's Just for Experts"

That Linux is more difficult to use than Windows and Macs is probably one of the most enduring and yet unjustified myths in existence today. It certainly used to be true--say, 10 years ago. Today, however, the inclusion of attractive graphical user interfaces and other usability improvements in many distributions means that even elementary school children can use Linux easily.

Now, server usage is a different story--just as it is under Windows, for example. And Linux won't be exactly the same as a Mac or Windows. But on the desktop, if you're used to the GUI of Windows or Mac OS X, you should have no trouble getting used to Linux. It's that simple.

3. "It's Free, So It Must Be Pirated"

Despite the growing use of free and open source software in governments and other organizations, some people still believe that any software that's free must be illegally copied. Fortunately, that's completely false. The notion of "taking" software off the Internet and then "tampering with it" for your own ends can strike litigation fears into the hearts of those unfamiliar with the concept, but fear not! Free and open source software is designed from the start to be free in cost as well as open to modification and improvement. That's how it works and gets better.

4. "There's No Support"

Vendors of proprietary software love to strike more fear into business users' hearts by painting a picture of the Linux user alone at sea, without anyone to ask for help. Once again, completely false!

First of all, every Linux distribution has an online community with excellent forums for getting help. There are also forums dedicated to small businesses and for newcomers in need of extra explanation. For those who want even more assurance, commercial Linux versions such as Red Hat Enterprise Linux and SUSE Linux Enterprise Desktop come with vendor support. It's entirely up to you which route to choose.

5. "It's Not Compatible"

There are very few instances of hardware and software remaining that can't be used with Linux. One of the operating system's many advantages, in fact, is that it's designed not to hog resources, and so doesn't require the latest, cutting-edge hardware. Most peripherals are compatible as well, particularly in distributions such as Ubuntu.

On the application side, it's also rare to find a problem. If there is something your business needs that can be run only on Windows, however, there are packages like Wine and Crossover Linux to make that happen. There are also countless comparable and Linux-friendly alternatives that can be easily installed, including all basic productivity packages.

6. "It's Less Secure"

Of all the myths perpetuated about Linux, I'd say this is the one with the least merit. The reality, in fact, is quite the reverse: Linux is far *more* secure than either Windows or Macs, as countless examples and security research firms such as Secunia have confirmed. In a nutshell, Linux's superior security derives from the way privileges are assigned, the fact that it's open to scrutiny by countless developers the world over, and the diversity of distributions in use.

Ever wonder why you've never heard of the Linux equivalent of Microsoft's "Patch Tuesday"? That's because there isn't one--it's not necessary. Neither is antivirus software. Strange but true.

7. "It's Not Reliable"

If you're using a Mac or Windows, it goes without saying that you are intimately familiar with crashes and downtime. Part of that is due to those systems' vulnerability to malware, but part is also simply inherent in the software. That's a big reason why Linux is used so heavily on servers--it almost never goes down. Imagine a day in the life of your business with no downtime!

8. "Its TCO Is Higher"

Last, but not least, proprietary vendors are notorious for trying to counter Linux's free price tag with vague fears about its "higher" total cost of ownership in the long run. All I can say is, if that were true, why are so many governments and organizations around the globe turning to it in droves, particularly during the tough economic times we've had over the past few years?

There are also numerous studies confirming the financial benefits of Linux in a business setting, even with paid support. It's worth noting, too, that TCO doesn't explicitly capture the future costs that will be incurred by being locked in with a particular vendor.

Is Linux perfect? Of course not; no operating system is. Nor is it necessarily the best choice for every business. But don't let the myths hold you back.


----------



## Rahim (Oct 16, 2010)

^Comprehensively debunked and thanks for making this a sticky.


----------



## baccilus (Oct 17, 2010)

This is such a great thread. I am going to go through all of it. Can't figure out how to rep you.


----------



## Rahim (Oct 17, 2010)

^Thanks for appreciating our hard work (blatant copy/paste).


----------



## Rahim (Oct 17, 2010)

6 Linux Distros That Changed Everything
By Kale

Linux is all around us. From phones to firewalls, from Macs to PCs, it’s getting hard to find electronics that don’t run Linux. Over the years, there have been many distributions (normally called distros) of Linux. Some are full-featured, others are very small, some are general purpose, and others are designed for specific tasks. Love it or hate it, Linux is here to stay.

Below is a list of 6 distros that were milestones for Linux adoption. Enjoy.

*1. Debian (1993-present):* Back in 1993, only hardcore IT geeks knew what Linux was or where to find it. Released alongside distros like Slackware and SuSE, Debian introduced a new concept… a universal OS that YOU customize. The user would install the Debian core OS and then have access to repositories containing thousands of software packages. Debian is still available today; in fact, many other popular distributions are based on it.


*2. Red Hat (1995-2004):* Nowadays, the term Red Hat refers to one of two distributions, Fedora or RHEL. Years ago, however, the distribution was simply called Red Hat. This was the first distro to see massive adoption on both the enterprise and hobbyist fronts. Marking a major milestone, Red Hat became almost as iconic as Linux itself. Even Windows system administrators have heard of it… and most of them have probably used it. Red Hat is everywhere!



*3. Yellow Dog (1999-present):* As Red Hat gained popularity, more and more people started to notice Linux. There were several distros available, but almost all were written for the x86 architecture, leaving Mac users out in the cold. Along came Yellow Dog Linux (YDL), and PowerPC users could finally get a taste of the Linux experience. Ironically… just a few years later, Apple released the FreeBSD-based Mac OS X, and YDL became scarce. Still, it was an important milestone.



*4. SmoothWall GPL (2000-2002):* From its inception, Linux was praised for its security features. It wasn’t long until people started using Linux as a firewall/router. SmoothWall GPL was not the only Linux-based firewall distro, but it was certainly one of the most popular. It introduced many people to the idea of using a Linux server as a network appliance, another important milestone. SmoothWall GPL was sunsetted in 2002, but its successor, SmoothWall Express, is still distributed worldwide.


*5. Ubuntu (2004-present):* During the late '90s and early 2000s, Linux became widely adopted by server administrators and uber-geeks. Linux was seen as a capable server OS, but had much slower adoption as a desktop operating system. Distros like Linux Mandrake (now called Mandriva) were marketed as "Linux for everyday people", but it was Ubuntu that really brought user-friendly Linux to the desktop. Ubuntu proved Linux was a viable alternative to Windows.

*6. Android (2008-present):* In 2005, Google purchased a start-up called Android. The firm was busy creating a mobile OS based on Linux to battle the ever-popular Symbian OS used on most phones. At the time, nobody had heard of Android or knew what it was capable of. Today, Android has become the most popular mobile OS in circulation. It is offered on phones by every major carrier, and has even seen limited netbook adoption. It has become the Linux distro of the mobile age.

----------------------------------------------------------------------

Fellow members can add further distros which had a major influence on FOSS.


----------



## Rahim (Oct 26, 2010)

5 Myths About OpenOffice.org / LibreOffice
Bruce Byfield

Most free software accumulates myths. Most people only know about it second hand (if at all), but few are slowed by the fact that they don't know what they are talking about.

As a large desktop application that is also cross-platform, OpenOffice.org (or should I say LibreOffice?) seems to have attracted more myths than most. Here are the top five that I have kept stumbling across in eight years of advocacy:

*OpenOffice.org Can't Be Any Good Because It's Free​*
Most free software has faced this myth at one time or other. And, to be honest, sometimes it's true, in that some free software compares unfavorably with its proprietary counterparts.

But in OpenOffice.org's case, the myth is far too sweeping.

In the main office applications, the only place where OpenOffice.org lags behind MSO is in the presentation software; Impress remains less able to handle sound than PowerPoint. Other software does not come bundled with OpenOffice.org, but often you can download free software to make up the difference -- for instance, you can use Mozilla Thunderbird rather than Outlook.

Overall, in almost every instance where you would use MSO for professional purposes, you can easily substitute OpenOffice.org. I know, because -- unlike most of OpenOffice.org's detractors -- I've used it professionally, even when I was a lone user interacting with an office full of MSO users. Once I learned the software, I never had any difficulties.

*OpenOffice.org Is Immature Code*​
"I'd like to use OpenOffice.org," I often hear, "but I need software I can rely on, so I have to stick with Microsoft Office."

To anyone like me, who can quote chapter and verse about the instability of MSO, or point out what has been broken for over a decade in it, this comment makes me burst out in a fit of giggles. And this reaction isn't anti-Windows or anti-proprietary prejudice; the information is widely known among power users. If I used Windows or proprietary software, I wouldn't be using MSO.

But, my initial reaction aside, this rationale irks me, because the idea that OpenOffice.org code is new simply isn't true. StarDivision, the office suite that is OpenOffice.org's ultimate ancestor, released its first component -- the word processor -- twenty-five years ago. Within another four years, the word processor had been joined by the rest of the suite.

Almost certainly, none of this original code remains in current versions. But, if anything, OpenOffice.org's coding challenges are exactly the opposite of what most people assume. Its problems are not adding features, but dealing with legacy code while adding new features and trying to minimize code bloat.

*OpenOffice.org Is Just a Microsoft Office Clone*​
This charge seems part of a double-bind. If OpenOffice.org does not offer features comparable to MSO, or include features that MSO can easily import, then it cannot offer an alternative. MSO is, after all, the world's most popular office suite. Yet, when OpenOffice.org tries to retain compatibility, it is dismissed as a clone. Whichever path of development it chooses, OpenOffice.org can't win.

At any rate, the myth just isn't true. Although always concerned with MSO compatibility, OpenOffice.org has never simply imitated MSO. A handful of its spreadsheet functions have no equivalent in Excel. Nor has OpenOffice.org succumbed to replacing menus and toolbars with a ribbon interface like the one that MSO users are still complaining about several years after it was introduced.

Even more importantly, advanced use of OpenOffice.org depends on the use of styles to a degree that MSO does not. That is especially so in Write, which has five different types of styles where MS Word has only two, but is true of all OpenOffice.org's applications. By contrast, MSO seems to favor manual formatting over styles. For experts especially, OpenOffice.org is the office suite of choice.

*OpenOffice.org Lacks Certain Features*​
Occasionally, this accusation may be true -- but not so often that I can remember a particular instance. Almost inevitably, when someone asserts this claim, it means that they have not spent enough time familiarizing themselves with the interface. They haven't noticed that the feature is in a different menu, or goes by a different name. Sometimes, the allegedly missing feature is one that is not enabled by default, but is one that you can quickly add by creating a macro or customized keyboard shortcut.

I also have to add that the same people who make this claim never seem to know OpenOffice.org well enough to mention the fact that there are some features -- such as page styles or a completely customizable table of contents -- that OpenOffice.org can boast but that MSO completely lacks.

*OpenOffice.org Is a Second Choice​*
Mainstream reviews often start with the assumption that OpenOffice.org is a poor choice compared to MS Office -- that nobody would use it if they could afford to spend money on software.

This assumption ignores the philosophical and political concepts of freedom that makes OpenOffice.org the preferred alternative for some of us.

But, as an analysis, it is incomplete. If you take the time to learn how to use OpenOffice.org, then you quickly find that, in general, it compares very favorably. To be exact, I would say that OpenOffice.org's Impress is inferior to PowerPoint, largely because of its limited capacity to coordinate sound in presentations, while the spreadsheet Calc is roughly equal to Excel in features, capacity, and stability.

However, it is in word processing that OpenOffice.org really outperforms MSO. OpenOffice.org's Writer is as much an intermediate desktop publisher as a word processor, and (as I know from personal experience) can handle 700-page documents full of graphics, while MS Word chokes on anything more than 30 pages unless you take extraordinary precautions -- and, even then, you had better have regular backups in case of corruption. By contrast, OpenOffice.org is a plausible substitute for FrameMaker -- and you don't get more sophisticated in word processors than that.

Admittedly, OpenOffice.org does not come with some of the extras that MSO includes. But, browsing through the repositories, you can usually find equivalents, starting with Mozilla Thunderbird as a replacement for Outlook.

In short, in some ways it's true that OpenOffice.org does not compare with MSO. But in just as many ways, it's as good or better.

*Assigning the Blame​*
Probably the most irritating aspect of such myths is that they have dogged OpenOffice.org from the first. Yet even in the 1.0 release, first made eight years ago, I could have debunked them in much the same terms as I've done here. The main difference that the intervening years have made is that my answers have become even truer than they were eight years ago.

I suspect that most of these myths are not reasons for avoiding OpenOffice.org, but excuses for laziness. When you have to pay for your software, you are more cautious about changing it than when you can download two or three alternatives in a matter of moments without paying anything. Too often, the perpetrators of these myths are laying the blame on the software when they should actually be blaming their own fear of change instead.

Despite such myths, OpenOffice.org remains a valid alternative for almost everyone -- and whatever Oracle or LibreOffice chooses to do, that is going to remain at least as true in the future as it is now.

*[Over the last six years, I have covered most aspects of OpenOffice.org for Linux Journal. In fact, several people have told me that they have arranged my columns to create their own manual. However, while I could squeeze out a few more articles by going into detail about the functions in Calc, I'm rapidly running out of ideas for new columns.

I will probably return to OpenOffice.org from time to time, but, starting next month, I'll be writing introductory articles to other major desktop applications instead.]*


----------



## Rahim (Dec 30, 2010)

*How You Know When It’s Time to Switch to Linux*
Katherine Noyes​
If you're considering giving Windows the boot here are 10 signs the time is right to give Linux a try.

It's easy to be content with your computer installation as long as it keeps doing what you want it to without too much trouble. When frequent problems arise, however, it's hard to remain faithful for long.

The majority of the computing world "grows up" on Windows, of course, since Microsoft's operating system still holds by far the largest share of the market. Not everyone stays there, however.



Growing numbers, in fact, are switching to Linux every day, and for good reasons. How do you know when it's time to switch to Linux? Here are just a few (mostly) serious signs.


*1. You're Tired of Paying for Software *

You wake up one day and realize you're tired of paying for an operating system that's more bogged down with bugs than most alpha builds are. What, exactly, are you paying for here? Then, of course, there's also all the antivirus software you have to buy to keep it running. With Linux, on the other hand, countless developers around the world are working around the clock to keep the 100 percent free operating system at the head of its class.

*2. You're Tired of Upgrading Hardware *


If you find yourself upgrading perfectly good hardware just because resource-hungry Windows demands it, you might be using the wrong operating system.

*3. You're Tired of Malware *



Your older hardware probably still is fundamentally pretty good; too bad there's all that malware dragging it down. Thanks for sharing that love, Windows! Note to Microsoft: a stronger permissions system would have been a lot better, just FYI.

*4. You've Seen One Too Many Patch Tuesdays *

You've experienced your share of Patch Tuesday repair efforts, and they aren't getting any more fun. In fact, they're getting worse. It wouldn't be so bad if you didn't know how long the bugs had been there, flapping in the breeze, before they finally got fixed.

*5. You Don't Have the Time *

Who among us doesn't enjoy spending hours at a time scanning for viruses and spyware and defragmenting? Well, probably all of us don't enjoy that, actually. Then, too, there's all that unplanned downtime. Don't we have other things to do?

*6. You Like Speed* 

If Windows' boot speed were faster, when would you make your coffee? Right. Sadly, that argument doesn't quite cut it anymore.

*7. You Like Sharing* 

Your business associate in Berlin tried to send you an .ODP file--based on the international standard file format--but PowerPoint wouldn't read it properly. So much for interoperability.


*8. You Don't Actually Love Internet Explorer* 

It's no accident Internet Explorer's market share is slipping, and vulnerabilities are a big part of it. Then, too, there's the monoculture effect making it all worse.

*9. You Want to Be in Control* 

It's no longer fun waiting to see when Microsoft will fix bugs, or what new features it will come out with. You're ready to start driving changes like that yourself.

*10. You're One of a Kind* 

Though it can be altered in very small, superficial ways, Windows can't hold a candle to Linux when it comes to customizability. Are you just another face in the crowd? Of course not, and Linux recognizes that.

Is Linux perfect? Certainly not. But it is a lot better than Windows in so many ways. Isn't it time for you to finally make the switch?


----------



## gk2k (Dec 30, 2010)

Thanks for the awesome collection of articles. Though I have been visiting the forum for 2 years, I had not viewed this thread.


----------



## Rahim (Dec 30, 2010)

^Thanks


----------



## abhijangda (Dec 31, 2010)

really a good one Rahim
thx for sharing!!


----------



## Rahim (Jan 10, 2011)

*Apple, Linux welcomes you to 1998!*

Joe Brockmeier

A lot of people are buzzing about Apple's Mac App Store, but I'm nonplussed. I've had the same features on Linux since the late 90's.

Granted, I'm being a little snarky — but only a little. Apple's App Store for the iPhone was a big deal because, before Apple, the application landscape for mobile phones was not that rosy. Apple simplified getting applications on the phone without having to deal directly with the carriers — so some credit is due there. They've also raised the bar in terms of what developers are shooting for on mobile devices, so kudos to Apple for that.

*But the buzz over the Apple Mac App Store? Meh. Look at the features that Apple touts:*

Install any app with ease
Keep your apps up to date
The app you need. When you need it
Buy, download, and even redownload

*Linux folks, sound familiar?* We've had all of this, modulo "buy", for a decade at least. The Advanced Package Tool, a.k.a. "APT" for Debian-based systems (that includes Ubuntu), has made all of this possible for years and years. Granted, this has primarily focused on free and open source software, but paid apps are possible too. The Ubuntu folks have had a paid software store since Ubuntu 10.10. (It is, I admit, sparsely populated when it comes to proprietary/paid software.)

But the installation, updating, and such? All very possible with APT — or Yum or Zypper, if you happen to be using an RPM-based distro. (Or APT for RPM, if that's still being maintained.)
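In fact, the feature list Apple touts maps almost one-to-one onto ordinary package-manager commands. A rough sketch (the package name is illustrative, the commands need root privileges and a network connection, and which tool you have depends on your distro):

```shell
# "Install any app with ease" -- one command, dependencies resolved for you
apt-get install vlc        # Debian, Ubuntu and other APT-based distros
yum install vlc            # Red Hat / Fedora
zypper install vlc         # openSUSE

# "Keep your apps up to date" -- the whole system at once, not app by app
apt-get update && apt-get upgrade

# "Buy, download, and even redownload" -- re-fetching is nothing special
apt-get install --reinstall vlc
```

None of this is new; these commands have worked essentially unchanged for a decade.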

Apple brags about having more than 1,000 apps available at launch... Ubuntu users can find 32,000-plus packages in the software repository for Ubuntu 10.10. Now, a bunch of those packages are not end-user applications — this includes things like libraries, system utilities, fonts, and so forth. But you could easily find 5,000 end-user apps, many of which are competitive with the proprietary stuff being offered through the Apple Mac App Store. Oh, and free. Free as in cost, and all open source. (Not all of it Free by the Free Software Foundation's definition, but that's another topic entirely.)

Of course, what Apple has done that's unique shows Linux folks what we need to be better at doing: marketing, developer and ISV relations, and standardization. Lest you think I'm only here to praise Linux or kick Apple, I'm not. Linux has had the raw tools to do this for a decade, but the communities and companies behind Linux have yet to gain enough momentum to pull this off on the desktop. Or the will to chuck tribal differences between desktops, toolkits, etc. and unify on one damn stack to attract the kind of developers that are filling up Apple's App Store. Canonical, bless their hearts, are trying — but it's unclear as of yet whether Canonical has enough pull to rally enough developers and inspire enough ISVs to drive even 100 paid desktop apps to Linux, much less 1,000.

The Linux community should get some credit here, though. What has been hard for the users of arguably the easiest operating system to use has been easy for Linux users for years. A quick "apt-get update" and my entire system is updated, apps and all. A quick "apt-get install" and I can have everything from the Banshee media player to the latest Chrome release. Typing is not required, of course. Each distribution has GUI tools that make it very easy to install and manage applications.

And, it's important to add — I can do all this without the blessing of any single company. You see, while Apple controls everything that goes into the App store, nobody controls what users add to their APT, Yum, or Zypper repos.
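That openness is concrete: pointing a system at an extra repository is a one-file change that needs nobody's blessing. A hypothetical sketch — the URLs and repository names below are placeholders, not real archives:

```
# Debian/Ubuntu: add one line to /etc/apt/sources.list (or a file in
# /etc/apt/sources.list.d/), then run "apt-get update":
deb http://repo.example.com/debian stable main

# Fedora/Red Hat: drop a .repo file into /etc/yum.repos.d/ instead:
# [example]
# name=Example repository
# baseurl=http://repo.example.com/fedora/
# enabled=1
# gpgcheck=1
```

Apple decides what goes into its store; here, the list of sources is just a text file you own.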

So Linux users have had the tools and freedom, just a severe lack of marketing and developer relations smarts. That includes failing to have a single dominant toolchain (GUI toolkit, etc.) for companies to target. Seems that Nokia (with Qt) might be on to something here, though. It's pretty clear what the overall Linux community and vendors need to address; it's just a question of whether they do, and whether it's too little, too late for any mainstream traction.

I do hope others in the tech press will at least, in passing, note that Apple has not invented something new with its App Store — merely taken an old idea and run with it better than the competition. Which, come to think of it, seems to be the company's specialty.​


----------



## ico (Jan 10, 2011)

Mac OS X's App Store is a bust. I own a Mac and I had never created an Apple ID. The App Store forced me to do that because I wanted to install Twitter 2.0 (formerly Tweetie) and I was not able to find any DMG for it.

If Apple makes this App Store thing COMPULSORY in Lion, I won't buy it.


----------



## Rahim (Jan 26, 2011)

*Buying a cheap laptop without Windows: Is it worth it?*

A cheap laptop with a reasonable level of hardware, but no Windows operating system: bargain hunters can often find offers of this type on the internet. They are definitely worth checking out -- presuming you have the patience to work with alternative operating systems such as Linux and have time to perform installation.

The decision to forego Windows is the reason for the noticeably low prices of these offers. Manufacturers can save themselves the license fees that they otherwise must pay to Microsoft for each installation of Windows.

“The low-end laptop market in particular is so hard fought that manufacturers will grab for any dollars they can save,” explains Elmar Geese, chairman of the Linux association in Berlin. In place of Windows, the laptops come either without an operating system or use a pre-installed variant of the typically no-cost alternative operating system Linux.

For the user, that means a bit of extra work and acclimation.

Simply installing Windows from the old computer is generally not an option. Most Windows installations are tied by license to the computer with which they were sold.

Axel Pols from the German IT industry association Bitkom views devices without operating systems with some skepticism. Laptops have come to be so affordable that there’s no real need to go without an operating system, he says.

It’s important to look at more than just the price. “You should give careful consideration before making a purchase about how big your laptop should be and what it should be able to do,” says Pols.

Those considerations then form the basis of a purchase decision.

Does it absolutely have to be a Windows computer? No, says Geese.

“Mature Linux distributions like Ubuntu can now completely replace Windows.” Given that certain conventions have established themselves in recent years across all operating systems, there’s not even a great deal of acclimation needed. Macs, Windows computers and even smartphones are all remarkably similar to use in many ways. “Linux distributions aren’t reinventing the wheel in this regard,” Geese says.

The decisive point for Linux laptops is finding the right “distribution.” To say a computer “works with Linux” is a misnomer, since there’s no such thing as a straight Linux version to be installed. Instead you have to opt between different software packages based on Linux, known as distributions.

Popular variants include for example Ubuntu, Open-Suse, Debian and Mandriva. All contain a variety of software to accompany their graphical user interface: a browser, an email program, a multimedia player and office packages are all on board from the start. Many distributions can be downloaded for free off the internet.

Many providers make it difficult for users to determine which distribution is actually installed on the laptop. For Thorsten Leemhuis, editor at Germany’s c’t computer magazine, this is an important weakness in these offers: “Just stating ‘Linux’ doesn’t help you at all,” he says.

Many devices come with a version of the Linux OS that doesn’t work on the device at all. One example is the Linpus distribution, a stripped-down variant that is primarily intended for the Asian market.

Normal users should probably stay away from those offers, Leemhuis recommends. “The interplay between computer and software has grown very complex. If they’re not optimally attuned to one another, you’ll quickly have problems.” Regardless of whether you’re using Windows or a Linux distribution, the key is a stable and sensible installation. If you are just looking for a machine that works, then it’s a better idea to avoid devices that still need a real operating system installed onto them.

The Hindu Article Link​


----------



## Rahim (Jan 31, 2011)

*The Linux Contradictionary*
Hudzilla 

I have just over a month left here at LXF Towers, so I'm busy clearing up my inbox, answering reader requests. One such request came in to put online the Linux Contradictionary, a side bar from the administeria section of LXF run by Dr Chris Brown, so here it is, in full:

Linux is full of jargon. Learn to bluff your way in this seemingly confusing universe with this handy guide.

*Crack *A mind-altering substance that confers upon its user the power to guess weak passwords.
*Cursor* A user expressing an opinion after quitting the editor without saving.
*Buffer* Terminator that’s found at the end of a line, indicating the need for a carriage return.
*Shell* The hardest part of Linux to crack, hence:
*Bash* Popular way to open a shell.
*Yum* What a Red Hat user says on being offered an ice cream.
*Apt-get* What a Debian user says on being offered an ice cream.
*Aptitude* Good skill at installing software packages.
*Cpio* The archiving droid from the first Star Wars movie.
*Getty* A Californian museum housing a vast collection of login programs.
*Password file* A file within Linux that doesn’t contain passwords.
*Libcurses-devel* The library of Hogwarts’ R&D department.
*Ifup, ifdown* Conditional tests in a shell script to judge the mood of the user.
*Man* Mechanism for reading documentation. Use only as a last resort.
*Woman* Mechanism for not reading documentation, even as a last resort.
*NetworkManager* A tool that undermines all efforts to have control over your various Ethernet interfaces.
*Perl* All-time winner of the obfuscated C competition.
*Script* Something you give to an actor before giving a program to the audience.
*BNF* (Backus-Naur Form) Notation for saying simple things in complicated ways.
*Vi* An ancient text editor invented by the Romans, otherwise known as ‘Six’.
*Ex* An even older editor; something you once loved but then went off in a big way.​


----------



## Krow (Mar 15, 2011)

*Ubuntu: Where Did the Love Go?*

Source: itmanagement.earthweb.com/osrc/article.php/12068_3925641_1/Ubuntu-Where-Did-the-Love-Go.htm

By Bruce Byfield

When Ubuntu first appeared, the free and open source software (FOSS) community was delighted. Suddenly, here was a distribution with the definite goal of usability, headed by a former space tourist who not only understood computer programming but had the money to throw at problems.

The only objections were that Ubuntu was ripping off Debian, the source of most of its packages. For everyone else, Ubuntu and its parent company Canonical seemed everything FOSS had been waiting for.

Now, in 2011, that honeymoon is long past. Although Ubuntu remains the dominant distro, criticisms of its relationship with the rest of FOSS seem to be coming every other month.

What happened? Ubuntu supporters sometimes dismiss the change as jealousy of Ubuntu's success.

But, although that may be an element, the change in attitude is probably due chiefly to the gap between the expectations created by Ubuntu and Canonical in their early days and their increasing tendency to focus on commercial concerns.

Instead of being the model corporate member of the community that it first appeared, today Ubuntu/ Canonical increasingly seems concerned with its own interests rather than those of FOSS as a whole. No doubt there are sound business reasons for the change, but many interpret it as proof of hypocrisy. Added to the suspicion towards the corporate world that lingers in many parts of the FOSS community, the change looks damning, especially when it is so clearly documented in Canonical's corporate history.

*A Brief History of Canonical and Ubuntu*

After Ubuntu's first release in October 2004, Ubuntu/Canonical seemed in many ways a model FOSS entity. Nor was there much reason to doubt that initial sincerity. Shuttleworth, in particular, who was then the main speaker for both Ubuntu and Canonical, made considerable efforts to express support for other aspects of FOSS.

For example, Shuttleworth emphasized that "we all win, when Red Hat has a win." He made a special point of attending DebConf, Debian's annual conference, and of insisting that "Every Debian developer is also an Ubuntu developer" at a time when relations between Debian and Ubuntu were strained. 

However, even in the first years there were signs of isolationism. Ubuntu/Canonical insisted on using the proprietary Launchpad for development rather than existing free tools. Launchpad components did not begin to be released under free licenses until 2007, and the entire code was only released under the GNU Affero General Public License in 2009.

Similarly, in November 2006, Shuttleworth himself created controversy when he invited openSUSE developers to join Ubuntu. Although Shuttleworth later claimed that the offer was a response to Microsoft and Novell's cooperative agreements (Novell being openSUSE's corporate sponsor), it was widely condemned as an effort at corporate raiding unprecedented in the FOSS world, and Shuttleworth apologized a few days later.

However, the real turning point in Ubuntu/ Canonical policy appears to have been Shuttleworth's failure to convince other FOSS projects to coordinate their release cycles.

Shuttleworth first made the case in December 2006 that "it would be nice at the beginning of an Ubuntu release cycle to have a really confident picture of which projects will produce stable releases during those few months when we can incorporate new upstream versions. It would be even better if, during the release cycle, we knew immediately if there was a *change* in what was going to be released."

Over the next few years, Shuttleworth continued to stress the advantages of coordinated releases, arguing that it would allow centralized bug tracking, and suggesting that the cooperation might extend to common training materials.

The FOSS response, though, showed a distinct lack of interest. Many, including KDE's Aaron Seigo, saw the suggestion as squeezing projects into a uniformity that might not fit their needs. 

Faced with this unenthusiastic response, Shuttleworth used a keynote at the O'Reilly Open Source Convention in July 2008 to urge a different approach to cooperation, challenging the community to rival and surpass Apple in usability within the next two years.

Given Ubuntu's emphasis on usability and Shuttleworth's own interest in interface design, this challenge was not unexpected. It fit, too, into the growing interest in usability at the time.

However, the FOSS community saw no reason to focus on usability under Shuttleworth's leadership, or within his schedule.

When changes proposed by Ubuntu were slow to be accepted in GNOME -- some say out of hostility -- Shuttleworth began making interface changes to GNOME within Ubuntu. They were accompanied by the announcement of an elaborate new look for Ubuntu that included complicated color codes and a new font.

Then, faced with the choice of supporting these changes in an old version of GNOME or transferring them to GNOME 3.0, Ubuntu announced that Unity, a shell for GNOME originally designed for netbooks, would be its new desktop.

This growing tendency to develop in-house has been accompanied by other signs of insularity. As early as April 2006, Ubuntu replaced init, the standard bootup program in GNU/Linux, with Upstart, largely to reduce start times. In much the same way, in November 2010, it announced the eventual replacement of Xorg, which provides the graphical interface, with the mostly unproven Wayland.

Since both init and Xorg are flexible enough to provide the sorts of improvements that Shuttleworth advocates, the suspicion is that such decisions are not so much technical as political. That is, what concerns Ubuntu/Canonical is not the technical merits of the applications, but its ability to dominate the projects that make up its software stack.

Other decisions that have negatively affected the perception of Ubuntu/ Canonical include the ending of Gobuntu, an Ubuntu variant that included only free software; a restrictive contributor's agreement; and the question of how affiliate fees for its online music store will be distributed.

The result of all these decisions and actions is an increasingly disillusioned perception of Ubuntu/ Canonical among some FOSS veterans. In 2008, kernel developer Greg Kroah-Hartman observed that despite its popularity, Ubuntu developers had contributed less than one percent of the patches to the kernel over the previous three years.

In much the same way, in August 2010, former Fedora community architect Greg DeKoenigsberg noted that only 1.3 percent of the code in GNOME 2.30 was attributable to Ubuntu. Currently, there is even a small group of regular critics who are sure to appear in the comments beneath any policy announcement made by Shuttleworth or his senior management. Despite its continued popularity among new users, at times Ubuntu has looked surprisingly like a pariah in the last few years.

*The Reason for it All*

This summary is enough to establish the change in Ubuntu's approach to FOSS. Yet what is missing is why it happened.

The idea that the changes are due to Jane Silber replacing Mark Shuttleworth as CEO is a tempting but incomplete explanation.

True, Silber is more business than FOSS oriented, and has emphasized projects like Ubuntu One, the cloud storage service, which is aimed more at corporate customers than individuals. But the change in attitude was happening long before Shuttleworth stepped down in December 2009 to focus on usability issues. Anyway, as founder, Shuttleworth remains active in decision-making. At most, the change in CEO only adds to an existing situation. 

One possible pressing reason for the change is that, so far as anyone outside the company can tell, after seven years, Canonical remains unprofitable. Any investors other than Shuttleworth may be understandably concerned for their investment, and this pressure is probably a growing influence on Canonical decision-making.

Moreover, even if there are no impatient investors, a company that is unable to show a profit after seven years risks being identified as a failure.

However, another reason may be Shuttleworth's own frustration over his interactions with the FOSS community. In Shuttleworth's first blog entry, he states that "successful open source projects are usually initiated by someone with a clear vision and also the knowledge to set about turning that vision into reality," and mentions his strong interest in interface design.

Could Shuttleworth's calls for uniform release cycles and a focus on usability be not just practical suggestions for improvement, but also a bid to become the dominant leader in FOSS?

That is not a question you can ask anybody without the interview turning immediately hostile, but its answer might help to explain the change in direction.

If this really was part of Shuttleworth's motivation (and I do him the courtesy of assuming that he might also have been idealistic), then the bid for dominance not only failed, but failed specifically in an area that was personally important to him. For most people, such failures would be more than enough to explain an increasing insularity -- if you can't dominate the greater world, why not carry out your plans at home, and prove that they work there?

We will probably never know all the reasons why Ubuntu/ Canonical changed from the embodiment of FOSS hopes to more of a business enterprise. Given the high expectations that Ubuntu initially raised, perhaps the disillusionment is inescapable -- and discourages me and everyone else from sufficiently acknowledging that, in day to day operations, the spirit of FOSS seems very much alive at Ubuntu / Canonical.

However, at the level of strategic planning, the fact that something has shifted is undeniable. For those of us who remember the initial expectations, FOSS can only seem poorer for the fact.


----------



## Rahim (Mar 27, 2011)

^I read this article but forgot to post it here. Thanks Krow 

*How Open Source Really Is Changing The World*

Alan Shimel​
Not in the ways you may think, but open source is an engine of change and freedom

It's Friday, I am looking forward to the weekend and wanted to stretch my wings today. So I am going to go off the reservation a bit and talk about world events, politics and, yes, open source. We truly live in miraculous times. I know it is easy to say, but really think about it. In just under 25 years we have seen a new world order take hold. The Cold War dynamics that I and many of you grew up with passed in almost the blink of an eye. The events of 9/11 moved us to a new era of terrorist threats that had many of us thinking about isolating ourselves from the world. But the call of freedom and democracy marches on.

I sit in front of my TV and/or computer screen every day and am truly amazed at what I see going on in the Mideast and other parts of the world. It seems that some of the last bastions of dictatorship and oppressive ruling governments are crumbling before our eyes. I am not naive. I know that there are still other places and countries where change has to come. I also realize that that regime change doesn't guarantee that we will see true democracy take hold. But the scent of political change is in the wind.

Political change today is not being driven by guns and tanks and airplanes. Those are the tools of the old guard trying to keep the status quo.  The forces of change are harnessing technology to  effectuate change. The Internet, the social networks, the mobile net, they are all being wielded by a new generation who are using these tools to gain for themselves the freedoms they see their peers in the rest of the world enjoy.

What my generation tried to do with things like "Radio Free Europe" and "Radio Marti" is being accomplished almost effortlessly with Facebook, Twitter and the like. The great thing is that it looks like once these technologies are set loose, the genie is out of the bottle and can't be put back. People the world over are the same. They yearn for peace, prosperity and the pursuit of happiness. When I was in school as a political science major, it was fashionable to say that Eastern Europe and other parts of the world were not democratic because the people there had no experience with freedom and democracy. They "were not ready" for freedom.

The Internet has changed that in a really short time. It has made everyone ready for freedom and all that it promises. Yes governments can try to hit the "kill switch", they can harness the power of the net to keep their people in line and invade their privacy. They can even set up a "great firewall". But it will not stop people seeking freedom.

When we dig deeper into what is making all of this technologically induced change possible, we find open source at the core. Open source is empowering the net. God knows Bill Gates and Microsoft would not have embraced the Internet if Linux and other open technologies did not have the potential and were not eating their lunch. Google would not be what it is without open source. For that matter, neither would Facebook and Twitter.

I am generally an optimist. I think the changes we are seeing in the world will result in freedom, democracy and peace becoming ever more prevalent in the world. Of course it could go the other way. But at the end of the day, we have technology, the Internet and Open Source to thank for this "change we can believe in".


----------



## Rahim (Apr 2, 2011)

*The Day Firefox Left IE in the Dust*
Katherine Noyes (www.linuxinsider.com/story/The-Day-Firefox-Left-IE-in-the-Dust-72149.html)

Firefox 4's victory is "just another sign that Microsoft is past its prime when it comes to generating excitement," opined Barbara Hudson, a blogger on Slashdot. "For decades users have internalized the 'upgrading Microsoft products can put you in a world of hurt' meme: 'What I've got works. Let someone else be the guinea pig.' Can you blame them?"

Those of us here in the free software community are almost always rooting for new open source products as they debut, but it's not often that we are as completely and thoroughly gratified as we were last week upon the launch of Firefox 4.

With headlines like "Firefox 4 thumps IE9 in first day download contest" and "Why Firefox 4 is winning the browser battle," it was hard to refrain from simply grinning continuously.

Linux Girl spent the majority of the week down at the blogosphere's Punchy Penguin bar, where FOSS fans took no pains to contain their exuberance -- or their perspectives on what Microsoft (Nasdaq: MSFT) did wrong. In the interests of posterity, she took it upon herself to record some of the conversational highlights. 


*'This Comes as Little Surprise'*
"I've been running Firefox 4 since b12, when they turned on acceleration for Linux, so this comes as little surprise to me," Hyperlogos blogger Martin Espinoza offered. "On the other hand, it remains to be seen whether IE9's new security features will be worth a slight reduction in speed.

"As a Linux and Windows XP user, it's fairly irrelevant to me; as an aficionado of numerous Firefox extensions, it is even more so," Espinoza concluded.

Indeed, it was "probably a bit of a hollow victory because Microsoft chose not to support XP for IE9, and that throws away large numbers," Slashdot blogger yagu pointed out. "I think it's a stupid move on Microsoft's part, but they're trying to establish the gentle nudge to all to move to their new flagship, Windows 7."

*'If Microsoft Is a Loser, They Earned It'*
Microsoft's problem with Internet Explorer "continues to be a mashup of earlier mistakes," yagu opined. "Amazing with all that Microsoft still commands the share they do."

In IE's early days, "Microsoft chose to embrace and extinguish HTML standards," he noted. "With MS clout they snagged huge market share and forced everyone's hand to develop to MS's flavor of browser."

As standards became more important and other browsers became bigger players, however, "IE6 fell out of favor because it was too hard to work with," yagu recalled.

Newer IEs subsequently introduced more compatibility "as Microsoft struggled to regain street cred," yagu continued. "At the same time, Microsoft ticked off a large audience of developers now stuck and tethered to the old standard. Microsoft has left a wake of dazed and confused developers and users while other 'brands' have stayed consistent and improved."

In short, "IE9 could be a great browser," yagu concluded, but "I couldn't care less. I have no inclination to even bother kicking its tires. If Microsoft is a big loser, they earned it."

*'Microsoft Is Past Its Prime'*
Firefox 4's victory is "just another sign that Microsoft is past its prime when it comes to generating excitement," opined Barbara Hudson, a blogger on Slashdot who goes by "Tom" on the site. "For decades users have internalized the 'upgrading Microsoft products can put you in a world of hurt' meme: 'What I've got works. Let someone else be the guinea pig.' Can you blame them?"

Those "still using IE as their main browser" may also "not be all that fussy about keeping their computers up to date," Hudson suggested.

Consultant and Slashdot blogger Gerhard Mack focused on privacy.

"I love it when Firefox manages to keep things fast, but I wish browsers would compete on privacy controls, since both IE and Chrome lack the ability to whitelist select cookies and delete the rest at browser close," Mack pointed out.

*'I Ignore IE When It Comes to Testing'*
Chris Travers, a Slashdot blogger who works on the LedgerSMB project, saw plenty to be cheerful about from a developer's perspective.

"I am positively surprised on a number of counts here," Travers began.

"First, I am pleasantly surprised that IE is progressing as quickly as it is," he said. "That Microsoft offers a more standards-compliant web browser makes life easier for those of us who build open source web apps.

"I still expect to use Chrome and Firefox side by side for most work, and ignore IE when it comes to testing (since I don't run Windows), but it's nice to know I don't have to write IE support off as quickly," Travers added. "That Firefox and IE share the same bugs on the ACID 3 test is good." 

*'A Lot More Intuitive' *

Travers was also "pleasantly surprised that Firefox 4 is learning from Chrome regarding the UI," he noted. "I think that the UI changes are quite positive and make the system a lot more intuitive."

The Firefox button, in fact, "is something Chrome might be able to learn from too, as it took me some time to figure out where to look for what would otherwise be menu items," he explained.

Chrome, indeed, seemed to be Firefox 4's primary competition, at least from the perspective of the bloggers Linux Girl heard. 

*Chromium-Based Competition*
Though he used Firefox for years, for instance, blogger Robert Pogson has now embraced Google's (Nasdaq: GOOG) browser instead. It's "incredibly fast, deals with almost all sites well, and combined with Google's Desktop makes a powerful tool for my desktop," he explained.

"While Google's Chrome is growing share at 50 percent per annum, those other browsers are losing share at the rate of 10 percent per annum," Pogson pointed out. "IE is cursed by M$, but Google's Chrome seems to have the right balance of features for me."

Similarly, "I'm typing this on FF4 and frankly this weekend I'll be moving away and saying goodbye to Firefox after many years of loyal usage," Slashdot blogger hairyfeet asserted. "Why? One word: Chromium." 

*'The Look, But Not the Functionality' *

Chromium-based browsers -- hairyfeet's favorite is Comodo Dragon -- "frankly kick the snot out of FF 4," he opined. "The Moz team may have ripped off much of the Chrome GUI (no file/edit/view, bookmarks on the right side, etc.), but sadly it is a case of cargo cult usability where they ripped off the look, but not the functionality."

Firefox, in fact, "seems to have lost its way," hairyfeet opined. "Instead of being the lightning fast browser you can customize, they keep adding more items like Sync instead of letting the users decide. Meanwhile Chrome just keeps on coming, faster and with better security."

So, "IE isn't what is gonna kill FF -- IE is dead, and tying IE9 to Vista and above simply made sure it is as good as dead," he concluded. "No, Chrome will be the one that puts the nail firmly in FF's coffin."

*A Community Effort *

Not everyone shared that opinion, however.

"The open source community has made Firefox a great success," Travers asserted, "not only through direct code contributions but also (and perhaps more importantly) add-ons like Webdeveloper and Noscript."

As a result, "Firefox is still one of the most important web browsers on the market from a web app developer's viewpoint," he concluded. "The developer share is important and made better due not only to robust developer tools but also solid support for standards."


----------



## Krow (Apr 4, 2011)

^Nice shares! I am still in the Firefox brigade though. 

*Is Android Open?*

Source: Is Android Open? | Epicenter | Wired.com

By Scott Gilbertson

Google is famous in programming circles for redefining words to suit its ideas. 

Take “beta,” for example. Most of us take it to mean buggy, pre-release software that’s “mostly working, but still under test.” But Google uses the word to refer to a product that’s ready for general use but is subject to “regular updates and constant feature refinement.”

Now it’s happening again over the term “open.”

Andy Rubin, Google’s Senior Director of Mobile Platforms who oversees Android, gave a similar semantic shuffling to the word “open” in response to a slam by Steve Jobs. The Apple CEO stirred up a hornet’s nest of angry Android developers this week when he suggested, in a lengthy diatribe during an Apple press event, that Google’s mobile operating system was not really “open.”

Rubin responded by sending his first ever tweet, posting the code necessary to download the Android source and compile it on your PC and calling it “the definition of open.”
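The commands in that tweet -- as widely quoted at the time, though the repository URL has since changed after AOSP hosting moved off kernel.org -- amounted to a standard Android Open Source Project checkout and build:

```shell
# Rubin's tweet, reproduced as widely quoted; "repo" is Google's
# wrapper around git for managing Android's many sub-repositories.
mkdir android ; cd android ; repo init -u git://android.git.kernel.org/platform/manifest.git ; repo sync ; make
```

Note that, as Perens goes on to argue, being able to run these commands demonstrates source availability, not the broader freedoms usually implied by "open."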

But whether Android actually qualifies as “open” in the purest sense is up for debate, since downloading and compiling code alone does not make a piece of software open. Bruce Perens, who coined the term “open source” and has been working on its behalf ever since, is skeptical of Rubin’s definition.

“The fact that you can check something out and compile it doesn’t mean you have the right to use it,” Perens tells Wired.

In the software world, “open” can be defined around three core traits: a license that ensures the code can be modified, reused and distributed; a community development approach; and, most importantly, assurance that the user has total freedom over the device and software.

The Android OS is, in strictly legal terms, open source. Android is released under the Apache 2.0 software license, which allows anyone to use, modify and redistribute the code. But while it might meet the letter of the law, Android falls short on the other two points.

It’s the lack of community-based development that Android’s critics say makes it no more “open” than Apple’s locked-down, decidedly not-open iOS model. As Perens says, “most open source projects [include] instant access to changes as they are made … and an open door for anyone to participate.”

Unlike major open source projects like Firefox or the Linux kernel, you can’t see what’s happening behind the scenes with Android, nor can small developers contribute to the project in any meaningful way. Google typically releases major updates to Android at press conferences, not unlike those Apple uses to show off new iPhone features.

Once the code is released, Android developers can download it and do what they want with it, but they have no way of seeing what’s happening behind the scenes every day. If you want to know how Firefox changed last night — however esoteric those changes may be — you can study the changes on the Mozilla site. The same is true of the Linux kernel, Open Office and nearly every other open source project with a website.

It’s not true of Android. While Android may have the legal licensing to qualify as open source, it utterly fails on the equally important issues of transparency and community.

Android basically gives you two options: Accept what Google gives you, or fork the entire codebase. Other than the ability to roll your own version of Android, it’s really no different than iOS, which works on a similar “take what Apple gives you” model.

Facebook’s Joe Hewitt, the Firefox co-creator who is now rumored to be working on a Facebook-branded mobile OS based on Android, chimed in over Twitter. Hewitt says the lack of transparency in the Android development process makes it “no different than iOS to me,” adding, “open source means sharing control with the community, not show and tell.”

The next day, Hewitt followed up with a blog post clarifying his remarks.

“It kills me to hear the term ‘open’ watered down so much. It bothers me that so many people’s first exposure to the idea of open source is an occasional code drop, and not a vibrant community of collaborators like I discovered ten years ago with Mozilla.”

He also recommends people look at Google’s Chrome OS project, which is being run with a level of transparency and community involvement largely absent from Android, and which is a better representation, he says, of Google’s values.

Unfortunately, even if Google were to develop Android in the open, as the Mozilla foundation does with Firefox, it probably wouldn’t help Android be any more open.

While Google’s approach may be a disingenuous use of the word open — as Hewitt says, Google is doing “bare minimum to meet the definition of open” — there is another problem: the phone carriers.

“The problem is the wireless carriers first and Google second,” says Perens, “because Google enables the carriers to close the Android platform from the user’s perspective.” In other words, while you might be able to copy and paste the code from Rubin’s tweet and take a look at Android yourself, what arrives on the actual phone is every bit as tightly controlled as iOS.

Just as there are jailbreaking hacks for the iPhone, there are root hacks for Android that attempt to give the end user some control back. That Android is less controlled by its Google parent in other ways — the Android Market, for instance, is not tightly regulated like Apple’s App Store counterpart — is a secondary benefit. Neither device is open in the sense that the end user can modify it as they see fit — customize it perhaps, but adding a new theme and downloading whatever apps you like are not the goals of open software.

The real goal of open software, as Perens and others have helped define it over the years, is to ensure that you can do whatever you want with it. As anyone with an iPhone or an Android phone can tell you, that’s not the current state of affairs on either device. Nearly every smartphone on the market is tightly locked to its carrier’s specifications. There are a few exceptions, like the Nokia N900, which runs Maemo Linux.

The carriers argue that open phones would threaten the network. Steve Jobs argues that an open phone would threaten the user experience.

AT&T made both of these arguments during most of the 20th century, when it still maintained total control (what Jobs likes to call an “integrated” system) over land lines -- you rented phones from AT&T or you didn’t have one. Decades after several massive anti-trust lawsuits and the breakup of Ma Bell, we’ve ended up back in a similar jam.

Even if there were a truly open source OS for your phone, it’s unlikely it would ever truly be open by the time it arrived in your hand.


----------



## Krow (Apr 8, 2011)

53 Open Source Replacements to Spice Up Your Desktop

By Cynthia Harvey

Source: 53 Open Source Replacements to Spice Up Your Desktop - Datamation.com

I've listed some of the replacements which caught my eye here. Please follow the link for all. 



> 2. *Launchy Replaces the Windows start menu* and other methods of launching apps
> 
> If you hate to use the mouse, Launchy is for you. It lets you open applications, documents, folders, bookmarks and more with just a few keystrokes. Operating System: Windows, Linux, OS X
> 
> ...



Interesting list, I like to keep my Windows 7 as FOSS-based as possible. I'll definitely try these programs when I go back to using Windows. Currently in love with 10.10.


----------



## Rahim (Apr 9, 2011)

^Nice share and using Windows is inevitable.


----------



## Krow (Apr 10, 2011)

^Thank you. Yes, as long as I am not using the graphic design software needed for some of my work, I am on Ubuntu.


*Android's problem isn't fragmentation, it's contamination*

Source: Editorial: Android's problem isn't fragmentation, it's contamination -- Engadget

By Vlad Savov

This thought was first given voice by Myriam Joire on last night's Mobile Podcast, and the simple, lethal accuracy of it has haunted me ever since. All the hubbub and unrest about whether Google is trying to lock Android down or not has failed to address whether Google should be trying to control the OS, and if so, what the (valid) reasons for that may be. Herein, I present only one, but it's arguably big enough to make all the dissidence about open source idealism and promises unkept fade into insignificance. 

Let's start off by setting out what the goal behind Android is. It'd be impossible to identify the flaw with Google's strategy if we aren't clear on what it's strategizing toward. From its very inception, Android has been about expanding the reach of Google search. Never mind all the geeky professions of wanting to build a great mobile operating system and one which Googlites themselves would want and be proud to use -- there's no reason to doubt the veracity of those proclamations, but they're symptomatic, a sort of nice side benefit, of the overarching business decision. Google makes its money by selling ads. It sells those ads by serving them up in front of its vast audience, which in turn comes to it primarily through the use of Google search. When faced with the rampant ascendancy of mobile internet use -- and Google deserves credit for identifying the oncoming smartphone craze in good time and reacting to it -- the company knew it simply had to maneuver its products into the mobile realm or face a slow, ignominious path to irrelevancy. Ergo, what Google was really and truly striving for with Android was ubiquity. Instead of having to dance to the merry tune of carriers -- as Microsoft is now having to do with Verizon in order to get it to bundle Bing on some Android devices -- or appease manufacturers' many whims, Google opted to build its own OS, with that specific aim of expanding availability as rapidly and as broadly as was possible.

 To say that the goal has been accomplished would be an understatement. Android has stormed every Symbian castle, ransacked every webOS village, threatened the mighty tower of Mordor iOS, and thoroughly resisted the upstart challenge of Windows Phone 7. The reasons for its success and universal acceptance have been twofold. Google has invested plentiful resources into expeditiously building up its Linux derivative for the mobile space, on the one hand, and has decided to make the fruit of that labor available to phone manufacturers without hindrance or demand -- to use as they pleased, for it was open and flexible, and while it wasn't initially beautiful to look at, it was a sturdy platform from which to build.

 Many have characterized the resulting melange of multivariate Android skins and devices as generating fragmentation within the OS' ecosystem. That may be true, but is not in itself problematic. If there were no qualitative difference between Android on an HTC device and Android on a Sony Ericsson phone, the end user wouldn't care. He'd call that choice.

Where the trouble arises is in the fact that not all Androids are born equal. The quality of user experience on Android fluctuates wildly from device to device, sometimes even within a single phone manufacturer's product portfolio, resulting in a frustratingly inconsistent landscape for the willing consumer. The Sony Ericsson Xperia X10 is a loud and proud Android phone, but it features an older version of the OS, has had a checkered history with updates, and generally leaves users sore they ever picked it up. At the same time, Samsung's 10 million unit-selling Galaxy S is an Android phone too, one that Google can rightly be proud of. The most irksome example, however, is LG's Optimus 2X -- it has Froyo on board both in its European 2X garb and in its US-bound G2x variety, but the former crashes the browser any time you look at it, while the latter, eschewing LG's customizations and running the stock Android 2.2, is one of the slickest and smoothest devices we've handled yet.

 The point is not that carrier or manufacturer customizations should be abandoned entirely (we know how much those guys hate standardization), it's that some of them are so poor that they actually detract from the Android experience. Going forward, it's entirely in Google's best interest to nix the pernicious effects of these contaminant devices and software builds. The average smartphone buyer is, ironically enough, quickly becoming a less savvy and geeky individual and he (or she) is not going to tolerate an inconsistent delivery on the promise contained in the word "Android." 


 It may seem odd for us to pick faults with an operating system in the midst of a world-conquering tour, but then you only need to look at Symbian's fate to know that fortunes change quickly in the breathlessly developing smartphone realm. All Google really needs to do to patch the cracks and steady its ship is to live up to those rumors of Andy Rubin ruling from above. Dump the X10s and 2Xs from the portfolio of real Android devices -- and Google can do that by denying them access to its non-open source products like Gmail, Maps, and the all-important Android Market -- and give us some respite from having to worry if the next Android will be a rampant robot or a dithering dud. Custom skins can still live on, but it's high time Google lived up to its responsibility of ensuring they're up to scratch before associating its mobile brand with their final product. Such a move may dent the company's valuable reputation as a do-gooder, but if it helps the even more valuable Android OS keep its course toward world domination, surely it'd qualify to be called a good thing in and of itself?


-------------------------------------




Now I don't agree completely with this. Although Google should ensure that certain quality standards are met when Android is implemented on any phone, doing so would mean trying to control how the OS is customised by manufacturers. The spirit of Open Source is free as in freedom. The manufacturers should have the freedom to implement their version of Android and end-users should have the freedom to root their device and install any version of Android they like. The coders at the XDA forums are doing a great job IMHO.


----------



## Rahim (Apr 11, 2011)

Krow said:


> The spirit of Open Source is free as in freedom. The manufacturers should have the freedom to implement their version of Android and *end-users should have the freedom to root their device and install any version of Android they like.*


I agree on that. Users are at the mercy of the mobile companies, and some are still shipping old versions.


----------



## Krow (Apr 22, 2011)

Will GNOME 3.0 Repeat the User Revolt of KDE 4.0?

Source: Will GNOME 3.0 Repeat the User Revolt of KDE 4.0? - Datamation.com

By Bruce Byfield

In January 2008, the long awaited KDE 4.0 was released. Immediately, a user revolt erupted. 

KDE 4.0 was too radical a change, too lacking in features or stability, too much a triumph of developers' interests over users' -- the accusations seemed endless, and only began to quiet six months later when KDE 4.1 began addressing the shortcomings. In part, the hostility continues to this day, although for many the KDE 4 series has long ago proved itself.

This April, GNOME 3.0 is scheduled for release. Just as KDE 4.0 was a radical departure from KDE 3.5, so GNOME 3.0 is a radical departure from GNOME 2.32. But will its release trigger another user revolt? Or has the GNOME project -- perhaps learning from KDE's experience -- managed expectations well enough to prevent history from repeating itself? 

Certainly, GNOME has tried much harder to handle its own break with the past differently than KDE managed KDE 4.0. But the KDE revolt resulted from multiple causes, and, although GNOME has addressed some of those causes, the underlying problems of the project's relationship to its users remain in some ways disturbingly similar to those faced by KDE three years ago.

*Anticipating Release Management Problems *

As with KDE 4.0, regular snapshots of GNOME 3.0 have been released as it developed. For the last couple of releases in the GNOME 2 series, a GNOME 3.0 preview has been available, although it has not received widespread attention until the beta release earlier this month. In fact, many users seemed unaware of the preview.

However, in general, GNOME is showing much more caution than KDE did. KDE 4.0 was a developer's release that slipped into general circulation partly because of miscommunication and partly because of each distribution's wish to ship the latest software. Consequently, it was not stable or fully-featured, and, according to KDE developers today, was never meant to be. By contrast, GNOME 3.0 has been delayed twice.

Both times, GNOME has explained the delay as a wish to perfect the software. When the release was rescheduled for September 2010 instead of the original April 2010 target, the explanation was that "our community wants GNOME 3.0 to be fully working for users." 

Similarly, when the release was postponed a second time, from September 2010 to March 2011, the official reason was to allow "adequate time not only for feature development, but user feedback and testing." Unlike KDE, GNOME has no intentions of producing a developer's release, or of justifying any complaints about lack of stability or features.

Since then, there has been an additional delay of a few weeks. Although no reason has been given for this new delay, slippage of this length is common in free software. It also gives time for the GNOME documentation and marketing teams to have a hackfest to prepare for the release. 

In fact, GNOME shows every sign of attempting to manage expectations and rumors about the release. For at least half a year, there has been a page dedicated to debunking rumors about GNOME 3.0. More recently, a website dedicated to the release has maintained a Common Questions and Answers page, as well as a home page that is as much a product sheet as anything you would see in a PDF file released by a commercial company. 

All in all, GNOME has made a considerable effort to keep users informed -- far more than KDE ever did in its similar circumstances. However, these efforts are not widely publicized. They were not brought to the attention of the journalists who write about free software -- instead, all users were left to discover these efforts for themselves. 

GNOME is apparently preparing publicity for the general release, but, considering that GNOME 3.0 has been two years in the making -- including one year of postponements to the original schedule -- a project with more experience in marketing might have tried harder and earlier to dispel rumors. As things are, GNOME's official marketing efforts may come too late to counter users' expectations and the misinformation that has been floating around the Internet.

*Marketing the UnMarketable *

At any rate, despite the occasional delusions of marketing managers everywhere, the most expert campaign imaginable is not enough to completely counter what the audience considers unacceptable. 

Despite GNOME's efforts to publicize what it has been doing for the last two years, no one seems to have made a concerted effort to learn what users actually wanted in a desktop. 

True, GNOME has consulted usability experts, and tried to apply the theory to the building of a new and modern desktop. As Canonical may find in its development of the Unity shell, however, usability theory in software is often resisted by users' practice.

That is why, although the release's FAQ explains that design decisions were based on "an extensive literature review" and "stock usability principles and knowledge," the fact that only "a small usability study" was done in December 2010 -- after the interface was already well-developed -- is some cause for concern.

The trouble is, a significant proportion of the users of any software are straightforward in their needs, and conservative about change. Apparently, many users -- at least vocal ones -- want little more from a desktop than a place from which to launch their applications. Since that was achieved years ago in GNOME, such users tend to resist anything more than minor changes in the interface. Even when innovations increase efficiency or productivity, a percentage of users will resist them precisely because they are new.

Looking back at the KDE 4.0 release six months later, Aaron Seigo suggested that the release confronted users with more changes than they could handle at once. Much the same could be said about GNOME 3.0. 

GNOME can argue that the upcoming release offers "an overview at a glance," but people used to a single screen are still going to be upset by having to change to another one to view activities. Similarly, advertising "distraction-free computing" is not going to reconcile all users to not having applets in the panel -- and never mind whether their equivalent is available elsewhere. The kind of users I am talking about want what they know and know what they want. 

For such users, GNOME's invitation in the GNOME 3.0 FAQ to provide feedback is going to be ignored. Most of these users will never see the FAQ, and those who do have little sense of how to use a bug tracker, and no inclination to read up on GNOME design before making their suggestions. 

Moreover, if Jeff Waugh is correct, and even an established free software company such as Canonical can have trouble knowing who to talk to in GNOME, then what are the odds of average users finding the most effective way to contribute? 

By default, input will tend to be confined to the experts. Should average users succeed in registering their reactions, they risk being intimidated and ignored -- not necessarily out of any hostility towards them so much as because most of them will not be strongly motivated to persist, and the experts often lack the time to educate them in effective advocacy. Projects that want the input of such users have to seek it out patiently, and, in this respect, GNOME seems only slightly better than KDE was.

The result? Like KDE 4.0, GNOME 3.0 is a release that will please developers, with features that will delight programmers and frustrate many users -- in both cases, simply because the features are new.

Ten years ago in free software, developers and users could be assumed to be the same people, but those days are long gone. Today, an increasing number of users have a consumer mindset, and expect developers to deliver what they want. Unfortunately, what such users apparently want is minor enhancements of what they already know, and developers understandably resent such demands.

*A Historical Fugue *

Under these circumstances, GNOME 3.0 seems poised to have the same reception as KDE 4.0 did. Since GNOME has tried to mitigate possible responses, the reaction will no doubt be less extreme and less unreasonable. But, since GNOME has not eliminated all the reasons for the reaction, some sort of reaction will likely happen all the same. 

Already, a project called EXDE has been formed to continue development of the GNOME 2 series, just as Trinity KDE continues the KDE 3 series. In fact, EXDE has already existed for two months. Its existence may look like the effect coming before the cause, given that Trinity KDE took a couple of years to emerge, but the very existence of EXDE is an omen that history is about to repeat itself.

The repetition will not be exact. It never is. But it will be exact enough to emphasize the fact that the free software community is still struggling to define the exact relationship of users and developers. 

Admittedly, GNOME is trying much harder than KDE did three years ago to keep users informed and get them involved. Unfortunately, though, the indications are that these efforts will mitigate but not eliminate the consequences of moving faster than users can adjust.


----------



## Cool G5 (Apr 25, 2011)

I feel people should go steady with GNOME 3. Most would not be ready to jump on it until it becomes a bit more workable. It should see a similar response to KDE 4, maybe a bit better.


----------



## Krow (Apr 26, 2011)

^Hmmm, I started using KDE after version 4, but I did fall in love with it (just see my username). Using KDE 4.6 now and I love it even more.

I guess Gnome 3 will take time to gain acceptance. It should get a chance to prove itself though.


----------



## Cool G5 (Apr 26, 2011)

Krow said:


> ^Hmmm, I started using KDE after version 4, but I did fall in love with it (just see my username). Using KDE 4.6 now and I love it even more.
> 
> I guess Gnome 3 will take time to gain acceptance. It should get a chance to prove itself though.



Never knew that K was due to your obsession with KDE. KDE has come a long way after taking criticism fire from all sides. It's getting better and better with each passing version. I too was an early adopter of KDE. KDE 3.x was good, KDE 4.0 & 4.1 were passable, but the community has done a really good job with each release since then.


----------



## Rahim (Apr 28, 2011)

*'PC User' Doesn't Mean 'Windows User'*
Katherine Noyes

A Hunch study on 'Mac vs. PC' user personalities implicitly equates PCs with Windows. What about all those using Linux?

It's all too common in the popular press to see the assumption made that "PC" means "Windows PC."

Most mainstream discussions of Windows malware, for example, refer to it as "PC malware" and therefore an industry problem--conveniently sparing Microsoft any direct blame.

PC, however, is short for "personal computer"--a term that includes not just Windows computers but Macs and Linux computers as well. It may seem like a semantic quibble, but it has significant repercussions.

*'Computer-Savvy Gearheads'?*

To wit: Decision-making site Hunch last week published in an infographic the results of its most recent study on the personality differences between Mac and PC users. Implicit in that analysis, once again, was that "PC users" are on Windows--there's even a Windows icon used to represent them.


The study is full of all kinds of interesting and provocative results, such as that Mac users are younger, more liberal, more urban, more educated and generally more interesting than PC users are. Particularly bizarre, too, is that Mac users were found to consider themselves "computer-savvy gearheads" more often than PC users were; Macs, in fact, are most notable for their attempt to protect users from the inner workings of their machines.

In any case, the Hunch study shines a direct spotlight on many misperceptions and misunderstandings about computers and the operating systems that run them. Let's look at just a few of them.

*1. 'PC' != Windows*

First off, the term "PC" includes Macs, so that's a poor term to use for distinction.

Second, given the diversity of computing environments today, it is no longer accurate to assume that someone on a non-Mac PC is using Windows. Linux users are growing rapidly in number, and I doubt most would categorize themselves in the same group as Windows users.

That, indeed, is probably at least part of the reason a full 23 percent of respondents to the Hunch study didn't classify themselves in either the PC (Windows) or Mac camps: the two camps are neither well-defined nor comprehensive, since they leave out Linux users altogether.


Hunch also didn't specify, as far as I can tell, whether it was including mobile technologies such as tablet PCs. If it was, that opens up a whole other can of worms--not to mention Linux-based Android.


*2. Few Choose Windows*

Another assumption implicit in the Hunch study is that those who do use Windows do so by choice.

It certainly seems true that Mac users choose their platform, by and large, and it may be true in some cases for Windows users, too. Microsoft still holds such a monopoly over non-Mac PCs, however, that most people get Windows on their machine whether they want it or not. Windows is everywhere, unfortunately, and so Windows users are too, simply by default.

That situation is improving, to be sure--just recently, in fact, it's become clear that Microsoft is getting its operating system onto fewer and fewer of the new computers that ship. Most recently, more than a third are shipping without it.

Still, it's mistaken to assume that Windows users are Windows users by choice. Most simply go with what comes on their hardware. That being the case, I'm not sure you can draw many conclusions about personalities based on the fact that they use Windows.

*3. What About Linux?*

Most notable of all about the Hunch data, however, is that it completely ignores the third big contender in the operating-system arena: Linux.

Certainly, Linux users are still a minority--especially if you're not counting Android. They are a growing contingent, however--as recent Wikipedia visitor data can attest--and I think Hunch's 23 percent non-response group underscores that fact.


What's had me thinking over the past few days is how Linux users would compare, had they been recognized by Hunch and allowed to respond as a group. Would they say that "talking about computers is akin to struggling with a foreign language," the way "PC" users did? I don't think so.

How do you think Linux users' responses would compare? Please share your thoughts in the comments.


----------



## Krow (Apr 30, 2011)

Right now, I'm a windows user. After rebooting, I will become a linux user.


----------



## Cool G5 (May 1, 2011)

'PC' != Windows - This won't change for some time, as there is not much Linux penetration in India, considering one gets Windows for free (pirated) with a new computer. Again, people don't like change.


----------



## Krow (May 2, 2011)

Why Is Ubuntu's Unity Squeezing out GNOME 3?

By Matt Hartley

Not too long ago, I wrote about Ubuntu's embrace of the Unity desktop and what that would mean for Ubuntu users who might prefer a traditional GNOME shell. 

 At the time, I was called out by some readers regarding my belief that Ubuntu was limiting itself with its choice in relying on Unity. Now as we approach Ubuntu 11.04, it looks as if I might have been right all along. 

 While users can certainly select the older GNOME shell, the move to the Unity desktop has clearly not been greeted with unanimous applause.

*Unity is not GNOME 3*

One fact that ought to be made clear from the start: in seeking to make Unity its default desktop experience, the Ubuntu development team has indeed locked some users into a singular desktop experience. "But Matt, that's nonsense! Users can install any desktop they choose! Besides, if they want GNOME 3 instead, users can just add the PPA repository for it!"

The above statement is what I feel makes this entire thing surrounding Unity so amusing. In the Ubuntu development team's desire to make Ubuntu more "accessible," they're actually assuming new users even realize other desktop environments are possible. 

Newsflash – most of the newer users I encounter have no idea that another desktop is even an option. 

This means when a less informed Ubuntu user sees the GNOME 3 provided shell on distributions such as Fedora, they may find themselves making the switch away from Ubuntu. While this matters little to the community at first glance, longer term this only adds more fragmentation to the community at large.

_”Unity,” indeed._

*Is Unity even worth it?*

I know of many people who feel strongly that Unity is the next logical step for Ubuntu. And it's entirely possible that the Unity desktop could be well received by most people. We'll have to wait and see how that turns out.

 From my perspective, however, I think it's not only going to be a massive disaster for the existing user base, but I'm skeptical as to the value Unity will deliver in the first place. Then again, the same could be said about the default shell provided by GNOME 3 on other Linux distros.

*Xfce is looking great these days*

Speaking for myself, I'll almost certainly be selecting the Xfce desktop, as I've had enough of GNOME and Canonical. The dumbing down of the Linux desktop environment is bordering on insane. 

 Half of me finds this entire process amusing, while the other half is getting tired of Ubuntu making the Linux community look foolish with heavy-handed motives. Unity is to Linux what the Windows XP UI was to Windows users. It feels like some dumbed down, "Fisher Price" experience gone terribly wrong. 

 Some may disagree with me on this, but I stand by my opinion. Given more time, Unity could actually become something very useful for netbooks and tablets. But in its current incarnation, it just stinks.

*I don't hate Ubuntu*

I have no problem with Ubuntu per se. Much of what their developers have done has been nothing short of amazing. They've taken the magic that is Debian and created a powerful community around it. This is commendable.

As for the Unity desktop, it has the potential to become a mature alternative to what most of us are accustomed to. And yet I still argue that it's nowhere near ready for prime time, either in stability or in general layout, but that is up to each individual. 

At its very best, I see Unity as being a great option for netbooks and later on for tablets. But on the mainstream desktop PC for casual Linux users? No way, it's never going to work.

 If I do use Ubuntu on any of my desktop machines, I'll be using the distro with another non-GNOME desktop. Between GNOME 3's shell leaving out the minimize option, to the lack of customizability from Canonical's Unity desktop, I just can't take it anymore. The entire GNOME 3 shell vs. Unity debate has me spinning in circles.

*Workarounds vs. real user choice*

As previously mentioned, one can add a PPA repository to remove the Unity desktop and replace it with the traditional GNOME 3 shell.

Some users might even feel good about calling this a solution to the whole Unity alternative issue. Wrong. 

 What's actually possible is for users to either default back to an older GNOME experience or remove Unity in its entirety instead. Using the PPA solution is a hack, nothing more. 

 Some of you might be quick to point out that, to a degree, the Linux desktop is basically a series of hacks. This may be true, but this isn't the position that Canonical's taking with their push to offer Unity. Unity alternatives shouldn't require people to hack past the default Ubuntu desktop.

*Opportunity and compromise*

To prevent this piece from becoming nothing more than a spotlight on a problem with Ubuntu, allow me to suggest a compromise. 

 When I visit Ubuntu.com, I'm instantly bombarded with the feeling that it's a "Unity experience" only. Why not make a greater effort in highlighting some of the Ubuntu derivatives? 

 The derivatives page provides some great GNOME desktop alternatives. Yet what I find annoying is that the derivatives webpage link is at the bottom of the homepage footer in the smallest text possible. No one is ever going to think to look there for an alternative to the Unity desktop. 

 This is a real shame, considering many might otherwise like the Ubuntu core but wish to try a different approach to the desktop environment.

Canonical feels strongly about using GNOME 2.x with their Unity desktop shell. Yet many existing users will find themselves less than impressed with the limits placed within the Unity experience. Why not provide plenty of detail about the desktop environment alternatives out there? 

 I don't mean buried in the existing website, but during the installation. Why not mention that if the Unity thing isn't working out, users can indeed try some of the great derivatives using the Ubuntu core? I simply don't see this as being such a big deal that derivatives can't be given more of the spotlight. 

 After all, if Unity goes over as badly as I think it might, wouldn't it be a good idea to have some alternative desktop environments at the ready?

*Desktop split testing*

Another possibility is that in addition to more outward support for Ubuntu derivatives, Canonical might have been wise to try some desktop split testing. Wouldn't it have made sense to have done testing with both Unity and GNOME 3's shell? Surely if GNOME 3 is so devoid of what the users want, testing both options would only serve to shore up Canonical's views?

*Final thoughts*

 I want to reiterate something that will likely come up later. First, I enjoy using Linux on my desktop and have been an Ubuntu user for many years. I'm thrilled to see the Ubuntu developers take a stab at something new, even if I find it to be painful to use. Where I have a gripe is in the fact that the developers based their desktop on a very singular view. Others will disagree and that's perfectly fine with me.

Lastly, if split testing is something that just doesn't make sense because it requires too many resources, why not at least give the Ubuntu derivatives some extra emphasis? What better way to absorb any potential PR fallout than by pointing to alternative desktop options built on the Ubuntu core?

At the end of the day, nothing covered here is the end of the world. If Unity is a smashing success, I will be thrilled for all those involved. But if I'm right and this becomes the Windows Vista of the Ubuntu experience, I fear that not utilizing at least some of my suggestions above will yield some nasty blow-back for everyone involved.


------------------------


I agree with the author's conclusion. I like the fact that the Ubuntu devs tried something new, but if Unity breaks when we install Gnome 3, then what is the point?


----------



## sygeek (May 13, 2011)

^Err..Guys please instead of copy-pasting the whole article, just mention the source OR just mention a part of the article with the link to the source for the full details. 
@Krow:  don't quote the whole humongous post


----------



## Krow (May 14, 2011)

Well, this is the articles thread and hence it is full of articles. The point of posting the article here is that you don't need to click again to read it.


----------



## sygeek (May 15, 2011)

*Ubuntu 11.04 Unity Keyboard Shortcuts and Tricks*
By Marius Nestor

*The new Ubuntu 11.04 (Natty Narwhal) operating system introduced a different user interface, designed by Canonical, called Unity. With it, the Ubuntu development team also released some keyboard shortcuts for easy usage.*

With this article we want to inform our readers who use Ubuntu 11.04 with Unity about some very useful and helpful mouse tricks and keyboard shortcuts.

With these tricks and shortcuts, users should be able to familiarize themselves with Unity and find it useful and user friendly.

Without any further introduction, we will now let you test the following mouse tricks on your brand new Ubuntu 11.04 (Natty Narwhal) operating system. Remember, you must be using Unity!

*Unity Mouse Tricks*:

If you want to maximize a window - drag the window to the upper screen edge
If you want to restore a window to its original position - double-click the panel (outside of the menu)
If you want to maximize a window in height - middle-click on the maximize button
If you want to maximize a window in width - right-click on the maximize button
If you want to place a window on the left side - drag the window to the left screen edge
If you want to place a window on the right side - drag the window to the right screen edge

And now, some very useful keyboard shortcuts that will make your life much easier when using the Unity interface.

*Unity Keyboard Shortcuts*:

Windows key - shows the Unity launcher
Windows key + [number] - activates or opens the corresponding applications in the Unity launcher
Windows key + Shift + [number] - opens a new instance of the corresponding application in the Unity launcher if it's already running
Windows key + T - opens the Trash
ALT + F1 - puts focus on the Unity launcher and you can use the arrow keys to navigate
ALT + F2 - opens the run a command dialog
CTRL + ALT + T - opens a terminal window
Windows key + A - opens the Applications widget
Windows key + F - opens the Files & Folders widget

*Window Management Keyboard Shortcuts*:

Windows key + D - minimizes all windows (show desktop); press again to restore them
CTRL + ALT + Numpad key 1 - places a window in the lower left corner
CTRL + ALT + Numpad key 2 - places a window in the lower half of the screen
CTRL + ALT + Numpad key 3 - places a window in the lower right corner
CTRL + ALT + Numpad key 4 - places a window in the left half of the screen
CTRL + ALT + Numpad key 5 - centers or maximizes a window
CTRL + ALT + Numpad key 6 - places a window in the right half of the screen
CTRL + ALT + Numpad key 7 - places a window in the upper left corner
CTRL + ALT + Numpad key 8 - places a window in the upper half of the screen
CTRL + ALT + Numpad key 9 - places a window in the upper right corner
CTRL + ALT + Numpad key 0 - maximizes a window

*Workspace Management Keyboard Shortcuts*:

Windows key + W - shows all desktops and windows (Expo Mode)
CTRL + ALT + Up Arrow - navigate to the workspace above
CTRL + ALT + Right Arrow - navigate to the workspace on the right
CTRL + ALT + Left Arrow - navigate to the workspace on the left
CTRL + ALT + Down Arrow - navigate to the workspace below
CTRL + ALT + SHIFT + Up Arrow - move a window to the workspace above
CTRL + ALT + SHIFT + Right Arrow - move a window to the workspace on the right
CTRL + ALT + SHIFT + Left Arrow - move a window to the workspace on the left
CTRL + ALT + SHIFT + Down Arrow - move a window to the workspace below
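The window- and workspace-management chords above can also be triggered from a script, which is handy for wiring them to mouse gestures or hardware buttons. Below is a minimal sketch using xdotool, assuming it is installed (e.g. `sudo apt-get install xdotool`) and an X session is running; xdotool calls the Windows key `super` and the numpad keys `KP_1`..`KP_9`.

```shell
# Sketch: sending Unity's shortcut chords programmatically with xdotool.
# Assumes xdotool is installed and DISPLAY points at a running X session;
# otherwise the helper just reports what it would have sent.

send_shortcut() {
    # Send a key chord to the currently focused window, or fall back to
    # printing the chord when there is no X display or no xdotool binary.
    if [ -n "$DISPLAY" ] && command -v xdotool >/dev/null 2>&1; then
        xdotool key "$1"
    else
        echo "would send: $1"
    fi
}

send_shortcut "ctrl+alt+KP_4"   # tile the active window to the left half
send_shortcut "ctrl+alt+KP_6"   # tile the active window to the right half
send_shortcut "super+w"         # show all desktops and windows (Expo Mode)
```

Because the fallback only prints, the script is safe to dry-run on a machine without Unity before binding it to anything.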

We do hope that with this article you've learned something useful and that it will make your life much easier with the new Unity interface of the Ubuntu 11.04 (Natty Narwhal) operating system.







*5 issues that could derail Google's Chromebook*
By Paula Rooney
As a longtime observer of Linux, I, too, am excited about Chromebooks’ prospects on the business desktop.

I agree wholeheartedly with my colleague’s positive assessment of its chances — Google’s attractive pricing and packaging, security measures and brand name will no doubt boost Linux’s stature in the desktop/laptop world, finally.  Another core value — the Chromebook’s ability to serve as a hub and on ramp to Software-as-a-Service (SaaS) applications — makes the Linux PC a far more compelling alternative to Windows than past Linux desktop operating systems.  

But I’ll toss in five issues that will also no doubt present challenges for Google’s latest open source operating system (and Android as well):

*1. Timing*
If it is as successful as the Chrome browser, the ChromeOS will enjoy a nice pickup in market share in no time. But its debut comes at an awkward moment, when business users are beginning to make the leap from the netbook to the iPad or tablet. In the last month or so, I’ve heard numerous reports that corporate purchasing agents are putting in orders for Apple iPad 2s. Even the Motorola Xoom — which runs Google’s other open source Linux OS, Android 3.0 — is getting a lot of attention because of its ultra-portable form factor. Is the Chromebook a little late?

*2. Mac Attack*
It is breaking news that a Mac is no longer anathema in the corporate world. My brother is an IT guy and is taking iPad configuration orders for his business users on a daily basis now. It’s the first time in his career that POs are getting okayed for anything other than a Windows desktop or laptop. My SO — who has a mega collection of new and older Windows PCs and laptops in the office — was also told by the brass at that billion-dollar company to get an iPad 2 immediately. These are the kind of real-world indicators that matter. Can the Chromebook, or the Android tablet for that matter, curtail Apple’s rise in the business computing world?

*3. Marketing Issues*
The beauty of open source is freedom and choice. Even Google is giving its audience of users a choice between two open source operating systems — Android and ChromeOS. But will this present a conflict for users — a fear of betting on the wrong horse? It’s hard to say at this point. I, for one, have a DroidX and am looking at a tablet as my next choice. Should I go with a Motorola Xoom or an Acer Chromebook running ChromeOS? This will be tricky for Google’s marketing arm. Another point: if Google aims to go after the business market, it must sign up a Dell or IBM to launch Chromebooks.

*4. Fragmentation Issues*
I guess I can accept the fact that Google will open source Android 3.0 when it is ready. But I wonder how Google intends to run that open source project — and this ChromeOS open source project — going forward. There are practical considerations that must be taken into account, especially the needs of device manufacturers. But Linux is Linux, and the rules of the GPL must be respected in order to maintain continued innovation and growth. Keeping developers happy is essential for the growth of Google-targeted applications and innovation on the OS front itself. Linux backers applaud Google’s success, but their patience won’t last forever.

*5. Quality issues*
I love using my Droid and DroidX, particularly since both devices run the Android open source operating system. But I do run into quality snags here and there that my fellow iPhone users do not seem to experience. Sometimes the touch pad does not work properly. Sometimes the device starts dialing numbers wildly. Sometimes it takes a long time for the OS to load up. I have talked to analysts to determine whether these are my bads, but am told that these issues are well known to Google. These are major headaches and ones that Google must resolve quickly. Will these quality issues take a back seat as Google tries to build a ChromeOS following? Google’s focus on quality — for both Android and the ChromeOS — will be paramount here, especially as Apple makes headway in the business market. Both operating systems must run spectacularly, and bugs and security holes must be fixed quickly. Google already has problems with one of those operating systems. Why should I expect these quality issues to disappear with two to support in house?


----------



## Krow (May 15, 2011)

^You should take a little care while pasting. 



			
> SyGeek said:
> Kick off your day with ZDNet's daily e-mail newsletter. It's the freshest tech news and opinion, served hot. Get it.


Serves no purpose. 

Nice share though. Thanks for posting.


----------



## sygeek (May 15, 2011)

Oops, sorry..removed.


----------



## Rahim (May 16, 2011)

^Nice beginning SyGeek  Remember, it's an 'interesting articles' thread and not educational

@Krow: Please don't use 'quote' as it ruins the flow of the page.


----------



## Krow (May 19, 2011)

^Roger that. Edited previous posts.


----------



## Rahim (May 20, 2011)

*The Linux vs. Windows Security Mystery*
Katherine Noyes


*"NSA recommending Vista for home security is merely a reflection of the reality of monopoly in the retail space," said blogger Robert Pogson. "In the USA probably as few as 2 to 3 percent of users use GNU/Linux, so a recommendation is almost useless." Those who are serious about security "are already aware of SELinux, a product of the NSA. The NSA is merely recommending that folks move on from XP, a poor OS poorly supported by M$."*

Of all the many winning advantages Linux has in its favor, security is surely one of the more widely known examples.

Why else, indeed, would we see security experts in mainstream publications recommending it over Windows for online banking purposes?

That, indeed, is part of the reason it was so disappointing to see Linux get completely ignored in a recent NSA report entitled "Best Practices for Keeping Your Home Network Secure."

The report is filled with various suggestions oriented toward Windows and Mac users -- just as one would expect, given that they're by far the majority today. What stands out, though, is that for Windows users, the NSA simply recommends upgrading to Windows 7 or Vista, making no mention at all of the far-more-secure Linux option that's available.

More than a few ripples were created in the waters of the Linux blogosphere. 


*'NSA Says No to Linux'*

Some interpretations seemed truly bizarre.

"NSA Best Practices Recommend Windows Over Linux For Security" read one headline on ITProPortal, for example.

Similarly, "NSA says no to Linux in best practice advisory" read another on TechEye.

This, despite the fact that Linux wasn't mentioned at all in the NSA report.

*'What a Twist of Words'* 

Bloggers, as is their wont, made note of that fact quickly.

"Wow what a twist of words," wrote Ken in the comments on the ITProPortal story, for example. "The NSA article does not even mention Linux. What the NSA article says is this: 'Both Windows 7 and Vista provide substantial security enhancements over earlier Windows workstation operating systems such as XP.'

"So the NSA is really saying that the newer Windows is better than the old Windows. Duh!!!!" Ken added.

It wasn't long before PCWorld weighed in with an indignant, "Windows Vista for Better Security? I Don't Think So," and the conversation took off from there.

Down at the blogosphere's Punchy Penguin saloon, Linux Girl was bombarded with comments.

*'Merely a Reflection of Reality'* 

"NSA recommending Vista for home security is merely a reflection of the reality of monopoly in the retail space," blogger Robert Pogson offered. "In the USA probably as few as 2 to 3 percent of users use GNU/Linux, so a recommendation is almost useless."

Those who are serious about security "are already aware of SELinux, a product of the NSA," Pogson added. "The NSA is merely recommending that folks move on from XP, a poor OS poorly supported by M$. Folks who would heed that advice probably do not even know GNU/Linux exists."

It is "possible that some of M$'s donations may also have suppressed mention of GNU/Linux," Pogson concluded. "But who knows?" 

*'The Security Swiss Cheese of XP'* 

Consultant and Slashdot blogger Gerhard Mack took a similar view.

"You can't knock them too badly," Mack agreed. "The best numbers I have seen show Linux at half the numbers of Apple (Nasdaq: AAPL) -- a small number to begin with."

The NSA "has sponsored Linux security projects in the past, so they are definitely not anti-Linux," he pointed out.

Vista, meanwhile, "brought along some features to allow more apps to run as non-administrator and some features (UAC) to annoy people who buy products from people who can't be bothered with good security patches," Mack added. "Win 7 is just a more stable/less annoying Vista, and I'll take either of them over the security swiss cheese of XP."

So, "I'm with the NSA on this one because the sooner XP is just a memory, the better off we all are," Mack concluded. 

*'You Need to Know What You're Doing'* 

"The problem with Linux is you really need to know what you're doing for it to be secure," asserted Slashdot blogger hairyfeet.

The NSA's recommendations, then, are "no surprise, as they know that 99.995 percent of the population is not CS grads or kernel hackers or programmers," hairyfeet opined. "These people will NEVER use CLI -- hell, Windows' control panel scares them. You honestly think they are gonna learn Bash?"

Hyperlogos blogger Martin Espinoza wasn't so sure. 

*'Irresponsible at Best'* 

"When I see the federal government recommend the products of one of its actual constituents, I am annoyed but not surprised," Espinoza told Linux Girl in a link-filled email. "Remember when Bush's boy Ashcroft gave Microsoft (Nasdaq: MSFT) a free pass after the DOJ found that they had illegally abused their monopoly position? (And have you noticed where Ashcroft is now?)

"It comes as no shock to see the NSA failing to promote Linux when the federal government is clearly a friend to Microsoft, and vice versa," he said.

"And let us not forget the well-foreshadowed speculation that Vista may contain an NSA back door," Espinoza pointed out. "Since there is no way for an independent reviewer to know that the code they are reviewing is what is actually being distributed with Windows or via Windows (or Microsoft) Update, clearly it is irresponsible at best to utilize Windows in any case where security is important." 

*'NSA - New Spending Authority'* 

Barbara Hudson, a blogger on Slashdot who goes by "Tom" on the site, wondered about the target audience for the NSA's report.

"Home users will never even see this, never mind read it," Hudson explained. "Business users? If they haven't switched by now, a pdf bearing the NSA's imprimatur isn't going to count for a hill of beans next to the considerations of software that can't be migrated from XP, or the costs and time of migrating desktop users to a new version.

"Besides, most of those installations will be taken care of over the next few years by simple attrition or migrating the users to tablets," she added.

"So who *was* the real target audience? I would have to say it's the boss of whoever at the NSA ordered this written, to 'show they're doing something' so they can justify their paycheck," Hudson suggested. "After all, haven't your tax dollars always been used for NSA -- New Spending Authority?

"Now please excuse me," she concluded, "while I go tell the neighbors that those black helicopters are just a coincidence."​


----------



## sygeek (May 26, 2011)

*Linux: The Source of All Desktop Innovation*

I’m going to make a statement that, at this point, should be obvious to every nerd on the planet… but I still feel needs to be stated:

*Almost every ounce of real innovation happening in the world of Desktop Operating Systems… is happening on Linux.* Microsoft and Apple have completely dropped the ball and ceased to innovate in any real way in this area.

Over the last few months I’ve had the pleasure of reviewing (with the Linux Action Show) some truly interesting new software releases, including: Ubuntu’s new desktop environment “Unity”, KDE 4.6, and Gnome 3.

These are exciting, interesting desktop environments that seek to improve upon (and, in some cases, completely modify) the basic “Computer Desktop” paradigm that we’ve been using and building upon for so many years, with Gnome 3 and Unity making sweeping changes with the goal of providing a computing experience that is accessible and easy to use for all people.

It should be noted that I can’t stand using either Gnome 3 or Unity.  And, even that might be putting it a tad too lightly.

But, just the same, I have immense respect and admiration for what the teams behind these desktop environments have achieved.  They have thrown out the conventional wisdom of what a “desktop” is and built the user experiences that they felt needed to be built.  And that is awesome (and valuable).

What’s happening on the other side of the fence?  What have Microsoft and Apple been up to?

What new, truly innovative ideas has either brought to the desktop in the last several years?

Case in point:

Apple has announced the core new features of their next big update to MacOS X, code-named “Lion”. To summarize the new features, Apple used the phrase “Back To The Mac”. The implication being that they are taking the user interfaces from their iPhones and iPads… and shoehorning them into MacOS X.

Some of the big new and exciting changes?  Some applications can run full screen and your desktop can now be a row of icons that launch applications (just like iOS devices).

I’m not kidding.  Those are two of the top demo-able new features.  Features taken from a mobile OS (iOS) and forced into a desktop OS.  And, worth noting, these features have existed for a very, very long time.

And these are some of the most radical and exciting new features MacOS X has had to offer since its initial release over 10 years ago.  This, also, is not a joke.  (To be fair, Windows has seen a similar glut in user interface innovation during that same time.)

Compare that with Gnome 3, which includes actually useful notifications and messaging, and dynamically expanding virtual workspaces.  Not to mention an extraordinarily scriptable (via JavaScript) and modifiable user experience that is simply not possible on Windows or MacOS X.  And that’s just the tip of the iceberg.

The end result is a desktop environment that is significantly easier to pick up and learn, for people often intimidated by computers, than Windows or MacOS X.  Add to that the fact that Gnome 3 is also significantly more customizable for power users than either… AND significantly more innovative…

It makes you wonder.  What, exactly, are the folks in Redmond and Cupertino doing?  Two of the largest companies on the planet with, literally, billions of dollars at their disposal… and neither has come up with any big new ideas to significantly improve the usability of their desktop OS’s in the last decade?

Perhaps both companies are simply too big and too slow to really innovate in this area.  Perhaps they simply don’t want to.  Or, perhaps it has something to do with the development model being used.

Whatever the reason, I’m immensely happy to see that several projects are pushing things forward, trying new ideas out to make our desktops even cooler and more powerful (and usable).

And, I almost hate to say this… but I’m glad some projects are taking the Desktop in directions that I wouldn’t go myself.  The end result is more diversity, more new ideas and, in the end, far cooler desktop environments.


----------



## Cool G5 (Jun 1, 2011)

I feel the big companies' R&D takes so long that by the time they even implement something, it is already out in the Linux arena. Linux, on the other hand, has been proactive for some time now, and I've never seen it so active. If they continue in the same vein, it should not take long to increase Linux's worldwide share.


----------



## sygeek (Jun 1, 2011)

*Learning a new language too often means working around the tutorials*
By Andrew Binstock
Every year as I wade into reviewing books for the Jolt Awards, I am forced to slog through several language tutorials. Over the years, I've found them to be reasonably well written, although rarely more than this. That is, they're accurate, and a determined reader can certainly learn enough from them to code in the target language. But I am hard-pressed to be more complimentary for even the best tutorials.

These books are marked by common failings that greatly frustrate their simple mission. The first and by far the most common handicap is a confusion by the authors in which they conflate a tutorial with a detailed treatise on every aspect of the language. For reasons not quite clear, the latter approach seems generally favored by the publishing industry, resulting in almost ridiculous volumes.

I have in front of me a 1480-page "introduction" to Java. The notion that anyone would read that many pages as a primer defies credulity. Predictably, the book descends into the AWT and Swing, pops up to diverge through tributary libraries, disappears into the VM, and finally ends up in parallel programming, after which it thankfully comes to an end. Given that this particular volume is the 4th Edition of this title, I have to believe that it represents the ultimate in refinement of the author's intended model. It's really a hybrid tutorial-reference work. And my guess is that since it weighs 4½ pounds, it serves primarily in the latter capacity.

Mark Lutz's Learning Python is that language's equivalent of this book. Other tutorials, such as the "Pickaxe book," which brought Ruby to the attention of the Western world, unabashedly follow this model. The front half is tutorial, the back half is marked as reference. This approach works far better. And the quality of the Ruby tutorial, despite its length (418 pages), justifies the popularity the book has enjoyed.

Another common failing is that the authors forget what readers most want to do when learning a new language; namely, to write small working programs to familiarize themselves with the syntax. Frequently, tutorials present endless tiny snippets of code that illustrate a feature — without showing a single useful program. This tendency is greatly exacerbated if the language has a built-in shell/interpreter, as is the case with Ruby, Groovy, and Scala.

For example, the Scala tutorial by Odersky, Spoon, and Venners (Programming in Scala) is a fair representation of the problem. In the first 200 pages, I was able to find only one example longer than 20 lines of code; the majority being less than 10. (In K&R, which I'll come back to in a moment, the first 20-line program appears on page 26.) The result, as I experienced it, was that after a lot of reading and typing, I knew a little bit about a lot of features, but still had not written a single program that actually did something useful. To be fair, this problem is strikingly common. I have tutorials on Clojure and Groovy before me that proceed exactly the same way.

Finally, a pet peeve that I find frequently in poorly edited tutorials: the desire to show off the language by presenting clever tricks or small hacks it can accomplish. Personally, I want to learn, rather than watch linguistic parlor tricks. This weakness shows up most frequently in tutorials for functional languages and for those, such as Groovy, where the syntax is purposely designed to be better than a competing language (in this case, Java).

The truly frustrating aspect of all three limitations is that there has long existed a brilliant tutorial to serve as a guide: Kernighan and Ritchie's C Programming Language book (K&R). Judging it beside other tutorials, the differences are immediately visible. Let's start with the obvious things: the tutorial portion of K&R is a mere 177 pages. This is followed by 40 pages of appendices, which serve as a highly abbreviated reference section. At this modest length, any reader can work through the book and do all the examples. Secondly, most of the programs are longer than 20 lines, do something mildly useful and familiar, and are complete. Even 125 pages into K&R, an illustrative program has a main() function. We're not talking snippets here, we're looking at real, albeit short, programs. Finally, the explanations are lucid and they build incrementally on each other. They do not present a scattershot of features. There is a pervasive sense of sequence; and at all times the reader knows where he is in relationship to the ground covered and the ground yet to be traversed.

What is missing from K&R? Most clearly, it lacks a section that explains every function in the standard library. Java books should drop this nonsense, too. (The definitive books on all the arcana of Java, including the library and solving knotty problems, are the lapidary Core Java, Volumes I and II, by Horstmann and Cornell. But at a combined 1800 pages, they do not comprise a tutorial — nor, I might add, do they claim to.)

The second thing K&R omits is spoon feeding. You have to think as you work through it. All the information is there, but you're forced to engage the language through the examples to get what you need. The authors expect you to be an attentive reader. As a result, you can move quickly through the language because the book supports you, rather than forcing you to read pages that add little to your comprehension.

One can argue that C is a small language and therefore concision comes naturally to an introductory text. I would invite you to examine the other tutorials on C and see whether you can find another that's as short and well written as K&R. Or try JavaScript, which is also a small language, and likewise lacks for short, lucid tutorials.

I want to make clear that the books I named were chosen with care because they are, in my view, the best tutorials for their respective languages. Odersky et al. on Scala, Thomas on Ruby, Lutz on Python — these are the best starting points for their respective languages. I only wish they'd followed K&R's model more closely.


----------



## sygeek (Jun 17, 2011)

*Things You Can't Do With a GUI: Finding Stuff on Linux*


What's better, a graphical interface or the Linux command line? Both of them. They blend seamlessly on Linux so you don't have to choose. A good graphical user interface (GUI) has a logical, orderly flow, helps guide you to making the right command choices, and is reasonably fast and efficient. Since this describes a minority of all GUIs, I still live on the command line a lot. The CLI has three advantages: it's faster for many operations, it's scriptable, and it is many times more flexible. Linux's Unix heritage means you can string together commands in endless ways so they do exactly what you want.

Here is a collection of some of my favorite finding-things command line incantations.

*File Operations*

In graphical file managers like Dolphin and Nautilus you can right-click on a folder and click Properties to see how big it is. But even on my quad-core super-duper system it takes time, and for me it's faster to type the du or df commands than to open a file manager, navigate to a directory, and then pointy-clicky. How big is my home directory?

```
$ du -hs ~
748G    /home/carla
```
How much space is left on my hard drive or drives? This particular incantation is one of my favorites because it uses egrep to exclude temporary directories, and shows the filesystem types:

```
$ df -hT | egrep -i "file|^/"
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/sda2     ext4     51G  3.6G   32G  11% /
/dev/sda3     ext4    136G  2.3G  127G   2% /home
/dev/sda1     ext3    244G  114G   70G  63% /home/carla/photoshare
/dev/sdb2     ext3     54G  5.8G   45G  12% /home/carla/music
```
What files were changed on this day, in the current directory?

```
$ ls -lrt | awk '{print $6" "$7" "$9 }' | grep 'May 22' 

May 22 file_a.txt
May 22 file_b.txt
```
Using a simple grep search displays complete file information:

```
$ ls -lrt | grep 'May 22' 
-rw-r--r-- 1 carla carla 383244 May 22 20:21 file_a.txt
-rw-r--r-- 1 carla carla 395709 May 22 20:23 file_b.txt
```
Or all files from a past year:

ls -lR | grep 2006

Run complex commands one section at a time to see how they work; for example, start with ls -lrt, then ls -lrt | awk '{print $6" "$7" "$9 }', and so on. To avoid hassles with upper- and lower-case filenames, use grep -i for a case-insensitive search.

Want to sort files by creation date? You can't in Linux, but you can in FreeBSD. Want to specify a different directory? Use ls -lrt directoryname.
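
As a hedged aside, not from the article: newer GNU coreutils can at least display a file's birth time where the filesystem and kernel record one, via stat's %w format sequence; on filesystems that don't record it, stat prints "-" instead of a timestamp. The filename here is just a placeholder.

```shell
# Print birth (creation) time, then the filename; "-" means the
# filesystem does not report a birth time for this file.
stat -c '%w  %n' somefile.txt
```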

Which files were changed in the last three minutes? This is a quick, slick way to see what changed after making changes to your system:

find / -mmin -3

You can specify a time range, like what changed in the current directory between three and six minutes ago?

find . -mmin +3 -mmin -6

The dot means current directory.
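
The same range search can be long-listed so you can see owner, size, and timestamp at a glance; a small extension using find's standard -ls action:

```shell
# Files modified between three and six minutes ago, with full details.
find . -mmin +3 -mmin -6 -type f -ls
```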

Need to track down disk space hogs? This is probably one of the top ten tasks even in this era of terabyte hard drives. This lists the top five largest directories or files in the named directory, including the top level directory:

```
$ du -a directoryname | sort -nr | head -n 5
119216208	.
55389884	./photos
40650788	./Photos
37020884	./photos/2007
20188284	./carla
```
Omit the -a option to list only directories.
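
If the raw byte counts are hard on the eyes, GNU sort can order du's human-readable output directly; this is a variant of the same incantation, assuming GNU coreutils for sort -h:

```shell
# Top five space consumers one directory level deep, human-readable sizes.
du -h --max-depth=1 ~ | sort -hr | head -n 5
```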

*Biggest Files*

It is well worth getting acquainted with the find command because it can do everything except make good beer. This nifty incantation finds the five biggest files on your system, and sorts them from largest to smallest, in bytes:

```
# find / -type f -printf '%s %p\n' |sort -nr| head -5

1351655936 /home/carla/sda1/carla/.VirtualBox/Machines/ubuntu-hoary/Snapshots/{671041dd-700c-4506-68a8-7edfcd0e3c58}.vdi
1332959240 /home/carla/sda1/carla/51mix.wav
1061154816 /proc/kcore
962682880 /home/carla/sda1/Photos/2007-sept-montana/video_ts/vts_01_4.vob
962682880 /home/carla/sda1/photos/2007/2007-sept-montana/video_ts/vts_01_4.vob
```
You really don't need to include the /proc pseudo-filesystem, since it occupies no disk space. Use the wholename and prune options to exclude it:

find / -wholename '/proc' -prune -o -type f -printf '%s %p\n' |sort -nr| head -5

There is a potential gotcha: find will recurse into all mounted filesystems, including remote filesystems. If you don't want it to do this, add the -xdev option:

find / -xdev -wholename '/proc' -prune -o -type f -printf '%s %p\n' |sort -nr| head -5

Another potential gotcha with -xdev is that find will only search the filesystem the command is run from, and no other filesystem mounts, not even local ones. So if your filesystem is spread over multiple partitions or hard drives on one computer, and you want to search all of them, don't use -xdev. I'm sure there is a clever way to distinguish between local and remote filesystems, and when I figure it out I'll share it.
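
One possible approach, offered as a sketch rather than a definitive answer: have df list only local mount points (its -l option), then hand each one to find with -xdev so the search covers every local filesystem but never wanders onto network mounts. This assumes a GNU df new enough to support --output:

```shell
# Enumerate local mount points, then search each without crossing
# into other (possibly remote) filesystems.
df -l --output=target | tail -n +2 | while read -r mnt; do
    find "$mnt" -xdev -wholename /proc -prune -o -type f -printf '%s %p\n' 2>/dev/null
done | sort -rn | head -5
```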

Now let's string together a splendid find incantation to convert those large indigestible blobs of bytes into a nice readable format:

```
# find / -type f -print0| xargs -0 ls -s | sort -rn | awk '{size=$1/1024; printf("%dMb %s\n", size,$2);}' | head -5


1290Mb /home/carla/sda1/carla/.VirtualBox/Machines/ubuntu-hoary/Snapshots/{671041dd-700c-4506-68a8-7edfcd0e3c58}.vdi
1272Mb /home/carla/sda1/carla/51mix.wav
918Mb /home/carla/sda1/Photos/2007-sept-montana/video_ts/vts_01_4.vob
918Mb /home/carla/sda1/photos/2007/2007-sept-montana/video_ts/vts_01_4.vob
918Mb /home/carla/sda1/Photos/2007-sept-montana/video_ts/vts_01_1.vob
```
Yes, I know, you can do many of these things in graphical search applications. To me they are slow and clunky, and it's a lot faster to replay searches from my Bash history, or copy them from my cheat sheet. I even have some aliased in Bash; for example, I use that last long find incantation a lot. So I have this entry aliased to find5 in my .bashrc:

alias find5='find / -wholename /proc -prune -o -wholename /sys -prune -o -type f -print0 | xargs -0 ls -s | sort -rn | awk '\''{size=$1/1024; printf("%dMb %s\n", size,$2);}'\'' | head -5'

In this example I have excluded both the /proc and the /sys directories.
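
When the nested quoting in an alias gets fragile, a shell function in .bashrc does the same job with the command written exactly as you would type it. Here is a sketch of the same top-five search as a function (the optional directory argument is my own addition, and sizes are computed from find's byte counts rather than ls -s blocks):

```shell
# Same idea as the find5 alias: five biggest files, /proc and /sys
# excluded, sizes shown in megabytes. Pass a directory to limit the
# search; it defaults to /.
find5() {
    find "${1:-/}" -wholename /proc -prune -o -wholename /sys -prune \
        -o -type f -printf '%s %p\n' 2>/dev/null \
        | sort -rn | head -5 \
        | awk '{printf "%dMb %s\n", $1/1048576, $2}'
}
```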

*Fast Locate Command*

The locate command is very fast because it queries a prebuilt database of all of your filenames instead of scanning the disk. You need to update the database periodically, and many distros do this automatically. To update it manually, simply run the updatedb command as root. locate and grep are powerful together. For example, find all .jpg files with 1024 in their names or paths (such as wallpapers 1024 pixels wide):

locate "*.jpg" | grep 1024

Search for image files in three different formats for an application:

locate claws-mail|grep -iE "(jpg|gif|ico)"

Well here we are at the end already! Thanks for reading, and please consult the fine man pages for these commands to learn what the different options mean.


----------



## Rahim (Jun 17, 2011)

Linux command line is rubbish


----------



## socrates (Oct 2, 2011)

*How Ubuntu is built: the inside story*

Behind the scenes with Ubuntu's community manager.   How Ubuntu is built: the inside story | News | TechRadar UK


----------



## lywyre (Oct 3, 2011)

*Re: How Ubuntu is built: the inside story*

Bookmarked it. Reading it later and thanks for the link.


----------



## abhijangda (Oct 3, 2011)

*Re: How Ubuntu is built: the inside story*

thanks for sharing!!


----------



## Skyh3ck (Oct 3, 2011)

*Re: How Ubuntu is built: the inside story*

A few years back I got 10 CDs from Canonical for free. Started using Ubuntu on one of my spare 20 GB HDDs, as installing it to the same HDD as Windows was hectic for me because I was not familiar with partitioning.

I emailed Mark Shuttleworth suggesting why can't Ubuntu be installed in a single partition, like Windows does. I got a reply from his secretary saying that my email had reached Mark, that he discussed it with the development team, and that they will try to make it happen.

Really, this is true. I still have that email reply saved.


----------



## Krow (Oct 4, 2011)

Merged that thread with this one since, more than news, it is an interesting article on OSS.

@socrates: Please create threads in appropriate sub-forums. Not all threads are for news sections.


----------



## Rahim (Oct 4, 2011)

I seriously think posters should stop posting news or off-topic things in this thread. This is all about interesting articles on FOSS.

Mods please clean this wonderful thread.


----------



## Krow (Oct 5, 2011)

^That post by socrates is not news. It is an interesting article on how Ubuntu is built. Did you read it?


----------



## Krow (Nov 8, 2011)

*Disunity*
By Robert Storey
_Please accept my resignation. I don't care to belong to any club that will have people like me as a member._ (Groucho Marx)
* * * * *
My name is Robert, and I'm a distroholic. For the longest time, I was in denial. But as any AA member will tell you, the first step to dealing with an addiction is to know you have one. And to admit it publicly. Which is why I am here today.

What a long, strange trip it has been, ever since that fateful day in 1998 when I installed SUSE Linux. I remained loyal to her for all of six months. But soon I was having an affair with Red Hat. And then I found Slackware. I had a long and satisfying fling with Libranet, now sadly deceased. Then, in despair, I embraced FreeBSD and OpenBSD, only to abandon both of them for Debian. I'm sure there were others, but I've forgotten their names.

And suddenly, in 2005, stability came into my life, when I first put a free "ShipIt" disk from Canonical into my CD drive and rebooted. Yes, the first one was free, but soon I was hooked. My friends tried to warn me: "Don't be seduced just because she's easy," they said. But I wouldn't listen. "It's got a graphical installer!" I exclaimed. "And my mouse just works, without having to manually edit /etc/X11/XF86Config!" I was in love.

My tryst with Ubuntu lasted over five years. But the relationship - rocky at times - began to sour. And when Oneiric arrived last month, I knew it was over.

I cried - at least they said I did - when I recently booted a Debian disk, typed mkfs.ext4 /dev/sda1, hit enter, and watched my Ubuntu installation disappear in a digital puff of smoke. Gone, but not forgotten.

*Unity: Ubuntu's Waterloo?*

OK, before someone tells me to put a sock in it, I'll cut the crap and get to the point. I'd been having issues with Ubuntu for a long time, but still remained loyal. I figured that these travails would eventually be worked out, and besides, the competition wasn't any better. However, the seriously misnamed "Unity" was the proverbial straw that broke the camel's back.

Now yes, I know, Unity is not required to run Ubuntu. Indeed, I was one of the loudest voices proclaiming to Unity-haters that they could simply go with GNOME-Shell, or Kubuntu, Xubuntu and Lubuntu. Not to mention that aside from the official *buntu interfaces, buried within the bowels of the repositories are numerous other worthy desktop environments: IceWM, Enlightenment-17, FVWM-Crystal, Fluxbox, or for the truly hardcore, Ratpoison (thus named because it kills your mouse). So really, if you don't like the look of Unity, the solution is just an "apt-get install my-favorite-desktop" away. Right?

Unfortunately, it's not that simple. The biggest problem with Unity is not that it has become the new Ubuntu default desktop. The problem is that Unity development seems to be sucking up Canonical's resources, to the detriment of everything else. Bugs have been creeping into Ubuntu, and they are not getting fixed because the developers apparently have no time. Over the years I have reported a number of bugs myself on the Launchpad web site, and in the beginning I recall that developers were very prompt to examine the reports and come up with solutions. I was quite surprised, and pleased, the first time I reported a bug and received a polite email from an Ubuntu developer the next day asking for more details. I replied, and within a week the bug was fixed.

But that was then and this is now. More recently, bug reports have sat in the overworked developers' inboxes for months while numerous users add additional comments saying how they too have run into the same issue. Occasionally, these bugs are real show-stoppers - everything from an inability to get online to frequent system lock-ups. Many serious bugs are now officially "triaged," which means that the developers will get to it if and when they can, but don't hold your breath waiting.

Perhaps it would be worth putting up with the bugs if Unity was the greatest thing since sliced bread - something wonderful that is going to revolutionize desktop computing. But it's not. I tried Unity, and it's kind of cute, but nothing to write home about. If it vanished from the face of the Earth, I wouldn't particularly miss it. Perhaps it will be of some value in the future if someone manufactures an Ubuntu Pad (uPad?), but for now I just want something that works well on my conventional computer. And Ubuntu is no longer stable or fast enough to fit my needs.

*Where do we go from here?*

I started out this rambling essay saying how I am (was) a "distroholic." That is to say, my early days of Linux Geekdom were spent jumping from distro to distro. Ubuntu gave me stability for five years, but now I am back to distro-hopping again. I am actually enjoying this experiment, and seeing how many quality distros now exist, I feel a bit like a kid in a candy store. I have three computers in my possession, each one running something different. Two of the three are Debian-based (AntiX and Linux Mint "Debian"), while the other (Salix) comes from the Slackware universe. I haven't yet bothered to set up any of my machines to double (or triple) boot, but that's a possibility too. So maybe by next week I'll be playing with six distros, or nine.

I view this as a competition. And may the best distro win.


----------



## Rahim (Feb 11, 2012)

*Time to dispel open source myths, says Liam Maxwell*
Link


Government director of ICT futures says open source has grown up and Whitehall is ready to buy.

Open source and open standards are the direction for UK government IT, the civil servant leading the government's technology change agenda has said, reports The Register.


Liam Maxwell, Cabinet Office director of ICT futures, said that open source has grown up and it's time to dispel lingering misconceptions about this technology and development process.


Maxwell told the Intellect 2012 conference in London: "Open source software is not three guys in a shed anymore. There are a lot of misconceptions about open source but open source is the future model for delivering IT."


He was speaking the day before the Cabinet Office opens a three-month period of consultation on open standards to be used in the government's G-Cloud initiative.


G-Cloud is intended to establish a series of frameworks on software, hardware and services and on purchasing to help deliver IT more effectively and reduce costs across government.


The three-month consultation process is intended to take input on open standards that would underpin G-Cloud. Maxwell promised the consultation is an important part of G-Cloud, saying it has the "same authority" as the consultation on a new airport. "We are serious about this," Maxwell said.


The consultation comes as the government prepares to announce which IT vendors will be G-Cloud certified. More than 600 companies are reported to have expressed an interest in the framework.


Underscoring the government's interest in open source, Maxwell said that last week he'd accompanied Cabinet Office minister Francis Maude, who oversees the government's digital transition, on a tour of Silicon Valley tech companies working with open source and big data: Cloudera, specializing in the Google-inspired Hadoop data-munching framework, and MongoDB specialist 10gen. Maxwell also introduced his ministerial boss to cloud software infrastructure specialist Joyent and eBay's payment arm PayPal.


"We have a minister who really gets this," Maxwell said. "That's where the future is moving. It's moving to a new model of service and delivery, it's big data and big data is going to be open source. We are going to spend a lot of time looking into that. If we move to being one common government we need open source," he said.


The idea is to move away from what Maxwell called "black-box" contracts involving big IT vendors to more agile systems delivered by small and medium sized enterprises. The thinking seems to be SME equals open source and open standards, while big means the same old proprietary vendors.


"For years we spent on IT systems built for bureaucrats, they were not built for people," he said.


----------



## Zak (Jul 5, 2012)

Rahim said:


> Linux Sucks!!!
> Posted by tuxxie Friday, July 17, 2009
> 
> Linux is gaining momentum and people are starting to switch over to this computer operating system. I have been using GNU/Linux for years and would like to warn you about it. My conscience wouldn't allow me not to speak out about the OS. Linux is a free operating system that anyone can download and use.
> ...



Cool.......


----------



## Rahim (Jul 7, 2012)

^You're welcome


----------



## aidenrock (Jan 8, 2013)

I read your topic; this is helpful and creative.


----------



## nisargshah95 (Feb 24, 2015)

*Re: How Ubuntu is built: the inside story*

One should read this book for one's amusement: The UNIX-HATERS Handbook. It's pretty old by today's standards, but the authors point out (in an amusing manner) the problems faced by Unix users long ago.


----------



## Theodre (Feb 14, 2016)

*Re: How Ubuntu is built: the inside story*



nisargshah95 said:


> One should read this book for one's amusement: The UNIX-HATERS Handbook. It's pretty old by today's standards, but the authors point out (in an amusing manner) the problems faced by Unix users long ago.



Downloaded! Will read when I have some spare time.
By the way, you guys shouldn't miss Linux Sucks 2016 by Bryan Lunduke:
"Linux Sucks" - 2016 - YouTube

Not an article but hope this counts? :grin_NF:


----------

