# The Geeks Daily



## sygeek (May 30, 2011)

This thread is for sharing interesting articles related to Technology and Geekism on a broad basis -- articles that don't fit in the Technology News section, the Random News section, or the OSS article thread.

Now, *The Rules*:

Please *don't copy-paste the entire article if the site's Terms and Conditions don't allow it*. A link with a summary of the article in quotes would be better.

If the site doesn't have any "Terms and Conditions", or it allows the article to be republished in full (with a link to the site), then you are free to paste the entire article.

Keep in mind while pasting a full article that it should be under *SPOILER tags* -- [SPOILER][/SPOILER].

Custom-written articles can be posted here too. Add a *[Custom]* tag to the titles of such posts.

Please send *trackbacks* to the site whose article you are using in the post.

Discussion related to a corresponding article is allowed as long as it sticks to the topic.

Off-topic posts and posts not following the corresponding site's T&C will be immediately reported to the Mods.


----------



## sygeek (May 30, 2011)

*CPU vs. The Human Brain*



Spoiler



_The brain's waves drive computation, sort of, in a 5 million core, 9 Hz computer._

Computer manufacturers have worked in recent years to wean us off the speed metric for their chips and systems. No longer do they scream out GHz values, but use chip brands like Atom, Core Duo, and Quad Core, or just give up altogether and sell on other features. They don't really have much to crow about, since chip speed increases have slowed with the increasing difficulty of cramming more elements and heat into ever smaller areas. The current state of the art is about 3 GHz (far below predictions from 2001) on four cores in one computer, meaning that computations are spread over four different processors, each running at about 0.3 nanoseconds per computation cycle.

The division of CPUs into different cores hasn't been a matter of choice, and it hasn't been well-supported by software, most of which continues to be conceived and written in linear fashion, with the top-level computer system doling out whole programs to the different processors, now that we typically have several things going on at once on our computers. Each program sends its instructions in linear order through one processor/core, in soda-straw fashion. Ever-higher clock speeds, allowing more rapid progress through the straw, still remain critical for getting more work done.

Our brains take a rather different approach to cores, clock speeds, and parallel processing, however. They operate at variable clock speeds between 5 and 500 Hertz. No Giga here, or Mega or even Kilo. Brain waves, whose relationship to computation remains somewhat mysterious, are very slow, ranging from the delta (sleep) waves of 0-4 Hz through theta, alpha, beta, and gamma waves at 30-100+ Hz which are energetically most costly and may correlate with attention / consciousness.

On the other hand, the brain has about 1e15 synapses, making it analogous to five million contemporary 200 million transistor chip "cores". Needless to say, the brain takes a massively parallel approach to computation. Signals run through millions of parallel nerve fibers from, say, the eye, (1.2 million in each optic nerve), through massive brain regions where each signal traverses only perhaps ten to twenty nerves in any serial path, while branching out in millions of directions as the data is sliced, diced, and re-assembled into vision. If you are interested in visual pathways, I would recommend Christof Koch's Quest for Consciousness, whose treatment of visual pathways is better than its treatment of other topics.
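The "five million cores" figure is just division; here is a quick back-of-envelope check (the throughput comparison at the end is my own crude extrapolation, not the article's):

```python
# Back-of-envelope check of the article's numbers: ~1e15 synapses,
# measured against a contemporary (2011) ~200-million-transistor chip.
synapses = 1e15
transistors_per_chip = 200e6
cores = synapses / transistors_per_chip
print(f"{cores:,.0f}")          # 5,000,000 -- the "five million cores"

# Crude comparison (my own, not the article's): five million "cores"
# ticking at the title's ~9 Hz, vs four cores at 3 GHz.
brain_events = cores * 9
cpu_cycles = 4 * 3e9
print(f"{brain_events:.1e} vs {cpu_cycles:.1e}")
```

The point of the exercise is that the brain's advantage is width, not speed: each "core" is absurdly slow, but there are millions of them working in parallel.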

Unlike transistors, neurons are intrinsically rhythmic to various degrees due to their ion channel complements that govern firing and refractory/recovery times. So external "clocking" is not always needed to make them run, though the present articles deal with one such case. Neurons can spontaneously generate synchrony in large numbers due to their intrinsic rhythmicity.

Nor are neurons passive input-output integrators of whatever hits their dendrites, as early theories had them. Instead, they spontaneously generate cycles and noise, which enhances their sensitivity to external signals, and their ability to act collectively. They are also subject to many other influences like hormones and local non-neural glial cells. A great deal of integration happens at the synapse and regional multi-synapse levels, long before the cell body or axon is activated. This is why the synapse count is a better analog to transistor counts on chips than the neuron count. If you are interested in the topics of noise and rhythmicity, I would recommend the outstanding and advanced book by Gyorgy Buzsaki, Rhythms of the Brain. Without buying a book, you can read Buzsaki's take on consciousness.

Two recent articles (Brandon et al., Koenig et al.) provide a small advance in this field of figuring out how brain rhythms connect with computation. Two groups seem to have had the same idea and did very similar experiments to show that a specific type of spatial computation in a brain area called the medial entorhinal cortex (mEC) near the hippocampus depends on theta rhythm clocking from a loosely connected area called the medial septum (MS). (In-depth essay on alcohol, blackouts, memory formation, the medial septum, and hippocampus, with a helpful anatomical drawing).

Damage to the MS (situated just below the corpus callosum that connects the two brain hemispheres) was known to have a variety of effects on functions not located in the MS, but in the hippocampus and mEC, like loss of spatial memory, slowed learning of simple aversive associations, and altered patterns of food and water intake.

The hippocampus and allied areas like the mEC are, along with the visual system, among the best-investigated parts of the brain. They mediate most short-term memory, especially spatial memory (i.e., rats running in mazes). The spatial system as understood so far has several types of cells:

Head direction cells, which know which way the head is pointed (some of them fire when the head points at one angle, others fire at other angles).

Grid cells, which are sensitive to an abstract grid in space covering the ambient environment. Some of these cells fire when the rat is on one of the grid boundaries. So we literally have a latitude/longitude-style map in our heads, which may be why map-making comes so naturally to humans.

Border cells, which fire when the rat is close to a wall.

Place cells, which respond to specific locations in the ambient space -- not periodically like grid cells, but typically to one place only.

Spatial view cells, which fire when the rat is looking at a particular location, rather than when it is in that location. They also respond, as do the other cells above, when a location is being recalled rather than experienced.

Clearly, once these cells all network together, a rather detailed self-orientation system is possible, based on high-level input from various senses (vestibular, whiskers, vision, touch). The role of rhythm is complicated in this system. For instance, the phase relation of place cell firing versus the underlying theta rhythm, (leading or following it, in a sort of syncopation), indicates closely where the animal is within the place cell's region as movement occurs. Upon entry, firing begins at the peak of the theta wave, but then precesses to the trough of the theta wave as the animal reaches the exit. Combined over many adjacent and overlapping place fields, this could conceptually provide very high precision to the animal's sense of position.

*2.bp.blogspot.com/-7nZyWHhlL-I/TeEbryZg3pI/AAAAAAAAASA/BOiFBAXBGZM/s1600/Triangle-place-cells.png
_One rat's repeated tracks in a closed maze, mapped versus firing patterns of several of its place cells, each given a different color_.​
We are eavesdropping here on the unconscious processes of an animal, which it could not itself really articulate even if it wished and had language to do so. The grid and place fields are not conscious at all, but enormously intricate mechanisms that underlie implicit mapping. The animal has a "sense" of its position, (projecting a bit from our own experience), which is critical to many of its further decisions, but the details don't necessarily reach consciousness.

The current papers deal not with place cells, which still fire in a place-specific way without the theta rhythm, but with grid cells, whose "gridness" appears to depend strongly on the theta rhythm. The real-life fields of rat grid cells have a honeycomb-like hexagonal shape with diameters ranging from 40 to 90cm, ordered in systematic fashion from top to bottom within the mEC anatomy. The theta rhythm frequency they respond to also varies along the same axis, from 10 to 4 Hz. These values stretch and vary with the environment the animal finds itself in.

*2.bp.blogspot.com/-AzVmlQYAsOo/TeEbrhl2FcI/AAAAAAAAAR8/w5WeXmuG9EE/s320/Screen+shot+2011-05-24+at+9.13.09+AM.png
_Field size of grid cells, plotted against anatomical depth in the mEC._​
The current papers ask a simple question: do the grid cells of the mEC depend on the theta rhythm supplied from the MS, as has long been suspected from work with mEC lesions, or do they work independently and generate their own rhythm(s)?

This was investigated by the expedient of injecting anaesthetics into the MS to temporarily stop its theta wave generation, and then polling electrodes stuck into the mEC for their grid firing characteristics as the rats were freely moving around. The grid cells still fired, but lost their spatial coherence, firing without regard to where the rat was or was going physically (see bottom trajectory maps). Spatial mapping was lost when the clock-like rhythm was lost.

*3.bp.blogspot.com/-qTZJen3i4d8/TeEbqhiaP1I/AAAAAAAAAR4/vrFwSuJp6dk/s400/F1Refiged.jpg
_One experimental sequence. Top is the schematic of what was done. Rate map shows the firing rate of the target grid cells in a sampled 3cm square, with m=mean rate, and p=peak rate. Spatial autocorrelation shows how spatially periodic the rate map data is, and at what interval. Gridness is an abstract metric of how spatially periodic the cells fire. Trajectory shows the rat's physical paths during free behavior, overlaid with the grid cell firing data._

_ "These data support the hypothesized role of theta rhythm oscillations in the generation of grid cell spatial periodicity or at least a role of MS input. The loss of grid cell spatial periodicity could contribute to the spatial memory impairments caused by lesions or inactivation of the MS."_​
This is somewhat reminiscent of an artificial computer system, where computation ceases (here it becomes chaotic) when clocking ceases. Brain systems are clearly much more robust, breaking down more gracefully and not being as heavily dependent on clocking of this kind, not to mention being capable of generating most rhythms endogenously. But a similar phenomenon happens more generally, of course, during anesthesia, where the controlled long-range chaos of the gamma oscillation ceases along with attention and consciousness.

It might be worth adding that brain waves have no particular connection with rhythmic sensory inputs like sound waves, some of which come in the same frequency range, at least at the very low end. The transduction of sound through the cochlea into neural impulses encodes them in a much more sophisticated way than simply reproducing their frequency in electrical form, and leads to wonders of computational processing such as perfect pitch, speech interpretation, and echolocation.

Clearly, these are still early days in the effort to know how computation takes place in the brain. There is a highly mysterious bundling of widely varying timing/clocking rhythms with messy anatomy and complex content flowing through. But we also understand a lot -- far more with each successive decade of work and with advancing technologies. For a few systems, (vision, position, some forms of emotion), we can track much of the circuitry from sensation to high-level processing, such as the level of face recognition. Consciousness remains unexplained, but scientists are definitely knocking at the door.



*Rebooting*​


Spoiler



You'd think it'd be easy to reboot a PC, wouldn't you? But then you'd also think that it'd be straightforward to convince people that at least making some effort to be nice to each other would be a mutually beneficial proposal, and look how well that's worked for us.

Linux has a bunch of different ways to reset an x86. Some of them are 32-bit only and so I'm just going to ignore them because honestly just what are you doing with your life. Also, they're horrible. So, that leaves us with five of them.


kbd - reboot via the keyboard controller. The original IBM PC had the CPU reset line tied to the keyboard controller. Writing the appropriate magic value pulses the line and the machine resets. This is all very straightforward, except for the fact that modern machines don't have keyboard controllers (they're actually part of the embedded controller) and even more modern machines don't even pretend to have a keyboard controller. Now, embedded controllers run software. And, as we all know, software is dreadful. But, worse, the software on the embedded controller has been written by BIOS authors. So clearly any pretence that this ever works is some kind of elaborate fiction. Some machines are very picky about hardware being in the exact state that Windows would program. Some machines work 9 times out of 10 and then lock up due to some odd timing issue. And others simply don't work at all. Hurrah!
triple - attempt to generate a triple fault. This is done by loading an empty interrupt descriptor table and then calling int(3). The interrupt fails (there's no IDT), the fault handler fails (there's no IDT) and the CPU enters a condition which should, in theory, then trigger a reset. Except there doesn't seem to be a requirement that this happen and it just doesn't work on a bunch of machines.
pci - not actually pci. Traditional PCI config space access is achieved by writing a 32 bit value to io port 0xcf8 to identify the bus, device, function and config register. Port 0xcfc then contains the register in question. But if you write the appropriate pair of magic values to 0xcf9, the machine will reboot. Spectacular! And not standardised in any way (certainly not part of the PCI spec), so different chipsets may have different requirements. Booo.
efi - EFI runtime services provide an entry point to reboot the machine. It usually even works! As long as EFI runtime services are working at all, which may be a stretch.
acpi - Recent versions of the ACPI spec let you provide an address (typically memory or system IO space) and a value to write there. The idea is that writing the value to the address resets the system. It turns out that doing so often fails. It's also impossible to represent the PCI reboot method via ACPI, because the PCI reboot method requires a pair of values and ACPI only gives you one.
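The "pci" 0xcf9 method above can even be poked from userspace. The sketch below is a hypothetical illustration only: it assumes the commonly used write-0x02-then-0x06 sequence and Linux's /dev/port interface for raw IO-port access (root only). As the post says, none of this is standardised, so treat the magic values as an assumption, not a spec.

```python
import io
import time

def cf9_reset(port_file, delay=0.05):
    """Sketch of the 0xcf9 'pci' reboot: write 0x02 (arm the hard-reset
    bit), pause briefly, then 0x06 (assert the reset). These magic values
    are an assumption -- they are not part of any spec, and some chipsets
    want different ones."""
    port_file.seek(0xcf9)
    port_file.write(bytes([0x02]))   # arm: request a hard (not soft) reset
    port_file.flush()
    time.sleep(delay)                # brief settle time between the writes
    port_file.seek(0xcf9)
    port_file.write(bytes([0x06]))   # fire: pulse the reset line
    port_file.flush()

# Dry run against an in-memory buffer standing in for the real /dev/port
# (which, as root, would actually reset the machine):
fake_ports = io.BytesIO(bytes(0x10000))  # 64 KiB of fake IO-port space
cf9_reset(fake_ports, delay=0)
print(hex(fake_ports.getvalue()[0xcf9]))  # 0x6
```

Running it for real would mean passing `open("/dev/port", "r+b", buffering=0)` as root -- and, as the post explains, on some chipsets it would do nothing at all.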


Now, I'll admit that this all sounds pretty depressing. But people clearly sell computers with the expectation that they'll reboot correctly, so what's going on here?

A while back I did some tests with Windows running on top of qemu. This is a great way to evaluate OS behaviour, because you've got complete control of what's handed to the OS and what the OS tries to do to the hardware. And what I discovered was a little surprising. In the absence of an ACPI reboot vector, Windows will hit the keyboard controller, wait a while, hit it again and then give up. If an ACPI reboot vector is present, Windows will poke it, try the keyboard controller, poke the ACPI vector again and try the keyboard controller one more time.

This turns out to be important. The first thing it means is that it generates two writes to the ACPI reboot vector. The second is that it leaves a gap between them while it's fiddling with the keyboard controller. And, shockingly, it turns out that on most systems the ACPI reboot vector points at 0xcf9 in system IO space. Even though most implementations nominally require two different values be written, it seems that this isn't a strict requirement and the ACPI method works.

3.0 will ship with this behaviour by default. It makes various machines work (some Apples, for instance), improves things on some others (some Thinkpads seem to sit around for extended periods of time otherwise) and hopefully avoids the need to add any more machine-specific quirks to the reboot code. There's still some divergence between us and Windows (mostly in how often we write to the keyboard controller), which can be cleaned up if it turns out to make a difference anywhere.

Now. Back to EFI bugs.


----------



## sygeek (Jun 1, 2011)

*Ten Oddities And Secrets About JavaScript*​
_Visit link for full article_​


> JavaScript. At once bizarre and yet beautiful, it is surely the programming language that Pablo Picasso would have invented. Null is apparently an object, an empty array is apparently equal to false, and functions are bandied around as though they were tennis balls.
> 
> This article is aimed at intermediate developers who are curious about more advanced JavaScript. It is a collection of JavaScript’s oddities and well-kept secrets. Some sections will hopefully give you insight into how these curiosities can be useful to your code, while other sections are pure WTF material. So, let’s get started.
> 
> ...


----------



## sygeek (Jun 4, 2011)

*How I Failed, Failed, and Finally Succeeded at Learning How to Code*​
By James Somers​


Spoiler



*cdn.theatlantic.com/static/mt/assets/science/assets_c/2011/06/SomersCode-Post-thumb-607x296-53026.jpg​
When Colin Hughes was about eleven years old his parents brought home a rather strange toy. It wasn't colorful or cartoonish; it didn't seem to have any lasers or wheels or flashing lights; the box it came in was decorated, not with the bust of a supervillain or gleaming protagonist, but bulleted text and a picture of a QWERTY keyboard. It called itself the "ORIC-1 Micro Computer." The package included two cassette tapes, a few cords and a 130-page programming manual.

On the whole it looked like a pretty crappy gift for a young boy. But his parents insisted he take it for a spin, not least because they had just bought the thing for more than £129. And so he did. And so, he says, "I was sucked into a hole from which I would never escape."

It's not hard to see why. Although this was 1983, and the ORIC-1 had about the same raw computing power as a modern alarm clock, there was something oddly compelling about it. When you turned it on all you saw was the word "Ready," and beneath that, a blinking cursor. It was an open invitation: type something, see what happens.

In less than an hour, the ORIC-1 manual took you from printing the word "hello" to writing short programs in BASIC -- the Beginner's All-Purpose Symbolic Instruction Code -- that played digital music and drew wildly interesting pictures on the screen. Just when you got the urge to try something more complicated, the manual showed you how.

In a way, the ORIC-1 was so mesmerizing because it stripped computing down to its most basic form: you typed some instructions; it did something cool. This was the computer's essential magic laid bare. Somehow ten or twenty lines of code became shapes and sounds; somehow the machine breathed life into a block of text.

No wonder Colin got hooked. The ORIC-1 wasn't really a toy, but a toy maker. All it asked for was a special kind of blueprint.

Once he learned the language, it wasn't long before he was writing his own simple computer games, and, soon after, teaching himself trigonometry, calculus and Newtonian mechanics to make them better. He learned how to model gravity, friction and viscosity. He learned how to make intelligent enemies.

More than all that, though, he learned how to teach. Without quite knowing it, Colin had absorbed from his early days with the ORIC-1 and other such microcomputers a sense for how the right mix of accessibility and complexity, of constraints and open-endedness, could take a student from total ignorance to near mastery quicker than anyone -- including his own teachers -- thought possible.

It was a sense that would come in handy, years later, when he gave birth to Project Euler, a peculiar website that has trained tens of thousands of new programmers, and that is in its own modest way the emblem of a nascent revolution in education.

*cdn.theatlantic.com/static/mt/assets/science/assets_c/2011/06/oric-1%20screenshot-thumb-615x497-52937.png
* * *​
Sometime between middle and high school, in the early 2000s, I got a hankering to write code. It was very much a "monkey see, monkey do" sort of impulse. I had been watching a lot of TechTV -- an obscure but much-loved cable channel focused on computing, gadgets, gaming and the Web -- and Hackers, the 1995 cult classic starring Angelina Jolie in which teenaged computer whizzes, accused of cybercrimes they didn't commit, have to hack their way to the truth.

I wanted in. So I did what you might expect an over-enthusiastic suburban nitwit to do, and asked my mom to drive me to the mall to buy Ivor Horton's 1,181-page, 4.6-pound Beginning Visual C++ 6. I imagined myself working montage-like through the book, smoothly accruing expertise one chapter at a time.

What happened instead is that I burned out after a week. The text itself was dense and unsmiling; the exercises were difficult. It was quite possibly the least fun I've ever had with a book, or, for that matter, with anything at all. I dropped it as quickly as I had picked it up.

Remarkably I went through this cycle several times: I saw people programming and thought it looked cool, resolved myself to learn, sought out a book and crashed the moment it got hard.

For a while I thought I didn't have the right kind of brain for programming. Maybe I needed to be better at math. Maybe I needed to be smarter.

But it turns out that the people trying to teach me were just doing a bad job. Those books that dragged me through a series of structured principles were just bad books. I should have ignored them. I should have just played.

Nobody misses that fact more egregiously than the American College Board, the folks responsible for setting the AP Computer Science high school curriculum. The AP curriculum ought to be a model for how to teach people to program. Instead it's an example of how something intrinsically amusing can be made into a lifeless slog.

*cdn.theatlantic.com/static/mt/assets/science/assets_c/2011/06/ap%20curriculum%20outline-thumb-615x651-52942.png​
I imagine that the College Board approached the problem from the top down. I imagine a group of people sat in a room somewhere and asked themselves, "What should students know by the time they finish this course?"; listed some concepts, vocabulary terms, snippets of code and provisional test questions; arranged them into "modules," swaths of exposition followed by exercises; then handed off the course, ready-made, to teachers who had no choice but to follow it to the letter.

Whatever the process, the product is a nightmare described eloquently by Paul Lockhart, a high school mathematics teacher, in his short booklet, A Mathematician's Lament, about the sorry state of high school mathematics. His argument applies almost beat for beat to computer programming.

Lockhart illustrates our system's sickness by imagining a fun problem, then showing how it might be gutted by educators trying to "cover" more "material."

Take a look at this picture:
*cdn.theatlantic.com/static/mt/assets/science/assets_c/2011/06/lockhart%27s%20triangle-thumb-300x158-52944.png​
It's sort of neat to wonder, How much of the box does the triangle take up? Two-thirds, maybe? Take a moment and try to figure it out.

If you're having trouble, it could be because you don't have much training in real math, that is, in solving open-ended problems about simple shapes and objects. It's hard work. But it's also kind of fun -- it requires patience, creativity, an insight here and there. It feels more like working on a puzzle than one of those tedious drills at the back of a textbook.

If you struggle for long enough you might strike upon the rather clever idea of chopping your rectangle into two pieces like so:

*cdn.theatlantic.com/static/mt/assets/science/assets_c/2011/06/lockhart%27s%20triangle%20with%20vertical-thumb-300x167-52946.png​
Now you have two rectangles, each cut diagonally in half by a leg of the triangle. So there is exactly as much space inside the triangle as outside, which means the triangle must take up exactly half the box!
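Lockhart's cut-the-box argument can also be checked numerically. The sketch below is my own illustration, not from the article: it scatters random points over the unit box and counts how many land inside the triangle; the fraction hovers around one half no matter where the apex sits along the top edge.

```python
import random

def cross(ax, ay, bx, by, px, py):
    """z-component of (B-A) x (P-A); its sign says which side of line AB P is on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def triangle_fraction(apex_x, n=200_000, seed=42):
    """Monte Carlo estimate of the fraction of the unit box covered by
    the triangle with vertices (0,0), (1,0) and (apex_x, 1)."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        # The point is inside iff it's on the same side of all three edges.
        d1 = cross(0, 0, 1, 0, x, y)
        d2 = cross(1, 0, apex_x, 1, x, y)
        d3 = cross(apex_x, 1, 0, 0, x, y)
        if (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0):
            inside += 1
    return inside / n

for apex in (0.2, 0.5, 0.9):
    print(round(triangle_fraction(apex), 2))   # ~0.5 wherever the apex sits
```

Of course, the pleasure of the geometric argument is that it gets the exact answer with no computation at all -- which is rather Lockhart's point.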


> This is what a piece of mathematics looks and feels like. That little narrative is an example of the mathematician's art: asking simple and elegant questions about our imaginary creations, and crafting satisfying and beautiful explanations. There is really nothing else quite like this realm of pure idea; it's fascinating, it's fun, and it's free!


But this is not what math feels like in school. The creative process is inverted, vitiated:


> This is why it is so heartbreaking to see what is being done to mathematics in school. This rich and fascinating adventure of the imagination has been reduced to a sterile set of "facts" to be memorized and procedures to be followed. In place of a simple and natural question about shapes, and a creative and rewarding process of invention and discovery, students are treated to this:
> 
> *cdn.theatlantic.com/static/mt/assets/science/assets_c/2011/06/triangle%20area%20formula%20picture-thumb-300x85-52948.png​
> "The area of a triangle is equal to one-half its base times its height." Students are asked to memorize this formula and then "apply" it over and over in the "exercises." Gone is the thrill, the joy, even the pain and frustration of the creative act. There is not even a problem anymore. The question has been asked and answered at the same time -- there is nothing left for the student to do.


* * *​
My struggle to become a hacker finally saw a breakthrough late in my freshman year of college, when I stumbled on a simple question:


> If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
> 
> Find the sum of all the multiples of 3 or 5 below 1000.


This was the puzzle that turned me into a programmer. This was Project Euler problem #1, written in 2001 by a then much older Colin Hughes, that student of the ORIC-1 who had gone on to become a math teacher at a small British grammar school and, not long after, the unseen professor to tens of thousands of fledglings like myself.

The problem itself is a lot like Lockhart's triangle question -- simple enough to entice the freshest beginner, sufficiently complicated to require some thought.

What's especially neat about it is that someone who has never programmed -- someone who doesn't even know what a program is -- can learn to write code that solves this problem in less than three hours. I've seen it happen. All it takes is a little hunger. You just have to want the answer.
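Problem #1 really is beginner-sized. A solution in Python -- first the brute-force loop a newcomer would write, then the one-liner it condenses to:

```python
# Brute force: walk every number below 1000 and keep the multiples of 3 or 5.
total = 0
for n in range(1000):
    if n % 3 == 0 or n % 5 == 0:
        total += n
print(total)  # 233168

# The same thing as a one-liner.
print(sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0))  # 233168
```

Getting from the first form to the second is exactly the kind of step the site's "inductive chain" nudges you through.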

That's the pedagogical ballgame: get your student to want to find something out. All that's left after that is to make yourself available for hints and questions. "That student is taught the best who is told the least."

It's like sitting a kid down at the ORIC-1. Kids are naturally curious. They love blank slates: a sandbox, a bag of LEGOs. Once you show them a little of what the machine can do they'll clamor for more. They'll want to know how to make that circle a little smaller or how to make that song go a little faster. They'll imagine a game in their head and then relentlessly fight to build it.

Along the way, of course, they'll start to pick up all the concepts you wanted to teach them in the first place. And those concepts will stick because they learned them not in a vacuum, but in the service of a problem they were itching to solve.

Project Euler, named for the Swiss mathematician Leonhard Euler, is popular (more than 150,000 users have submitted 2,630,835 solutions) precisely because Colin Hughes -- and later, a team of eight or nine hand-picked helpers -- crafted problems that lots of people get the itch to solve. And it's an effective teacher because those problems are arranged like the programs in the ORIC-1's manual, in what Hughes calls an "inductive chain":

> The problems range in difficulty and for many the experience is inductive chain learning. That is, by solving one problem it will expose you to a new concept that allows you to undertake a previously inaccessible problem. So the determined participant will slowly but surely work his/her way through every problem.

This is an idea that's long been familiar to video game designers, who know that players have the most fun when they're pushed always to the edge of their ability. The trick is to craft a ladder of increasingly difficult levels, each one building on the last. New skills are introduced with an easier version of a challenge -- a quick demonstration that's hard to screw up -- and certified with a harder version, the idea being to only let players move on when they've shown that they're ready. The result is a gradual ratcheting up of the learning curve.

Project Euler is engaging in part because it's set up like a video game, with 340 fun, very carefully ordered problems. Each has its own page, like this one that asks you to discover the three most popular squares in a game of Monopoly played with 4-sided (instead of 6-sided) dice. At the bottom of the puzzle description is a box where you can enter your answer, usually just a whole number. The only "rule" is that the program you use to solve the problem should take no more than one minute of computer time to run.
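The Monopoly puzzle mentioned above (Project Euler #84) has the flavor of a simulation exercise. The sketch below is a deliberately simplified toy of my own -- it ignores the Chance and Community Chest cards, which the real problem requires -- but it already shows why Jail (square 10) dominates the statistics:

```python
import random
from collections import Counter

def top_squares(moves=200_000, sides=4, seed=1):
    """Very simplified Monopoly walk: 40 squares, Go To Jail (30) sends you
    to Jail (10), and three consecutive doubles do the same. The Chance and
    Community Chest cards of the real Project Euler problem are ignored, so
    this only approximates the board's true dynamics."""
    rng = random.Random(seed)
    pos, doubles, visits = 0, 0, Counter()
    for _ in range(moves):
        d1, d2 = rng.randint(1, sides), rng.randint(1, sides)
        doubles = doubles + 1 if d1 == d2 else 0
        if doubles == 3:
            pos, doubles = 10, 0          # speeding: straight to Jail
        else:
            pos = (pos + d1 + d2) % 40
            if pos == 30:                  # Go To Jail square
                pos = 10
        visits[pos] += 1
    return [sq for sq, _ in visits.most_common(3)]

print(top_squares())  # Jail (10) comes out on top
```

Jail wins because it collects three inflows at once: ordinary dice landings, everything that lands on Go To Jail, and the three-doubles rule.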

On top of this there is one brilliant feature: once you get the right answer you're given access to a forum where successful solvers share their approaches. It's the ideal time to pick up new ideas -- after you've wrapped your head around a problem enough to solve it.

This is also why a lot of experienced programmers use Project Euler to learn a new language. Each problem's forum is a kind of Rosetta stone. For a single simple problem you might find annotated solutions in Python, C, Assembler, BASIC, Ruby, Java, J and FORTRAN.

Even if you're not a programmer, it's worth solving a Project Euler problem just to see what happens in these forums. What you'll find there is something that educators, technologists and journalists have been talking about for decades. And for nine years it's been quietly thriving on this site. It's the global, distributed classroom, a nurturing community of self-motivated learners -- old, young, from more than two hundred countries -- all sharing in the pleasure of finding things out.

* * *​
It's tempting to generalize: If programming is best learned in this playful, bottom-up way, why not everything else? Could there be a Project Euler for English or Biology?

Maybe. But I think it helps to recognize that programming is actually a very unusual activity. Two features in particular stick out.

The first is that it's naturally addictive. Computers are really fast; even in the '80s they were really fast. What that means is there is almost no time between changing your program and seeing the results. That short feedback loop is mentally very powerful. Every few minutes you get a little payoff -- perhaps a small hit of dopamine -- as you hack and tweak, hack and tweak, and see that your program is a little bit better, a little bit closer to what you had in mind.

It's important because learning is all about solving hard problems, and solving hard problems is all about not giving up. So a machine that triggers hours-long bouts of frantic obsessive excitement is a pretty nifty learning tool.

The second feature, by contrast, is something that at first glance looks totally immaterial. It's the simple fact that code is text.

Let's say that your sink is broken, maybe clogged, and you're feeling bold -- instead of calling a plumber you decide to fix it yourself. It would be nice if you could take a picture of your pipes, plug it into Google, and instantly find a page where five or six other people explained in detail how they dealt with the same problem. It would be especially nice if once you found a solution you liked, you could somehow immediately apply it to your sink.

Unfortunately that's not going to happen. You can't just copy and paste a Bob Vila video to fix your garage door.

But the really crazy thing is that this is what programmers do all day, and the reason they can do it is because code is text.

I think that goes a long way toward explaining why so many programmers are self-taught. Sharing solutions to programming problems is easy, perhaps easier than sharing solutions to anything else, because the medium of information exchange -- text -- is the medium of action. Code is its own description. There's no translation involved in making it go.

Programmers take advantage of that fact every day. The Web is teeming with code because code is text and text is cheap, portable and searchable. Copying is encouraged, not frowned upon. The neophyte programmer never has to learn alone.

* * *​
Garry Kasparov, a chess grandmaster who was famously bested by IBM's Deep Blue supercomputer, notes how machines have changed the way the game is learned:


> There have been many unintended consequences, both positive and negative, of the rapid proliferation of powerful chess software. Kids love computers and take to them naturally, so it's no surprise that the same is true of the combination of chess and computers. With the introduction of super-powerful software it became possible for a youngster to have a top-level opponent at home instead of needing a professional trainer from an early age. Countries with little by way of chess tradition and few available coaches can now produce prodigies.


A student can now download a free program that plays better than any living human. He can use it as a sparring partner, a coach, an encyclopedia of important games and openings, or a highly technical analyst of individual positions. He can become an expert without ever leaving the house.

Take that thought to its logical end. Imagine a future in which the best way to learn how to do something -- how to write prose, how to solve differential equations, how to fly a plane -- is to download software, not unlike today's chess engines, that takes you from zero to sixty by way of a delightfully addictive inductive chain.

If the idea sounds far-fetched, consider that I was taught to program by a program whose programmer, more than twenty-five years earlier, was taught to program by a program.


----------



## nisargshah95 (Jun 4, 2011)

SyGeek said:


> 0.1 + 0.2 !== 0.3


For those who want to know why, see chapter 14, "Floating Point Arithmetic: Issues and Limitations", in the Python v2.7.1 documentation.
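For the curious, the behaviour is easy to reproduce; here's a quick sketch in Python (JavaScript gives the same result, since both use IEEE 754 doubles):

```python
import math

# 0.1, 0.2 and 0.3 have no exact binary representation,
# so the sum picks up a tiny rounding error.
total = 0.1 + 0.2
print(total)         # 0.30000000000000004
print(total == 0.3)  # False

# The usual fix: compare with a tolerance instead of ==.
print(math.isclose(total, 0.3))  # True
```

(`math.isclose` is Python 3.5+; in the 2.7 docs linked above the recommendation is to compare against a small epsilon by hand.)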



SyGeek said:


> *How I Failed, Failed, and Finally Succeeded at Learning How to Code*​By James Somers​
> 
> 
> Spoiler
> ...



Great article buddy. Keep posting! I guess we should start a thread where we discuss Euler's problems  What say?


----------



## sygeek (Jun 4, 2011)

> Great article buddy. Keep posting! I guess we should start a thread where we discuss Euler's problems What say?


Sure, but no one looked interested, so I didn't bother creating one. Also, Project Euler's forums already have a section dedicated to this, so it doesn't make much sense unless you guys want a more familiar community to discuss them in.


----------



## nisargshah95 (Jun 4, 2011)

SyGeek said:


> Sure, but no one looked interested, so I didn't bother creating one. Also, Project Euler's forums already have a section dedicated to this, so it doesn't make much sense unless you guys want a more familiar community to discuss them in.


Oh. Anyways don't stop posting the articles. They're good.

BTW Yay! I solved the first problem: *If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.* Did it using JavaScript (and a Python console for calculations).
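For anyone else trying it, the brute-force approach is a one-liner in Python; the below-10 case from the problem statement doubles as a sanity check (a sketch of mine, not the JavaScript mentioned above):

```python
def sum_multiples(limit):
    """Sum every natural number below `limit` divisible by 3 or 5."""
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

print(sum_multiples(10))    # 23, matching the example in the problem
print(sum_multiples(1000))  # 233168
```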


----------



## sygeek (Jun 4, 2011)

*Hate Java? You’re fighting the wrong battle.*

*Hate Java? You’re fighting the wrong battle*​


Spoiler



One of the most interesting trends I’ve seen lately is the unpopularity of Java around blogs, DZone and elsewhere. It seems some people are even offended, on a personal level, by the suggestion that Java is superior in any way to their favorite Web 2.0 language.

Java has been widely successful for a number of reasons:

- It’s widely accepted in established companies.
- It’s one of the fastest languages.
- It’s one of the most secure languages.
- Synchronization primitives are built into the language.
- It’s platform independent.
- HotSpot is open source.
- Thousands of vendors exist for a multitude of Java products.
- Thousands of open source libraries exist for Java.
- Community governance via the JCP (pre-Oracle).

This is quite a résumé for any language, and it shows: Java has enjoyed a long streak as one of the most popular languages around. So why, in late 2010 and 2011, has Java suddenly become the hated demon it is?
- It’s popular to hate Java.
- C-like syntax is no longer popular.
- Hate for Oracle is being leveraged to promote individual interests.
- People have been exposed to really bad code that’s been written in Java.
- … insert the next hundred reasons here.
Java, the actual language and API, does have quite a few real problems… too many to list here (a mix of primitive and object types, an abundance of abandoned APIs, inconsistent use of checked exceptions). But I’m offering an olive branch… let’s discuss the real problem and not throw the baby out with the bathwater.

So what is the real problem in this industry? Java, with its faults, has completely conquered web application programming. On the sidelines, charging hard, new languages are being invented at a mind-blowing rate, also aiming to conquer web application programming. The two camps are pitted against each other, and we’re left with what looks like a bunch of preppy mall-kids battling for street territory by break dancing. And while everyone bickers about whether PHP or Rails 3.1 runs faster and can serve more simultaneous requests, there lurks a silent elephant in the room, laughing quietly as we duke it out in childish arguments over syntax and runtimes.

Tell me, what do the following have in common?

- Paying with a credit card.
- Going to the emergency room.
- Adjusting your 401(k).
- Using your insurance card at the dentist.
- Shopping around for the best car insurance.
- A BNSF train pulling a Union Pacific coal car.
- Transferring money between banks.
- Filling a prescription.

All of the above industries are billion-dollar players in our economy. All of the above industries write new COBOL and mainframe assembler programs. I’m not making this up: I work in the last industry, and I’ve interviewed and interned in the others.

For god’s sake, people: COBOL, invented in 1959, is still being written today, for real! We’re not talking about maintaining a few lines here and there; we’re talking thousands of new lines, every day, to implement new functionality and new requirements. These industries haven’t even caught word that the breeze has shifted to the cloud. These industries are essential; they form the building blocks of our economy. Despite this, they do not innovate, and they carry massive expenses with their legacy technology. The costs of running these businesses are enormous, and a good percentage of those are IT costs.

How expensive? Let’s talk about mainframe licensing, for instance. Say you buy the Enterprise version of MongoDB and put it on a box. You then proceed to peg the CPU doing transaction after transaction against the database… The next week, you go on vacation and leave MongoDB running without doing a thing. How much did MongoDB cost in each of those weeks? The same.

Mainframe software is licensed much differently. Let’s say you buy your mainframe for a couple million dollars and buy a database product for it. You then spend all week pegging the CPU(s) with database requests. You check your mail, and you now have a million-dollar bill from the database vendor. Wait, I bought the hardware, so why am I paying another bill? Software on a mainframe is often billed by usage, that is, by how many CPU cycles you spend using it. If you spend 2,000,000 CPU cycles running the database, you will end up owing the vendor $2 million. Bizarre? Absolutely!

These invisible industries that you rely on every day are full of bloat, legacy systems, and high costs. Java set out to conquer many fronts, and while it thoroughly took over the web application arena, it fizzled out in centralized computing. These industries are ripe for reducing costs and becoming more efficient, but honestly, we’re embarrassing ourselves. These industries stick with their legacy systems because they don’t think Ruby, Python, Scala, Lua, PHP or Java could possibly handle the ‘load’, scalability, or uptime requirements their legacy systems provide. This is far from the truth, but again, there has been zero innovation in these arenas in the last 15 years, despite web technology making galaxy-sized leaps.

So next week someone will invent another DSL that makes Twitter easier to use, but your bank will be writing new COBOL to more efficiently transfer funds to another bank. We’re embarrassing ourselves with our petty arguments. There is an entire economy that needs to see the benefits of distributed computing, but if the friendly fire continues, we’ll all lose. Let’s stop these ridiculous arguments, pass the torch peacefully, and conquer some of these behemoths!


----------



## tejaslok (Jun 5, 2011)

thanks for posting this article "How I Failed, Failed, and Finally Succeeded at Learning How to Code". been going through it


----------



## sygeek (Jun 8, 2011)

*Why I Didn’t Get a Real Job​*_By Alex Schiff, University of Michigan_​


Spoiler



A month ago, I turned down a very good opportunity from a just-funded startup to continue my job for the rest of the summer. It was in an industry I was passionate about, I would have had a leadership position and having just received a raise, the pay would have been substantially higher than most jobs for 20-year-old college students. I had worked there for a year (full-time during last summer and part-time during the school year) and common sense should have pushed me to go back.

But I didn’t.

I’ve never been one to base my actions on others’ expectations. Just ask my dad, with whom I was having arguments about moral relativism by the time I was 13. That’s why I didn’t think twice about the implications of turning down an opportunity most people my age would kill for to start my own company. When you take a leap of faith of that magnitude, you can’t look back.

That’s not how the rest of the world sees it, though. As a college student, I’m expected to spend my summers either gaining experience in an internship or working at some job (no matter how menial) to earn money. Every April, the “So where are you working this summer?” conversation descends on the University of Michigan campus like a storm cloud. When I told people I was foregoing a paycheck for at least the next several months to build a startup, the reactions were a mix of confusion and misinformed assumptions that I couldn’t land a “real job.”

This sentiment surfaced recently in a conversation with a family member who asserted that I needed to “pay my dues to society” by joining the workforce. And most adults I know tell me I need to get a real job before starting my own company. One common thought is, “Most of the world has to wait until they’re at least 40 before they can even think about doing something like that. Why should you be any different?” It almost feels like people assume we bear some sort of secular “original sin” that demands I work for someone else before I do what makes me happy. Even when I talk to peers who don’t understand entrepreneurship, their reaction can be subtle condescension and comments like, “Oh, that’s cool, but you’re going to get a real job next summer or when you graduate, right?”

This is my real job. Building startups is what I want to do with my life, preferably as a founder. I’m really bad at working for other people. I have no deference to authority figures and have never been shy to voice my opinions, oftentimes to my detriment. I also can’t stand waiting on people that are in higher positions than me. It makes me feel like I should be in their place and really gets under my skin. All this makes me terrible at learning things from other people and taking advice. I need to learn by doing things and figuring out how to solve problems by myself. I’ll ask questions later.

As a first-time founder, I can’t escape admitting that starting Fetchnotes is an immense learning experience. I’m under no illusion that I have any idea what I’m doing. I’m thankful I had a job where I learned a lot of core skills on the fly — recruiting, business development, management, a little sales and a lot about culture creation. But what I learned — and what most people learn in the generalist, non-specialized jobs available to people our age — was the tip of the iceberg.

When you start something from scratch, you gain a much deeper understanding of these skills. Instead of being told, “We need Drupal developers. Go find Drupal developers here, here and here,” you need to brainstorm the best technical implementation of your idea, figure out what skills that requires and then figure out how to reach those people. Instead of being told, “Go reach out to these people for partnerships to do X, Y and Z,” you need to figure out what types of people and entities you’ll need to grow and how to convince them to do what you need them to do. When you’re an employee, you learn the “what”; when you’re a founder, you learn the “how” and the “why.” You need to learn how to rally and motivate people and create a culture in a way that just isn’t remotely the same as for a later-hired manager. There are at least 50 orders of magnitude of difference between the strategic and innovative thinking required of a founder and that of even the most integral first employee.

Besides, put yourself in an employer’s shoes. You’re interviewing two college graduates: one who started a company and can clearly articulate why it succeeded or failed, and one who had an internship at a “brand name” institution. If the interviewer would choose the latter candidate, theirs is not a place I want to work. It’s likely a “do what we tell you because you’re our employee” working environment. And if that sounds like someone you want to work for, this article is probably irrelevant to you anyway.

That’s why I never understood the argument about needing to get a job or internship as a “learning experience” or to “pay your dues.” There’s no better learning experience than starting with nothing and figuring it out for yourself (or, thankfully for me, with a co-founder). And there’s no better time to start a company than as a student. When else will your bills, foregone wages and cost of failure be so low? If I fail right now, I’ll be out some money and some time. If I wait until I’m out of college, have a family to support and student loans to pay back, that cost could be being poor, hungry and homeless.

Okay, maybe that’s a little bit of hyperbole, but you get my point. If you have a game-changing idea, don’t make yourself wait because society says you need an internship every summer to get ahead. To quote a former boss, “just **** it out.”

_Alex Schiff is a co-founder of The New Student Union._


----------



## nisargshah95 (Jun 10, 2011)

Waiting for another article buddy...


----------



## sygeek (Jun 10, 2011)

*A brief Sony password analysis​*


Spoiler



So the Sony saga continues. As if the whole thing about 77 million breached PlayStation Network accounts wasn’t bad enough, numerous other security breaches in other Sony services have followed in the ensuing weeks, most recently with SonyPictures.com.

As bad guys often like to do, the culprits quickly stood up and put their handiwork on show. This time around it was a group going by the name of LulzSec. Here’s the interesting bit:


> Sony stored over 1,000,000 passwords of its customers in plaintext


Well actually, the really interesting bit is that they created a torrent of some of the breached accounts so that anyone could go and grab a copy. Ouch. Remember, these are innocent customers’ usernames and passwords, so we’re talking pretty serious data here. There’s no need to delve into everything Sony did wrong here; that’s both mostly obvious and not the objective of this post.

I thought it would be interesting to take a look at password practices from a real data source. I spend a bit of time writing about how people and software manage passwords, and I often talk about things like entropy and reuse, but are these really discussion-worthy topics? I mean, do people generally get passwords right anyway and regularly use long, random, unique strings? We’ve got the data – let’s find out.

*What’s in the torrent*

The Sony Pictures torrent contains a number of text files with breached information and a few instructions:

*lh6.ggpht.com/-Bo5l9_fNC90/TexdzKmZqOI/AAAAAAAACYk/mShCiBSlEpk/image_thumb.png?imgmax=800​
The interesting bits are in the “Sony Pictures” folder and in particular, three files with a whole bunch of accounts in them:

*lh6.ggpht.com/-fM0aHTivJk8/Texd08reoFI/AAAAAAAACYs/IjfkQoqXjss/image_thumb111.png?imgmax=800​
After a little bit of cleansing, de-duping and an import into SQL Server for analysis, we end up with a total of 37,608 accounts. The LulzSec post earlier on did mention this was only a subset of the million they managed to obtain but it should be sufficient for our purposes here today.

*Analysis*

Here’s what I’m really interested in:

- Length
- Variety of character types
- Randomness
- Uniqueness

These are pretty well accepted measures of password entropy, and the more you have of each, the better. Preferably heaps of all of them.

*Length*

Firstly there’s length; the accepted principle is that as length increases, so does entropy. Longer password = stronger password (all else being equal). How long is long enough? Well, part of the problem is that there’s no consensus, and you end up with all sorts of opinions on the subject. Considering the usability-versus-security balance, eight characters or more is a pretty generally accepted yardstick. Let’s see the Sony breakdown:

*lh5.ggpht.com/-PNeDWGT6mHI/Texd2lbC4MI/AAAAAAAACY0/uSdy2Ma1Wqg/image_thumb11.png?imgmax=800​
We end up with 93% of accounts being between 6 and 10 characters long, which is pretty predictable. Bang on 50% of these are less than eight characters. It’s interesting that seven-character passwords are a bit of an outlier – odd-number discrimination, perhaps?

I ended up grouping the instances of 20 or more characters together – there is literally only a small handful of them. In fact there’s really only a handful from the teens onwards, so what we’d consider a relatively secure length really just doesn’t feature.
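For the curious, the tally behind a chart like this takes only a few lines of Python; the password list below is a made-up stand-in, not data from the dump:

```python
from collections import Counter

# Hypothetical sample standing in for the 37,608 breached accounts.
passwords = ["seinfeld", "password", "winner", "123456", "purple",
             "dallascowboys", "supercalifragilisticexpialidocious"]

# Bucket by length, grouping 20-plus-character passwords together
# the same way the chart does.
buckets = Counter(min(len(p), 20) for p in passwords)
short = sum(1 for p in passwords if len(p) < 8)

print(sorted(buckets.items()))
print(f"{short}/{len(passwords)} passwords shorter than 8 characters")
```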

*Character types*

Length only gives us so much; what’s really important is the diversity within that length. Let’s take a look at character types, which we’ll categorise as follows:

- Numbers
- Uppercase
- Lowercase
- Everything else

Again, we’ve got this issue of usability and security to consider, but good practice would normally be considered as having three or more character types. Let’s see what we’ve got:

*lh6.ggpht.com/-OvhoPbUJzxY/Texd4IMZ4bI/AAAAAAAACY8/Dr1WYT8PHa8/image_thumb12.png?imgmax=800​
Or put another way, _*only 4% of passwords had three or more character types*_. But it’s the spread of character types which is also interesting, particularly when only a single type is used:

*lh6.ggpht.com/-Q9XjOBI2YnY/Texd5sin7VI/AAAAAAAACZE/KytDoUohbO4/image_thumb6.png?imgmax=800​
In short, half of the passwords had only one character type, and nine out of ten of those were all lowercase. But the really startling bit is the use of non-alphanumeric characters:

*lh6.ggpht.com/-HxuKtkyaoN4/Texd7XUwdJI/AAAAAAAACZM/jExbkushio8/image_thumb8.png?imgmax=800​
Yep, less than _*1% of passwords contained a non-alphanumeric character*_. Interestingly, this also reconciles with the analysis done on the Gawker database a little while back.
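Counting character classes like this is mechanical; here's a sketch of the categorisation described above (the `char_classes` helper is my naming, not code from the original analysis):

```python
def char_classes(pw):
    """Return how many of the four character classes a password uses."""
    return sum([
        any(c.isdigit() for c in pw),      # numbers
        any(c.isupper() for c in pw),      # uppercase
        any(c.islower() for c in pw),      # lowercase
        any(not c.isalnum() for c in pw),  # everything else
    ])

print(char_classes("seinfeld"))  # 1 -- lowercase only
print(char_classes("1qazZAQ!"))  # 4 -- digits, both cases and a symbol
```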

*Randomness*

So how about randomness? Well, one way to look at this is how many of the passwords are identical. The top 25 were:
_
seinfeld, password, winner, 123456, purple, sweeps, contest, princess, maggie, 9452, peanut, shadow, ginger, michael, buster, sunshine, tigger, cookie, george, summer, taylor, bosco, abc123, ashley, bailey_

Many of the usual culprits are in there: “password”, “123456” and “abc123”. We saw all of these in the top 25 from the Gawker breach. We also see lots of passwords related to the fact that this database was apparently tied to a competition: “winner”, “sweeps” and “contest”. A few of these look very specific (9452, for example), but there may have been context in the signup process which led multiple people to choose the same password.

However in the grand scheme of things, there weren’t a whole lot of instances of multiple people choosing the same password, in fact the 25 above boiled down to only 2.5%. Furthermore, 80% of passwords actually only occurred once so whilst poor password entropy is looking rampant, most people are making these poor choices independently and achieving different results.
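A tally like this is exactly what `collections.Counter` is for; a sketch with invented stand-in data rather than the real dump:

```python
from collections import Counter

# Invented sample; the real analysis ran over 37,608 accounts.
passwords = ["password", "seinfeld", "password", "123456",
             "ginger", "password", "123456", "tigger"]

counts = Counter(passwords)
print(counts.most_common(2))  # [('password', 3), ('123456', 2)]

# Share of distinct passwords chosen by exactly one person.
singletons = sum(1 for c in counts.values() if c == 1)
print(f"{singletons} of {len(counts)} distinct passwords occurred once")
```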

Another way of assessing randomness is to compare the passwords to a password dictionary. This doesn’t necessarily mean an English dictionary in the way we know it; rather, it’s a collection of words which may be used as passwords, so you’ll get things like obfuscated characters and letter/number combinations. I’ll use this one, which has about 1.7 million entries. Let’s see how many of the Sony passwords are in there:

*lh6.ggpht.com/-3wchrCUz4GQ/Texd837O7qI/AAAAAAAACZU/Juu-vNq4JkY/image_thumb1.png?imgmax=800​
So more than one third of passwords conform to a relatively predictable pattern. That’s not to say they’re not long enough or don’t contain sufficient character types, in fact the passwords “1qazZAQ!” and “dallascowboys” were both matched so you’ve got four character types (even with a special character) and then a 13 character long password respectively. The thing is that they’re simply not random – they’ve obviously made appearances in password databases before.

*Uniqueness*

This is the one that gets really interesting as it asks the question “are people creating unique passwords across multiple accounts?” The thing about this latest Sony exploit is that it included data from multiple apparently independent locations within the organisation and as we saw earlier on, the dump LulzSec provided consists of several different data sources.

Of particular interest in those data sources are the “Beauty” and “Delboca” files as they contain almost all the accounts with a pretty even split between them. They also contain well over 2,000 accounts with the same email address, i.e. someone has registered on both databases.
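Measuring that reuse is essentially a join on email address; a minimal sketch with invented records (the real files were imported into SQL Server, and these field names are my assumption):

```python
# Hypothetical extracts from the two tables: email -> password.
beauty = {"a@example.com": "ginger", "b@example.com": "sunshine",
          "c@example.com": "buster"}
delboca = {"a@example.com": "ginger", "b@example.com": "tigger",
           "d@example.com": "peanut"}

# Accounts registered on both systems, and how many kept one password.
common = beauty.keys() & delboca.keys()
reused = sum(1 for email in common if beauty[email] == delboca[email])
print(f"{reused} of {len(common)} shared accounts reused their password")
```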

So how rampant is password reuse between these two systems? Let’s take a look:

*lh6.ggpht.com/-MYktDCqCc7w/Texd-n8jdmI/AAAAAAAACZc/8rRAITVF8W8/image_thumb9.png?imgmax=800​
_*92% of passwords were reused across both systems*_. That’s a pretty damning indictment of the whole “unique password” mantra. Is the situation really this bad? Or are the figures skewed by folks perhaps thinking “Sony is Sony” and being a little relaxed with their reuse?

Let’s make it really interesting and compare accounts against Gawker. The internet being what it is there will always be the full Gawker database floating around out there and a quick Google search easily discovers live torrents. Gnosis (the group behind the Gawker breach) was a bit more generous than LulzSec and provided over 188,000 accounts for us to take a look at.

Although there were only 88 email addresses found in common with Sony (I had thought it might be a bit higher but then again, they’re pretty independent fields), the results are still very interesting:

*lh5.ggpht.com/-XzTtZsDqzXA/Texd_-fJyJI/AAAAAAAACZk/ZFTe3KtKrm0/image_thumb10.png?imgmax=800​
_*Two thirds of people with accounts at both Sony and Gawker reused their passwords*_. Now I’m not sure how much crossover there was timeframe wise in terms of when the Gawker accounts were created versus when the Sony ones were. It’s quite possible the Sony accounts came after the Gawker breach (remember this was six months ago now), and people got a little wise to the non-unique risk. But whichever way you look at it, there’s an awful lot of reuse going on here.

What really strikes me in this case is that between these two systems we have a couple of hundred thousand email addresses, usernames (the Gawker dump included these) and passwords. Based on the finding above, there’s a statistically good chance that the majority of them will work with other websites. How many Gmail or eBay or Facebook accounts are we holding the keys to here? And of course “we” is a bit misleading because anyone can grab these off the net right now. Scary stuff.

*Putting it in an exploit context*
When an entire database is compromised and all the passwords are just sitting there in plain text, the only thing saving customers of the service is their password uniqueness. Forget about rainbow tables and brute force – we’ll come back to that – the one thing which stops the problem becoming any worse for them is that it’s the only place those credentials appear. Of course we know that both from the findings above and many other online examples, password reuse is the norm rather than the exception.

But what if the passwords in the database were hashed? Not even salted, just hashed? How vulnerable would the passwords have been to a garden variety rainbow attack? It’s pretty easy to get your hands on a rainbow table of hashed passwords containing between one and nine lowercase and numeric characters (RainbowCrack is a good place to start), so how many of the Sony passwords would easily fall?

*lh4.ggpht.com/-0VKd9U9h2qE/TexeBVOyngI/AAAAAAAACZs/cURhma9e-mQ/image_thumb11%25255B1%25255D.png?imgmax=800​
*82% of passwords would easily fall to a basic rainbow table attack*. Not good, but you can see why the rainbow table approach can be so effective, not so much because of its ability to make smart use of the time-memory trade-off scenario, but simply because it only needs to work against a narrow character set of very limited length to achieve a high success rate.
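Checking which passwords fall inside that narrow scope is a one-line filter; a sketch assuming the table parameters described above (lowercase letters and digits, one to nine characters):

```python
import re

# A lowercase-alphanumeric rainbow table covering lengths 1 through 9.
in_table_scope = re.compile(r"[a-z0-9]{1,9}").fullmatch

for pw in ["seinfeld", "abc123", "dallascowboys", "1qazZAQ!"]:
    verdict = "falls" if in_table_scope(pw) else "out of scope"
    print(f"{pw}: {verdict}")
```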

And if the passwords were salted before the hash is applied? Well, more than a third of the passwords were easily found in a common dictionary so it’s just a matter of having the compute power to brute force them and repeat the salt plus hash process. It may not be a trivial exercise, but there’s a very high probability of a significant portion of the passwords being exposed.
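To see why salting pushes the attacker into per-account brute force, here's a minimal illustration (SHA-256 chosen arbitrarily for the sketch; the article doesn't prescribe an algorithm):

```python
import hashlib
import os

def hash_password(password, salt):
    """Hash a password together with a per-account random salt."""
    return hashlib.sha256(salt + password.encode()).hexdigest()

# The same weak password hashes differently under different salts,
# so one precomputed table can no longer crack every account at once.
salt_a, salt_b = os.urandom(16), os.urandom(16)
print(hash_password("seinfeld", salt_a) == hash_password("seinfeld", salt_b))
```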

*Summary*

None of this is overly surprising, although it remains alarming. We know passwords are too short, too simple, too predictable and too much like the other ones the individual has created in other locations. The bit which did take me aback was the extent to which passwords conformed to very predictable patterns: only using alphanumeric characters, being 10 characters or less, and having a much better than average chance of being the same as other passwords the user has created on totally independent systems.

Sony has clearly screwed up big time here, no doubt. The usual process with these exploits is to berate the responsible organisation for only using MD5 or because they didn’t salt the password before hashing, but to not even attempt to obfuscate passwords and simply store them in the clear? Wow.

But the bigger story here, at least to my eye, is that users continue to apply lousy password practices. Sony’s breach is Sony’s fault, no doubt, but a whole bunch of people have made the situation far worse than it needs to be through reuse. Next week when another Sony database is exposed (it’s a pretty safe bet based on recent form), even if an attempt has been made to secure passwords, there’s a damn good chance a significant portion of them will be exposed anyway. And that is simply the fault of the end users.

Conclusion? Well, I’ll simply draw back to a previous post and say it again: The only secure password is the one you can’t remember.



_There are loads of pending articles ATM, I'll publish them whenever I'm free._​


----------



## Vyom (Jun 10, 2011)

*Why Time Flies*​


Spoiler



*cache.gawkerassets.com/assets/images/4/2011/02/clock.jpg​
Time flies when you're having fun. But you're at work, and work sucks. So how is it 5:00 already?

When we talk about "losing time," we aren't referring to that great night out, or that week of wonderful vacation, or the three-hour film that honestly didn't feel like more than an hour. No, when we fret about not having enough time, or wonder where exactly all those hours went, we're talking about mundane things. The workday. A lazy, unremarkable Sunday. Days when we gave time no apparent reason to fly, and it flew anyway.

Why does that happen? And where did all the time go? The secret lies in your brain's ticking clock—an elusive, inexact, and easily ignorable clock.

First of all, yes

In understanding any complex issue, especially a psychological one, intuition doesn't usually get us too far. As often as you can scrabble together a theory about how the mind works, a man in a lab coat will adjust his glasses, tilt forward his brow, and deliver a carefully intoned, "Actually..."

But not today. *Most of what you think you know about the perception of time is true.*

*Read More...*


----------



## sygeek (Jun 10, 2011)

^Nice post, though I read it a few months ago.


----------



## sygeek (Jun 12, 2011)

*The Internet Is My Religion*

*The Internet Is My Religion​*


Spoiler



Today, I was lucky enough to attend the second day of sessions at Personal Democracy Forum. I didn’t really know what I was getting myself into. As a social web / identity junkie, I was excited to see Vivek Kundra, Jay Rosen, Dan Gillmor, and Doc Searls. I hadn’t heard of many of the other presenters, including one whose talk would be the most inspiring I had ever seen on a live stage.

As Jim Gilliam took the stage, his slightly nervous, ever-so-geeky sensibility betrayed no signs of the passion, earnestness, and magnificence with which he would deliver what can only be described as a modern epic: his life story.
*Watch it now:*
*easycaptures.com/fs/uploaded/463/3982364861.jpg​

*[Don't read on unless you have watched the video. The rest of this post probably won't make much sense.]*

Apologies for the long quote, but I find his closing words incredibly profound [my bolding]:


> As I was prepping for the surgery, I wasn’t thinking about Jesus, or whether my heart would start beating again after they stopped it, or whether I would go to heaven if it didn’t. *I was thinking about all of the people who had gotten me here. I owed every moment of my life to countless people I would never meet. Tomorrow, that interconnectedness would be represented in my own physical body – three different DNAs: individually they were useless, but together, they would equal one functioning human*. What an incredible debt to repay! I didn’t even know where to start.
> 
> And that’s when I truly found God. God is just what happens when humanity is connected. Humanity connected is God. There was no way I would ever repay this debt. It was only by the grace of God – your grace, that I would be saved. The truth is we all have this same cross to bear. We all owe every moment of our lives to countless people we will never meet. Whether it’s the soldiers who give us freedom because they fight for our country, or the surgeons who give us the cures that keep us alive. We all owe every moment of our lives to each other. We are all connected. We are all in debt to each other.
> 
> ...



The audience rose in a standing ovation, twice. A few of the reactions:


> You know it’s an amazing talk when everyone looks up from their computer and stops working to pay attention. #pdf11 – @katieharbath
> 
> Standing ovation for @jgilliam at #PDF11, not a dry eye in the house – @doctorow


As I walked back to the office from the Skirball Center this afternoon, I found myself thinking through what his message means to me, and why I was so moved by his words. Working at betaworks, I am confronted with and fascinated daily by the creative opportunities on the Web – opportunities to change the way that we connect, communicate, share, learn, discover, live, and grow. Technology is only as good as the people who wield it, so perhaps I’m a bit idealistic and naive in my boundless optimism, but I am consistently awestruck at the power of the Web as a creative force.

I’m not a religious person, but I do believe there is something humbling about the act of creation – whether your form of creation is art, software, ideas, words, music – there is something about the act of creation that is worth striving for, worth sacrificing for, worth living for. Regardless of your view of her politics, Ayn Rand spoke to this notion beautifully:


> “Whether it’s a symphony or a coal mine, all work is an act of creating and comes from the same source: from an inviolate capacity to see through one’s own eyes . . . which means: the capacity to see, to connect and to make what had not been seen, connected and made before.” _– Ch. II, The Utopia of Greed, Atlas Shrugged_



The Web – at its simplest, an open and generally accessible medium for two-way connectivity – bridges creative energy irrespective of geography, socioeconomic status, field of study, and language. It enables and even encourages the collision of ideas, problem statements, inspirations, and solutions. As Steven Johnson offers in his fantastic book, Where Good Ideas Come From, “good ideas are not conjured out of thin air; they are built out of a collection of existing parts, the composition of which expands (and, occasionally, contracts) over time.” He might as well be describing the Web.

The Internet is a medium capable of unlocking and combining the creative energies of Earth’s seven billion in a way never before imaginable.  Through the near-infinite scale with which it powers human connectivity,  the Internet has shown in just a few short years its ability to enable anything from a collection of the world’s information, to a revolution, to, in the case of Jim Gilliam, life itself.

I’m so excited to be a small part of what can only be called a movement. I’m excited to build, I’m excited to change, and, perhaps most critically, I’m excited to defend.


----------



## sygeek (Jun 16, 2011)

*Write​*


Spoiler



Yesterday was my 49th birthday. By fortuitous circumstance, I spotted an item on Hacker News explaining that reputation on Stack Overflow seems to rise with age. I don’t have very much Stack Overflow reputation, but I do have a little Hacker News karma and over the years I’ve written a few articles that made it to the front page of reddit, programming.reddit.com, and Hacker News.

Somebody suggested that age was my secret for garnering reputation and writing well. I don’t think so. Here’s my secret, here’s what I think I do to get reputation, and what I think may work for you:

Write.

That’s it. That’s everything. Just write. If you need more words, the secret to internet reputation is to write more. If you aren’t writing now, start writing. If you are writing now, write more.

Now some of you want more exposition, so for entertainment purposes only, I’ll explain why I think this is the case. But even if I’m wrong about why it’s the case, I’m sure I’m right that it is the case. So write.

Now here’s why I think writing more is the right strategy. The wrong strategy is to write less often but increase the quality.

This is a wrong strategy because it is based on a wrong assumption, namely that there’s a big tradeoff between quality and quantity. I agree that given more time, I can polish an essay. I can fix typos, tighten things up, clarify things. That’s very true, and if you are talking about the difference between one essay a day done well and three done poorly, I’ll buy that you are already writing enough if you write one a day, and you are better off getting the spelling right than writing two more unpolished essays.

But in quantities of less than one essay a day or one essay a week, the choice between writing more essays and writing higher quality essays is a false dichotomy. In fact, you may find that practice writing improves your writing, so writing more often leads to writing with higher quality. You also get nearly instantaneous feedback on the Internet, so the more you write, the more you learn about what works and what doesn’t work when you write.

Now that I’ve explained why I think writing less often is the wrong strategy, I will explain how writing for the Internet rewards writing more often. Writing on the Internet is nothing like writing on dead trees. For various legacy reasons, writing on dead trees involves writing books. The entire industry is built around long feedback cycles. It’s very expensive to get things wrong up front, so the process is optimized around doing it right the first time, with editors and proof-readers and what-not all conspiring to delay publishing your words where people can read them.

Worse, the feedback loop is appalling. What are you supposed to do with a bad review on Amazon.com? Incorporate it into the second edition of your masterpiece?

Speaking of masterpieces, that’s the other problem. Since books are what sell, if you want to write on dead trees, you have to write books. A book is a Big Thing, involving a lot of Planning. And structure. And organization. It demands a quality approach. Books are the “Big Design Up Front” poster children for writing.

Essays, rants, opinions… If writing book is Big Design Up Front, blogging and commenting is Cowboy Coding. A book is a Cathedral, a blog is a Bazaar. And in a good way! You get feedback faster. It’s the ultimate in Release Early, Release Often. You have an idea, you write it, you get feedback, you edit.

I am unapologetic about editing my comments and essays. Some criticize me for retracting my words when faced with a good argument. I say, **** You, this is not a debate, this is a process for generating and refining good ideas. I lie, of course, I have never said that. I actually say “Thank You!” Or I try. When I fail to be gracious in accepting criticism, that is my failing. The process of releasing ideas and refining them in the spotlight is one I value and think is a win for everyone.

Another problem with a book is that it’s One Big Thing. Very few book reviews say “Chapter two is a gem, buy the book for this and ignore chapter six, the author is confused.” Most just say “He’s an idiot, chapter six is proof of that.”

A blog is not One Big Thing. Many people say my blog is worth reading. They are probably wrong: I have had many popular essays. But for every “hit,” I have had an outrageous number of misses. If you read everything I wrote from 2004 to now, you’d be amazed I get any work in this industry. What people mean is, my good stuff is worth reading.

That’s the magic of the Internet. Thanks to Twitter and Hacker News and whatever else, if you write a good thing, it gets judged on its own. You can write 99 failures for every success, but you are judged by your best work, not your worst.

And let me tell you something about my Best Work: I often think I am writing something Important, something They’ll Remember Me For. And it sinks without a trace. A recent essay on the value of planning and the unimportance of plans comes to mind.

And then a day later I’ll dash off a rant based on a simple idea or insight, and the next thing I know it’s #1 on Hacker News. If I was writing a book, I’d do a terrible job, because my nose for what people want is broken. When I write essays, I don’t care, I write everything and I let Hacker News and Twitter sort out the wheat from my chaff.

If you have a good nose, a great instinct, maybe you can write less. But even if you don’t, you write more and you crowd-source the nose for you. And thanks to the fine granularity of essays and the willingness of the crowd to ignore your misses and celebrate your hits, your reputation grows inexorably whenever you sit down and simply write.

So write.

(discuss)

p.s. Here's an interesting counter-point.


----------



## sygeek (Jun 17, 2011)

*How I almost got a Criminal Record*​


Spoiler



Some April morning last year I received a letter from the local police department, bureau of criminal investigation. “Whoops”, I thought. What could have happened there? Had I forgot to pay for a speeding ticket? I opened the letter. It said I was the main suspect in a case of “data destruction” and I was supposed to visit the police department as soon as possible to file a testimony.

Wait. What is “data destruction”? Well, I had to translate it, but, I am from Austria where there is a paragraph (§126a, StGB) that basically says the following: If you modify, delete or destroy data that is not yours, you may get a prison sentence of six months or a fine. There are probably similar laws in other countries.

But how could I have done that? I wasn’t aware of any situation in which I could have deleted anyone’s data. I work as a sysadmin for a small consulting company, but it seemed implausible that they would charge me with the above mentioned.

*What I supposedly did wrong*

So I went to the police department. I was terrified because I had absolutely no idea what I had done wrong. The police officer however was very friendly and asked me to take a seat. He wanted to know if I knew a person X from Tyrol. Of course I didn’t. That was more than 500 kilometers away. Turns out, I hacked their Facebook profile.

Here’s the summary of what I was being charged with:

- Creating a fake e-mail address impersonating the victim
- Using this e-mail address to hack into their Facebook account
- Deleting all data from the Facebook profile and then changing the e-mail address and password
- Deleting the fake e-mail address
All that had happened one Sunday evening. I recall being at home with my girlfriend, watching TV. I like to keep a detailed schedule in my calendar, which is how I knew. And I knew I was absolutely innocent. But how did they think it was me?

*How I became suspect*

Well, at that time I had an iPhone. I also had a mobile broadband contract with a major telephone company, let’s call them Company X. The police officer told me that upon investigation, they positively identified the IP address under which the e-mail address was created. It was the IP address assigned to my iPhone that evening.

That seemed impossible. There were several pieces of evidence showing that I could never have done this:

- We have no 3G reception in our apartment.
- The e-mail address was deleted five minutes after being created. Nobody is that quick on an iPhone.
- The e-mail provider doesn’t offer the feature to register an address on their mobile site.
- You can’t change Facebook account details on their mobile interface either. I know, I could have used the non-mobile site, but I wouldn’t have been that fast.

All that I told the police officer. He said he understood and jotted down some notes. They would contact me and I shouldn’t have to worry. At least he was on my side. But now I was there, main suspect in a case I never wanted to be in. The real offender was still out there.

What I did next? I called the telephone company.

*Contacting the Telco*

Just like most of the time when you call your ISP/Telco, they don’t really care what you have to say. I probably talked to ten different people. Chances are you have more knowledge about computers and how the internet works than they do. That’s why it didn’t surprise me that I was told things like:


- “That’s absolutely impossible”
- “If they say it’s your IP, you’re guilty!”
- “Let me get a supervisor” (hung up after a minute of elevator music)
- “I really don’t know what this is all about”

At that point I just gave up. I had already contacted a lawyer who would be prepared to go to court with me if necessary. As a student without proper insurance, it didn’t help that I had to pay him in advance just to get hold of the case files and take a look at them. I waited and waited, and then I got a phone call.

*How everything sorted itself out*

It was the legal department of the Telco. A lady was calling, and the first thing she did was to deeply apologize. She told me what had happened: Normally, when the prosecutor asks for the IP address and the corresponding owner, they have to fill out a form containing both information, which is then sent to the authorities. In my case they had gotten the IP address from the e-mail provider and the employee’s job was to match it against their records. The flaw could not be simpler: She had just swapped two digits in the IP address.
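A quick illustration (editor's sketch, not part of the original article) of why such a typo slips through: transposing two digits in a dotted-quad address usually yields another syntactically valid address, so a purely automated lookup would never flag the error. The addresses below are made up.

```python
import ipaddress

def transpose_digits(ip: str, i: int, j: int) -> str:
    """Swap two characters of an IP string, simulating a manual typo."""
    chars = list(ip)
    chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

# Hypothetical subscriber record: the real offender's address.
real_ip = "86.32.145.27"

# A clerk swapping two adjacent digits produces a different address.
typo_ip = transpose_digits(real_ip, 3, 4)  # "86.23.145.27"

# Both strings parse as valid IPv4 addresses, so nothing in the
# lookup pipeline raises an error -- only a human cross-check would.
assert ipaddress.ip_address(real_ip)
assert ipaddress.ip_address(typo_ip)
assert real_ip != typo_ip
print(real_ip, "->", typo_ip)
```

The only defense against this class of mistake is procedural: carrying the address through the pipeline by copy rather than retyping, or requiring a second person to verify the match before it reaches the prosecutor.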

As compensation they said I’d no longer have to pay the base fee – how generous! Luckily, they also agreed to pay my lawyer’s costs; I just forwarded his invoice to them. I think they were just scared that I would take them to court for wrongfully incriminating me.

A few weeks later the police officer contacted me. He also confirmed that the real offender was X’s ex-boyfriend, who probably just knew the password and wanted some payback.

*What we can learn from this*

One can clearly see from such an example that there are still some holes in the security of current data-retention policies. While governments have an understandable interest in storing communication data to allow effective criminal prosecution, the following should not be forgotten: No matter how perfect a system is, there is always the possibility of a weak implementation. Also, once the human factor comes into play, we can’t rely on the principles of an automated system anymore (even if it was flawless). To err is human, it seems. Luckily, I was forgiven in that case.

So, should you ever get into a situation where you are wrongfully suspected, make sure to let people know that there is a possibility of an error, even if they tell you otherwise.


----------



## nisargshah95 (Jun 18, 2011)

SyGeek said:


> *How I almost got a Criminal Record*​
> 
> 
> Spoiler
> ...


This one's good.


----------



## Nipun (Jun 18, 2011)

SyGeek said:


> *How I almost got a Criminal Record*​



This is very good... umm... article. 


I have not read it till end. I will do it once I complete my HomeWork


----------



## sygeek (Jun 19, 2011)

*The Brain on Trial​*_By David Eagleman​_


Spoiler



*cdn.theatlantic.com/static/coma/images/issues/201107/neuroscience2.jpg​
_Advances in brain science are calling into question the volition behind many criminal acts. A leading neuroscientist describes how the foundations of our criminal-justice system are beginning to crumble, and proposes a new way forward for law and order._ 

On the steamy first day of August 1966, Charles Whitman took an elevator to the top floor of the University of Texas Tower in Austin. The 25-year-old climbed the stairs to the observation deck, lugging with him a footlocker full of guns and ammunition. At the top, he killed a receptionist with the butt of his rifle. Two families of tourists came up the stairwell; he shot at them at point-blank range. Then he began to fire indiscriminately from the deck at people below. The first woman he shot was pregnant. As her boyfriend knelt to help her, Whitman shot him as well. He shot pedestrians in the street and an ambulance driver who came to rescue them.

The evening before, Whitman had sat at his typewriter and composed a suicide note: 


> I don’t really understand myself these days. I am supposed to be an average reasonable and intelligent young man. However, lately (I can’t recall when it started) I have been a victim of many unusual and irrational thoughts.


By the time the police shot him dead, Whitman had killed 13 people and wounded 32 more. The story of his rampage dominated national headlines the next day. And when police went to investigate his home for clues, the story became even stranger: in the early hours of the morning on the day of the shooting, he had murdered his mother and stabbed his wife to death in her sleep. 


> It was after much thought that I decided to kill my wife, Kathy, tonight … I love her dearly, and she has been as fine a wife to me as any man could ever hope to have. I cannot rationa[l]ly pinpoint any specific reason for doing this …


Along with the shock of the murders lay another, more hidden, surprise: the juxtaposition of his aberrant actions with his unremarkable personal life. Whitman was an Eagle Scout and a former marine, studied architectural engineering at the University of Texas, and briefly worked as a bank teller and volunteered as a scoutmaster for Austin’s Boy Scout Troop 5. As a child, he’d scored 138 on the Stanford-Binet IQ test, placing in the 99th percentile. So after his shooting spree from the University of Texas Tower, everyone wanted answers.

For that matter, so did Whitman. He requested in his suicide note that an autopsy be performed to determine if something had changed in his brain—because he suspected it had. 


> I talked with a Doctor once for about two hours and tried to convey to him my fears that I felt [overcome by] overwhelming violent impulses. After one session I never saw the Doctor again, and since then I have been fighting my mental turmoil alone, and seemingly to no avail.


Whitman’s body was taken to the morgue, his skull was put under the bone saw, and the medical examiner lifted the brain from its vault. He discovered that Whitman’s brain harbored a tumor the diameter of a nickel. This tumor, called a glioblastoma, had blossomed from beneath a structure called the thalamus, impinged on the hypothalamus, and compressed a third region called the amygdala. The amygdala is involved in emotional regulation, especially of fear and aggression. By the late 1800s, researchers had discovered that damage to the amygdala caused emotional and social disturbances. In the 1930s, the researchers Heinrich Klüver and Paul Bucy demonstrated that damage to the amygdala in monkeys led to a constellation of symptoms, including lack of fear, blunting of emotion, and overreaction. Female monkeys with amygdala damage often neglected or physically abused their infants. In humans, activity in the amygdala increases when people are shown threatening faces, are put into frightening situations, or experience social phobias. Whitman’s intuition about himself—that something in his brain was changing his behavior—was spot-on.

Stories like Whitman’s are not uncommon: legal cases involving brain damage crop up increasingly often. As we develop better technologies for probing the brain, we detect more problems, and link them more easily to aberrant behavior. Take the 2000 case of a 40-year-old man we’ll call Alex, whose sexual preferences suddenly began to transform. He developed an interest in child pornography—and not just a little interest, but an overwhelming one. He poured his time into child-pornography Web sites and magazines. He also solicited prostitution at a massage parlor, something he said he had never previously done. He reported later that he’d wanted to stop, but “the pleasure principle overrode” his restraint. He worked to hide his acts, but subtle sexual advances toward his prepubescent stepdaughter alarmed his wife, who soon discovered his collection of child pornography. He was removed from his house, found guilty of child molestation, and sentenced to rehabilitation in lieu of prison. In the rehabilitation program, he made inappropriate sexual advances toward the staff and other clients, and was expelled and routed toward prison.

At the same time, Alex was complaining of worsening headaches. The night before he was to report for prison sentencing, he couldn’t stand the pain anymore, and took himself to the emergency room. He underwent a brain scan, which revealed a massive tumor in his orbitofrontal cortex. Neurosurgeons removed the tumor. Alex’s sexual appetite returned to normal.

The year after the brain surgery, his pedophilic behavior began to return. The neuroradiologist discovered that a portion of the tumor had been missed in the surgery and was regrowing—and Alex went back under the knife. After the removal of the remaining tumor, his behavior again returned to normal.

When your biology changes, so can your decision-making and your desires. The drives you take for granted (“I’m a heterosexual/homosexual,” “I’m attracted to children/adults,” “I’m aggressive/not aggressive,” and so on) depend on the intricate details of your neural machinery. Although acting on such drives is popularly thought to be a free choice, the most cursory examination of the evidence demonstrates the limits of that assumption.

Alex’s sudden pedophilia illustrates that hidden drives and desires can lurk undetected behind the neural machinery of socialization. When the frontal lobes are compromised, people become disinhibited, and startling behaviors can emerge. Disinhibition is commonly seen in patients with frontotemporal dementia, a tragic disease in which the frontal and temporal lobes degenerate. With the loss of that brain tissue, patients lose the ability to control their hidden impulses. To the frustration of their loved ones, these patients violate social norms in endless ways: shoplifting in front of store managers, removing their clothes in public, running stop signs, breaking out in song at inappropriate times, eating food scraps found in public trash cans, being physically aggressive or sexually transgressive. Patients with frontotemporal dementia commonly end up in courtrooms, where their lawyers, doctors, and embarrassed adult children must explain to the judge that the violation was not the perpetrator’s fault, exactly: much of the brain has degenerated, and medicine offers no remedy. Fifty-seven percent of frontotemporal-dementia patients violate social norms, as compared with only 27 percent of Alzheimer’s patients.

Changes in the balance of brain chemistry, even small ones, can also cause large and unexpected changes in behavior. Victims of Parkinson’s disease offer an example. In 2001, families and caretakers of Parkinson’s patients began to notice something strange. When patients were given a drug called pramipexole, some of them turned into gamblers. And not just casual gamblers, but pathological gamblers. These were people who had never gambled much before, and now they were flying off to Vegas. One 68-year-old man amassed losses of more than $200,000 in six months at a series of casinos. Some patients became consumed with Internet poker, racking up unpayable credit-card bills. For several, the new addiction reached beyond gambling, to compulsive eating, excessive alcohol consumption, and hypersexuality.

What was going on? Parkinson’s involves the loss of brain cells that produce a neurotransmitter known as dopamine. Pramipexole works by impersonating dopamine. But it turns out that dopamine is a chemical doing double duty in the brain. Along with its role in motor commands, it also mediates the reward systems, guiding a person toward food, drink, mates, and other things useful for survival. Because of dopamine’s role in weighing the costs and benefits of decisions, imbalances in its levels can trigger gambling, overeating, and drug addiction—behaviors that result from a reward system gone awry. Physicians now watch for these behavioral changes as a possible side effect of drugs like pramipexole. Luckily, the negative effects of the drug are reversible—the physician simply lowers the dosage, and the compulsive gambling goes away.

The lesson from all these stories is the same: human behavior cannot be separated from human biology. If we like to believe that people make free choices about their behavior (as in, “I don’t gamble, because I’m strong-willed”), cases like Alex the pedophile, the frontotemporal shoplifters, and the gambling Parkinson’s patients may encourage us to examine our views more carefully. Perhaps not everyone is equally “free” to make socially appropriate choices.

Does the discovery of Charles Whitman’s brain tumor modify your feelings about the senseless murders he committed? Does it affect the sentence you would find appropriate for him, had he survived that day? Does the tumor change the degree to which you consider the killings “his fault”? Couldn’t you just as easily be unlucky enough to develop a tumor and lose control of your behavior?

On the other hand, wouldn’t it be dangerous to conclude that people with a tumor are free of guilt, and that they should be let off the hook for their crimes?

As our understanding of the human brain improves, juries are increasingly challenged with these sorts of questions. When a criminal stands in front of the judge’s bench today, the legal system wants to know whether he is blameworthy. Was it his fault, or his biology’s fault?

I submit that this is the wrong question to be asking. The choices we make are inseparably yoked to our neural circuitry, and therefore we have no meaningful way to tease the two apart. The more we learn, the more the seemingly simple concept of blameworthiness becomes complicated, and the more the foundations of our legal system are strained.

If I seem to be heading in an uncomfortable direction—toward letting criminals off the hook—please read on, because I’m going to show the logic of a new argument, piece by piece. The upshot is that we can build a legal system more deeply informed by science, in which we will continue to take criminals off the streets, but we will customize sentencing, leverage new opportunities for rehabilitation, and structure better incentives for good behavior. Discoveries in neuroscience suggest a new way forward for law and order—one that will lead to a more cost-effective, humane, and flexible system than the one we have today. When modern brain science is laid out clearly, it is difficult to justify how our legal system can continue to function without taking what we’ve learned into account.

Many of us like to believe that all adults possess the same capacity to make sound choices. It’s a charitable idea, but demonstrably wrong. People’s brains are vastly different.

Who you even have the possibility to be starts at conception. If you think genes don’t affect how people behave, consider this fact: if you are a carrier of a particular set of genes, the probability that you will commit a violent crime is four times as high as it would be if you lacked those genes. You’re three times as likely to commit robbery, five times as likely to commit aggravated assault, eight times as likely to be arrested for murder, and 13 times as likely to be arrested for a sexual offense. The overwhelming majority of prisoners carry these genes; 98.1 percent of death-row inmates do. These statistics alone indicate that we cannot presume that everyone is coming to the table equally equipped in terms of drives and behaviors.

And this feeds into a larger lesson of biology: we are not the ones steering the boat of our behavior, at least not nearly as much as we believe. Who we are runs well below the surface of our conscious access, and the details reach back in time to before our birth, when the meeting of a sperm and an egg granted us certain attributes and not others. Who we can be starts with our molecular blueprints—a series of alien codes written in invisibly small strings of acids—well before we have anything to do with it. Each of us is, in part, a product of our inaccessible, microscopic history. By the way, as regards that dangerous set of genes, you’ve probably heard of them. They are summarized as the Y chromosome. If you’re a carrier, we call you a male.

Genes are part of the story, but they’re not the whole story. We are likewise influenced by the environments in which we grow up. Substance abuse by a mother during pregnancy, maternal stress, and low birth weight all can influence how a baby will turn out as an adult. As a child grows, neglect, physical abuse, and head injury can impede mental development, as can the physical environment. (For example, the major public-health movement to eliminate lead-based paint grew out of an understanding that ingesting lead can cause brain damage, making children less intelligent and, in some cases, more impulsive and aggressive.) And every experience throughout our lives can modify genetic expression—activating certain genes or switching others off—which in turn can inaugurate new behaviors. In this way, genes and environments intertwine.

When it comes to nature and nurture, the important point is that we choose neither one. We are each constructed from a genetic blueprint, and then born into a world of circumstances that we cannot control in our most-formative years. The complex interactions of genes and environment mean that all citizens—equal before the law—possess different perspectives, dissimilar personalities, and varied capacities for decision-making. The unique patterns of neurobiology inside each of our heads cannot qualify as choices; these are the cards we’re dealt.

Because we did not choose the factors that affected the formation and structure of our brain, the concepts of free will and personal responsibility begin to sprout question marks. Is it meaningful to say that Alex made bad choices, even though his brain tumor was not his fault? Is it justifiable to say that the patients with frontotemporal dementia or Parkinson’s should be punished for their bad behavior?

It is problematic to imagine yourself in the shoes of someone breaking the law and conclude, “Well, I wouldn’t have done that”—because if you weren’t exposed to in utero cocaine, lead poisoning, and physical abuse, and he was, then you and he are not directly comparable. You cannot walk a mile in his shoes.

The legal system rests on the assumption that we are “practical reasoners,” a term of art that presumes, at bottom, the existence of free will. The idea is that we use conscious deliberation when deciding how to act—that is, in the absence of external duress, we make free decisions. This concept of the practical reasoner is intuitive but problematic.

The existence of free will in human behavior is the subject of an ancient debate. Arguments in support of free will are typically based on direct subjective experience (“I feel like I made the decision to lift my finger just now”). But evaluating free will requires some nuance beyond our immediate intuitions.

Consider a decision to move or speak. It feels as though free will leads you to stick out your tongue, or scrunch up your face, or call someone a name. But free will is not required to play any role in these acts. People with Tourette’s syndrome, for instance, suffer from involuntary movements and vocalizations. A typical Touretter may stick out his tongue, scrunch up his face, or call someone a name—all without choosing to do so.

We immediately learn two things from the Tourette’s patient. First, actions can occur in the absence of free will. Second, the Tourette’s patient has no free won’t. He cannot use free will to override or control what subconscious parts of his brain have decided to do. What the lack of free will and the lack of free won’t have in common is the lack of “free.” Tourette’s syndrome provides a case in which the underlying neural machinery does its thing, and we all agree that the person is not responsible.

This same phenomenon arises in people with a condition known as chorea, for whom actions of the hands, arms, legs, and face are involuntary, even though they certainly look voluntary: ask such a patient why she is moving her fingers up and down, and she will explain that she has no control over her hand. She cannot not do it. Similarly, some split-brain patients (who have had the two hemispheres of the brain surgically disconnected) develop alien-hand syndrome: while one hand buttons up a shirt, the other hand works to unbutton it. When one hand reaches for a pencil, the other bats it away. No matter how hard the patient tries, he cannot make his alien hand not do what it’s doing. The movements are not “his” to freely start or stop.

Unconscious acts are not limited to unintended shouts or wayward hands; they can be surprisingly sophisticated. Consider Kenneth Parks, a 23-year-old Canadian with a wife, a five-month-old daughter, and a close relationship with his in-laws (his mother-in-law described him as a “gentle giant”). Suffering from financial difficulties, marital problems, and a gambling addiction, he made plans to go see his in-laws to talk about his troubles.

In the wee hours of May 23, 1987, Kenneth arose from the couch on which he had fallen asleep, but he did not awaken. Sleepwalking, he climbed into his car and drove the 14 miles to his in-laws’ home. He broke in, stabbed his mother-in-law to death, and assaulted his father-in-law, who survived. Afterward, he drove himself to the police station. Once there, he said, “I think I have killed some people … My hands,” realizing for the first time that his own hands were severely cut.

Over the next year, Kenneth’s testimony was remarkably consistent, even in the face of attempts to lead him astray: he remembered nothing of the incident. Moreover, while all parties agreed that Kenneth had undoubtedly committed the murder, they also agreed that he had no motive. His defense attorneys argued that this was a case of killing while sleepwalking, known as homicidal somnambulism.

Although critics cried “Faker!,” sleepwalking is a verifiable phenomenon. On May 25, 1988, after lengthy consideration of electrical recordings from Kenneth’s brain, the jury concluded that his actions had indeed been involuntary, and declared him not guilty.

As with Tourette’s sufferers, split-brain patients, and those with choreic movements, Kenneth’s case illustrates that high-level behaviors can take place in the absence of free will. Like your heartbeat, breathing, blinking, and swallowing, even your mental machinery can run on autopilot. The crux of the question is whether all of your actions are fundamentally on autopilot or whether some little bit of you is “free” to choose, independent of the rules of biology.

This has always been the sticking point for philosophers and scientists alike. After all, there is no spot in the brain that is not densely interconnected with—and driven by—other brain parts. And that suggests that no part is independent and therefore “free.” In modern science, it is difficult to find the gap into which to slip free will—the uncaused causer—because there seems to be no part of the machinery that does not follow in a causal relationship from the other parts.

Free will may exist (it may simply be beyond our current science), but one thing seems clear: if free will does exist, it has little room in which to operate. It can at best be a small factor riding on top of vast neural networks shaped by genes and environment. In fact, free will may end up being so small that we eventually think about bad decision-making in the same way we think about any physical process, such as diabetes or lung disease.

The study of brains and behaviors is in the midst of a conceptual shift. Historically, clinicians and lawyers have agreed on an intuitive distinction between neurological disorders (“brain problems”) and psychiatric disorders (“mind problems”). As recently as a century ago, a common approach was to get psychiatric patients to “toughen up,” through deprivation, pleading, or torture. Not surprisingly, this approach was medically fruitless. After all, while psychiatric disorders tend to be the product of more-subtle forms of brain pathology, they, too, are based in the biological details of the brain.

What accounts for the shift from blame to biology? Perhaps the largest driving force is the effectiveness of pharmaceutical treatments. No amount of threatening will chase away depression, but a little pill called fluoxetine often does the trick. Schizophrenic symptoms cannot be overcome by exorcism, but they can be controlled by risperidone. Mania responds not to talk or to ostracism, but to lithium. These successes, most of them introduced in the past 60 years, have underscored the idea that calling some disorders “brain problems” while consigning others to the ineffable realm of “the psychic” does not make sense. Instead, we have begun to approach mental problems in the same way we might approach a broken leg. The neuroscientist Robert Sapolsky invites us to contemplate this conceptual shift with a series of questions: 


> Is a loved one, sunk in a depression so severe that she cannot function, a case of a disease whose biochemical basis is as “real” as is the biochemistry of, say, diabetes, or is she merely indulging herself? Is a child doing poorly at school because he is unmotivated and slow, or because there is a neurobiologically based learning disability? Is a friend, edging towards a serious problem with substance abuse, displaying a simple lack of discipline, or suffering from problems with the neurochemistry of reward?


Acts cannot be understood separately from the biology of the actors—and this recognition has legal implications. Tom Bingham, Britain’s former senior law lord, once put it this way: 


> In the past, the law has tended to base its approach … on a series of rather crude working assumptions: adults of competent mental capacity are free to choose whether they will act in one way or another; they are presumed to act rationally, and in what they conceive to be their own best interests; they are credited with such foresight of the consequences of their actions as reasonable people in their position could ordinarily be expected to have; they are generally taken to mean what they say.
> 
> Whatever the merits or demerits of working assumptions such as these in the ordinary range of cases, it is evident that they do not provide a uniformly accurate guide to human behaviour.


The more we discover about the circuitry of the brain, the more we tip away from accusations of indulgence, lack of motivation, and poor discipline—and toward the details of biology. The shift from blame to science reflects our modern understanding that our perceptions and behaviors are steered by deeply embedded neural programs.

Imagine a spectrum of culpability. On one end, we find people like Alex the pedophile, or a patient with frontotemporal dementia who exposes himself in public. In the eyes of the judge and jury, these are people who suffered brain damage at the hands of fate and did not choose their neural situation. On the other end of the spectrum—the blameworthy side of the “fault” line—we find the common criminal, whose brain receives little study, and about whom our current technology might be able to say little anyway. The overwhelming majority of lawbreakers are on this side of the line, because they don’t have any obvious, measurable biological problems. They are simply thought of as freely choosing actors.

Such a spectrum captures the common intuition that juries hold regarding blameworthiness. But there is a deep problem with this intuition. Technology will continue to improve, and as we grow better at measuring problems in the brain, the fault line will drift into the territory of people we currently hold fully accountable for their crimes. Problems that are now opaque will open up to examination by new techniques, and we may someday find that many types of bad behavior have a basic biological explanation—as has happened with schizophrenia, epilepsy, depression, and mania.

Today, neuroimaging is a crude technology, unable to explain the details of individual behavior. We can detect only large-scale problems, but within the coming decades, we will be able to detect patterns at unimaginably small levels of the microcircuitry that correlate with behavioral problems. Neuroscience will be better able to say why people are predisposed to act the way they do. As we become more skilled at specifying how behavior results from the microscopic details of the brain, more defense lawyers will point to biological mitigators of guilt, and more juries will place defendants on the not-blameworthy side of the line.

This puts us in a strange situation. After all, a just legal system cannot define culpability simply by the limitations of current technology. Expert medical testimony generally reflects only whether we yet have names and measurements for a problem, not whether a problem exists. A legal system that declares a person culpable at the beginning of a decade and not culpable at the end is one in which culpability carries no clear meaning.

The crux of the problem is that it no longer makes sense to ask, “To what extent was it his biology, and to what extent was it him?,” because we now understand that there is no meaningful distinction between a person’s biology and his decision-making. They are inseparable.

While our current style of punishment rests on a bedrock of personal volition and blame, our modern understanding of the brain suggests a different approach. Blameworthiness should be removed from the legal argot. It is a backward-looking concept that demands the impossible task of untangling the hopelessly complex web of genetics and environment that constructs the trajectory of a human life.

Instead of debating culpability, we should focus on what to do, moving forward, with an accused lawbreaker. I suggest that the legal system has to become forward-looking, primarily because it can no longer hope to do otherwise. As science complicates the question of culpability, our legal and social policy will need to shift toward a different set of questions: How is a person likely to behave in the future? Are criminal actions likely to be repeated? Can this person be helped toward pro-social behavior? How can incentives be realistically structured to deter crime?

The important change will be in the way we respond to the vast range of criminal acts. Biological explanation will not exculpate criminals; we will still remove from the streets lawbreakers who prove overaggressive, underempathetic, and poor at controlling their impulses. Consider, for example, that the majority of known serial killers were abused as children. Does this make them less blameworthy? Who cares? It’s the wrong question. The knowledge that they were abused encourages us to support social programs to prevent child abuse, but it does nothing to change the way we deal with the particular serial murderer standing in front of the bench. We still need to keep him off the streets, irrespective of his past misfortunes. The child abuse cannot serve as an excuse to let him go; the judge must keep society safe.

Those who break social contracts need to be confined, but in this framework, the future is more important than the past. Deeper biological insight into behavior will foster a better understanding of recidivism—and this offers a basis for empirically based sentencing. Some people will need to be taken off the streets for a longer time (even a lifetime), because their likelihood of reoffense is high; others, because of differences in neural constitution, are less likely to recidivate, and so can be released sooner.

The law is already forward-looking in some respects: consider the leniency afforded a crime of passion versus a premeditated murder. Those who commit the former are less likely to recidivate than those who commit the latter, and their sentences sensibly reflect that. Likewise, American law draws a bright line between criminal acts committed by minors and those by adults, punishing the latter more harshly. This approach may be crude, but the intuition behind it is sound: adolescents command lesser skills in decision-making and impulse control than do adults; a teenager’s brain is simply not like an adult’s brain. Lighter sentences are appropriate for those whose impulse control is likely to improve naturally as adolescence gives way to adulthood.

Taking a more scientific approach to sentencing, case by case, could move us beyond these limited examples. For instance, important changes are happening in the sentencing of sex offenders. In the past, researchers have asked psychiatrists and parole-board members how likely specific sex offenders were to relapse when let out of prison. Both groups had experience with sex offenders, so predicting who was going straight and who was coming back seemed simple. But surprisingly, the expert guesses showed almost no correlation with the actual outcomes. The psychiatrists and parole-board members had only slightly better predictive accuracy than coin-flippers. This astounded the legal community.

So researchers tried a more actuarial approach. They set about recording dozens of characteristics of some 23,000 released sex offenders: whether the offender had unstable employment, had been sexually abused as a child, was addicted to drugs, showed remorse, had deviant sexual interests, and so on. Researchers then tracked the offenders for an average of five years after release to see who wound up back in prison. At the end of the study, they computed which factors best explained the reoffense rates, and from these and later data they were able to build actuarial tables to be used in sentencing.

Which factors mattered? Take, for instance, low remorse, denial of the crime, and sexual abuse as a child. You might guess that these factors would correlate with sex offenders’ recidivism. But you would be wrong: those factors offer no predictive power. How about antisocial personality disorder and failure to complete treatment? These offer somewhat more predictive power. But among the strongest predictors of recidivism are prior sexual offenses and sexual interest in children. When you compare the predictive power of the actuarial approach with that of the parole boards and psychiatrists, there is no contest: numbers beat intuition. In courtrooms across the nation, these actuarial tests are now used in presentencing to modulate the length of prison terms.
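The mechanics of applying such an actuarial table are simple enough to sketch. The factor names, point weights, and risk bands below are invented for illustration only, not the real instrument; the shape of the calculation (fixed points per factor, summed into a score, mapped to a risk band) is the point:

```python
# Hypothetical actuarial risk table: each factor contributes fixed points,
# and the total maps to a risk band used to modulate a sentence.
# Factor names and weights here are invented for illustration only.

POINTS = {
    "prior_sexual_offense": 3,        # strong predictor in the studies described
    "sexual_interest_in_children": 3,
    "antisocial_personality": 1,      # somewhat predictive
    "failed_treatment": 1,
    "low_remorse": 0,                 # intuitive, but empirically non-predictive
    "denial_of_crime": 0,
}

def risk_score(record: dict) -> int:
    """Sum the points for every factor present in the offender's record."""
    return sum(POINTS[factor] for factor, present in record.items() if present)

def risk_band(score: int) -> str:
    """Map a raw score to a coarse risk band (invented cutoffs)."""
    if score >= 5:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"

record = {
    "prior_sexual_offense": True,
    "sexual_interest_in_children": False,
    "antisocial_personality": True,
    "failed_treatment": False,
    "low_remorse": True,    # contributes nothing, matching the data
    "denial_of_crime": True,
}
score = risk_score(record)
print(score, risk_band(score))  # 4 moderate
```

Note that "low remorse" and "denial" sit in the record but carry zero weight: the table encodes what the numbers showed, not what intuition expects, which is exactly why it outperforms the parole boards.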

We will never know with certainty what someone will do upon release from prison, because real life is complicated. But greater predictive power is hidden in the numbers than people generally expect. Statistically based sentencing is imperfect, but it nonetheless allows evidence to trump folk intuition, and it offers customization in place of the blunt guidelines that the legal system typically employs. The current actuarial approaches do not require a deep understanding of genes or brain chemistry, but as we introduce more science into these measures—for example, with neuroimaging studies—the predictive power will only improve. (To make such a system immune to government abuse, the data and equations that compose the sentencing guidelines must be transparent and available online for anyone to verify.)

Beyond customized sentencing, a forward-thinking legal system informed by scientific insights into the brain will enable us to stop treating prison as a one-size-fits-all solution. To be clear, I’m not opposed to incarceration, and its purpose is not limited to the removal of dangerous people from the streets. The prospect of incarceration deters many crimes, and time actually spent in prison can steer some people away from further criminal acts upon their release. But that works only for those whose brains function normally. The problem is that prisons have become our de facto mental-health-care institutions—and inflicting punishment on the mentally ill usually has little influence on their future behavior. An encouraging trend is the establishment of mental-health courts around the nation: through such courts, people with mental illnesses can be helped while confined in a tailored environment. Cities such as Richmond, Virginia, are moving in this direction, for reasons of justice as well as cost-effectiveness. Sheriff C. T. Woody, who estimates that nearly 20 percent of Richmond’s prisoners are mentally ill, told CBS News, “The jail isn’t a place for them. They should be in a mental-health facility.” Similarly, many jurisdictions are opening drug courts and developing alternative sentences; they have realized that prisons are not as useful for solving addictions as are meaningful drug-rehabilitation programs.

A forward-thinking legal system will also parlay biological understanding into customized rehabilitation, viewing criminal behavior the way we understand other medical conditions such as epilepsy, schizophrenia, and depression—conditions that now allow the seeking and giving of help. These and other brain disorders find themselves on the not-blameworthy side of the fault line, where they are now recognized as biological, not demonic, issues.

Many people recognize the long-term cost-effectiveness of rehabilitating offenders instead of packing them into overcrowded prisons. The challenge has been the dearth of new ideas about how to rehabilitate them. A better understanding of the brain offers new ideas. For example, poor impulse control is characteristic of many prisoners. These people generally can express the difference between right and wrong actions, and they understand the disadvantages of punishment—but they are handicapped by poor control of their impulses. Whether as a result of anger or temptation, their actions override reasoned consideration of the future.

If it seems difficult to empathize with people who have poor impulse control, just think of all the things you succumb to against your better judgment. Alcohol? Chocolate cake? Television? It’s not that we don’t know what’s best for us, it’s simply that the frontal-lobe circuits representing long-term considerations can’t always win against short-term desire when temptation is in front of us.

With this understanding in mind, we can modify the justice system in several ways. One approach, advocated by Mark A. R. Kleiman, a professor of public policy at UCLA, is to ramp up the certainty and swiftness of punishment—for instance, by requiring drug offenders to undergo twice-weekly drug testing, with automatic, immediate consequences for failure—thereby not relying on distant abstraction alone. Similarly, economists have suggested that the drop in crime since the early 1990s has been due, in part, to the increased presence of police on the streets: their visibility shores up support for the parts of the brain that weigh long-term consequences.

We may be on the cusp of finding new rehabilitative strategies as well, affording people better control of their behavior, even in the absence of external authority. To help a citizen reintegrate into society, the ethical goal is to change him as little as possible while bringing his behavior into line with society’s needs. My colleagues and I are proposing a new approach, one that grows from the understanding that the brain operates like a team of rivals, with different neural populations competing to control the single output channel of behavior. Because it’s a competition, the outcome can be tipped. I call the approach “the prefrontal workout.”

The basic idea is to give the frontal lobes practice in squelching the short-term brain circuits. To this end, my colleagues Stephen LaConte and Pearl Chiu have begun providing real-time feedback to people during brain scanning. Imagine that you’d like to quit smoking cigarettes. In this experiment, you look at pictures of cigarettes during brain imaging, and the experimenters measure which regions of your brain are involved in the craving. Then they show you the activity in those networks, represented by a vertical bar on a computer screen, while you look at more cigarette pictures. The bar acts as a thermometer for your craving: if your craving networks are revving high, the bar is high; if you’re suppressing your craving, the bar is low. Your job is to make the bar go down. Perhaps you have insight into what you’re doing to resist the craving; perhaps the mechanism is inaccessible. In any case, you try out different mental avenues until the bar begins to slowly sink. When it goes all the way down, that means you’ve successfully recruited frontal circuitry to squelch the activity in the networks involved in impulsive craving. The goal is for the long term to trump the short term. Still looking at pictures of cigarettes, you practice making the bar go down over and over, until you’ve strengthened those frontal circuits. By this method, you’re able to visualize the activity in the parts of your brain that need modulation, and you can witness the effects of different mental approaches you might take.
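The feedback loop described above can be sketched as a toy simulation. The signal source, thresholds, and decline curve here are invented stand-ins for the real-time scanner measurements; the sketch only shows the loop's structure: read the craving-network activity, render it as a bar, and stop once the participant drives it below a target:

```python
# Toy sketch of the neurofeedback loop described above. The "activity" is
# simulated; in the real experiment it would come from real-time brain imaging.
import random

def read_craving_activity(trial: int) -> float:
    # Stand-in for a scanner readout: craving slowly declines as the
    # participant finds a mental strategy that works, plus some noise.
    baseline = max(0.0, 1.0 - 0.07 * trial)
    return baseline + random.uniform(-0.05, 0.05)

def bar(level: float, width: int = 20) -> str:
    """Render activity in [0, 1] as the on-screen thermometer bar."""
    filled = round(max(0.0, min(1.0, level)) * width)
    return "[" + "#" * filled + "-" * (width - filled) + "]"

def run_session(target: float = 0.2, max_trials: int = 30) -> int:
    """Loop until the craving signal drops below target; return trials used."""
    for trial in range(max_trials):
        level = read_craving_activity(trial)
        print(f"trial {trial:2d} {bar(level)} {level:.2f}")
        if level <= target:
            return trial  # the frontal circuits won this round
    return max_trials

run_session()
```

The design choice worth noticing is that the loop never tells the participant *how* to lower the bar; it only closes the feedback loop, leaving the brain to search for a strategy on its own.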

If this sounds like biofeedback from the 1970s, it is—but this time with vastly more sophistication, monitoring specific networks inside the head rather than a single electrode on the skin. This research is just beginning, so the method’s efficacy is not yet known—but if it works well, it will be a game changer. We will be able to take it to the incarcerated population, especially those approaching release, to try to help them avoid coming back through the revolving prison doors.

This prefrontal workout is designed to better balance the debate between the long- and short-term parties of the brain, giving the option of reflection before action to those who lack it. And really, that’s all maturation is. The main difference between teenage and adult brains is the development of the frontal lobes. The human prefrontal cortex does not fully develop until the early 20s, and this fact underlies the impulsive behavior of teenagers. The frontal lobes are sometimes called the organ of socialization, because becoming socialized largely involves developing the circuitry to squelch our first impulses.

This explains why damage to the frontal lobes unmasks unsocialized behavior that we would never have thought was hidden inside us. Recall the patients with frontotemporal dementia who shoplift, expose themselves, and burst into song at inappropriate times. The networks for those behaviors have been lurking under the surface all along, but they’ve been masked by normally functioning frontal lobes. The same sort of unmasking happens in people who go out and get rip-roaring drunk on a Saturday night: they’re disinhibiting normal frontal-lobe function and letting more-impulsive networks climb onto the main stage. After training at the prefrontal gym, a person might still crave a cigarette, but he’ll know how to beat the craving instead of letting it win. It’s not that we don’t want to enjoy our impulsive thoughts (Mmm, cake), it’s merely that we want to endow the frontal cortex with some control over whether we act upon them (I’ll pass). Similarly, if a person thinks about committing a criminal act, that’s permissible as long as he doesn’t take action.

For the pedophile, we cannot hope to control whether he is attracted to children. That he never acts on the attraction may be the best we can hope for, especially as a society that respects individual rights and freedom of thought. Social policy can hope only to prevent impulsive thoughts from tipping into behavior without reflection. The goal is to give more control to the neural populations that care about long-term consequences—to inhibit impulsivity, to encourage reflection. If a person thinks about long-term consequences and still decides to move forward with an illegal act, then we’ll respond accordingly. The prefrontal workout leaves the brain intact—no drugs or surgery—and uses the natural mechanisms of brain plasticity to help the brain help itself. It’s a tune-up rather than a product recall.

We have hope that this approach represents the correct model: it is grounded simultaneously in biology and in libertarian ethics, allowing a person to help himself by improving his long-term decision-making. Like any scientific attempt, it could fail for any number of unforeseen reasons. But at least we have reached a point where we can develop new ideas rather than assuming that repeated incarceration is the single practical solution for deterring crime.

Along any axis that we use to measure human beings, we discover a wide-ranging distribution, whether in empathy, intelligence, impulse control, or aggression. People are not created equal. Although this variability is often imagined to be best swept under the rug, it is in fact the engine of evolution. In each generation, nature tries out as many varieties as it can produce, along all available dimensions.

Variation gives rise to lushly diverse societies—but it serves as a source of trouble for the legal system, which is largely built on the premise that humans are all equal before the law. This myth of human equality suggests that people are equally capable of controlling impulses, making decisions, and comprehending consequences. While admirable in spirit, the notion of neural equality is simply not true.

As brain science improves, we will better understand that people exist along continua of capabilities, rather than in simplistic categories. And we will be better able to tailor sentencing and rehabilitation for the individual, rather than maintain the pretense that all brains respond identically to complex challenges and that all people therefore deserve the same punishments. Some people wonder whether it’s unfair to take a scientific approach to sentencing—after all, where’s the humanity in that? But what’s the alternative? As it stands now, ugly people receive longer sentences than attractive people; psychiatrists have no capacity to guess which sex offenders will reoffend; and our prisons are overcrowded with drug addicts and the mentally ill, both of whom could be better helped by rehabilitation. So is current sentencing really superior to a scientifically informed approach?

Neuroscience is beginning to touch on questions that were once only in the domain of philosophers and psychologists, questions about how people make decisions and the degree to which those decisions are truly “free.” These are not idle questions. Ultimately, they will shape the future of legal theory and create a more biologically informed jurisprudence.


----------



## sygeek (Jun 27, 2011)

*The Failed Experiment of Software Patents​*


Spoiler



I've noted before that we are witnessing a classic patent thicket in the realm of smartphones, with everyone and his or her dog suing everyone else (and their dog). But without doubt one of the more cynical applications of intellectual monopolies is Oracle's suit against Google. This smacked entirely of the lovely Larry Ellison spotting a chance to extract some money without needing to do much other than point his legal department in the right direction.

If that sounds harsh, take a read of this document from the case that turned up recently. It's Google's response to an Oracle expert witness's estimate of how much the former should be paying the latter:



> Cockburn opines that Google, if found to infringe, would owe Oracle between 1.4 and 6.1 billion dollars -- a breathtaking figure that is out of proportion to any meaningful measure of the intellectual property at issue. Even the low end of Cockburn’s range is over 10 times the amount that Sun Microsystems, Inc. made each year for the entirety of its Java licensing program and 20 times what Sun made for Java-based mobile licensing. Cockburn’s theory is neatly tailored to enable Oracle to finance nearly all of its multi-billion dollar acquisition of Sun, even though the asserted patents and copyrights accounted for only a fraction of the value of Sun.



It does, indeed, sound rather as if Ellison is trying to get his entire purchase price back in a single swoop.
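A quick back-of-the-envelope check makes the scale of the claim concrete. Using only the multiples quoted above (low end of $1.4 billion, "over 10 times" Sun's annual Java licensing revenue and "20 times" its mobile Java licensing revenue), the implied ceilings on Sun's actual revenue are:

```python
low_end, high_end = 1.4e9, 6.1e9  # Cockburn's claimed damages range, USD

# "Over 10 times" Sun's yearly Java licensing revenue implies Sun made
# under ~$140M/year from the entire program; "20 times" its mobile Java
# licensing implies under ~$70M/year from mobile.
java_annual_max = low_end / 10
mobile_annual_max = low_end / 20

print(f"{java_annual_max / 1e6:.0f}M {mobile_annual_max / 1e6:.0f}M")  # 140M 70M
```

So even the *low* end of the range demands, every year of damages, roughly ten years' worth of Sun's entire Java licensing income.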

Now, I may be somewhat biased against this action, since it is causing all sorts of problems for the Linux-based Android, and I am certainly not a lawyer, but it does seem to me that the points of Google's lawyers are pretty spot on. For example:



> First, Cockburn has no basis for including all of Google’s revenue from Android phones into the base of his royalty calculation. The accused product here is the Android software platform, which Google does not sell (and Google does not receive any payment, fee, royalty, or other remuneration for its contributions to Android). Cockburn seems to be arguing that Google’s advertising revenue from, e.g., mobile searches on Android devices should be included in the royalty base as a convoyed sale, though he never articulates or supports this justification and ignores the applicable principles under Uniloc and other cases. In fact, the value of the Android software and of Google’s ads are entirely separate: the software allows for phones to function, whether or not the user is viewing ads; and Google’s ads are viewable on any software and are not uniquely enabled by Android. Cockburn’s analysis effectively seeks disgorgement of Google’s profits even though “[t]he determination of a reasonable royalty . . . is based not on the infringer’s profit, but on the royalty to which a willing licensor and a willing licensee would have agreed at the time the infringement began.”



Oracle's expert seems to be adopting the old kitchen-sink approach, throwing in everything he can think of.


> Second, Cockburn includes Oracle’s “lost profits and opportunities” in his purported royalty base. This is an obvious ploy to avoid the more demanding test for recovery of lost profits that Oracle cannot meet. ... Most audaciously, Cockburn tries to import into his royalty base the alleged harm Sun and Oracle would have suffered from so-called “fragmentation” of Java into myriad competing standards, opining that Oracle’s damages from the Android software includes theoretical downstream harm to a wholly different Oracle product. This is not a cognizable patent damages theory, and is unsupported by any precedent or analytical reasoning.



Even assuming that Google has willfully infringed on all the patents that Oracle claims - and that has still to be proved - it's hard to see how Oracle has really lost “opportunities” as a result. If anything, the huge success of Android, based as it is on Java, is likely to increase the demand for Java programmers, and generally make the entire Java ecosystem more valuable - greatly to Oracle's benefit.

So, irrespective of any royalties that may or may not be due, Oracle has in any case already gained from Google's action, and will continue to benefit from the rise of Android as the leading smartphone operating system. Moreover, as Android is used in other areas - tablets, set-top boxes, TVs etc. - Oracle will again benefit from the vastly increased size of the Java ecosystem over which it has substantial control.

Of course, I am totally unsurprised to find Oracle doing this. But to be fair to Larry Ellison and his company, this isn't just about Oracle; it is also to do with the inherent problems of software patents, which encourage this kind of behavior (not least by sometimes rewarding it handsomely).

Lest you think this is just my jaundiced viewpoint, let's turn to a recent paper from James Bessen, who is a Fellow of the Berkman Center for Internet and Society at Harvard, and a Lecturer at the Boston University School of Law. I've mentioned Bessen several times in this blog, in connection with his book "Patent Failure", a look at the US patent system in general. Here's the background to the current paper, entitled "A Generation of Software Patents":



> In 1994, the Court of Appeals for the Federal Circuit decided in In re Alappat that an invention that had a novel software algorithm combined with a trivial physical step was eligible for patent protection. This ruling opened the way for a large scale increase in patenting of software. Alappat and his fellow inventors were granted patent 5,440,676, the patent at issue in the appeal, in 1995. That patent expired in 2008. In other words, we have now experienced a full generation of software patents.
> 
> The Alappat decision was controversial, not least because the software industry had been highly innovative without patent protection. In fact, there had long been industry opposition to patenting software. Since the 1960s, computer companies opposed patents on software, first, in their input to a report by a presidential commission in 1966 and then in amici briefs to the Supreme Court in Gottschalk v. Benson in 1972 (they later changed their views). Major software firms opposed software patents through the mid-1990s. Perhaps more surprising, software developers themselves have mostly been opposed to patents on software.



That's a useful reminder that the software industry was innovative before there were software patents, and didn't want them introduced. The key question that Bessen addresses in his paper is a good one: how have things panned out in the 15 or so years since software patents have been granted in the US?

Here's what he says happened:



> To summarize the literature, in the 1990s, the number of software patents granted grew rapidly, but these were acquired primarily by firms outside the software industry and perhaps for reasons other than to protect innovations. Relatively few software firms obtained patents in the 1990s and so, it seems that most software firms did not benefit from software patents. More recently, the majority of venture-backed startups do seem to have obtained patents. The reasons for this, however, are not entirely clear and so it is hard to know whether these firms realized substantial positive incentives for investing in innovation from patents. On the other hand, software patents are distinctly implicated in the tripling of patent litigation since the early 1990s. This litigation implies that software patents imposed significant disincentives for investment in R&D for most industries including software.



It is hard to conclude from the above findings that software patents significantly increased R&D incentives in the software industry.

And yet this is one of the reasons that is often given to justify the existence of software patents despite their manifest problems.

Bessen then goes on to look at how things have changed more recently:



> most software firms still do not patent, although the percentage has increased. And most software patents go to firms outside the software industry, despite the industry’s substantial role in software innovation. While the share of patents going to the software industry has increased, that increase is largely the result of patenting by a few large firms.



Again, this gives the lie to the claim that software patents are crucial for smaller companies in order to protect their investments; instead, the evidence is that large companies are simply building up bigger and bigger patent portfolios, largely for defensive purposes, as Bessen notes in his concluding remarks:



> Has the patent system adapted to software patents so as to overcome initial problems of too little benefit for the software industry and too much litigation? The evidence makes it hard to conclude that these problems have been resolved. While more software firms now obtain patents, most still do not, hence most software firms do not directly benefit from software patents. Patenting in the software industry is largely the activity of a few large firms. These firms realize benefits from patents, but the incentives that patents provide them might well be limited because these firms likely have other ways of earning returns from their innovations, such as network effects and complementary services. Moreover, anecdotal evidence suggests that some of these firms patent for defensive reasons, rather than to realize rents on their innovations: Adobe, Oracle and others announced that patents were not necessary in order to promote innovation at USPTO hearings in 1994, yet they now patent heavily.
> 
> On the other hand, the number of lawsuits involving software patents has more than tripled since 1999. This represents a substantial increase in litigation risk and hence a disincentive to invest in innovation. The silver lining is that the probability that a software patent is in a lawsuit has stopped increasing and might have begun a declining trend. This occurred perhaps in response to a new attitude in the courts and several Supreme Court decisions that have reined in some of the worst excesses related to software patents.



These comments come from an academic who certainly has no animus against patents. They hardly represent a ringing endorsement, but emphasize, rather, that very little is gained by granting such intellectual monopolies. Careful academic work like this, taken together with the extraordinary circus we are witnessing in the smartphone arena, strengthens the case for calling a halt now to the failed experiment of software patents.


----------



## sygeek (Jul 15, 2011)

*The PicoLisp Ticker*


Spoiler



Around the end of May, I was playing with an algorithm I had received from Bengt Grahn many years ago. It is a small program - it was even part of the PicoLisp distribution ("misc/crap.l") for many years - which, when given an arbitrary sample text in some language, produces an endless stream of pseudo-text that strongly resembles that language.
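
The post doesn't reproduce "misc/crap.l" itself, and I don't know Bengt Grahn's exact algorithm; but the same effect (endless pseudo-text that statistically resembles a sample) can be sketched as a character-level Markov chain. This Python sketch is an illustration of the general technique only:

```python
import random
from collections import defaultdict

def learn(text, order=3):
    """Map each `order`-character context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def crap(model, length=200, seed=None):
    """Generate `length` characters of pseudo-text resembling the sample."""
    rng = random.Random(seed)
    context = rng.choice(list(model.keys()))
    out = list(context)
    for _ in range(length):
        followers = model.get(context)
        if not followers:                       # dead end: restart somewhere else
            context = rng.choice(list(model.keys()))
            followers = model[context]
        ch = rng.choice(followers)
        out.append(ch)
        context = context[1:] + ch
    return "".join(out)

sample = "the quick brown fox jumps over the lazy dog. " * 20
model = learn(sample)
print(crap(model, 80, seed=1))
```

With a larger `order` the output drifts closer to verbatim quotation of the sample; small orders give more creative nonsense.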

It was fun, so I decided to set up a PicoLisp "Ticker" page, producing a stream of "news": *ticker.picolisp.com

The source for the server is simple: 

```
(allowed ()
   *Page "!start" "@lib.css" "ticker.zip" )

(load "@lib/http.l" "@lib/xhtml.l")
(load "misc/crap.l")

(one *Page)

(de start ()
   (seed (time))
   (html 0 "PicoLisp Ticker" "@lib.css" NIL
      (<h2> NIL "Page " *Page)
      (<div> 'em50
         (do 3 (<p> NIL (crap 4)))
         (<spread>
            (<href> "Sources" "ticker.zip")
            (<this> '*Page (inc *Page) "Next page") ) ) ) )

(de main ()
   (learn "misc/ticker.txt") )

(de go ()
   (server 21000 "!start") )
```

The sample text for the learning phase, "misc/ticker.txt", is a plain text version of the PicoLisp FAQ. The complete source, including the text generator, can be downloaded via the "Sources" link as "ticker.zip".

Now look at the "Next page" link, appearing on the bottom right of the page. It always points to a page with a number one greater than the current page, providing an unlimited supply of ticker pages.
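
The mechanism is easy to mimic in any language: each page is generated on demand, and its only forward navigation points to page n+1. A hypothetical Python sketch (the function name and markup are mine, not part of the ticker source; only the `*Page` URL parameter matches the PicoLisp version):

```python
def render_page(n):
    """Render ticker page `n`. The only navigation is a link to page n + 1,
    so the site presents an unbounded sequence of pages to any visitor."""
    paragraphs = "".join(f"<p>(pseudo-news paragraph {i})</p>" for i in range(3))
    return (
        f"<html><body><h2>Page {n}</h2>"
        f"<div>{paragraphs}"
        f'<a href="?*Page={n + 1}">Next page</a></div></body></html>'
    )

print(render_page(1))
```

No page is ever stored; a crawler that follows "Next page" forever is simply asking the server to invent more text.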

I went ahead, and installed and started the server. To get some logging, I inserted the line 

```
(out 2 (prinl (stamp) " {" *Url "} Page " *Page "  [" *Adr "] " *Agent))
```

at the beginning of the 'start' function.

On June 18th I announced it on Twitter, and watched the log files. Immediately, within one or two seconds (!), Googlebot accessed it: 

```
2011-06-18 11:22:04  Page 1  [66.249.71.139] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
```

Wow, I thought, that was fast! Don't know if this was just by chance, or if Google always has such a close watch on Twitter.

Anyway, I was curious about what the search engine would do with such nonsense text, and how it would handle the infinite number of pages. During the next seconds and minutes, other bots and possibly human users accessed the ticker: 

```
2011-06-18 11:22:08  Page 1  [65.52.23.76] Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)
2011-06-18 11:22:10  Page 1  [65.52.4.133] Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)
2011-06-18 11:22:20  Page 1  [50.16.239.111] Mozilla/5.0 (compatible; Birubot/1.0) Gecko/2009032608 Firefox/3.0.8
2011-06-18 11:29:52  Page 1  [174.129.42.87] Python-urllib/2.6
2011-06-18 11:30:34  Page 1  [174.129.42.87] Python-urllib/2.6
2011-06-18 11:33:54  Page 1  [89.151.99.92] Mozilla/5.0 (compatible; MSIE 6.0b; Windows NT 5.0) Gecko/2009011913 Firefox/3.0.6 TweetmemeBot
2011-06-18 11:33:54  Page 1  [89.151.99.92] Mozilla/5.0 (compatible; MSIE 6.0b; Windows NT 5.0) Gecko/2009011913 Firefox/3.0.6 TweetmemeBot
2011-06-18 13:47:21  Page 1  [190.175.174.220] Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.6.17-1.fc14 Firefox/3.6.17
2011-06-18 13:49:13  Page 2  [190.175.174.220] Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.6.17-1.fc14 Firefox/3.6.17
2011-06-18 13:49:21  Page 3  [190.175.174.220] Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.6.17-1.fc14 Firefox/3.6.17
2011-06-18 19:43:36  Page 1  [24.167.162.218] Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
2011-06-18 19:43:54  Page 2  [24.167.162.218] Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
2011-06-18 19:44:11  Page 3  [24.167.162.218] Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
2011-06-18 19:44:13  Page 4  [24.167.162.218] Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
2011-06-18 19:44:16  Page 5  [24.167.162.218] Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
2011-06-18 19:44:18  Page 6  [24.167.162.218] Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
2011-06-18 19:44:20  Page 7  [24.167.162.218] Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
```

Mr. Google came back the following day: 

```
2011-06-19 00:25:57  Page 2  [66.249.67.197] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
2011-06-19 01:03:13  Page 3  [66.249.67.197] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
2011-06-19 01:35:57  Page 4  [66.249.67.197] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
2011-06-19 02:39:19  Page 5  [66.249.67.197] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
2011-06-19 03:43:39  Page 6  [66.249.67.197] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
2011-06-19 04:17:02  Page 7  [66.249.67.197] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
```

In between (not shown here) were also some accesses, probably by non-bots, who usually gave up after a few pages.

Mr. Google, however, assiduously went through "all" pages. The page numbers increased sequentially, but he also re-visited page 1, going up again. Now there were several indexing threads, and by June 23rd the first one exceeded page 150.

I felt sorry for poor Googlebot, and installed a "robots.txt" the same day, disallowing the ticker page for robots. I could see that several other bots fetched "robots.txt". But not Google. Instead, it kept following the pages of the ticker.
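
The post doesn't show the file itself; the standard Robots Exclusion Protocol form for disallowing all crawlers everywhere (which matches the "disallowed all" log entry below) is:

```
User-agent: *
Disallow: /
```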

Then, finally, on July 5th, Googlebot looked at "robots.txt": 

```
"robots.txt" 2011-07-05 07:03:05 Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html) ticker.picolisp.com
"robots.txt: disallowed all"
```

The indexing, however, went on. Excerpt: 

```
2011-07-05 04:27:46 {!start} Page 500  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
2011-07-05 04:58:50 {!start} Page 501  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
2011-07-05 05:30:24 {!start} Page 502  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
2011-07-05 06:02:10 {!start} Page 503  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
2011-07-05 06:32:14 {!start} Page 504  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
2011-07-05 07:02:41 {!start} Page 505  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
2011-07-05 08:02:31 {!start} Page 506  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
2011-07-05 08:45:52 {!start} Page 507  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
2011-07-05 09:20:06 {!start} Page 508  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
2011-07-05 09:51:49 {!start} Page 509  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
```

Strange. I would have expected the indexing to stop after Page 505.

In fact, all the other robots seem to obey "robots.txt". Mr. Google, however, even started yet another thread five days later: 

```
2011-07-10 02:22:52 {!start} Page 1  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
```

I should feel flattered that the PicoLisp news ticker is so interesting!

How will that go on? As of today, we have reached 

```
2011-07-15 09:42:36 {!start} Page 879  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +*www.google.com/bot.html)
```

I'll stay tuned ...


----------



## nisargshah95 (Jul 16, 2011)

Articles worth a read buddy. Keep it up...


----------



## sygeek (Jul 30, 2011)

*Before Python​*By Guido van Rossum​


Spoiler



This morning I had a chat with the students at Google's CAPE program. Since I wrote up what I wanted to say, I figured I might as well blog it here. Warning: this is pretty unedited (or else it would never be published). I'm posting it in my "personal" blog instead of the "Python history" blog because it mostly touches on my career before Python. Here goes.

Have you ever written a computer program? Using which language?

- HTML
- Javascript
- Java
- Python
- C++
- C
- Other - which?
[It turned out the students had used a mixture of Scratch, App Inventor, and Processing. A few students had also used Python or Java.]

Have you ever invented a programming language? 

If you have programmed, you know some of the problems with programming languages. Have you ever thought about why programming isn't easier? Would it help if you could just talk to your computer? Have you tried speech recognition software? I have. It doesn't work very well yet. 

How do you think programmers will write software 10 years from now? Or 30? 50?

Do you know how programmers worked 30 years ago?

I do.

I was born in Holland in 1956. Things were different.

I didn't know what a computer was until I was 18. However, I tinkered with electronics. I built a digital clock. My dream was to build my own calculator.

Then I went to university in Amsterdam to study mathematics, and they had a computer that was free for students to use! (Not unlimited though. We were allowed to use something like one second of CPU time per day.)

I had to learn how to use punch cards. There were machines to create them that had a keyboard. The machines were as big as a desk and made a terrible noise when you hit a key: a small hole was punched in the card with a huge force and great precision. If you made a mistake you had to start over.

I didn't get to see the actual computer for several more years. What we had in the basement of the math department was just an end point for a network that ran across the city. There were card readers and line printers and operators who controlled them. But the actual computer was elsewhere.

It was a huge, busy place, where programmers got together and discussed their problems, and I loved to hang out there. In fact, I loved it so much I nearly dropped out of university. But eventually I graduated.

Aside: Punch cards weren't invented for computers; they were invented for sorting census data and the like before WW2. [UPDATE: actually much earlier, though the IBM 80-column format I used did originate in 1928.] There were large mechanical machines for sorting stacks of cards. But punch cards are the reason that some software still limits you (or just defaults) to 80 characters per line.

My first program was a kind of "hello world" program written in Algol-60. That language was only popular in Europe, I believe. After another student gave me a few hints I learned the rest of the language straight from the official definition of the language, the "Revised Report on the Algorithmic Language Algol-60." That was not an easy report to read! The language was a bit cumbersome, but I didn't mind, I learned the basics of programming anyway: variables, expressions, functions, input/output.

Then a professor mentioned that there was a new programming language named Pascal. There was a Pascal compiler on our mainframe so I decided to learn it. I borrowed the book on Pascal from the departmental library (there was only one book, and only one copy, and I couldn't afford my own). After skimming it, I decided that the only thing I really needed were the "railroad diagrams" at the end of the book that summarized the language's syntax. I made photocopies of those and returned the book to the library.

Aside: Pascal really had only one new feature compared to Algol-60, pointers. These baffled me for the longest time. Eventually I learned assembly programming, which explained the memory model of a computer for the first time. I realized that a pointer was just an address. Then I finally understood them.

I guess this is how I got interested in programming languages. I learned the other languages of the day along the way: Fortran, Lisp, Basic, Cobol. With all this knowledge of programming, I managed to get a plum part-time job at the data center maintaining the mainframe's operating system. It was the most coveted job among programmers. It gave me access to unlimited computer time, the fastest terminals (still 80 x 24 though), and most important, a stimulating environment where I got to learn from other programmers. I also got access to a Unix system, learned C and shell programming, and at some point we had an Apple II (mostly remembered for hours of playing space invaders). I even got to implement a new (but very crummy) programming language!

All this time, programming was one of the most fun things in my life. I thought of ideas for new programs to write all the time. But interestingly, I wasn't very interested in using computers for practical stuff! Nor even to solve mathematical puzzles (except that I invented a clever way of programming Conway's Game of Life that came from my understanding of using logic gates to build a binary addition circuit).

What I liked most, though, was writing programs to make the life of programmers better. One of my early creations was a text editor that was better than the system's standard text editor (which wasn't very hard). I also wrote an archive program that helped conserve disk space; it was so popular and useful that the data center offered it to all its customers. I liked sharing programs, and my own principles for sharing were very similar to what later would become Open Source (except I didn't care about licenses -- still don't).

As a term project I wrote a static analyzer for Pascal programs with another student. Looking back I think it was a horrible program, but our professor thought it was brilliant and we both got an A+. That's where I learned about parsers and such, and that you can do more with a parser than write a compiler.

I combined pleasure with a good cause when I helped out a small left-wing political party in Holland automate their membership database. This was until then maintained by hand as a collection of metal plates into which letters were stamped using an antiquated machine not unlike a steam hammer. In the end the project was not a great success, but my contributions (including an emulation of Unix's venerable "ed" editor program written in Cobol) piqued the attention of another volunteer, whose day job was as a computer science researcher at the Mathematical Center. (Now CWI.)

This was Lambert Meertens. It so happened that he was designing his own programming language, named B (later ABC), and when I graduated he offered me a job on his team of programmers who were implementing an interpreter for the language (what we would now call a virtual machine).

The rest I have written up earlier in my Python history blog.


----------



## Nipun (Jul 30, 2011)

Nice......


----------



## sygeek (Aug 4, 2011)

*When patents attack Android*
By David Drummond​


Spoiler



I have worked in the tech sector for over two decades. Microsoft and Apple have always been at each other’s throats, so when they get into bed together you have to start wondering what's going on. Here is what’s happening:

Android is on fire. More than 550,000 Android devices are activated every day, through a network of 39 manufacturers and 231 carriers. Android and other platforms are competing hard against each other, and that’s yielding cool new devices and amazing mobile apps for consumers.

But Android’s success has yielded something else: a hostile, organized campaign against Android by Microsoft, Oracle, Apple and other companies, waged through bogus patents.

They’re doing this by banding together to acquire Novell’s old patents (the “CPTN” group including Microsoft and Apple) and Nortel’s old patents (the “Rockstar” group including Microsoft and Apple), to make sure Google didn’t get them; seeking $15 licensing fees for every Android device; attempting to make it more expensive for phone manufacturers to license Android (which we provide free of charge) than Windows Mobile; and even suing Barnes & Noble, HTC, Motorola, and Samsung. Patents were meant to encourage innovation, but lately they are being used as a weapon to stop it.

A smartphone might involve as many as 250,000 (largely questionable) patent claims, and our competitors want to impose a “tax” for these dubious patents that makes Android devices more expensive for consumers. They want to make it harder for manufacturers to sell Android devices. Instead of competing by building new features or devices, they are fighting through litigation.

This anti-competitive strategy is also escalating the cost of patents way beyond what they’re really worth. Microsoft and Apple’s winning $4.5 billion for Nortel’s patent portfolio was nearly five times larger than the pre-auction estimate of $1 billion. Fortunately, the law frowns on the accumulation of dubious patents for anti-competitive means — which means these deals are likely to draw regulatory scrutiny, and this patent bubble will pop.

We’re not naive; technology is a tough and ever-changing industry and we work very hard to stay focused on our own business and make better products. But in this instance we thought it was important to speak out and make it clear that we’re determined to preserve Android as a competitive choice for consumers, by stopping those who are trying to strangle it.

We’re looking intensely at a number of ways to do that. We’re encouraged that the Department of Justice forced the group I mentioned earlier to license the former Novell patents on fair terms, and that it’s looking into whether Microsoft and Apple acquired the Nortel patents for anti-competitive means. We’re also looking at other ways to reduce the anti-competitive threats against Android by strengthening our own patent portfolio. Unless we act, consumers could face rising costs for Android devices — and fewer choices for their next phone.


----------



## sygeek (Aug 5, 2011)

*We were raised by the Valley​*By Pablo Villalba​


Spoiler



*My Romanian friend*

Romanian and Spanish are quite different languages. I'm Spanish, and Romanian is almost impossible for me to understand, since I have never studied it.

Some years ago I met a group of Romanians who were in Spain for the first time. I was surprised to see they could understand Spanish quite well, even if they could barely speak it. They had never studied Spanish before, and I was very puzzled. How could it be that they understood me while I couldn't understand them?

My friend explained: as a child, she had spent long hours watching Latin American soap operas on TV. These shows had Spanish audio and Romanian subtitles. Because she was a child, like many others, she just picked it up naturally from hearing it – but she never learned how to speak Spanish.

She was raised by her environment and close family, but while she watched those shows she learned about a completely different language.

This story is not about Romanian and Spanish, or a (perhaps rare) case of somebody learning a language by accident. I understood the meaning of this story much later, when I looked back at my first years with a computer.

*A kid with a computer*

Your parents and teachers are just some of your influences. You have also been raised, in a way, by Disney, Hollywood, and TV series. I was, like many others, raised by the Valley.

As a kid I would play video games and enjoy the quirky humor from LucasArts. Then my father would set up QBasic for me and help me get started writing my own games. I'd hop onto IRC to learn and share with others. I'd play online and meet like-minded people, and learn their language and style by imitation. I'd read and learn about just about everything fun I could get my hands on. I'd read Slashdot and embrace the open-source anti-Microsoft ideas. I'd learn web design and try to build something like others were building out there. And I'd dream of growing up and having a game development startup.

Unknowingly, I was growing up into a culture that wasn’t my immediate environment. And I felt incredibly at home. That doesn’t mean I got disconnected from my surroundings, I just had a connection with that new world, with its trends and stories and memes.

*The pilgrims*

I was 24 the first time I went to the Valley. As I walked through San Francisco and met people there, I couldn't help having a déjà vu feeling. It was like I had already been there a long time ago, like the city had been waiting for me all those years.

As Nolan Bushnell gave me a ride in his car through the city, I was thinking about this: all the kids who grew up in their little towns hacking on their computers would someday make their pilgrimage and meet each other here. The Valley had raised us all, and we were finally coming back home.



----------



## sygeek (Sep 15, 2011)

*I'm a phony. Are you?​*_By Scott Hanselman_​


Spoiler



_pho·ny also *pho·ney* (fō'nē) adj. *pho·ni·er, pho·ni·est*_
_
*1.*
*a.* Not genuine or real; counterfeit: a phony credit card.
*b.* False; spurious: a phony name.

*2.* Not honest or truthful; deceptive: a phony excuse.

*3.*
*a.* Insincere or hypocritical.
*b.* Giving a false impression of truth or authenticity; specious._

Along with my regular job at Microsoft I also mentor a number of developers and program managers. I spoke to a young man recently who is extremely thoughtful and talented and he confessed he was having a crisis of confidence. He was getting stuck on things he didn't think he should be getting stuck on, not moving projects forward, and it was starting to seep into his regular life.

He said:

_    "Deep down know I’m ok. Programming since 13, graduated top of CS degree, got into Microsoft – but [I feel like I'm] an imposter."_

I told him, straight up, *You Are Not Alone*.

For example, I've got 30 domains and I've only done something awesome with 3 of them. Sometimes when I log into my DNS manager I just see 27 failures. I think to myself, there's 27 potential businesses, 27 potential cool open source projects just languishing. If you knew anything you'd have made those happen. What a phony.

I hit Zero Email a week ago, now I'm at 122 today in my Inbox and it's stressing me out. And I teach people how to manage their inboxes. What a phony.

When I was 21 I was untouchable. I thought I was a gift to the world and you couldn't tell me anything. The older I get the more I realize that I'm just never going to get it all, and I don't think as fast as I used to. What a phony.

I try to learn a new language each year and be a Polyglot Programmer but I can feel F# leaking out of my head as I type this and I still can't get my head around really poetic idiomatic Ruby. What a phony.

I used to speak Spanish really well and I still study Zulu with my wife but I spoke to a native Spanish speaker today and realize I'm lucky if I can order a burrito. I've all but forgotten my years of Amharic. My Arabic, Hindi and Chinese have atrophied into catch phrases at this point. What a phony. (Clarification: This one is not intended as a humblebrag. I was a linguist and languages were part of my identity and I'm losing that and it makes me sad.)

But here's the thing. We all feel like phonies sometimes. We are all phonies. That's how we grow. We get into situations that are just a little more than we can handle, or we get in a little over our heads. Then we *can* handle them, and we *aren't* phonies, and we move on to the next challenge.

The idea of the Imposter Syndrome is not a new one. 

_    Despite external evidence of their competence, those with the syndrome remain convinced that they are frauds and do not deserve the success they have achieved. Proof of success is dismissed as luck, timing, or as a result of deceiving others into thinking they are more intelligent and competent than they believe themselves to be._

The opposite of this is even more interesting, the Dunning-Kruger effect. You may have had a manager or two with this issue. 

_    The Dunning–Kruger effect is a cognitive bias in which unskilled people make poor decisions and reach erroneous conclusions, but their incompetence denies them the metacognitive ability to recognize their mistakes._

It's a great read for a Wikipedia article, but here's the best line and the one you should remember.

*    ...people with true ability tended to underestimate their relative competence.*

I got an email from a podcast listener a few years ago. I remembered it when writing this post, found it in the archives and I'm including some of it here *with emphasis mine*.

_I am a regular listener to your podcast and have great respect for you.  With that in mind, *I was quite shocked to hear you say on a recent podcast, "Everyone is lucky to have a job" and imply that you include yourself in this sentiment.*

    I have heard developers much lesser than your stature indicate a much more healthy (and accurate) attitude that they feel they are good enough that they can get a job whenever they want and so it's not worth letting their current job cause them stress.  Do you seriously think that you would have a hard time getting a job or for that matter starting your own business?  *If you do, you have a self-image problem that you should seriously get help with. *

    But it's actually not you I'm really concerned about... it's your influence on your listeners.  *If they hear that you are worried about their job, they may be influenced to feel that surely they should be worried. *_

I really appreciated what this listener said and emailed him so. Perhaps my attitude is a Western Cultural thing, or a uniquely American one. I'd be interested in what you think, Dear Non-US Reader. I maintain that most of us feel this way sometimes. Perhaps we're unable to admit it. When I see programmers with blog titles like "I'm a freaking ninja" or "bad ass world's greatest programmer" I honestly wonder if they are delusional or psychotic. Maybe they just aren't very humble.

I stand by my original statement that I feel like a phony sometimes. Sometimes I joke, "Hey, it's a good day, my badge still works" or I answer "How are you?" with "I'm still working." I do that because it's true. I'm happy to have a job, while I could certainly work somewhere else. Do I need to work at Microsoft? Of course not. I could probably work anywhere if I put my mind to it, even the IT department at Little Debbie Snack Cakes. I use insecurity as a motivator to achieve and continue teaching.

I asked some friends if they felt this way and here's some of what they said.


_
[*]Totally! Not. I've worked hard to develop and hone my craft, I try to be innovative, and deliver results.
[*]    Plenty of times! Most recently I started a new job where I've been doing a lot of work in a language I'm rusty in and all the "Woot I've been doing 10 years worth of X language" doesn't mean jack. Very eye opening, very humbling, very refreshing
[*]    Quite often actually, especially on sites like stack overflow. It can be pretty intimidating and demotivating at times. Getting started in open source as well. I usually get over it and just tell myself that I just haven't encountered a particular topic before so I'm not an expert at it yet. I then dive in and learn all I can about it.
[*]    I always feel like a phony just biding my time until I'm found out. It definitely motivates me to excel further, hoping to outrun that sensation that I'm going to be called out for something I can't do
[*]    Phony? I don't. If anything, I wish I was doing more stuff on a grander scale. But I'm content with where I am now (entrepreneurship and teaching).
[*]    I think you are only a phony when you reflect your past work and don't feel comfortable about your own efforts and achievements.
[*]    Hell, no. I work my ass off. I own up to what I don't know, admit my mistakes, give credit freely to other when it's due and spend a lot of time always trying to learn more. I never feel like a phony.
[*]    Quite often. I don't truly think I'm a phony, but certainly there are crises of confidence that happen... particularly when I get stuck on something and start thrashing.
_

There are some folks who totally have self-confidence. Of the comment sample above, there are three "I don't feel like a phony" comments. But check this out: two of those folks aren't in IT. Perhaps IT people are more likely to have low self-confidence?

The important thing is to recognize this: If you are reading this or any blog, writing a blog of your own, or working in IT, you are probably in the top 1% of the wealth in the world. It may not feel like it, but you are very fortunate and likely very skilled. There are a thousand reasons why you are where you are, and your self-confidence and ability are just one factor. It's OK to feel like a phony sometimes. It's healthy if it moves you forward.

I'll leave you with this wonderful comment from Dave Ward:

_ *I think the more you know, the more you realize just how much you don't know*. So paradoxically, the deeper down the rabbit hole you go, the more you might tend to fixate on the growing collection of unlearned peripheral concepts that you become conscious of along the way.

*    That can manifest itself as feelings of fraudulence when people are calling you a "guru" or "expert" while you're internally overwhelmed by the ever-expanding volumes of things you're learning that you don't know.*

    However, I think it's important to tamp those insecurities down and continue on with confidence enough to continue learning. After all, you've got the advantage of having this long list of things you know you don't know, whereas most people haven't even taken the time to uncover that treasure map yet. What's more, no one else has it all figured out either. *We're all just fumbling around in the adjacent possible, grasping at whatever good ideas and understanding we can manage to wrap our heads around*._

Tell me your stories in the comments. We're also discussing this on this Google+ thread.

And remember, "Fake it 'til you make it."


----------



## Who (Sep 15, 2011)

I am sticking this thread as I personally feel the articles here are very good, and I request other people to contribute here as well. I would have moved it to the OSS & Programming section, but I think the articles fall into a broader category, so Community Discussion seems fine at the moment. Feel free to make any suggestions. Thank you.


----------



## Nipun (Sep 15, 2011)

Who said:


> I am sticking this thread as I personally feel the articles here are very good, and I request other people to contribute here as well. I would have moved it to the OSS & Programming section, but I think the articles fall into a broader category, so Community Discussion seems fine at the moment. Feel free to make any suggestions. Thank you.


That's great!

The articles are really very good, sygeek!


----------



## sygeek (Sep 21, 2011)

*What Netflix Could Have Said This Week​*


Spoiler



In the 14 years since we started Netflix, we’ve gained more than 25 million customers worldwide by providing the best DVD by mail service anywhere. Along the way, we’ve built an unrivaled streaming service that continues to grow every day. Today we want to tell you about some big changes at Netflix as we get ready for the future.

To begin, we’re adding a video games upgrade option to our DVD by mail service. Similar to our upgrade option for Blu-ray, DVD members can now rent Wii, PS3 and Xbox 360 games. This is something we’ve been asked to do for years, and we’re pleased to finally provide it.

As we worked on this new addition, we could no longer deny that the DVD and streaming services are growing apart very quickly. These are in fact two very different businesses, with different customer needs. We even have different offices for them! Providing an experience that simultaneously addresses these two very different worlds has become an increasing challenge for us as we grow the company and evolve the website.

That’s why today we’re announcing significant changes to our company. First, we are renaming the DVD by mail business to Netflix Classic. This is the same DVD rental service you’re used to, but it’s more than just a name: Netflix Classic is a new company, operating independently as a subsidiary of Netflix.

Moving forward, Netflix as a company will be dedicated to streaming media. This is a realization of our original vision, and of the company’s name: watching movies over the Internet. The Netflix.com website and mobile apps will exclusively service our streaming library. DVD members will manage their queues at classic.netflix.com.

If you subscribe to both services, you’ll see two charges on your credit card instead of one, but you’ll pay the same total amount per month you do now. This, along with our recent pricing changes, is just a necessary outcome from creating two separate companies. DVD members will of course still receive the same red Netflix envelope that has been familiar to them all these years.

Members can log into both sites using the same login. This will allow streaming-only members to add DVD by mail, and DVD-only members to upgrade to streaming, at any time. The websites, however, will remain separate, so that we can start giving these different worlds the unique attention they deserve.

We think the benefits are going to be huge. We’ll be rolling out the new websites in a few weeks, and you’ll see right away what we’re able to accomplish by providing a dedicated experience for each service. Until then, check out this video we made to see just a few examples of the new sites in action.

We want to thank you for supporting us for all these years, and we are very excited about all the new benefits Netflix and Netflix Classic are about to bring you. We can’t wait for you to try it out yourselves.


----------



## sygeek (Sep 26, 2011)

*How Quake changed my life forever.​*


Spoiler



Recently I saw this article on Rock Paper Shotgun and realized, wow I am not the only person in the world who has had his life changed by one video game. Even more interesting, this person had a bit of a life altering experience thanks to the same game.

I realize how ridiculous it sounds to say "Quake changed my life," but it honestly did.

So let's go back to 1996, I am 24 years old and living with two really great friends, one is a software engineer just out of Case Western Reserve University, and the other is a successful entrepreneur/electronics buff/all around PC geek who rebuilds terminals, resells them, and makes a good living doing it.

What am I doing? Oh I am a guy who barely graduated high school, matter of fact I don't think I legitimately earned my high school diploma...I think someone at the school decided I should just be let out into the world. I am a glorified painter, who calls himself an artist, with no formal art education who gets to work on murals from time to time.

I can recall being in high school and sitting with my guidance counselor after a particularly lackluster semester's performance. I am sure she had good intentions and was trying the best she could to motivate me. She told me if I did not get better grades and buckle down I might wind up homeless on the streets. I of course found this to be a bit shocking, and being so close to graduation I pretty much assumed I was going to be relegated to a job no one would like, doing my drawings when I had the spare time.

I always loved to draw, listen to hard rock/metal, read too many violent Dark Horse/Lobo Comics, and had an insane/nerdy fascination with Star Trek, Star Wars, Aliens, Predator, Terminator, Lord of the Rings, and just about any dark Sci-Fi/Fantasy style movie or book you could imagine. The kind of stuff that most people think is really cool now, but would immediately relegate you to punching bag status, and honestly not very cool with the chicks back then.

Following graduation I was an obnoxious guy, loud and just trying to have a good time all the while working as a tradesman in residential painting. Most of my days included at least 8 hours with some of the most right wing conservative Christian people you could possibly imagine. Using the word **** could get a "you need to spend more time with Jesus." reaction. It was an odd experience sometimes, but a few of these guys were very generous and provided me, a kid who had very little direction, with patience, a steady paycheck, solid work ethic, an understanding of quality craftsmanship, and the ability to live on my own and help my then fiancée, now wife of 13 years, through her undergrad. I don't know that I helped her so much as I bought the beer and food; she is incredibly motivated and far more intelligent than I am, she really didn't need any helping.

So I am living with my two buddies Nick and Pete, they both have PCs and I am saving up my cash for a PC so I can play Doom and Dune II guilt free, but I never seem to be able to put the coins together in a timely enough manner to purchase one. I feel guilty because I am always logging hours on their machines, and feel like I am annoying...but the pull of video games is so strong.

Sometimes the painting work dries up and I have to sit home for a few days, or a week in some cases and eat into my savings to pay the bills. Another time the car dies and I manage to convince myself to take on a payment for a pickup truck I could barely afford...in other words, the PC is not getting bought and spare cash is not readily available.

On a Friday night Nick comes home and says to Pete, hey I have QTest, and they immediately go down the basement and magically "install" QTest on Pete's 486DX2, the fastest of their 2 machines at that time. I was hooked; from the very first moment I was given the keyboard and played Quake I was completely hooked. I say "magically installed" because at this point Windows 95 is just out, and everything has to be run through DOS commands, which baffles me because I understand nothing about file structure, paths, or how a PC even works. I learn enough to launch Quake and change my yawspeed (this is before I knew about +mlook).

What I did know was that Quake was so atmospheric, moody, and scary that I quickly forgot about Doom. I would turn off the lights in the basement and play QTest for hours after my roommates had gone off to bed and my wife was upstairs diligently studying towards her Bachelors.

A few months went by and Quake finally arrived on store shelves. I still did not have my own PC, and to my shock and dismay a 486DX2 would not run Quake fullscreen MP very well. If I wanted to get the full Quake experience I was going to have to shell out more money than I could imagine to get a new Pentium class PC.

Thankfully my roommate was kind enough to allow me to play Quake on his new Pentium machine from time to time, and the 486DX2 was just fast enough to run Quake MP in a window about the size of a postage stamp...which is what I did.

I played Quake on that 486 with the window the size of a stamp for hours, I played SP and I played MP. It did not matter that the game world was tiny on that 15 inch monitor, I just needed that window into the world of Quake. My imagination filled in all the gaps, I was so engaged with the setting I came up with my own stories in my head of what might be going on in that world. id had left the story incredibly vague, so the world of Quake was this creepy place that I imagined my own stories around and layered ideas on top of.

After about a year or two the lease on our house was up, and one of my friends built himself a new house, while the other moved into an apartment. I decided to move back in with my parents to save some cash for my upcoming wedding, and eventually I was able to get myself a PC, but only by caving in and putting the cost on a credit card I should not have put the charge on. This of course only led me deeper down the rabbit hole of Quake and Ben Morris' Worldcraft level editor. (Some of you might know it as Hammer these days as Valve purchased the rights to it from Ben ages ago.)

I still recall the day I sat in my room with my younger brother and fired up a Quake level editor for the first time. I think it was either Qed, or Qoole... I sat there and stared at what looked like the most complicated user interface I had ever seen. Now you have to remember, this is the guy who barely made it through high school. I don't know that I ever took a math course more advanced than Algebra, and I forgot anything I learned the moment I left the room each day.

Suddenly I am sitting here staring at 4 windows, X, Y, Z, and something that looks like the player view in Quake. The only thing I could think at the time was, WTF is X, Y, Z?

Thank god my younger brother Josh, who shares my addiction to video games to this day, happened to be sitting there with me. Thank god Josh actually paid attention in school, took a geometry course, and thank god he had no fear of experimenting with the software at all. If he had not been there that day, I might have closed the editor and never opened it again, just figuring I wasn't smart enough for this computer games stuff, so I'll just go back to Deathmatch.

Instead the two of us sat there for hours, figuring out what a brush was, how to put a texture on it, how to place lights in the scene, and how to get a monster in the game (a wireframe box in the editor). I still recall the first time we compiled a box room and saw a big error message that said "Leaked." We thought for sure our box wouldn't work, but it did...so we ignored that "Leaked" business for now. At the start of it all, we did it together.

At some point late into the night, I got tired and had to work the next day; I was forced to go to sleep. I woke up at 7am the next morning to see my younger brother still sitting at the PC looking weary, tired, and incredibly addicted. He never went to bed, and had built what looked like a crazy spiral stair case to hell...with jumping Shamblers in a lava pit at the bottom.

We were both hooked...I was pissed I had to go to work and he was probably going to sit at my computer all day and build maps. I was pissed of course at my situation, not him.

For the next year or two I spent every moment of my free time in Worldcraft making Quake maps. My younger brother went off to school and his interest in Quake Mappery died off with his responsibility to classes and lack of a PC to work on. Some members of my family seemed to get annoyed with me, with the exception of my wife, over my new addiction. My wife was incredibly supportive and allowed me the time other women would demand of their partner to edit Quake Maps.

I found myself sketching levels, drawing out floor plans, coming up with ways to create new traps and trying to figure out how to properly trigger events and get solid gameplay going. Around the same time I got married, moved into an apartment with my new wife, and was getting very tired of the kinds of silliness going on at my painting job.

It seemed the more time I spent on the PC, interacting with people online, soaking up all of the Quake/PC knowledge I could find, the more lame my regular job that consumed 8 hours of my day felt. Each day was a struggle to get out of bed and pull myself away from what I was loving, only to go into work to do something that began to feel less and less valuable. I was also working for some incredibly wealthy people and it quickly became apparent to me that there were socio-economic/class issues I did not like about being a decorative artist/painter.

It took about two years for me to get the nerve to post one of my levels online. By this point Quake 2 was out, dial-up was in, and I was knee deep in the Quake 2 engine and assets. All the while feeling like the heart and soul of Quake 2 was missing and lacked something visceral and intense. I released my first Quake 2 level titled Retaliatory Strike, and expected harsh criticism. I was scared to death people would not like it, and feared the negative feedback I would receive.

My fears proved to be unfounded and I was pretty happy with of all the kind words being said about my work. I found the positive reviews and emails people sent telling me how much they liked my maps to be incredibly rewarding. In fact it was far more rewarding than any paycheck after a week of filling nail holes and caulking cracks could be. This eventually led me to lose my fear entirely and polish up some of my Quake maps/ideas and put them out there for people to play. All the while slowly realizing that if I could, I would spend all day and all night working on my maps. I had to tear myself away in order to make sure to give my wife, friends and family the attention they deserved.

I still recall the day my wife came home and said, we have saved up a good chunk of money, maybe we should start looking at houses. I thought this could be cool, lots of our friends have houses and they seem happy with them. I of course had no idea how much a house actually costs. We decided to make an appointment with the bank and get an idea if we were even able to get a home loan. Up until this point the only loan I had was for my Pickup Truck, and I think I had some unrealistic expectations of how long it takes to pay for a house.

The day we went to the bank was an especially nasty afternoon at work. When the bank representative started talking about 30 year mortgage rates I had the realization then and there; I could not possibly spend the rest of my life doing something I woke up and dreaded going to do each and every day.

It still had not occurred to me I could actually do games for a living...I barely graduated high school. Even my guidance counselor had assured me, if I didn't get good grades I wouldn't be able to do anything with myself. I was destined to the trades, or a fast food restaurant. I just was not smart enough.

Thankfully my wife did not share my sentiments. She encouraged me to look into schools, people make video games for a living and why on earth couldn't I do the same thing? Thousands of people were downloading my Quake levels and a few of them were sending me emails to say how much they liked them. Looking back it's a good thing she did not see the limitations I saw for myself, my wife picked me up and pushed me to try something I did not think I was capable of. Months later I was enrolled part time in a local community college, and began a long 6 year journey to my eventual 5 year BFA from the Cleveland Institute of Art, which eventually led me to a Cleveland area post production facility, EA Chicago, Raven, and finally Epic Games.

So Quake really did change my life. Quake was an approachable piece of technology, and the tools were simple enough for an artist with enough persistence to struggle through and learn the ropes. The fact that the engine and resources were open gave me the ability to see assets created by the original creators, as well as all of the additional content being churned out online. I could experiment and bring my own ideas to life, and there was an entire sub-culture and community online which supported and surrounded this pursuit.

Quake really did help change my life. It taught me if I wanted something bad enough I had to get out there and do it myself. It taught me how to type so I could communicate online, it taught me how to seek out information and familiarized me with the inner workings of a PC. Most of all I learned not to give up, to push myself to learn more and more each day, and that I was smart enough to do something other than paint or work at a hardware store.

Most important of all, my wife taught me that I was smart enough, and that the limits I saw for myself based on what I had been told when I was younger simply were not true. My time in the trades as a painter taught me that hard work, a strong work ethic, and persistence are sometimes worth more than any 101 course at an early age, when I may not have known exactly what it was I wanted to be doing after college.

My counselor had been mistaken; a few years later I would be paying for an apartment in Chicago and a mortgage in Cleveland. I was not homeless after all. At that time I had two homes and was flying back and forth between them on weekends.

My path to being an Effects Artist at Epic Games is by no means a straight one. I originally wanted to be a level designer or environment artist. There were plenty of challenges along the way, but I think it is obvious to me now: if I can make it from being a directionless, lost, obnoxious, nerdy artist in high school who barely graduated to working for Epic Games, anyone with enough talent, effort, and motivation can achieve their goals and dreams.

At the end of all this, it wasn't just Quake that really changed my life, my wife did. Quake gave me a direction to point in, and my wife picked me up and pushed me forward when I thought the road was closed to me.

In other words, to anyone wishing to work in the game industry, if I can do this, with enough effort and persistence so can you.


----------



## sygeek (Oct 1, 2011)

*What are the chances of your coming into being?​*


Spoiler



A little while ago I had the privilege of attending TEDx San Francisco, organized by the incomparable Christine Mason McCaull.  One of the talks was by Mel Robbins, a riotously funny self-help author and life coach with a syndicated radio show.  In it, she mentioned that scientists calculate the probability of your existing as you, today, at about one in 400 trillion (4×10^14).

“That’s a pretty big number,” I thought to myself.  If I had 400 trillion pennies to my name, I could probably retire.

Previously, I had heard the Buddhist version of the probability of ‘this precious incarnation’.  Imagine there was one life preserver thrown somewhere in some ocean and there is exactly one turtle in all of these oceans, swimming underwater somewhere.  The probability that you came about and exist today is the same as that turtle sticking its head out of the water — into the middle of that life preserver.  On one try.

So I got curious: are either of these numbers correct?  Which one’s bigger?  Are they gross exaggerations?  Or is it possible that they are underestimates of the true number?

First, let us figure out the probability of one turtle sticking its head out of the one life preserver we toss out somewhere in the ocean.  That’s a pretty straightforward calculation.

According to WolframAlpha, the total area of oceans in the world is 3.409×10^8 square kilometers, or 340,900,000 km² (131.6 million square miles, for those benighted souls who still cling to user-hostile British measures).  Let's say a life preserver's hole is about 80 cm in diameter, which would make the area inside

3.14 × (0.4)² = 0.5024 m²

which we will conveniently round to 0.5 square meters.  If one square kilometer is a million square meters, then the probability of Mr Turtle sticking his head out of that life preserver is simply the area inside the life preserver divided by the total area of all oceans, or

0.5 m² / (3.409×10^8 × 10^6 m²) = 1.47×10^-15

or one in 6.82×10^14, or about 1 in 700 trillion.
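The turtle arithmetic above is easy to reproduce; here is a minimal Python sketch using the article's own figures (3.409×10^8 km² of ocean, an 80 cm life-preserver hole):

```python
import math

# Area figures from the article: oceans per WolframAlpha, an 80 cm hole.
ocean_area_m2 = 3.409e8 * 1e6          # 3.409e8 km^2 converted to m^2
preserver_area_m2 = math.pi * 0.4**2   # ~0.503 m^2 for a 40 cm radius

p = preserver_area_m2 / ocean_area_m2  # chance the turtle surfaces inside
print(f"1 in {1/p:.3g}")               # 1 in 6.78e+14, i.e. ~700 trillion
```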

One in 400 trillion vs one in 700 trillion?  I gotta say, the two numbers are pretty darn close, for such a farfetched notion from two completely different sources: old-time Buddhist scholars and present-day scientists.  They agree to within a factor of two!

So to the second question: how accurate is this number?  What would we come up with ourselves starting with first principles, making some reasonable assumptions and putting them all together?  That is, instead of making one big hand-waving gesture and pronouncing, “The answer is five hundred bazillion squintillion,” we make a series of sequentially-reasoned, smaller hand-waving gestures so as to make it all seem scientific. (This is also known as ‘consulting’ – especially if you show it all in a PowerPoint deck.)

Oh, this is going to be fun.

First, let’s talk about the probability of your parents meeting.  If they met one new person of the opposite sex every day from age 15 to 40, that would be about 10,000 people.  Let’s confine the pool of possible people they could meet to 1/10 of the world’s population twenty years ago (one tenth of 4 billion = 400 million) so it considers not just the population of the US but that of the places they could have visited.  Half of those people, or 200 million, will be of the opposite sex.  So let’s say the probability of your parents meeting, ever, is 10,000 divided by 200 million:

10^4 / (2×10^8) = 5×10^-5, or one in 20,000.

*Probability of boy meeting girl: 1 in 20,000.*

So far, so unlikely.

Now let’s say the chances of them actually talking to one another is one in 10.  And the chances of that turning into another meeting is about one in 10 also.  And the chances of that turning into a long-term relationship is also one in 10.  And the chances of that lasting long enough to result in offspring is one in 2.  So the probability of your parents’ chance meeting resulting in kids is about 1 in 2000.

*Probability of same boy knocking up same girl: 1 in 2000.*

So the combined probability is already around 1 in 40 million — long but not insurmountable odds.  Now things start getting interesting.  Why?  Because we’re about to deal with eggs and sperm, which come in large numbers.
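Chaining those guessed odds together is simple to script; a quick sketch (the 10,000-people pool and the 1-in-10 and 1-in-2 steps are the article's assumptions, not measured values):

```python
# Multiply the article's guessed odds to reach the "1 in 40 million" figure.
p_meet = 10_000 / 200_000_000            # parents ever meeting: 1 in 20,000
p_talk = p_again = p_relationship = 0.1  # each follow-on step: 1 in 10
p_kids = 0.5                             # relationship lasting to offspring

p_parents = p_meet * p_talk * p_again * p_relationship * p_kids
print(f"1 in {1/p_parents:,.0f}")        # 1 in 40,000,000
```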

Each sperm and each egg is genetically unique because of the process of meiosis; you are the result of the fusion of one particular egg with one particular sperm.  A fertile woman has 100,000 viable eggs on average.  A man will produce about 12 trillion sperm over the course of his reproductive lifetime.  Let’s say a third of those (4 trillion) are relevant to our calculation, since the sperm created after your mom hits menopause don’t count.  So the probability of that one sperm with half your name on it hitting that one egg with the other half of your name on it is

1/[(100,000) × (4 trillion)] = 1/[(10^5) × (4×10^12)] = 1 in 4×10^17, or one in 400 quadrillion.

*Probability of right sperm meeting right egg: 1 in 400 quadrillion.*
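The gamete arithmetic is a one-liner (100,000 viable eggs and 4 trillion relevant sperm are the article's estimates):

```python
# One particular egg meeting one particular sperm, per the article's estimates.
p_gametes = 1 / (100_000 * 4_000_000_000_000)
print(f"1 in {1/p_gametes:.0e}")   # 1 in 4e+17, i.e. 400 quadrillion
```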

To that, we could add the probability that the one sperm and the one egg met one another because she wasn’t in the mood, but let’s not split hairs here.  The numbers are getting plenty huge as it is.

But we’re just getting started.

Because the existence of you here now on planet earth presupposes another supremely unlikely and utterly undeniable chain of events.  Namely, that every one of your ancestors lived to reproductive age – going all the way back not just to the first Homo sapiens, first Homo erectus and Homo habilis, but all the way back to the first single-celled organism.  You are a representative of an unbroken lineage of life going back 4 billion years.

Let’s not get carried away here; we’ll just deal with the human lineage.  Say humans or humanoids have been around for about 3 million years, and that a generation is about 20 years.  That’s 150,000 generations.  Say that over the course of all human existence, the likelihood of any one human offspring to survive childhood and live to reproductive age and have at least one kid is 50:50 – 1 in 2.  Then what would be the chance of your particular lineage to have remained unbroken for 150,000 generations?

Well then, that would be one in 2^150,000, which is about 1 in 10^45,000 – a number so staggeringly large that my head hurts just writing it down. That number is not just larger than all of the particles in the universe – it’s larger than all the particles in the universe if each particle were itself a universe.

*Probability of every one of your ancestors reproducing successfully: 1 in 10^45,000*
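Numbers like 2^150,000 are far too large to print, but logarithms make the claim easy to verify; a quick check using the article's 150,000 generations:

```python
import math

# 2^150,000 has about 150,000 * log10(2) decimal digits.
exponent = 150_000 * math.log10(2)
print(round(exponent))   # 45154 -- so 1 in 2^150,000 is roughly 1 in 10^45,000
```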

But let’s think about this some more.  Remember the sperm-meeting-egg argument for the creation of you, since each gamete is unique?  Well, the right sperm also had to meet the right egg to create your grandparents.  Otherwise they’d be different people, and so would their children, who would then have had children who were similar to you but not quite you.  This is also true of your grandparents’ parents, and their grandparents, and so on till the beginning of time.  If even once the wrong sperm met the wrong egg, you would not be sitting here noodling online reading fascinating articles like this one.  It would be your cousin Jethro, and you never really liked him anyway.

That means in every step of your lineage, the probability of the right sperm meeting the right egg such that the exact right ancestor would be created that would end up creating you is one in 1200 trillion, which we’ll round down to 1000 trillion, or one quadrillion.

So now we must account for that for 150,000 generations by raising 400 quadrillion to the 150,000th power:

[4×10^17]^150,000 ≈ 10^2,640,000

That’s a one followed by 2,640,000 zeroes, which would fill 11 volumes of a book the size of mine with zeroes.

To get the final answer, technically we need to multiply that by the 10^45,000, 2,000 and 20,000 up there, but those numbers are so shrimpy in comparison that it almost doesn’t matter.  For the sake of completeness:

(10^2,640,000) × (10^45,000) × (2,000) × (20,000) = 4×10^2,685,007 ≈ 10^2,685,000

*Probability of your existing at all: 1 in 10^2,685,000*
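The final multiplication is again easiest to check in log space, since none of these numbers fit in a float; a short sketch using the article's four factors:

```python
import math

# Add the base-10 exponents instead of multiplying the numbers themselves.
total_exponent = (2_640_000              # lineage of right-sperm-right-egg
                  + 45_000               # unbroken-lineage survival odds
                  + math.log10(2_000)    # chance meeting turning into kids
                  + math.log10(20_000))  # parents ever meeting at all
print(round(total_exponent))  # 2685008 -- i.e. about 4 x 10^2,685,007
```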

As a comparison, the number of atoms in the body of an average male (80 kg, 175 lb) is 10^27.  The number of atoms making up the earth is about 10^50.  The number of atoms in the known universe is estimated at 10^80.

So what’s the probability of your existing?  It’s the probability of 2 million people getting together – about the population of San Diego – each to play a game of dice with trillion-sided dice. They each roll the dice, and they all come up the exact same number – say, 550,343,279,001.

A miracle is an event so unlikely as to be almost impossible.  By that definition, I’ve just shown that you are a miracle.

Now go forth and feel and act like the miracle that you are.

Think about it,

Ali B

_Thanks for visiting! You can find more of my writing here and here. I also wrote a book on how smart women can find more love, which turns out to be the highest-rated of its kind on Amazon (4.9/5 stars). The book on how smart men can be more successful with women is also alright.

PS: Update 9/26/11: To all you smartypants out there who just can’t wait to tell me “the probability of existing of something that exists is 100%” and “this is all just hand-waving” — yes, Einstein, I know, and you’re totally missing the point.  The probability of sentient life is not something that can be measured accurately, and hundreds of steps have been deleted for simplicity.  It’s all an exercise to get you thinking, but some of you are so damn smart and obsessed with being right that you’ve lost the mental capacity to wonder and instead harp on the numerical accuracy of the calculation.  And no matter how you slice it, it’s pretty remarkable that you and I, self-absorbed scallywags that we are, stand at the end of an unbroken chain of life going all the way back to the primordial slime.  That’s the point.  Now if you have something interesting to say, I’ll approve the comment, otherwise into the slag-heap of trolls it goes._


----------



## nisargshah95 (Oct 2, 2011)

sygeek said:


> *How Quake changed my life forever.​*
> 
> 
> Spoiler
> ...





sygeek said:


> *What are the chances of your coming into being?​*
> 
> 
> Spoiler
> ...


The articles are really good dude. Don't stop posting them!


----------



## Nipun (Oct 2, 2011)

nisargshah95 said:


> The articles are really good dude. Don't stop posting them!




The "Quake" article was best of all... I don't have enough time to read all of them, but I try to read them whenever I get time... keep it up, sygeek!


----------



## sygeek (Oct 3, 2011)

*The Humble Programmer​*By Edsger W. Dijkstra​


Spoiler



As a result of a long sequence of coincidences I entered the programming profession officially on the first spring morning of 1952 and as far as I have been able to trace, I was the first Dutchman to do so in my country. In retrospect the most amazing thing was the slowness with which, at least in my part of the world, the programming profession emerged, a slowness which is now hard to believe. But I am grateful for two vivid recollections from that period that establish that slowness beyond any doubt.

After having programmed for some three years, I had a discussion with A. van Wijngaarden, who was then my boss at the Mathematical Centre in Amsterdam, a discussion for which I shall remain grateful to him as long as I live. The point was that I was supposed to study theoretical physics at the University of Leiden simultaneously, and as I found the two activities harder and harder to combine, I had to make up my mind, either to stop programming and become a real, respectable theoretical physicist, or to carry my study of physics to a formal completion only, with a minimum of effort, and to become....., yes what? A programmer? But was that a respectable profession? For after all, what was programming? Where was the sound body of knowledge that could support it as an intellectually respectable discipline? I remember quite vividly how I envied my hardware colleagues, who, when asked about their professional competence, could at least point out that they knew everything about vacuum tubes, amplifiers and the rest, whereas I felt that, when faced with that question, I would stand empty-handed. Full of misgivings I knocked on van Wijngaarden's office door, asking him whether I could "speak to him for a moment"; when I left his office a number of hours later, I was another person. For after having listened to my problems patiently, he agreed that up till that moment there was not much of a programming discipline, but then he went on to explain quietly that automatic computers were here to stay, that we were just at the beginning and could not I be one of the persons called to make programming a respectable discipline in the years to come? This was a turning point in my life and I completed my study of physics formally as quickly as I could. One moral of the above story is, of course, that we must be very careful when we give advice to younger people; sometimes they follow it!

Another two years later, in 1957, I married and Dutch marriage rites require you to state your profession and I stated that I was a programmer. But the municipal authorities of the town of Amsterdam did not accept it on the grounds that there was no such profession. And, believe it or not, but under the heading "profession" my marriage act shows the ridiculous entry "theoretical physicist"!

So much for the slowness with which I saw the programming profession emerge in my own country. Since then I have seen more of the world, and it is my general impression that in other countries, apart from a possible shift of dates, the growth pattern has been very much the same.

Let me try to capture the situation in those old days in a little bit more detail, in the hope of getting a better understanding of the situation today. While we pursue our analysis, we shall see how many common misunderstandings about the true nature of the programming task can be traced back to that now distant past.

The first automatic electronic computers were all unique, single-copy machines and they were all to be found in an environment with the exciting flavour of an experimental laboratory. Once the vision of the automatic computer was there, its realisation was a tremendous challenge to the electronic technology then available, and one thing is certain: we cannot deny the courage of the groups that decided to try and build such a fantastic piece of equipment. For fantastic pieces of equipment they were: in retrospect one can only wonder that those first machines worked at all, at least sometimes. The overwhelming problem was to get and keep the machine in working order. The preoccupation with the physical aspects of automatic computing is still reflected in the names of the older scientific societies in the field, such as the Association for Computing Machinery or the British Computer Society, names in which explicit reference is made to the physical equipment.

What about the poor programmer? Well, to tell the honest truth: he was hardly noticed. For one thing, the first machines were so bulky that you could hardly move them and besides that, they required such extensive maintenance that it was quite natural that the place where people tried to use the machine was the same laboratory where the machine had been developed. Secondly, his somewhat invisible work was without any glamour: you could show the machine to visitors and that was several orders of magnitude more spectacular than some sheets of coding. But most important of all, the programmer himself had a very modest view of his own work: his work derived all its significance from the existence of that wonderful machine. Because that was a unique machine, he knew only too well that his programs had only local significance and also, because it was patently obvious that this machine would have a limited lifetime, he knew that very little of his work would have a lasting value. Finally, there is yet another circumstance that had a profound influence on the programmer's attitude to his work: on the one hand, besides being unreliable, his machine was usually too slow and its memory was usually too small, i.e. he was faced with a pinching shoe, while on the other hand its usually somewhat queer order code would cater for the most unexpected constructions. And in those days many a clever programmer derived an immense intellectual satisfaction from the cunning tricks by means of which he contrived to squeeze the impossible into the constraints of his equipment.

Two opinions about programming date from those days. I mention them now, I shall return to them later. The one opinion was that a really competent programmer should be puzzle-minded and very fond of clever tricks; the other opinion was that programming was nothing more than optimizing the efficiency of the computational process, in one direction or the other.

The latter opinion was the result of the frequent circumstance that, indeed, the available equipment was a painfully pinching shoe, and in those days one often encountered the naive expectation that, once more powerful machines were available, programming would no longer be a problem, for then the struggle to push the machine to its limits would no longer be necessary and that was all what programming was about, wasn't it? But in the next decades something completely different happened: more powerful machines became available, not just an order of magnitude more powerful, even several orders of magnitude more powerful. But instead of finding ourselves in the state of eternal bliss of all programming problems solved, we found ourselves up to our necks in the software crisis! How come?

There is a minor cause: in one or two respects modern machinery is basically more difficult to handle than the old machinery. Firstly, we have got the I/O interrupts, occurring at unpredictable and irreproducible moments; compared with the old sequential machine that pretended to be a fully deterministic automaton, this has been a dramatic change and many a systems programmer's grey hair bears witness to the fact that we should not talk lightly about the logical problems created by that feature. Secondly, we have got machines equipped with multi-level stores, presenting us problems of management strategy that, in spite of the extensive literature on the subject, still remain rather elusive. So much for the added complication due to structural changes of the actual machines.

But I called this a minor cause; the major cause is... that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem. In this sense the electronic industry has not solved a single problem, it has only created them, it has created the problem of using its products. To put it in another way: as the power of available machines grew by a factor of more than a thousand, society's ambition to apply these machines grew in proportion, and it was the poor programmer who found his job in this exploded field of tension between ends and means. The increased power of the hardware, together with the perhaps even more dramatic increase in its reliability, made solutions feasible that the programmer had not dared to dream about a few years before. And now, a few years later, he had to dream about them and, even worse, he had to transform such dreams into reality! Is it a wonder that we found ourselves in a software crisis? No, certainly not, and as you may guess, it was even predicted well in advance; but the trouble with minor prophets, of course, is that it is only five years later that you really know that they had been right.

Then, in the mid-sixties, something terrible happened: the computers of the so-called third generation made their appearance. The official literature tells us that their price/performance ratio has been one of the major design objectives. But if you take as "performance" the duty cycle of the machine's various components, little will prevent you from ending up with a design in which the major part of your performance goal is reached by internal housekeeping activities of doubtful necessity. And if your definition of price is the price to be paid for the hardware, little will prevent you from ending up with a design that is terribly hard to program for: for instance the order code might be such as to enforce, either upon the programmer or upon the system, early binding decisions presenting conflicts that really cannot be resolved. And to a large extent these unpleasant possibilities seem to have become reality.

When these machines were announced and their functional specifications became known, quite a few among us must have become quite miserable; at least I was. It was only reasonable to expect that such machines would flood the computing community, and it was therefore all the more important that their design should be as sound as possible. But the design embodied such serious flaws that I felt that with a single stroke the progress of computing science had been retarded by at least ten years: it was then that I had the blackest week in the whole of my professional life. Perhaps the most saddening thing now is that, even after all those years of frustrating experience, still so many people honestly believe that some law of nature tells us that machines have to be that way. They silence their doubts by observing how many of these machines have been sold, and derive from that observation the false sense of security that, after all, the design cannot have been that bad. But upon closer inspection, that line of defense has the same convincing strength as the argument that cigarette smoking must be healthy because so many people do it.

It is in this connection that I regret that it is not customary for scientific journals in the computing area to publish reviews of newly announced computers in much the same way as we review scientific publications: to review machines would be at least as important. And here I have a confession to make: in the early sixties I wrote such a review with the intention of submitting it to the CACM, but in spite of the fact that the few colleagues to whom the text was sent for their advice, urged me all to do so, I did not dare to do it, fearing that the difficulties either for myself or for the editorial board would prove to be too great. This suppression was an act of cowardice on my side for which I blame myself more and more. The difficulties I foresaw were a consequence of the absence of generally accepted criteria, and although I was convinced of the validity of the criteria I had chosen to apply, I feared that my review would be refused or discarded as "a matter of personal taste". I still think that such reviews would be extremely useful and I am longing to see them appear, for their accepted appearance would be a sure sign of maturity of the computing community.

The reason that I have paid the above attention to the hardware scene is because I have the feeling that one of the most important aspects of any computing tool is its influence on the thinking habits of those that try to use it, and because I have reasons to believe that that influence is many times stronger than is commonly assumed. Let us now switch our attention to the software scene.

Here the diversity has been so large that I must confine myself to a few stepping stones. I am painfully aware of the arbitrariness of my choice and I beg you not to draw any conclusions with regard to my appreciation of the many efforts that will remain unmentioned.

In the beginning there was the EDSAC in Cambridge, England, and I think it quite impressive that right from the start the notion of a subroutine library played a central role in the design of that machine and of the way in which it should be used. It is now nearly 25 years later and the computing scene has changed dramatically, but the notion of basic software is still with us, and the notion of the closed subroutine is still one of the key concepts in programming. We should recognise the closed subroutine as one of the greatest software inventions; it has survived three generations of computers and it will survive a few more, because it caters for the implementation of one of our basic patterns of abstraction. Regrettably enough, its importance has been underestimated in the design of the third generation computers, in which the great number of explicitly named registers of the arithmetic unit implies a large overhead on the subroutine mechanism. But even that did not kill the concept of the subroutine, and we can only pray that the mutation won't prove to be hereditary.

The second major development on the software scene that I would like to mention is the birth of FORTRAN. At that time this was a project of great temerity and the people responsible for it deserve our great admiration. It would be absolutely unfair to blame them for shortcomings that only became apparent after a decade or so of extensive usage: groups with a successful look-ahead of ten years are quite rare! In retrospect we must rate FORTRAN as a successful coding technique, but with very few effective aids to conception, aids which are now so urgently needed that time has come to consider it out of date. The sooner we can forget that FORTRAN has ever existed, the better, for as a vehicle of thought it is no longer adequate: it wastes our brainpower, is too risky and therefore too expensive to use. FORTRAN's tragic fate has been its wide acceptance, mentally chaining thousands and thousands of programmers to our past mistakes. I pray daily that more of my fellow-programmers may find the means of freeing themselves from the curse of compatibility.

The third project I would not like to leave unmentioned is LISP, a fascinating enterprise of a completely different nature. With a few very basic principles at its foundation, it has shown a remarkable stability. Besides that, LISP has been the carrier for a considerable number of in a sense our most sophisticated computer applications. LISP has jokingly been described as "the most intelligent way to misuse a computer". I think that description a great compliment because it transmits the full flavour of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts.

The fourth project to be mentioned is ALGOL 60. While up to the present day FORTRAN programmers still tend to understand their programming language in terms of the specific implementation they are working with —hence the prevalence of octal and hexadecimal dumps—, while the definition of LISP is still a curious mixture of what the language means and how the mechanism works, the famous Report on the Algorithmic Language ALGOL 60 is the fruit of a genuine effort to carry abstraction a vital step further and to define a programming language in an implementation-independent way. One could argue that in this respect its authors have been so successful that they have created serious doubts as to whether it could be implemented at all! The report gloriously demonstrated the power of the formal method BNF, now fairly well known as Backus-Naur-Form, and the power of carefully phrased English, at least when used by someone as brilliant as Peter Naur. I think that it is fair to say that only very few documents as short as this have had an equally profound influence on the computing community. The ease with which in later years the names ALGOL and ALGOL-like have been used, as an unprotected trade mark, to lend some of its glory to a number of sometimes hardly related younger projects, is a somewhat shocking compliment to its standing. The strength of BNF as a defining device is responsible for what I regard as one of the weaknesses of the language: an over-elaborate and not too systematic syntax could now be crammed into the confines of very few pages. With a device as powerful as BNF, the Report on the Algorithmic Language ALGOL 60 should have been much shorter. Besides that I am getting very doubtful about ALGOL 60's parameter mechanism: it allows the programmer so much combinatorial freedom, that its confident use requires a strong discipline from the programmer. Besides being expensive to implement it seems dangerous to use.
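For readers who have never met BNF: the ALGOL 60 Report defines unsigned integers with just two productions, and each production maps one-for-one onto a recognising function, which is the idea that later grew into the syntax-directed compilers mentioned further on. A small sketch (the productions follow the Report; the Python is my illustration, not part of the lecture):

```python
# BNF productions for unsigned integers, after the ALGOL 60 Report:
#
#   <digit>            ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
#   <unsigned integer> ::= <digit> | <unsigned integer> <digit>
#
# Each production becomes one recognising function.

def is_digit(s: str) -> bool:
    # <digit> ::= 0 | 1 | ... | 9
    return len(s) == 1 and s in "0123456789"

def is_unsigned_integer(s: str) -> bool:
    # <unsigned integer> ::= <digit> | <unsigned integer> <digit>
    if is_digit(s):
        return True
    return len(s) > 1 and is_unsigned_integer(s[:-1]) and is_digit(s[-1])

print(is_unsigned_integer("1972"))   # True
print(is_unsigned_integer("19x72"))  # False
```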

Finally, although the subject is not a pleasant one, I must mention PL/1, a programming language for which the defining documentation is of a frightening size and complexity. Using PL/1 must be like flying a plane with 7000 buttons, switches and handles to manipulate in the cockpit. I absolutely fail to see how we can keep our growing programs firmly within our intellectual grip when by its sheer baroqueness the programming language —our basic tool, mind you!— already escapes our intellectual control. And if I have to describe the influence PL/1 can have on its users, the closest metaphor that comes to my mind is that of a drug. I remember from a symposium on higher level programming language a lecture given in defense of PL/1 by a man who described himself as one of its devoted users. But within a one-hour lecture in praise of PL/1, he managed to ask for the addition of about fifty new "features", little supposing that the main source of his problems could very well be that it contained already far too many "features". The speaker displayed all the depressing symptoms of addiction, reduced as he was to the state of mental stagnation in which he could only ask for more, more, more... When FORTRAN has been called an infantile disorder, full PL/1, with its growth characteristics of a dangerous tumor, could turn out to be a fatal disease.

So much for the past. But there is no point in making mistakes unless thereafter we are able to learn from them. As a matter of fact, I think that we have learned so much, that within a few years programming can be an activity vastly different from what it has been up till now, so different that we had better prepare ourselves for the shock. Let me sketch for you one of the possible futures. At first sight, this vision of programming in perhaps already the near future may strike you as utterly fantastic. Let me therefore also add the considerations that might lead one to the conclusion that this vision could be a very real possibility.

The vision is that, well before the seventies have run to completion, we shall be able to design and implement the kind of systems that are now straining our programming ability, at the expense of only a few percent in man-years of what they cost us now, and that besides that, these systems will be virtually free of bugs. These two improvements go hand in hand. In the latter respect software seems to be different from many other products, where as a rule a higher quality implies a higher price. Those who want really reliable software will discover that they must find means of avoiding the majority of bugs to start with, and as a result the programming process will become cheaper. If you want more effective programmers, you will discover that they should not waste their time debugging, they should not introduce the bugs to start with. In other words: both goals point to the same change.

Such a drastic change in such a short period of time would be a revolution, and to all persons that base their expectations for the future on smooth extrapolation of the recent past —appealing to some unwritten laws of social and cultural inertia— the chance that this drastic change will take place must seem negligible. But we all know that sometimes revolutions do take place! And what are the chances for this one?

There seem to be three major conditions that must be fulfilled. The world at large must recognize the need for the change; secondly the economic need for it must be sufficiently strong; and, thirdly, the change must be technically feasible. Let me discuss these three conditions in the above order.

With respect to the recognition of the need for greater reliability of software, I expect no disagreement anymore. Only a few years ago this was different: to talk about a software crisis was blasphemy. The turning point was the Conference on Software Engineering in Garmisch, October 1968, a conference that created a sensation as there occurred the first open admission of the software crisis. And by now it is generally recognized that the design of any large sophisticated system is going to be a very difficult job, and whenever one meets people responsible for such undertakings, one finds them very much concerned about the reliability issue, and rightly so. In short, our first condition seems to be satisfied.

Now for the economic need. Nowadays one often encounters the opinion that in the sixties programming has been an overpaid profession, and that in the coming years programmer salaries may be expected to go down. Usually this opinion is expressed in connection with the recession, but it could be a symptom of something different and quite healthy, viz. that perhaps the programmers of the past decade have not done so good a job as they should have done. Society is getting dissatisfied with the performance of programmers and of their products. But there is another factor of much greater weight. In the present situation it is quite usual that for a specific system, the price to be paid for the development of the software is of the same order of magnitude as the price of the hardware needed, and society more or less accepts that. But hardware manufacturers tell us that in the next decade hardware prices can be expected to drop with a factor of ten. If software development were to continue to be the same clumsy and expensive process as it is now, things would get completely out of balance. You cannot expect society to accept this, and therefore we must learn to program an order of magnitude more effectively. To put it in another way: as long as machines were the largest item on the budget, the programming profession could get away with its clumsy techniques, but that umbrella will fold rapidly. In short, also our second condition seems to be satisfied.

And now the third condition: is it technically feasible? I think it might and I shall give you six arguments in support of that opinion.

A study of program structure had revealed that programs —even alternative programs for the same task and with the same mathematical content— can differ tremendously in their intellectual manageability. A number of rules have been discovered, violation of which will either seriously impair or totally destroy the intellectual manageability of the program. These rules are of two kinds. Those of the first kind are easily imposed mechanically, viz. by a suitably chosen programming language. Examples are the exclusion of goto-statements and of procedures with more than one output parameter. For those of the second kind I at least —but that may be due to lack of competence on my side— see no way of imposing them mechanically, as it seems to need some sort of automatic theorem prover for which I have no existence proof. Therefore, for the time being and perhaps forever, the rules of the second kind present themselves as elements of discipline required from the programmer. Some of the rules I have in mind are so clear that they can be taught and that there never needs to be an argument as to whether a given program violates them or not. Examples are the requirements that no loop should be written down without providing a proof for termination nor without stating the relation whose invariance will not be destroyed by the execution of the repeatable statement.
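The rules of the second kind read abstractly, but they are concrete in practice. A minimal sketch of the discipline (my illustration, not an example from the lecture): a loop written down together with the relation it keeps invariant and an argument for its termination.

```python
# A loop accompanied by its invariant and a termination argument, in the
# spirit of the rules described above.

def divide(a: int, b: int) -> tuple:
    """Return (q, r) with a == q*b + r and 0 <= r < b, for a >= 0 and b > 0."""
    assert a >= 0 and b > 0
    q, r = 0, a
    # Invariant: a == q*b + r and r >= 0 (trivially true at entry).
    # Termination: r decreases by b > 0 on every iteration and never goes
    # negative, so the loop body runs at most a // b times.
    while r >= b:
        q, r = q + 1, r - b
        assert a == q * b + r and r >= 0   # the invariant survives the step
    # The guard has just failed, so r < b; together with the invariant this
    # is exactly the defining property of quotient and remainder.
    return q, r

print(divide(17, 5))  # (3, 2)
```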

I now suggest that we confine ourselves to the design and implementation of intellectually manageable programs. If someone fears that this restriction is so severe that we cannot live with it, I can reassure him: the class of intellectually manageable programs is still sufficiently rich to contain many very realistic programs for any problem capable of algorithmic solution. We must not forget that it is not our business to make programs, it is our business to design classes of computations that will display a desired behaviour. The suggestion of confining ourselves to intellectually manageable programs is the basis for the first two of my announced six arguments.

Argument one is that, as the programmer only needs to consider intellectually manageable programs, the alternatives he is choosing between are much, much easier to cope with.

Argument two is that, as soon as we have decided to restrict ourselves to the subset of the intellectually manageable programs, we have achieved, once and for all, a drastic reduction of the solution space to be considered. And this argument is distinct from argument one.

Argument three is based on the constructive approach to the problem of program correctness. Today a usual technique is to make a program and then to test it. But: program testing can be a very effective way to show the presence of bugs, but is hopelessly inadequate for showing their absence. The only effective way to raise the confidence level of a program significantly is to give a convincing proof of its correctness. But one should not first make the program and then prove its correctness, because then the requirement of providing the proof would only increase the poor programmer's burden. On the contrary: the programmer should let correctness proof and program grow hand in hand. Argument three is essentially based on the following observation. If one first asks oneself what the structure of a convincing proof would be and, having found this, then constructs a program satisfying this proof's requirements, then these correctness concerns turn out to be a very effective heuristic guidance. By definition this approach is only applicable when we restrict ourselves to intellectually manageable programs, but it provides us with effective means for finding a satisfactory one among these.
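A sketch of proof and program growing hand in hand (my example, not Dijkstra's own): choose the invariant first, then let it dictate the code. Here the invariant p * b**e == x**n leads directly to a fast exponentiation routine.

```python
# Goal:       p == x**n          for n >= 0
# Invariant:  p * b**e == x**n   -- true initially with p, b, e = 1, x, n,
#             and equivalent to the goal once e reaches 0.

def power(x: int, n: int) -> int:
    assert n >= 0
    p, b, e = 1, x, n
    while e > 0:
        if e % 2 == 1:
            p, e = p * b, e - 1      # preserves p * b**e == x**n
        else:
            b, e = b * b, e // 2     # preserves p * b**e == x**n
        assert p * b ** e == x ** n  # the invariant, checked at run time
    return p                         # e == 0, hence p == x**n

print(power(3, 10))  # 59049
```

Each branch of the loop body is justified by showing it preserves the invariant, so the correctness argument is finished the moment the code is.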

Argument four has to do with the way in which the amount of intellectual effort needed to design a program depends on the program length. It has been suggested that there is some kind of law of nature telling us that the amount of intellectual effort needed grows with the square of program length. But, thank goodness, no one has been able to prove this law. And this is because it need not be true. We all know that the only mental tool by means of which a very finite piece of reasoning can cover a myriad cases is called "abstraction"; as a result the effective exploitation of his powers of abstraction must be regarded as one of the most vital activities of a competent programmer. In this connection it might be worth-while to point out that the purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise. Of course I have tried to find a fundamental cause that would prevent our abstraction mechanisms from being sufficiently effective. But no matter how hard I tried, I did not find such a cause. As a result I tend to the assumption —up till now not disproved by experience— that by suitable application of our powers of abstraction, the intellectual effort needed to conceive or to understand a program need not grow more than proportional to program length. But a by-product of these investigations may be of much greater practical significance, and is, in fact, the basis of my fourth argument. The by-product was the identification of a number of patterns of abstraction that play a vital role in the whole process of composing programs. Enough is now known about these patterns of abstraction that you could devote a lecture to each of them. What the familiarity and conscious knowledge of these patterns of abstraction imply dawned upon me when I realized that, had they been common knowledge fifteen years ago, the step from BNF to syntax-directed compilers, for instance, could have taken a few minutes instead of a few years. Therefore I present our recent knowledge of vital abstraction patterns as the fourth argument.

Now for the fifth argument. It has to do with the influence of the tool we are trying to use upon our own thinking habits. I observe a cultural tradition, which in all probability has its roots in the Renaissance, to ignore this influence, to regard the human mind as the supreme and autonomous master of its artefacts. But if I start to analyse the thinking habits of myself and of my fellow human beings, I come, whether I like it or not, to a completely different conclusion, viz. that the tools we are trying to use and the language or notation we are using to express or record our thoughts, are the major factors determining what we can think or express at all! The analysis of the influence that programming languages have on the thinking habits of its users, and the recognition that, by now, brainpower is by far our scarcest resource, they together give us a new collection of yardsticks for comparing the relative merits of various programming languages. The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague. In the case of a well-known conversational programming language I have been told from various sides that as soon as a programming community is equipped with a terminal for it, a specific phenomenon occurs that even has a well-established name: it is called "the one-liners". It takes one of two different forms: one programmer places a one-line program on the desk of another and either he proudly tells what it does and adds the question "Can you code this in less symbols?" —as if this were of any conceptual relevance!— or he just asks "Guess what it does!". From this observation we must conclude that this language as a tool is an open invitation for clever tricks; and while exactly this may be the explanation for some of its appeal, viz. 
to those who like to show how clever they are, I am sorry, but I must regard this as one of the most damning things that can be said about a programming language. Another lesson we should have learned from the recent past is that the development of "richer" or "more powerful" programming languages was a mistake in the sense that these baroque monstrosities, these conglomerations of idiosyncrasies, are really unmanageable, both mechanically and mentally. I see a great future for very systematic and very modest programming languages. When I say "modest", I mean that, for instance, not only ALGOL 60's "for clause", but even FORTRAN's "DO loop" may find themselves thrown out as being too baroque. I have run a little programming experiment with really experienced volunteers, but something quite unintended and quite unexpected turned up. None of my volunteers found the obvious and most elegant solution. Upon closer analysis this turned out to have a common source: their notion of repetition was so tightly connected to the idea of an associated controlled variable to be stepped up, that they were mentally blocked from seeing the obvious. Their solutions were less efficient, needlessly hard to understand, and it took them a very long time to find them. It was a revealing, but also shocking experience for me. Finally, in one respect one hopes that tomorrow's programming languages will differ greatly from what we are used to now: to a much greater extent than hitherto they should invite us to reflect in the structure of what we write down all abstractions needed to cope conceptually with the complexity of what we are designing. So much for the greater adequacy of our future tools, which was the basis of the fifth argument.
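Dijkstra does not reproduce the problem he set his volunteers, so the sketch below is an illustration of my own choosing, not his actual experiment: Euclid's gcd is a repetition governed purely by a condition, with no controlled variable to step, while a programmer whose notion of repetition is welded to a stepped counter is pushed toward something needlessly contorted.

```python
def gcd(a, b):
    # Repetition driven by a condition alone: no counter is stepped,
    # and no iteration bound needs to be known in advance.
    while b != 0:
        a, b = b, a % b
    return a

def gcd_counted(a, b):
    # The "controlled variable" mindset: the same algorithm forced into
    # a counted loop, with a needless and hard-to-justify upper bound.
    for _ in range(a + b + 1):
        if b == 0:
            break
        a, b = b, a % b
    return a
```

Both versions compute the same result; the point of the contrast is that the second buries the governing condition inside machinery the problem never asked for.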

As an aside I would like to insert a warning to those who identify the difficulty of the programming task with the struggle against the inadequacies of our current tools, because they might conclude that, once our tools will be much more adequate, programming will no longer be a problem. Programming will remain very difficult, because once we have freed ourselves from the circumstantial cumbersomeness, we will find ourselves free to tackle the problems that are now well beyond our programming capacity.

You can quarrel with my sixth argument, for it is not so easy to collect experimental evidence for its support, a fact that will not prevent me from believing in its validity. Up till now I have not mentioned the word "hierarchy", but I think that it is fair to say that this is a key concept for all systems embodying a nicely factored solution. I could even go one step further and make an article of faith out of it, viz. that the only problems we can really solve in a satisfactory manner are those that finally admit a nicely factored solution. At first sight this view of human limitations may strike you as a rather depressing view of our predicament, but I don't feel it that way, on the contrary! The best way to learn to live with our limitations is to know them. By the time that we are sufficiently modest to try factored solutions only, because the other efforts escape our intellectual grip, we shall do our utmost best to avoid all those interfaces impairing our ability to factor the system in a helpful way. And I cannot but expect that this will repeatedly lead to the discovery that an initially intractable problem can be factored after all. Anyone who has seen how the majority of the troubles of the compiling phase called "code generation" can be tracked down to funny properties of the order code, will know a simple example of the kind of things I have in mind. The wider applicability of nicely factored solutions is my sixth and last argument for the technical feasibility of the revolution that might take place in the current decade.

In principle I leave it to you to decide for yourself how much weight you are going to give to my considerations, knowing only too well that I can force no one else to share my beliefs. Like each serious revolution, it will provoke violent opposition and one can ask oneself where to expect the conservative forces trying to counteract such a development. I don't expect them primarily in big business, not even in the computer business; I expect them rather in the educational institutions that provide today's training and in those conservative groups of computer users that think their old programs so important that they don't think it worth-while to rewrite and improve them. In this connection it is sad to observe that on many a university campus the choice of the central computing facility has too often been determined by the demands of a few established but expensive applications with a disregard of the question of how many thousands of "small users" that are willing to write their own programs were going to suffer from this choice. Too often, for instance, high-energy physics seems to have blackmailed the scientific community with the price of its remaining experimental equipment. The easiest answer, of course, is a flat denial of the technical feasibility, but I am afraid that you need pretty strong arguments for that. No reassurance, alas, can be obtained from the remark that the intellectual ceiling of today's average programmer will prevent the revolution from taking place: with others programming so much more effectively, he is liable to be edged out of the picture anyway.

There may also be political impediments. Even if we know how to educate tomorrow's professional programmer, it is not certain that the society we are living in will allow us to do so. The first effect of teaching a methodology —rather than disseminating knowledge— is that of enhancing the capacities of the already capable, thus magnifying the difference in intelligence. In a society in which the educational system is used as an instrument for the establishment of a homogenized culture, in which the cream is prevented from rising to the top, the education of competent programmers could be politically unpalatable.

Let me conclude. Automatic computers have now been with us for a quarter of a century. They have had a great impact on our society in their capacity of tools, but in that capacity their influence will be but a ripple on the surface of our culture, compared with the much more profound influence they will have in their capacity of intellectual challenge without precedent in the cultural history of mankind. Hierarchical systems seem to have the property that something considered as an undivided entity on one level, is considered as a composite object on the next lower level of greater detail; as a result the natural grain of space or time that is applicable at each level decreases by an order of magnitude when we shift our attention from one level to the next lower one. We understand walls in terms of bricks, bricks in terms of crystals, crystals in terms of molecules etc. As a result the number of levels that can be distinguished meaningfully in a hierarchical system is kind of proportional to the logarithm of the ratio between the largest and the smallest grain, and therefore, unless this ratio is very large, we cannot expect many levels. In computer programming our basic building block has an associated time grain of less than a microsecond, but our program may take hours of computation time. I do not know of any other technology covering a ratio of 10^10 or more: the computer, by virtue of its fantastic speed, seems to be the first to provide us with an environment where highly hierarchical artefacts are both possible and necessary. This challenge, viz. the confrontation with the programming task, is so unique that this novel experience can teach us a lot about ourselves. It should deepen our understanding of the processes of design and creation, it should give us better control over the task of organizing our thoughts. If it did not do so, to my taste we should not deserve the computer at all!
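Dijkstra's grain argument is easy to check with a back-of-the-envelope calculation. The figures below (one microsecond per basic operation, a program running for an hour) are simply the ones the paragraph suggests, not measurements:

```python
import math

# Dijkstra's estimate: the number of meaningful levels in a hierarchy
# grows like the logarithm of the ratio between the largest and the
# smallest grain of time.
smallest_grain = 1e-6    # one basic machine operation: under a microsecond
largest_grain = 3600.0   # a program that runs for an hour, in seconds

ratio = largest_grain / smallest_grain   # a few times 10^9
levels = math.log10(ratio)               # decades spanned between the grains

print(f"ratio = {ratio:.1e}, spanning about {levels:.1f} orders of magnitude")
```

With today's sub-nanosecond operations the ratio only widens, which strengthens rather than dates the point: programming spans more orders of magnitude than any other engineering discipline's artefacts.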

It has already taught us a few lessons, and the one I have chosen to stress in this talk is the following. We shall do a much better programming job, provided that we approach the task with a full appreciation of its tremendous difficulty, provided that we stick to modest and elegant programming languages, provided that we respect the intrinsic limitations of the human mind and approach the task as Very Humble Programmers.


----------



## sygeek (Oct 5, 2011)

*I Think Your App Should Be Free​*By Joey Flores​


Spoiler



*blog.earbits.com/online_radio/wp-content/uploads/2011/10/App-Pirates.png​
Well, you’ve done it.  After $15,000 invested and six months of slaving away with 3 of your hacker buddies, you’ve launched your awesome-sauce new app in the Android app store.  It is truly a thing of marvel.  If the first day’s downloads are any indicator, your $0.99 app is going to make you and your friends a cool $50,000 the first year and some straggler dollars for years to come.  You’ve got app idea #2 brewing and this is the beginning of something good.  You all toast to your hard work, stay up late watching the first day’s download totals, and dollars, add up, and go to bed exhausted and happy.

*What the hell?  That’s Our App!*

What, no champagne with breakfast?  Clearly you should be riding high on the success of your application’s immense day two downloads!  But no, you wake up to see that your numbers are flat.  You search Google for the name of your app and, lo and behold, you find an app alright… your awesome FREE app, uploaded by another user to a shady black market app store with your app name in the description, and it’s getting 100 times the downloads that your paid app got, and climbing.

Your app has been cracked and uploaded for free.  It ranks higher in Google than your app does.  The whole world is linking to it.

You contact the app store, furious.  You manage to have it taken down, but every day, every ****ING DAY, there is another cracked and free version of your app in this slimy app store.  There it is, again, available for free, and your paid app in the real retailer is a stagnant pile of code being ignored.

You are forced to play police every day.  You find the next cracked version of your app on shady site #132 and report it.  You scream to high heavens at the people from the app store.  Why can’t they do a better job of making sure copycat apps don’t make it into the store?  These thieves, err…sorry, pirates…errr…whatever, are getting all the downloads.  Nobody is buying your app…and yet, there is such clear demand.

They’re doing the best they can, they say.  Most of all, they’re complying with the law, they say.

But every day, your app is in the free store.  Poor users don’t even know they’re downloading something they’re not supposed to.  I mean, who’s to understand these unclear laws or know which sites are legal and which are not?  Pooooor users.

*Information Wants to Be Free!*

After ranting endlessly on Hacker News and the like, finally the person who keeps stealing your app posts a reply.

They think all apps should be free.

It’s not stealing, they’re just giving away *copies*.

Your code is still there for you to do with as you please.  Nobody has stolen it.  You’ve got your original and can do whatever you want with it.

You plead with them.  You spent your own money, and that of an investor, making this app.  You want and need to recoup your expenses or nobody will invest in you again.

The reply?  They don’t like your VC.

Your VC has a long history of screwing over entrepreneurs and they don’t want to see them make any money.  Only a fraction goes to you anyway.  It’s really the VC who’s losing out, and screw them.  They’ve been known to patent troll and stop innovation.  Your VC is evil.

Fine!  Maybe the VC isn’t a friend to consumers or their own portfolio, but that isn’t your fault, and this is your app!  You put everything you had into it and, look, the downloads are now in the millions.  The FREE downloads.

Isn’t it better to be known for creating a cool app that you didn’t make money from than making a few bucks and remaining obscure, they ask.

You tell them that’s your choice to make, but they don’t think it is.

They tell you your business model is broken.  You should make money some other way.  Maybe you should sell t-shirts with your company’s name on them, or put on events of some kind and charge for tickets.  That’s where the real money is.  Paid apps are a thing of the past, they say.

Look to the future.


----------



## sygeek (Oct 6, 2011)

*The Steve Jobs I Knew​*By Walt Mossberg​


Spoiler



*allthingsd.com/files/2011/10/walt_and_steve-380x253.png​
That Steve Jobs was a genius, a giant influence on multiple industries and billions of lives, has been written many times since he retired as Apple’s CEO in August. He was a historical figure on the scale of a Thomas Edison or a Henry Ford, and set the mold for many other corporate leaders in many other industries.

He did what a CEO should: Hired and inspired great people; managed for the long term, not the quarter or the short-term stock price; made big bets and took big risks. He insisted on the highest product quality and on building things to delight and empower actual users, not intermediaries like corporate IT directors or wireless carriers. And he could sell. Man, he could sell.

As he liked to say, he lived at the intersection of technology and liberal arts.

But there was a more personal side of Steve Jobs, of course, and I was fortunate enough to see a bit of it, because I spent hours in conversation with him, over the 14 years he ran Apple. Since I am a product reviewer, and not a news reporter charged with covering the company’s business, he felt a bit more comfortable talking to me about things he might not have said to most other journalists.

Even in his death, I won’t violate the privacy of those conversations. But here are a few stories that illustrate the man as I knew him.

*The Phone Calls*

I never knew Steve when he was first at Apple. I wasn’t covering technology then. And I only met him once, briefly, between his stints at the company. But, within days of his return, in 1997, he began calling my house, on Sunday nights, for four or five straight weekends. As a veteran reporter, I understood that part of this was an attempt to flatter me, to get me on the side of a teetering company whose products I had once recommended, but had, more recently, advised readers to avoid.

Yet there was more to the calls than that. They turned into marathon, 90-minute, wide-ranging, off-the-record discussions that revealed to me the stunning breadth of the man. One minute he’d be talking about sweeping ideas for the digital revolution. The next about why Apple’s current products were awful, and how a color, or angle, or curve, or icon was embarrassing.

After the second such call, my wife became annoyed at the intrusion he was making in our weekend. I didn’t.

Later, he’d sometimes call to complain about some reviews, or parts of reviews — though, in truth, I felt very comfortable recommending most of his products for the average, non-techie consumers at whom I aim my columns. (That may have been because they were his target, too.) I knew he would be complaining because he’d start every call by saying “Hi, Walt. I’m not calling to complain about today’s column, but I have some comments, if that’s okay.” I usually disagreed with his comments, but that was okay, too.

*The Product Unveilings*

Sometimes, not always, he’d invite me in to see certain big products before he unveiled them to the world. He may have done the same with other journalists. We’d meet in a giant boardroom, with just a few of his aides present, and he’d insist — even in private — on covering the new gadgets with cloths and then uncovering them like the showman he was, a gleam in his eye and passion in his voice. We’d then often sit down for a long, long discussion of the present, the future, and general industry gossip.

I still remember the day he showed me the first iPod. I was amazed that a computer company would branch off into music players, but he explained, without giving any specifics away, that he saw Apple as a digital products company, not a computer company. It was the same with the iPhone, the iTunes music store, and later the iPad, which he asked me to his home to see, because he was too ill at the time to go to the office.

*The Slides*

To my knowledge, the only tech conference Steve Jobs regularly appeared at, the only event he didn’t somehow control, was our D: All Things Digital conference, where he appeared repeatedly for unrehearsed, onstage interviews. We had one rule that really bothered him: We never allowed slides, which were his main presentation tool.

One year, about an hour before his appearance, I was informed that he was backstage preparing dozens of slides, even though I had reminded him a week earlier of the no-slides policy. I asked two of his top aides to tell him he couldn’t use the slides, but they each said they couldn’t do it, that I had to. So, I went backstage and told him the slides were out. Famously prickly, he could have stormed out, refused to go on. And he did try to argue with me. But, when I insisted, he just said “Okay.” And he went on stage without them, and was, as usual, the audience’s favorite speaker.

*Ice Water in Hell*

For our fifth *D* conference, both Steve and his longtime rival, the brilliant Bill Gates, surprisingly agreed to a joint appearance, their first extended onstage joint interview ever. But it almost got derailed.

Earlier in the day, before Gates arrived, I did a solo onstage interview with Jobs, and asked him what it was like to be a major Windows developer, since Apple’s iTunes program was by then installed on hundreds of millions of Windows PCs.

He quipped: “It’s like giving a glass of ice water to someone in Hell.” When Gates later arrived and heard about the comment, he was, naturally, enraged, because my partner Kara Swisher and I had assured both men that we hoped to keep the joint session on a high plane.

In a pre-interview meeting, Gates said to Jobs: “So I guess I’m the representative from Hell.” Jobs merely handed Gates a cold bottle of water he was carrying. The tension was broken, and the interview was a triumph, with both men acting like statesmen. When it was over, the audience rose in a standing ovation, some of them in tears.

*The Optimist*
I have no way of knowing how Steve talked to his team during Apple’s darkest days in 1997 and 1998, when the company was on the brink and he was forced to turn to archrival Microsoft for a rescue. He certainly had a nasty, mercurial side to him, and I expect that, then and later, it emerged inside the company and in dealings with partners and vendors, who tell believable stories about how hard he was to deal with.

But I can honestly say that, in my many conversations with him, the dominant tone he struck was optimism and certainty, both for Apple and for the digital revolution as a whole. Even when he was telling me about his struggles to get the music industry to let him sell digital songs, or griping about competitors, at least in my presence, his tone was always marked by patience and a long-term view. This may have been for my benefit, knowing that I was a journalist, but it was striking nonetheless.

At times in our conversations, when I would criticize the decisions of record labels or phone carriers, he’d surprise me by forcefully disagreeing, explaining how the world looked from their point of view, how hard their jobs were in a time of digital disruption, and how they would come around.

This quality was on display when Apple opened its first retail store. It happened to be in the Washington, D.C., suburbs, near my home. He conducted a press tour for journalists, as proud of the store as a father is of his first child. I commented that, surely, there’d only be a few stores, and asked what Apple knew about retailing.

He looked at me like I was crazy, said there’d be many, many stores, and that the company had spent a year tweaking the layout of the stores, using a mockup at a secret location. I teased him by asking if he, personally, despite his hard duties as CEO, had approved tiny details like the translucency of the glass and the color of the wood.

He said he had, of course.

*The Walk*

After his liver transplant, while he was recuperating at home in Palo Alto, California, Steve invited me over to catch up on industry events that had transpired during his illness. It turned into a three-hour visit, punctuated by a walk to a nearby park that he insisted we take, despite my nervousness about his frail condition.

He explained that he walked each day, and that each day he set a farther goal for himself, and that, today, the neighborhood park was his goal. As we were walking and talking, he suddenly stopped, not looking well. I begged him to return to the house, noting that I didn’t know CPR and could visualize the headline: “Helpless Reporter Lets Steve Jobs Die on the Sidewalk.”

But he laughed, and refused, and, after a pause, kept heading for the park. We sat on a bench there, talking about life, our families, and our respective illnesses (I had had a heart attack some years earlier). He lectured me about staying healthy. And then we walked back.

Steve Jobs didn’t die that day, to my everlasting relief. But now he really is gone, much too young, and it is the world’s loss.

*Editors Note*: Here is a video of Walt talking about that walk with Jobs:

*i.eho.st/pp4xek92.png


----------



## sygeek (Oct 8, 2011)

*Steve Jobs, Atari Employee Number 40 ​*By Frank Cifaldi
​


Spoiler



*www.gamasutra.com/db_area/images/news2001/37762/stevejobsold.jpg​
Steve Jobs was called many things during his tragically short life -- innovator, entrepreneur, leader, father -- but back when he showed up at the Los Gatos doorstep of arcade game leader Atari in May of 1974, he was an unwashed, bearded college dropout more interested in scoring some acid than changing the world.

As Atari alumnus and Pong engineer Al Alcorn tells it, it was a pretty typical day at the company's then-modest warehouse digs -- walls lined with Pong and Pong-like cabinets, barefoot technicians reeking of pot after some early afternoon hot boxing -- when personnel handler Penny Chapler came into his office.

"We've got this kid in the lobby," Alcorn recalls her saying. "He's either got something or is a crackpot."

By this time Alcorn was used to unkempt guys wandering into the office looking to make some bread. In the greater Los Gatos area, engineers saw Atari as the cool place to work: there was no dress code, your bosses didn't care what you did in your offtime, and working on games was way better than the televisions and industrial equipment you might touch your soldering iron to at other companies.

"He was this real scuzzy kid," Alcorn once told video game historian Steven Kent. "I think I said, 'We should either call the cops or we should talk to him.' So I talked to him."

Jobs had no real engineering experience to bring to the table. He had a small amount of education from Reed College, but it was in a completely unrelated major, and he had dropped out early. But he had a way with words, seemed to have a passion for technology, and probably lied about having worked at Hewlett-Packard.

"I figured, this guy's gotta be cheap, man. He really doesn't have much skills at all," Alcorn remembers. "So I figured I'd hire him."

*A Diet Of Air And Water*

Jobs was hired as Atari employee #40, a technician fixing up and tweaking circuit board designs. One of his first roles was finishing the technical design of Touch Me, a simple arcade memory game similar to Ralph Baer's later Simon toy. He more than likely helped out on other games that year, such as racer Gran Trak 20 and the odd experiment Puppy Pong.

But the young, abrasive Jobs didn't fit in. As the various stories go, complaints ranged from poor hygiene to an abrasive attitude to strange dietary habits.

"He says if I pass out, just push me onto the workbench. Don't call 911 or anything. I'm on this new diet of just air and water," Alcorn recently recalled (though the story sometimes involves a jar of cranberry juice).

Though he didn't have much personal interaction with him at the time, Atari co-founder Nolan Bushnell remembers the young Jobs as a "brilliant, curious and aggressive" young man, though very abrasive as well. Though Jobs would come to be praised as a brash, firm leader, at 18 this quality manifested itself in a negative way, making several enemies at the company by openly mocking them and treating them like they were idiots. Despite this, he was a promising employee, so Atari found a way to keep him on board.

"I always felt to run a good company you had to have room for everybody -- you could always figure out a way to make room for smart people," Bushnell recently recalled. "So, we decided to have a night shift in engineering -- he was the only one in it."

*Spiritual Research*

After about five or six months of saving money and working the night shift (often inviting friend, collaborator and eventual Apple co-founder Steve Wozniak into the office to help him with engineering challenges), Jobs approached Alcorn to let him know he was quitting the company to go to India, meet his guru, and conduct what he referred to as "spiritual research."

Alcorn turned his trip into an opportunity for the company: Atari's German distributors were having trouble assembling the games due to a problem with the country's incompatible power supplies. It was a relatively simple fix, but Alcorn's attempts to troubleshoot long-distance were proving fruitless. He needed someone out there to show them how to fix the problem.

"I said Steve I'll cut you a deal. I'll give you a one-way ticket to Germany -- it's gotta be cheaper to get to India from Germany than it is from here -- if you'll do a day or two of work over in Germany for me," Alcorn recently recalled.

As it turned out it would have been cheaper to fly out of California, but they didn't know that at the time, so Jobs accepted. He flew out and though he was able to fix the problem, it wasn't a joyous business trip for either party involved: vegetarian Jobs struggled to eat in the "meat and potatoes" country, and Atari's German distributors didn't know what to make of the odd foreigner.

"He wasn't dressed appropriately, he didn't behave appropriately," Alcorn remembers. "The Germans were horrified at this."

From there, Jobs went on to India as planned (there do not appear to be any historical accounts of what exactly he did during his trip, though "backpacking" and "acid" are common words used by those who have recounted it). He returned to Atari several months later with a shaved head, saffron robes, and a copy of Be Here Now for Alcorn, asking for his old job back.

"Apparently, he had hepatitis or something and had to get out of India before he died," Alcorn told historian Steven Kent. "I put him to work again. That's when the famous story about Breakout took place."

*Jobs and Woz Break Out*

As the story goes, Atari suddenly found itself facing competition in the arcade video game industry it created, most of it from former Atari engineers who struck out on their own, stolen parts and plans in tow. No longer able to survive on various iterations of Pong, the company designed a single-player game called Breakout, which saw players bouncing a ball vertically to destroy a series of bricks at the top of the screen.

The game was prototyped, though the number of TTL chips used would have made manufacturing expensive. The company offered a bounty to whoever was up to the task of reducing its chip count: the exact numbers seem to have become muddled throughout history, but the general consensus among those who were there is that the company offered $100 for each chip successfully removed from the design, with a bonus if the total chip count went below a certain number. The young Jobs, who in retrospect comes across as an excellent liar, somehow won the bid for the project.

"Jobs never did a lick of engineering in his life. He had me snowed," Alcorn later recalled. "It took years before I figured out that he was getting Woz to 'come in the back door' and do all the work while he got the credit."

Jobs convinced Wozniak to work on the game during his day job at Hewlett-Packard, when he was meant to be designing calculators. At night the two would collaborate on building it at Atari: Wozniak as engineer, Jobs as breadboarder and tester.

Allegedly, Jobs told Wozniak that he could have half of a $700 bounty if they were able to get the chip count under 50 (typical games of the day tended to require around 100 chips). After four sleepless days that gave both of them a case of mono (an artificial time limit, it turns out: Jobs had a plane to catch, Atari wasn't in that much of a rush), the brilliantly gifted Wozniak delivered a working board with just 46 chips.

Jobs made good on the deal and gave Wozniak his promised $350. What he didn't tell him -- and what Wozniak didn't find out until several years later -- was that Jobs also pocketed a bonus somewhere in the neighborhood of $5,000. Though it's often reported that this caused a rift in their friendship, Wozniak seems to have no hard feelings.

"The money's irrelevant -- and it was then. I would have done it for free," he said in a recent interview. "I was happy to be able to design a video game that people would actually play. I think Steve needed money and just didn't tell me the truth. If he'd told me the truth, he'd have gotten it."

*The Forbidden Fruit*

As this was going on, Jobs and Wozniak were designing a personal home computer during their offtime, which would eventually become the Apple I. Even Alcorn himself got involved, unofficially.

"I helped them with parts, I helped them design it. It was a cool engineering project, but it seemed [like it would] make no money," he recalled.

"He offered the Apple II to Atari ... we said no. No thank you. But I liked him. He was a nice guy. So I introduced him to venture capitalists."

Jobs and Atari soon parted ways, and Apple Computer was formed on April 1, 1976. The rest, as they say, is history.


----------



## Vyom (Oct 10, 2011)

*What is it about Steve Jobs?*​​
By Anisha Oommen | The Water Cooler

*What is it that makes millions of people who have never met him mourn his passing? What makes them feel like they knew him? What made his death so personal?*​


Spoiler



In an outpouring from the far corners of the world, people appear to share a personal connection with him. A friend of mine called it the 'United States of Apple.' Grief creates a leveling platform, where country, language and economics become irrelevant, and shared loss reminds us again of how much we really have in common.

The word "inspiration" keeps re-surfacing. We see our potential in him. A post on Twitter captured it — 



> "*Jobs was born out of wedlock, put up for adoption, dropped out of college, and still, he changed the world. What's your excuse?*"
> - Twitter



Is that what we see reflected in him? Is that what unites digital titans and everyday gadget fans like you and me as we mourn him?

Messages from Apple fans across the world echo the same message — he created dreams for people, he made them believe in themselves, believe that they too could reach for greatness.

From Paris, Russia and Munich. From Sydney and Tokyo, from teachers to entrepreneurs, fans pay tribute to their hero. From Shanghai, voices express concern over the future of Apple, without the leadership of its iconic guide. From India, the Kanchi Dham temple in Uttarakhand pays its respects to the man who visited them at the young age of 18, in search of enlightenment.

From Hong Kong, nineteen-year-old Jonathon Mak's homage to Jobs went viral on the internet. His design incorporates Steve Jobs' silhouette into the bite of the Apple logo. A tribute of ingenious simplicity. He says, "I just wanted it to be a very quiet commemoration. It's just this quiet realization that Apple is now missing a piece. It's just kind of implying his absence."

Jobs' family recognized what he meant to the public, and that many will mourn with them. They are building a website where people can share their memories of Jobs. In the meantime, though, Apple is collecting thoughts, memories and condolences at an e-mail address it has set up: rememberingsteve@apple.com.

*Jobs may be gone, but he will live online.*


----------



## KDroid (Oct 18, 2011)

^^

Loved this line:



> *"Jobs was born out of wedlock, put up for adoption, dropped out of college, and still, he changed the world. What's your excuse?"*


----------



## sygeek (Oct 21, 2011)

*The case for piracy*
By Brett Elliott


Spoiler



_When it comes to copyright theft and piracy, many people assume there's just one side - the side of truth, justice and copyright owners. Beyond that there are parasitical thieves. When most governments come to legislate on the matter, their response is usually one of listening to what big corporations and lobby groups say and nodding in agreement. For the general public, years of being bombarded by cross platform marketing campaigns have ingrained people with various "Piracy bad. Copyright good" slogans._

We've been deluged with the arguments against piracy for years. But what's the other side of the story? Could it possibly be that copyright infringers and pirates aren't always the bad guys? Are copyright owners their own worst enemy? Judge for yourself and tell us what you think.

*Contempt for customers*

We'll start with an area that many reading this can relate to: commercial media's contempt for its audience. These are some examples which touched me, and they may ring bells for you.

Gladiator, Channel 10. We're back from a commercial break. Rusty arrives back at home in Spain to find his wife and son raped and crucified. It's arguably the most touching scene of the whole movie. What better time for a giant cartoon helicopter to fly around the screen announcing, "Don't forget, Merrick and Rosso! The B-Team! Every Wednesday night at 7.30!"

I remember every syllable of that ad. Positioning ads like this, Gruen has told us, is most effective because we're at our most vulnerable. But at the same time this was like the network raising its middle finger at us and yelling, "Lap it up, suckers!" Is there a way to treat your audience with any more contempt?

I think Channel 7 managed it. Remember the TV show Lost? The first series had a huge buzz about it - largely from making huge waves in America, weeks beforehand. Many people downloaded the series from the US as it aired. I stuck with Channel 7 for some kind of local solidarity reasons. The anticipation coming up to the final, 24th episode revolved around the big reveal, "What's under the hatch?" Then, after watching religiously week after week, there was an unexplained six week hiatus. Six weeks! Again, I restrained myself from downloading the final episodes and stuck with 7. Finally, the show reappeared. Then, in the very first ad slot of the very first ad break there came the trailer, "Don't forget to keep watching the final episodes of Lost [as if!] when we show you what's under the hatch!"

Then they showed us what was under the hatch. Right there and then.

I won't tell you what I shrieked at the TV. But perhaps you can imagine. As spoilers go, that was huge. That was the last episode of Lost I watched... On Channel 7.

This happens all the time. Channel 11, the other day, came back from a five minute ad break to show the last ten seconds of a Simpsons episode. Ten seconds! But I think the 'abject contempt to its viewers award' must go to Channel 9.

I could regale you for ages with my Channel 9 rage. Yet I keep finding myself watching movies which are butchered by having five-minutes-on-five-minutes-off ads at the end. Using Tivo to buffer programs for an hour before watching - so that I can skip through the ads - is one way round it. Of course, this forces 9 to use in-program display ads to make up the revenue. Somehow I don't care. Because there are two areas where 9's actions are the scheduling equivalent of dropping a turd on my doorstep.

*Sporting events*

I remember my dad ringing up from the UK and remarking how excellent and exciting the Melbourne Commonwealth Games were. Discussion in the office had confirmed that I wasn't the only person who found 9's delayed and appalling coverage unwatchable. It's been the same for subsequent Commonwealth Games and the Olympics. If you could watch events Live on the internet, wouldn't you? There's no other legal way to watch most of them Live (if at all) in Australia.

Did you want to watch all of the matches in the Rugby World Cup? Must have sucked how 9 bought the rights and then DIDN'T SHOW THE MATCHES LIVE! For those who knew what they were doing, you could watch them free on the internet. What other option did they have?

*The English Premiership*

Nine's treatment of sport is a local problem. Globally, the big issue is English Soccer. The rights are managed by Sky TV (The UK's equivalent to Foxtel). To be fair, the money Sky pumped into the sport, plus the huge improvements in coverage, is one of the reasons this is the most popular league in world sport. But for those of us who had little money, we'd rather be in a position to actually watch a game on TV than know that only the moneyed people had access to the improved coverage. There was the option of traipsing down the pub, but that meant coming home most-likely drunk, reeking of cigarette smoke (before the ban) and still having spent money. But the real problem was this...

You'd ring up Sky. "Hi, I'd like to subscribe to Sky to watch the football please?"

"Certainly, which football do you want?"

"The Premiership football."

"Certainly, it's available on this package, that package and the other package."

"No I just want the football. I don't want the US Soap channel, the African Animal Channel, the Infomercial Channel... etc etc etc."

Indeed, Sky spreads its Premiership games across several channels in several packages so you have to subscribe to all their other crap in order to get the few football matches that you want to pay for. The resulting monthly fee is well over a hundred dollars. Even to watch the odd pay per view game you have to pay for Sky and then pay for a package in order to pay for the pay per view.

Or... you can just watch it live on the internet. For free.

In Australia, it's a similar problem. But I'm not subscribing to Foxtel just to watch my team play the occasional game in the middle of the night. I'd gladly pay to watch the matches I want to see. But I can't. As a result, I hardly watch any matches anymore. But if there's a big one, then my one and only option is to watch it live on the internet. What else can I do?

The problem is such that there are large international communities all over the world, telling people where to watch games live on the web. Some websites even charge a fee to provide a high-quality online stream. The charges cover hosting costs and, once there are enough people connected, they accept no more customers for fear of dropping quality. So people are actually PAYING to watch these matches illegally when they could watch them illegally for free!

*Overseas Content*

This problem reappears in many other areas too. A major one concerns Japanese anime (cartoons). Ars Technica did an excellent investigation into this matter. It found that there were huge online communities sharing copyrighted content, but that money was not a reason for doing so. Typically, when a cartoon appeared in Japan, it would take a year for it to appear overseas. When it did appear it would be dubbed with dumb-ass American dialogue which obliterated many of the cultural references which made the cartoons popular in the first place.

One of the 'infringing' community websites then did what one would hope the rest of the media industry would do - it realised that there was an enormous demand for overseas content to be aired online immediately after publication and that people would happily pay for it.

*The BBC*

Recently, the BBC launched iPlayer in Australia. This gives you access to much of the BBC's vast television archives. To a degree, this has long been desired by overseas residents. But the dominating discussion was all about the BBC's failure to allow payment of an overseas licence fee to let international viewers watch Live BBC content.

I lived in Japan several years ago, and people of all nationalities said at the time they'd love to pay to watch live BBC TV. The demand is enormous, but when I recently asked the BBC, they said it wasn't going to happen.

Sure, they get huge sums for licensing internally-produced programs and series, but they may get even more by allowing online access to international paying customers. However, even if this did happen, there would be issues with the BBC covering international events. A good example is Formula 1. Many Australian F1 fans baulk at Channel Ten's coverage and are only too glad for the switchover to the BBC's outstanding race commentary. Having to suffer ads, or Mark-Webber-obsessed presenters who struggle to contain their disappointment at not talking about V8s or motorbikes, is a constant bugbear for many. Those who know about the internet know how to stream the BBC's coverage Live so there are no ads or interruptions. You can't pay for that though. Many would if they could.

But if there's one prime example of the problems surrounding the BBC, copyright infringement and international viewers, it's a certain program with 350 million viewers worldwide...

*Top Gear*

I used to watch Top Gear. I can't now. There are several hundred thousand Australians who are in the same boat. SBS picked it up long ago and built a regular audience of over a million people. Then 9 bought the rights and quickly decimated the audience. It's around 400,000 now. How on earth did it manage to do that in car-crazy Australia?

First you need to know that the BBC sends out an international version of Top Gear to overseas licensees which has 15 minutes cut from each show - to allow for ads. Consequently, if you want to watch a full episode of Top Gear your only option is to download it illegally from the web, or wait ages for the DVDs to appear. Then there was the fact that SBS had a two-YEAR delay in showing episodes. Nonetheless, a million loyal fans watched it and I was one of them.

After switching networks, Channel 9 bragged about fast-tracking UK episodes. All sounded good. But then it took the already-short international version and butchered it by cutting more content out to add even more ads. The following week, despite promises of a new episode, it showed an ancient, years-old episode. Apparently, it was OK to say this was a new episode because it was new to Channel 9. Cue ten years' worth of old episodes appearing randomly interspersed with more-recent episodes and Blam! the audience walked. I'd happily pay to watch Top Gear. Channel 9 makes it unwatchable. My only option is to download it. I haven't watched it in years.

*The Music Industry*

Piracy has affected few industries more than music. Back in the early days of the internet, services like Napster, Kazaa and Audio Galaxy appeared which let you swap songs with other people online. At the time, there was no talk of copyright infringement, it was just something that geeky internet users did and it felt like a more-efficient way of swapping cassettes and CDs in the playground. Unfortunately, it was so efficient that the global and industrialised scale destroyed the traditional way in which music was produced and marketed. Quite rightly, the services were shut down. But the story doesn't end there.

The age of compressed music formats and MP3 music players had begun. Once the third-generation iPod hit the market, along with iTunes, compressed digital music became mainstream. What a great opportunity for the music industry: the customers wanted compressed music delivered online and it was cheap to do. But could the industry have screwed things up any more?

Rather than give customers what they wanted, publishers threw every toy they had out of the pram and hit the litigation button. One example saw the recording industry sue a 12-year-old girl and win $2000. From her point of view, she was simply using a free service on the internet that all her friends were using and discussing. One wonders how happy the recording industry was with its $2000 payout. Over the years, industry bodies have spent far more money suing people than they have recouped through the courts.

One of the main reasons we all have anti-piracy slogans embedded in our brains is because the music industry chose to try and protect its existing market and revenue streams at all costs and marginalise and vilify those who didn't want to conform to the harsh new rules being set.

The Napster brand went legit, iTunes rose and Sony started offering its vast music catalogues online. But instead of selling the compressed music that the public wanted, the industry "sold" music riddled with Digital Rights Management (DRM) 'copyright protection', meaning that the music would only play back on certain devices under certain conditions. Music was also being sold in formats which wouldn't work on all music players, and compressed to degrees that resulted in a loss of quality which turned off enthusiasts. In short, despite selling the music, you didn't own what you'd bought. You were essentially "renting" the rights to the music. Shouldn't there have been intervention from the government?

After a while Sony got bored of the lack of traction which its appalling model had generated and turned off its entire system. This meant that everyone who had "bought" music from Sony couldn't play it on anything other than the old devices launched to go with it. People who had invested heavily in Sony's music were ignored.

Around this time Sony also came up with other ways to stop people listening to the music they had bought. A system appeared which inserted noise and interference when people tried to compress music from CDs. Consequently, if you only listened to MP3 music, you couldn't actually legally get an MP3 version of a song. Even if you had paid money for a CD. Sony even topped this by putting software on its audio CDs which secretly installed licensing software on your computer if you tried to compress the music on it. Not only was this a gross breach of privacy, but the 'rootkit' that was installed was a major security threat. This was one occasion where Sony got hammered for its actions. Ultimately, though, the publishers were treating their paying customers as potential criminals, and the widespread resentment was palpable.

As time wore on, it became clear that the DRM on music was linked to the original hardware you had when it was bought. For many people, if you bought legitimate compressed music online, like I did, when you go to play it you get the following message...

*www.abc.net.au/technology/images/general/general/musicfail.png​
I paid good money for those songs. Am I supposed to buy them again? Or can I download them illegally from the internet in clear conscience?

Things seem to be slowly changing with Apple offering DRM-free, higher-quality songs on iTunes now and with the industry recognising the importance of the online music store. Nonetheless, you're still forced to buy from one seller, using one format and at a quality which, these days, could be higher. The best sales model surely came from legally-spurious site, AllOfMp3.

This Russian based site allowed you to purchase almost any song in any format using any level of compression that you wanted and charged a low price for it. In other words, it recognised public demand and gave people exactly what they wanted.

But its licensing model was dodgy at best. It did pay royalties but at tiny Russian radio-play levels. Many of the songs were sold without permission from the copyright holders. It got sued by everyone for a staggering $1.65 trillion, but was eventually acquitted.

Outlandish lawsuits like this have become the norm for media publishers and their industry organisations. At no point did they realise that this was the most obvious business model to use - to give people what they want at a fair price.

Nowadays, the publishers seem to have moved on. They're still suing downloaders and crippling innovative internet-radio business models like Pandora, but the new popular model seems to be charging a subscription fee for on-demand access to entire music catalogues - iTunes' iCloud music service, Last.fm and Sony's Anubis are good examples. I've used the latter for months and it's excellent.

*Movies*

Heaps of movies are illegally downloaded these days, but unlike the music industry, the film industry is thriving. Theories abound as to the impact of downloading movies over the internet: there is evidence which suggests that those who download movies tend to be enthusiasts who spend more on movies in the first place (as is the case with music downloaders). Certainly the cinema trade is booming. My pet theory is that many downloaders download movies they aren't particularly fussed about seeing (not enough to pay for them anyway) or which are unavailable where they live. But the constant engagement with movies keeps them in the "film enthusiast" bracket and that makes them go to the cinema when something that they're particularly keen on appears.

Hysterical lawyers say otherwise. More on that below. Either way, movie downloading is a contentious business as are its consequences.

There is obviously a huge public craving for movies and video on demand but the only place that you can get many movies is illegally online. Legal services in Australia tend to, well, suck. Tivo has boasted for years about the thousands of movies you can pay for on demand. Most of them seem to have Marilyn Monroe or John Wayne in them. Selections aren't much better elsewhere. If you want to pay for good video on demand services the best you can do is pay for quasi-legal access to American sites like Netflix. Or download illegally. Either way, you're probably a criminal.

*www.abc.net.au/technology/images/general/general/billboardcopyright.jpg
_Pirates are terrorists. Tell the police. This image is used under fair dealing._​
*Fair Dealing*

In Australia, America and other countries, there are laws which protect people from innocently using "copyrighted" media in non-commercial and various reasonable ways. But don't expect to find authorities standing up for your rights.

Youtube is a prime example. If you make a video of something, but in the background there's a song playing - from a nearby radio or whatever - it gets banned. Want to share your child's birthday party with friends and family? You'd better not play any recorded Happy Birthday song in the background - you'll get your account suspended.

Other bans stem from people making their own movie mashups or discussing clips from mainstream media.

It's difficult to imagine what lawyers and publishers have to gain by banning people from doing this and vilifying them for doing so. Youtube got sick of dealing with individual take-down requests and waved the white flag long ago. It just bans things automatically now.

Almost all of these infringements are actually allowable under Fair Use (America) and Fair Dealing (Australia) legislation. But to the publishers and establishment, it too-often seems, you're just a criminal.

*Harsh Litigation*

Most troubling of all is that prosecuting people for suspected copyright infringement has become an industry in its own right. Legal firms are buying the rights from publishers to sue people on their behalf. It's an evolution of the ambulance chasing lawyer. There's a straight-forward business model for it.

You send out letters to potential copyright infringers telling them that they have downloaded something illegally and will be sued for anything up to $150,000. They have the option to settle beforehand - typically for a few thousand dollars - just enough to save on hiring a lawyer to defend the case.

The threat is based on the fact that if you have downloaded a movie then you will also have uploaded it and distributed it to thousands of people. In reality, however, if you download something using bittorrent only a small fraction gets uploaded. It would take balls of steel and deep pockets to explain that one in court though.

In America the music and movie lobbies have pushed through a non-government accord which allows corporations to punish suspected copyright infringers without any trial or due legal process. The US government, it transpires, has few issues with this. It's not yet clear whether Australia will follow along similar lines.

Industry bodies certainly want to enforce their will on Australian legislation, though, as the battle between iiNet and AFACT illustrates. AFACT has been defeated several times but hasn't given up. More recently, The Age uncovered a new Gold Coast operation which is planning to demand money from downloaders of porn. There are fears that this could be the thin end of the wedge for Australia.

However, the article also points out that similar UK operations were eventually denounced in the House of Lords as "straightforward legal blackmail." So not all governments are as compliant as the copyright industry might like.

*Conclusion*

Nowadays, copyright barely resembles what it was originally designed for, i.e. to protect both parties: inventors and content creators on the one side and the public on the other. Corporate America and government compliance have written out public interests in many instances. The case of Mickey Mouse is illustrative.

Nonetheless, there's an air of inevitability about it all. Historically, how often have incumbent, monopolistic industries shrugged their shoulders and written off their entire business model to embark on a journey along a crowded new highway, with rules set by customers, that leads who-knows-where?

On a personal note, I suspect that once the world's internet infrastructure comes up to speed, we'll all be using on-demand subscription models and the notion of buying content to keep will feel archaic. Even so, more needs to be done to protect the public from ham-fisted copyright industries demanding payment for everything.

A great deal of copyright infringement does not stem from criminal behaviour. Much of it occurs simply because there is all-too-often no other way to legally access the content you want - even if you do want to pay for it.

It's worth remembering that there are many big losers because of piracy, but these have been well covered elsewhere. The video games industry, for example, is a major loser, but we'll deal with that another time. This article is one of the few that deals with the flipside of the argument, so please remember that it intends to describe and inform - not endorse any infringement. Has it changed your opinion on the matter or confirmed it? Let us know below.


----------



## sygeek (Oct 24, 2011)

*How to hire an idiot​*


Spoiler



Wow, I remember how idealistic I was when I was about to bring on my first employee! After dealing with bad bosses over my career, after doing a whole lot of thinking about how I was going to be a great boss, and after doing a whole lot of reading about how to hire effective people, I was really looking forward to it. I was going to:

-- Hire people smarter than myself, who get things done!
-- Trust them to do their job, let them do their job and give them enough resources to do it!
-- Pay them WELL and offer great benefits! Work at home! Sure, why not?
-- Give people second chances! Don't throw out resumes because of lack of buzzwords! Or disjointed writing! Or lack of education! It's all about Smart People who Get Things Done, not interviews or resumes or formalities! Have an open mind!

Only problem was, I couldn't quite afford an employee yet. By then I had been working a couple years by myself, earning good profits in the $200K range but it was based on just one or two sales a year, and each sale took 6-12 months to finalize. With so few customers I could easily go a year without sales, I feared, so had to set aside my profits to cover that. And if I were going to hire someone, I'd really want six months or a year's payroll set aside for them as well. I just couldn't afford that yet.

But of course without more employees, I couldn't make more money to pay for them. I was really at full "capacity," spending around six full-time months to land a sale, then the next six months servicing that sale before starting over again. So I knew my first employee would have to be a salesperson, but I just couldn't afford it and couldn't see how I'd be able to at my current rate...

*Bullshit*

Serendipitously, I was approached a little while later by a former VP of my big competitor, at my industry's main exhibition where I had a small booth. He was a friggin' VP of a $100 million a year company! Well, their former VP, he said. Wow though, I was flattered. I demoed my product to him, explained my company, and his mouth dropped open. He started gushing about how incredible my product was (well, it was, I guess) and asked why the "****" wasn't I selling $100 million a year?! I said well, I'm sort of at capacity... and... errrr... I'm more of an engineer, and, uh.... I don't know why. I didn't want to tell him what I feared, that it was just this thing I made on my own, some of the code was crap, and things like that just don't sell for millions.

Then he told me "**** man, I could sell $10 million of this a year!" I thought his informality in that professional business setting was a little strange, but...

Was it possible somebody could really do that? I sure couldn't, but maybe I was on the wrong path? I wasn't a business expert, so what did I know? He was a friggin VP of business development for a $100 million company! He must know what he's saying, right?

We talked a little more and I couldn't believe it when he asked to work for me for free! Well, on commission. But hey, that's money I wouldn't have made anyway. If he brings in a million bucks a year in profit, he's worth 10% of that, certainly!

We settled on 10% and I'd pay for travel and some other expenses. No problem... if he could make the sales he insisted he could, he'd be well worth it. And with very little risk to me for all the work he'd be doing!

Just to make sure I wasn't being bullshitted, I called the competitor to ask about him. They verified that yes, he was a former VP there. Awesome, this guy was for real. And if he was good enough for them, he's good enough for me! We soon signed a deal.

*Alarm Bells*

I mentioned this new guy I was bringing on board to my uncle (an experienced big-ticket salesperson), and he told me to be careful because "guys like that will do anything to make the sale and don't care if they leave you high and dry. I've seen it LOTS."

Whatever, old man! Because I had the deal structured to account for that: he didn't earn commission until payment was received! It simply wasn't in his interest to do just "anything" to make the sale, because if the product wasn't as promised, the customer wouldn't pay and he wouldn't get his commission! Beautiful scheme. And I'd have all pricing authority so he couldn't sell it at a loss, either! Ha ha, nothing could possibly go wrong with this.

So I set off on my promises of being a great boss. I let my new sales guy do his thing, trusted his judgement, didn't ask to be CC'd on things, gave him the resources he needed, just set him loose. $25K in stuff he said we absolutely needed -- slick brochures, sponsor some conference, ads in the trade journal, coffee mugs, pens with our logo -- I readily paid for. I wanted him and us of course to succeed.

And really I was pretty damned honored having someone with his experience -- a friggin VP of a $100 million company -- working for me, and for free! Wow!

Okay though, this one thing didn't make sense: I had told him our cost for a particular solution we were giving an estimate for. I did that so he could figure out his commission, which was based on gross profit. I then got cc'd on a mail where he turned around and told the customer our exact cost, and "that means there's a lot of wiggle room on the price. I'm sure Bill will come down on it."

WTF? Why would a salesperson tell the customer our cost?! I mean, isn't that just common sense? I asked him and he said something about "don't worry, they know we make a profit."

Well that didn't make sense -- that seemed pretty stupid actually -- but this guy was a friggin VP of a $100 million company! I was honored to learn business from him!

Then there was this other strange thing: a customer asked if they could see a demo, so he asked me to approve the travel cost. Just knowing how the sales process works, I told him I didn't like spending money on demos until we were sure they had money and were really ready to buy. So he emailed them (cc'ing me) "Do you have money? Are you ready to buy? We don't give demos unless you are."

Why in the world would you say that to a customer?! But... he was the VP of a $100 million company, after all! He must know these customers extremely well, and maybe it's... maybe some kind of inside joke thing? Just how executives talk to each other?! Wow, I had so much to learn!

Hmmmm, then there was this other thing that didn't make sense either: he sent me a sales forecast, and in the "absolutely certain" column he had $5 million in sales over the next three months alone (!) I mean, holy ****! But wait: that one company on there -- I could have sworn they told me just six months ago they didn't have anything in the budget, but maybe in a couple years? And suddenly now they're ready to buy? Just like that? I asked him and he assured me that yes, they have money now and are definitely buying from us. Definitely! 100% certain.

Awesome! I mean this guy was a friggin VP of a $100 million company!!! There was so much I would get to learn from him!!!! $5 million in sales in three months!!!!11!!!

Hold on. Then he sent me a proposal he had been working on over the past month, for final review and "second set of eyes." I had previously sent him all my boilerplate proposal and price quote templates to show him what's worked for me in the past. I figured he could just fill out with the customer's particulars like I had done over the past couple years, and save a lot of time. But no, he said he was going to write a totally new awesome proposal package guaranteed to win. That's what he used to do as VP of the friggin $100 million company, after all! I told him great, I can't wait to see!

I started reading this thing and my face dropped in horror. It was the writing of a grade schooler. I'm no professional writer either, but... it was absolutely awful. Simplistic writing, full of cliches, full of grammatical errors, and absolutely lacking in any structure. It was just random thoughts strung together, topics bouncing around from idea to idea from one sentence to the next. There was no exposition of the customer's problem and how we were going to solve it, it was just him gushing about how "great" our product is and how "lots" of people like it. It was dizzying to read because there was no logic behind it -- it was along the lines of "This product is great. You will like this product, guaranteed. It has feature A. Feature C is great because it's so easy to use! It has feature B. The other great thing about feature C is tons of people told us they love it. Tons. It has feature D." (New paragraph)... on and on for 15 pages.

Okay, how could a friggin VP of a $100 million company read something like that and think "That's it! Yea!"?

I didn't care whether he was an experienced VP or not, I had to ask him WTF he was thinking, hopefully without offending him (too much). "Ummm, it was... interesting," I carefully offered, "but I'm just curious: did you proofread this at all?"

"Oh sure, I ran it through spell check and had my wife check it out too," he proudly replied.

"Ok, well... uhhhh... hey, didn't you also used to write proposals at [former company]?"

"Yep! Well, not exactly... other people wrote them I guess, but I oversaw it."

"Okay..."

"So -- what do you think? Kick ass, huh? I think this is a shoo-in for us! I really do, I can feel it."

Here unfortunately I sort of lost it. $100 million VP or not, that document was ****. No, I'm not a writer either, and no, our customers aren't English teachers, but what the ****? I can't put my company name behind that! It was ****. I told him that. I asked him what the **** he was thinking, why would he even set out to write a proposal if he knew he couldn't write -- I mean, why bother? A whole month to do that?!

He apologized. He said he was trying to do well, and he really thought he could write well, but "apparently I can't, and I accept that."

------------

I left to cool down and think about it more. Okay, no problem. So what, the guy can't write. We can use my previous templates and I'd just modify them for each new customer. We're talking about $5 million coming down the pike, after all! I'd write them myself all day for that kind of money! Woo-hoo!!

I stayed up all night rewriting the proposal, and we moved forward.

Well, the three months came and went. No sales. The proposal I wrote? Turns out they had never asked for it and didn't have money but thanked us for sending it. Uhhhh...

And the other $4.5 million in sales we were getting that month? A couple others "suddenly lost their funding." Another "got delayed by other problems but they're buying next month." Another was "I don't know what happened... I'm trying to find out."

But any week now! Any week was going to be the first big order! Just have patience! I mean, this guy was the VP of a $100 million company, after all! Who was I to question him? I was just some programmer who found myself in sales only because I had to.

Six more months went by. Not a single sale. Okay, well, it's a long sales cycle. I always figured I might go a year without a sale, so give the guy a chance. VP of a friggin $100 million company working for me for free! Woo-hoo!!

Still, I got more and more concerned. Something wasn't right. I suggested we start working "together" on sales since we both wanted them, after all, so could he start cc'ing me and we'd brainstorm ideas with each of these prospects? He thought that was a great idea.

So he started cc'ing me. And Oh My God. This guy was awful! Holy ****. His "sales technique" for the first new prospect I sent him consisted of literally begging the customer to buy "because our company is about to go in the shitter." Huh?! WHY WOULD YOU TELL OUR CUSTOMERS THAT?! And use obscenities in that kind of correspondence?! To a CUSTOMER?! I demanded an answer.

"Well, it's true, isn't it? Believe me, I've been in this industry for 30 years and they can handle it. That's just how these people are," he explained.

Okay, friggin $100 million VP or not, I was calling bullshit. My company does not correspond with people like that, that's not how you sell this product to these customers, that's not how people respond positively, that's not how to build a business! Bullshit.

And everything over the past almost 12 months, all the other bullshit started to come together. Really I felt awful, awful at being conned somehow, awful at myself for not checking up on him, for not even interviewing him, for not watching him, for just setting him loose and trusting him without "verifying." Everything he had told me was bullshit, all his forecasts, everything looking back at our correspondence about who had money, who was buying, everything he promised. All bullshit.

I took him out to dinner and we had a heartfelt Scooby-Doo reveal moment (you know, at the end of the show when all the masks would come off and the mystery would be explained):

Aha. Turns out the guy was a High School dropout. Got into drugs, booze, crime, turned his life around and got his GED. Went to work at a utility as a lineman and worked his way up. He had great people skills, remembered everyone's names, and that's really how he made the connections to keep getting promoted. Delegated everything to subordinates. Retired from that near the top and worked as an industry consultant because he knew everyone in the business. Did some work for the competitor. They liked how he knew all the top people at all the top customers, and offered him generous employment. He really wanted to be a VP so they said sure, how about assistant VP of business development. ("Whatever, just set appointments for us," was actually probably more like it).

He got fired within the year, he admitted. He said it was a "personal disagreement" but I wouldn't doubt it was utter incompetence.

And nope, he'd never done sales in his life. His job used to be setting appointments, mingling with customers at conferences, and getting their sales team in the door to make the sale. But he himself didn't do sales. Had no clue what was involved, had no clue what process customers go through to make a purchase, had no clue about techniques like "consultative selling" or who you have to convince in a business or institution to close a sale. No clue. But gosh, he was eager and willing to learn and felt great about this opportunity I was giving him!

Well, at least he was honest. He wasn't trying to deceive me, and he really thought he could do it, he explained. No hard feelings. But I didn't need an entry level salesperson, I needed an experienced salesperson right now. I told him he had to go, and he understood.

Sadly, I could have found out all of that by simply asking him before offering him the deal. I just never did. I mean, he was a friggin' VP of a $100 million company, after all!

----------------

Every time I relate this experience, I get a lot of head nods. I guess it's pretty common among business owners and anybody involved in HR, to get employees who just don't turn out as promised. But damn, I didn't think it would happen to me. I mean, I was prepared! I read a lot of books! I knew all about bad employees and how to avoid them! I was smart, dammit!

Well, my company survived. I went back to basics with my old way of selling and soon landed another nice sale. Then my next hire was a salesperson again, but thankfully this time I knew to check up on him before the hire, and knew to have him explain his strategies and techniques in the interview to make sure he knew his stuff. Thankfully, he's turned out to be a really good guy and so far has been doing really well.

And unfortunately what I really learned from this is something I actually already knew from my first year of employment right out of college: business executives are sometimes just full of ****!


----------



## Nipun (Oct 25, 2011)

sygeek said:


> *How to hire an idiot​*
> 
> 
> Spoiler
> ...


This one is really great!


----------



## nisargshah95 (Oct 26, 2011)

sygeek said:


> *How to hire an idiot​*
> 
> 
> Spoiler
> ...


Nice read. Keep up the good work and keep the articles coming.


----------



## noob (Nov 2, 2011)

gr8 thread..


----------



## sygeek (Nov 12, 2011)

*All Programming is Web Programming​*By Jeff Atwood​


Spoiler



Michael Braude decries the popularity of web programming:


> *The reason most people want to program for the web is that they're not smart enough to do anything else*. They don't understand compilers, concurrency, 3D or class inheritance. They haven't got a clue why I'd use an interface or an abstract class. They don't understand: virtual methods, pointers, references, garbage collection, finalizers, pass-by-reference vs. pass-by-value, virtual C++ destructors, or the differences between C# structs and classes. They also know nothing about process. Waterfall? Spiral? Agile? Forget it. They've never seen a requirements document, they've never written a design document, they've never drawn a UML diagram, and they haven't even heard of a sequence diagram.
> 
> But they do know a few things: they know how to throw an ASP.NET webpage together, send some (poorly done) SQL down into a database, fill a dataset, and render a grid control. This much they've figured out. And the chances are good it didn't take them long to figure it out.
> 
> ...


Let's put aside, for the moment, the absurd argument that web development is not challenging, and that it attracts sub-par software developers. Even if that was true, it's irrelevant.

I hate to have to be the one to break the bad news to Michael, but for an increasingly large percentage of users, the desktop application is already dead. Most desktop applications typical users need have been replaced by web applications for years now. And more are replaced every day, as web browsers evolve to become more robust, more capable, more powerful.

You hope everything doesn't "move to the web"? Wake the hell up! It's already happened!

Any student of computing history will tell you that the dominance of web applications is exactly what the principle of least power predicts:


> Computer Science spent the last forty years making languages which were as powerful as possible. *Nowadays we have to appreciate the reasons for picking not the most powerful solution but the least powerful*. The less powerful the language, the more you can do with the data stored in that language. If you write it in a simple declarative form, anyone can write a program to analyze it. If, for example, a web page with weather data has RDF describing that data, a user can retrieve it as a table, perhaps average it, plot it, deduce things from it in combination with other information. At the other end of the scale is the weather information portrayed by the cunning Java applet. While this might allow a very cool user interface, it cannot be analyzed at all. The search engine finding the page will have no idea of what the data is or what it is about. The only way to find out what a Java applet means is to set it running in front of a person.


The web is the very embodiment of doing the simplest thing that could possibly work. If that scares you -- if that's disturbing to you -- then I humbly submit that you have no business being a programmer.

Should all applications be web applications? Of course not. There will continue to be important exceptions and classes of software that have nothing to do with the web. But these are minority and specialty applications. Important niches, to be sure, but niches nonetheless.

If you want your software to be *experienced by as many users as possible*, there is absolutely no better route than a web app. The web is the most efficient, most pervasive, most immediate distribution network for software ever created. Any user with an internet connection and a browser, anywhere in the world, is two clicks away from interacting with the software you wrote. The audience and reach of even the crappiest web application is astonishing, and getting larger every day. That's why I coined Atwood's Law. 


> Atwood's Law: any application that can be written in JavaScript, will eventually be written in JavaScript.


Writing Photoshop, Word, or Excel in JavaScript makes zero engineering sense, but it's inevitable. It will happen. In fact, it's already happening. Just look around you.

As a software developer, *I am happiest writing software that gets used*. What's the point of all this craftsmanship if your software ends up locked away in a binary executable, which has to be purchased and licensed and shipped and downloaded and installed and maintained and upgraded? With all those old, traditional barriers between programmers and users, it's a wonder the software industry managed to exist at all. But in the brave new world of web applications, those limitations fall away. There are no boundaries. Software can be everywhere.

Web programming is far from perfect. It's *downright kludgy*. It's true that any J. Random Coder can plop out a terrible web application, and 99% of web applications are absolute crap. But this also means the truly brilliant programmers are now getting their code in front of hundreds, thousands, maybe even millions of users that they would have had absolutely no hope of reaching pre-web. There's nothing sadder, for my money, than code that dies unknown and unloved. Recasting software into web applications empowers programmers to get their software in front of someone, somewhere. Even if it sucks. 

If the audience and craftsmanship argument isn't enough to convince you, consider the business angle.


> You're doing a web app, right? This isn't the 1980s. Your crummy, half-assed web app will still be more successful than your competitor's most polished software application.


Pretty soon, *all programming will be web programming*. If you don't think that's a cause for celebration for the average working programmer, then maybe you should find another profession.


----------



## sygeek (Nov 15, 2011)

*Invasion of Privacy.​*


Spoiler



*UPDATE (1/12/2011):*
I received an email from Steve regarding this post. He sincerely apologized for his actions and realized now that what he did was wrong and simply asked that I modify the post to protect the identities of his family. I felt that this was a fair request, considering that his family had nothing to do with what Steve did and it doesn’t jeopardize the impact of the article. So, if you’re wondering why you’re seeing all the “[withheld]”s, that’s why!

PS – Yes, I realize the names are still shown in the images, but they’re not indexed by Google. I figured I’d point this out before I had 20,000 comments informing me of it. 
*END OF UPDATE*

*DISCLAIMER:*
This is ABSOLUTELY for informational purposes ONLY. Neither attackvector.org nor I will be held responsible for how you choose to use the information that I post on my blog. This individual, though he is a douche for sending spam, is a real person with a real life. By misusing the information found here, you have the power to potentially destroy someone’s real life. There’s a fine line between a legal hack and a felony. Information gathering is not illegal so long as it’s obtained through legal means. Using the information, however, is quite another story.

*UPDATE:* Because of something that one of my readers brought up, I want to clarify. The email that I received was not the run-of-the-mill malware/spambot/whatever style email. The email was coming from his email address, using his business’s name, and advertising his business. I would never have posted this had I had any doubt that this may not have actually been sent, by him, in some fashion.
*END OF DISCLAIMER.*

I use spammers and pedophiles as test subjects when I’m working on something. This is mostly because it’s unlikely that they would go to the authorities and point the finger at me, knowing that I could easily turn around and say something to the effect of, “Well, yes I did pwn his box.. but you should have seen all the child porn I found on it.” owned x 2.

I happened to receive a piece of spam at the exact moment I was about to start a post about privacy and anonymity on the internet. I will consider this to be a sign from God that this dude needed to be set straight. Okay, maybe not. I’m not sure what the bible says about spam.. but if I were God, it would be into the pits of hell for them. So, since I cannot cast people into eternal suffering in a fiery pit, I will have to settle for second best. Pwnage!

What’s even better, none of what I’m about to do is illegal. It’s a serious, serious invasion of privacy, and you definitely don’t want it to happen to you, but all of it can be harvested through public record, social networks, forum posts, etc etc etc.

First, let’s take a look at the email that I received.


> ..snip..
> Received: from unknown (HELO p3pismtp01-017.prod.phx3.secureserver.net) ([10.6.12.17])
> (envelope-sender )
> by p3plsmtp09-04.prod.phx3.secureserver.net (qmail-1.03) with SMTP
> ...



Ok, so, his email address is steve@barteritemsfortrade.com.. he's sending email through server299.com.. and his real IP address is 67.185.122.64. All we really need is his email address and his IP. Let's see what we can find.


> Non-authoritative answer:
> 64.122.185.67.in-addr.arpa name = c-67-185-122-64.hsd1.wa.comcast.net.
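That reverse lookup is easy to script with Python's standard library — it's the same thing `nslookup 67.185.122.64` does from the command line. A minimal sketch (the IP is the one from the headers above; the state-code extraction assumes Comcast's observed `hsd1.<state>.comcast.net` naming convention, which isn't guaranteed to hold):

```python
import socket

def reverse_dns(ip):
    """Return the PTR hostname for an IP address, or None on failure."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:  # no PTR record, bad input, or no network access
        return None

def state_from_comcast_rdns(hostname):
    """Extract the two-letter state code from a Comcast-style PTR record,
    e.g. 'c-67-185-122-64.hsd1.wa.comcast.net' -> 'wa'.
    Assumes Comcast's naming convention; returns None for other hosts."""
    parts = hostname.split(".")
    if len(parts) >= 3 and parts[-2:] == ["comcast", "net"]:
        return parts[-3]
    return None

if __name__ == "__main__":
    host = reverse_dns("67.185.122.64")  # requires network access
    if host:
        print(host, "->", state_from_comcast_rdns(host))
```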


Now we know that he’s connecting from Washington (*wa*.comcast.net). Lets see what Geo IP location says. I use this service, but there are many others. I’ve also written a few tools to do this as well, but we’re going to use what the average Joe has access to.

Just put the IP address in the box and hit “search”. Here’s what we find.


> Region: Washington
> City: Spokane
> Postal code: 99205



So, we’re narrowing it down.. we now know that it’s Spokane, Washington. Now we’re going to take a look at his email address. First, obviously, just google the email address. This will bring up information for virtually anything that the person has ever used their email on. Forums, social networks, etc.

In this case, however, nothing came up on google. We must dig deeper. Enter, whois!


> BIZ TWO, LLC
> PO Box 8421
> Spokane, Washington 99203
> United States



Biz two? Does that mean there is a Biz One and a Biz Three, perhaps? Also, he’s using a PO Box.. blah.
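The whois step can be scripted as well. Here's a rough sketch of a raw WHOIS query over TCP port 43 (per RFC 3912) plus a parser for the "Administrative Contact:" block. Two caveats: registrars format their records differently, so the parsing layout here is an assumption based on the output quoted above, and `whois.verisign-grs.com` only covers .com/.net (it typically just refers you on to the registrar's own server):

```python
import socket

def whois_query(domain, server="whois.verisign-grs.com", timeout=10):
    """Send a raw WHOIS query over TCP port 43 (RFC 3912), return the reply."""
    with socket.create_connection((server, 43), timeout=timeout) as conn:
        conn.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def admin_contact_lines(whois_text):
    """Collect the lines under an 'Administrative Contact:' label, up to the
    next blank line. WHOIS output is free-form; this layout is an assumption."""
    grabbing, out = False, []
    for line in whois_text.splitlines():
        if line.strip().lower().startswith("administrative contact"):
            grabbing = True
            continue
        if grabbing:
            if not line.strip():
                break
            out.append(line.strip())
    return out

# e.g. admin_contact_lines(whois_query("barteritemsfortrade.com"))  # needs network
```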


> ..snip..
> Administrative Contact:
> Nicholas, Steve steve@bestimpressionz.com
> ..snip..
> ...



Jackpot! We now have a last name and a phone number. We also have an additional email address/domain.


> Administrative Contact:
> Your Logo Here snicho@juno.com
> 139 west 30th Avenue
> Spokane, WA 99203
> ...



Hmm.. a real address.. no PO box on this domain. Is that an office? A house? Is it his house? I can assume that ‘snicho’ is short for ‘steve nicholas’, and it’s the administrative contact, which means he owns the domain.. so the address has something to do with him.

Enter.. Google Maps.  

*www.attackvector.org/pics/13930.png

(If you’re wonder why it says “140 west 30th” and not “139 west 30th”, it’s because I slid the camera down a bit and Google tried to be helpful by changing the address)

Well, it’s definitely not an office building, so at this point I’m going to assume that it’s his house until I find out differently. We can further verify this by googling his name + city + state.

*www.attackvector.org/pics/nameres.png

That address looks rather familiar… oh yeah, it's the address that was associated with his domain. We can be virtually certain at this point that that is his real address and house. Let's see who else lives in the house with him – just google the phone number listed.

*www.attackvector.org/pics/phoneres.png

Ok, so, [withheld] has the same last name as Steve, so I think we can safely say that this is his wife.

We’ll come back to her later. Lets see what else we can find about Steve.. I’m really starting to feel like family at this point. 

Back when I googled his name + city + state, I noticed that below the address result, there was a LinkedIn page.. let's check that out.

Ok, so there’s all sorts of useful information.. but I found another email address.. steve.nicholas@itex.net Not often do I meet someone with as many email addresses as me.. lol.

So, back up to the top, we google for steve.nicholas@itex.net.

Some interesting stuff, but nothing really useful for my purposes. Let's check out Facebook and see if he's a social butterfly. I log in and "search for friends" and enter his email address(es). His account is registered with the itex.net email address.

He doesn’t have his Facebook stuff set to private, so he’s kind of letting it all hang out. Thanks, Steve!

*www.attackvector.org/pics/stevefb.png

Yawn. The only thing interesting there, is that we've now definitely verified that that address is correct and that his wife's name is definitely [withheld]. Maybe her page is more interesting.. let's look.

Note: Passwords.. by building a profile of someone, you begin to get a feel of who they really are. I’m willing to bet that at least one of Steve’s passwords has something to do with fishing, trout, or cutthroats (type of trout – according to his facebook page).

[withheld]‘s Facebook:


> I teach 7th & 8th graders at Salk Middle School in Spokane WA. I married Steve 27 years ago and we have 2 daughters, [withheld] and [withheld]. [withheld] married [withheld (both first & last name)] 2 years ago and they are expecting their first child in March. [withheld] is an attorney and [withheld] is a special education teacher. [withheld] is living in Las Vegas where she teaches special education to preschoolers and kindergarten. We have an awesome family!!!!



Here’s something to take a mental note of. Women are generally more open about their personal lives and love to share with others. In one paragraph, we learn that she teaches at Salk Middle School, they’ve been married for 27 years, they have 2 daughters, [withheld] and [withheld], [withheld] is married to [withheld (both first & last name)] (note – this probably means that [withheld] is no longer [withheld] Nicholas, she’s probably [withheld (both first & last name)]). [withheld] lives in Vegas.

How ever would we find out more information about [withheld] and [withheld]? Oh yeah, friends lists. If the parents have Facebook, the kids most certainly have Facebook.. and barring any family drama, they’ll all be on each others friends lists. And, of course, I’m right.. found [withheld], [withheld], and [withheld].

Also, going through her wall posts gave up some information. They're new grandparents.. their granddaughter [withheld] was born on March 15th.. this was [withheld] and [withheld]'s daughter.

Now, let's see what Intelius says about [withheld] (note – I skipped Steve on Intelius because his entry is all screwed up.)

*www.attackvector.org/pics/intelius.png

Now we have ages, too. It's interesting that there's a "Ralph Steve Nicholas" listed, who has the same age as the other two Steves listed. Could Steve's real name be Ralph??

Ok, anyway, let's see what I can find out about their house. Just about every county in the country allows you to view property tax records on the internet. I googled "spokane washington property tax records". What you're looking for is the county assessor's home page; then just punch in the address and you can find a wealth of information.

What this record tells us is that [withheld] actually owns the home.. Steve isn't even listed. She's also the sole person listed paying the property taxes. Interesting.. I wonder why?

Also, further down on the report, there are two documents. A quit claim deed, and a statutory warranty deed. A warranty deed is issued in some states when a house is sold. It protects the buyer from having third parties come after them for unpaid debts and whatever. So, it appears as though they bought the house in 2001 for $110,000? Seems awfully low.

Now, let's look at the quit claim deed. First thing I notice: R Steve Nicholas is listed as "Husband of Grantee". I think Steve's real name is Ralph. lol.

This is interesting.. quit claim deeds are used after a divorce to switch the owner of a property from one party to another at the county level. But they're still married. The other times that I've seen quit claim deeds used are when people encounter serious financial trouble and need to file bankruptcy. They file independently and deed the house to their spouse.

Let's find out!

I am not going to tell you what service I use to obtain this information because I don’t want it to get abused and taken away. Also, I don’t think everyone should have access to it. SO.


> 91-40727 Ralph Steven Nicholas and [withheld (first & middle name)] Nicholas
> Case type: bk Chapter: 7 Asset: No Vol: v Judge: John C. Minahan Jr.
> Date filed: 05/08/1991 Date of last filing: 02/11/1993
> Date terminated: 02/11/1993



Ok, so they did a joint bankruptcy in ’91 and it was discharged in ’93. I also have a list of their creditors.. no wonder they filed bankruptcy. Ouch.

One other piece of information that this offers is previous addresses and the last 4 digits of their social security numbers. Keep in mind, a lot of people use the last 4 digits of their social for PINs.. because most PINs are limited to 4 digits. Stupid.

*UPDATE*: I’ve decided to X out the social security numbers because this post is starting to receive a ton of traffic and I’m not sure I want everyone visiting it to have this information. My intention with this article is not to make it easy to steal this guy’s identity.. it’s to point out a vulnerability. If you really want to find his social security number, let’s just say.. it’s available via the internet.  


> Debtor
> Ralph Steven Nicholas
> 6747 Crooked Creek Dr.
> Lincoln, NE 68516
> ...



Here’s something to really think about.. I was able to obtain all of the information in this post for 16 cents and by just using an email and IP address from a piece of spam.

Family members, ages, schools, anniversary dates, marriage lengths, hobbies, interests, phone numbers, addresses, property records, property taxes, pictures of their house, pictures of them, pictures of their children and grandchildren, deeds on their house, bankruptcies, employment history, previous addresses, previous creditors, and bits of social security numbers.

I’m pretty sure I’d be able to fake my way through one of those password reset forms.. you know, where you set up a “secret question” asking what your dog’s name was, or where you went to school?

Beyond that, I’m fairly confident that at this point, if I were to call his bank and pretend to be him, I could easily pass when they asked me personal questions.

In closing.. you really need to pay close attention to what you’re posting on the internet. If I were a douche, I could ruin this guy’s life using this information. There are a lot of douches out there that are doing this type of stuff right now. Given an email address, phone number, or whatever, they build profiles on people which can be used to exploit them and steal identities.

The other thing that I’ve actually fallen victim to is the speed of Google’s spiders and the fact that they index Craigslist. Let’s say you run a business.. Catholic Charities R Us.. and you make a post for it that includes an email address, phone number, something. Let’s say you also make a post, days, weeks, whatever, later looking for whores, or something. Both of those posts will come up when googling for your phone number.

Also, consider what you’re sending in this email. What if this guy had sent me an email trying to extort me, threaten me, whatever? I could turn this over to the authorities and they’d have their work cut out for them.

Not to try to scare people too much, but think about single women in the dating scene. They make a post somewhere with their email address, and someone who comes across it is able to determine the same amount of information about them as I did above. What if that person was more interested in something other than identity theft?

I think you get the idea.. essentially.. guard your personal information with your life. Never post your phone number on the internet (unless you’re using a proxy number, which is what I do), and make sure no personal information is associated with your email address before you go firing off emails to strangers.


----------



## sygeek (Dec 1, 2011)

*Saving a life is easy, but I didn’t​*By Dan Shapiro​


Spoiler



I was reading Hacker News a few weeks ago and I stumbled on a story: Amit Gupta needs you. It turns out that Amit is the thoroughly likeable founder of Photojojo.  Amit had the double misfortune to:

a) have acute leukemia, and

b) be South Asian.

The problem with the first one is obvious.  The problem with the second one is that the life-saving marrow transplant that Amit needs requires a donor with a similar genetic makeup, and South Asians are dramatically underrepresented in the registered donor pool.

I read the amazing page dedicated to finding Amit a donor, and thought back to 1995.  I was in my second year of college and there was a blood drive.  A representative from the National Marrow Donor Program was there near the cafeteria in the quad while I was donating.  She explained the marrow registry and asked me to sign up to be considered for a match for a marrow transplant.

At the time, the only way to donate marrow was to basically have someone drill holes in your bones and drain your skeleton, which kind of terrified me.  Nowadays, of course, most donations require nothing more than sitting still for a few hours with an IV watching television.  But after a lot of introspection, I decided that it was a rare occurrence in this world that you actually get to save the life of a stranger, and if skeleton-draining was the price of that, then so be it.  I was also reassured that most folks are never matched with anyone.

Back to Amit and the present, it was clear that my genetic makeup wasn’t going to be much help for him.  But I went over to marrow.org and looked around.  I learned that it’s ridiculously easy these days to get tested and not very hard to donate if you’re matched.  Despite this, the need is skyrocketing.  Half of the people who need marrow transplants can’t locate a donor.

Then I realized – crap, how the heck are they going to get a hold of me if there’s a hit?  All they have for contact info is my college dorm address!  I can’t help Amit, but maybe I could help someone else in need.  So I fussed around with the website to update my contact data.  I couldn’t figure out how to find my old record, so I made a mental note to try and call them some time, and gave up.

Allow me to digress one more time before I get to the point.  Five months ago I sold my startup, Sparkbuy, to Google. There were mountains of paperwork, and one bit that didn’t get wrapped up nicely was mail forwarding.  Not email forwarding, mind you, but good, old-fashioned, paper-cut-on-your-tongue-from-sealing-the-envelope mail.  I submitted the change of address request, but for some reason, mail piled up in my old office.  They nagged me about it every few weeks.  I procrastinated. After many months I finally went and picked it up.

Today I was sorting through that mail.

Did you know that, when the marrow donation center finds a match, they try desperately to reach the potential donor?  Even if that person has moved from their dorm room long ago, even if their contact information has changed, even if they’re in a different state, even if 16 years have passed?  They try.  They look all over for ways to reach that person.

Almost 5 months ago, they found a match, and sent me a letter to the only address they could find for me.  To my old company.

Today I read it.

I called immediately, of course.  They said that they’d contact the patient’s doctor right away.  But they told me the odds were good that, since 5 months had passed, “they found another match, or that the patient… is no longer eligible.”


----------



## sygeek (Dec 6, 2011)

*The Reason Android is Laggy*
By Andrew Munn​


Spoiler



*Follow up to “Android graphics true facts”, or The Reason Android is Laggy*

Yesterday +Dianne Hackborn posted to Google+ an article that dismissed the common accusation that Android is laggy because UI rendering wasn’t hardware accelerated until Honeycomb:

*plus.google.com/105051985738280261832/posts/2FXDCz8x93s

It’s an insightful post that illuminates many of the complex issues with smooth Android rendering. Unfortunately, it doesn’t answer the fundamental question asked by both technical and non-technical Android users:

*Why is Android laggy, while iOS, Windows Phone 7, QNX, and WebOS are fluid?*

This post will attempt to answer that question.

However, before I jump in, a couple of disclaimers. First, I am a 3rd year undergraduate software engineering student. I interned on the Android team, and +Romain Guy, who was responsible for much of the hardware acceleration work in Honeycomb, reviewed some of my code, but I was not on the framework team and I never read the Android rendering source code. I do not have any authoritative Android knowledge and I cannot guarantee what I say here is necessarily 100% accurate, but I have done my best to do my homework.

Second, I’m interning with the Windows Phone team starting in January, so it’s possible that this post will be unconsciously biased against Android, but if you ask any of my friends, it’s really hard to shut me up about Android. I have more Android t-shirts than days of the week and I’d rather give away my MacBook than my Nexus S. The Googleplex is like a second home - I’ve slept there on more than a few occasions to the dismay of startled janitors (and if you ever get a chance to visit, the banana french toast at Big Table Cafe is to die for). If anything, I’m probably biased in Android’s favor.

Finally, any opinions expressed in this article are solely my own and do not represent those of any past or future employers.

With that out of the way, let’s dive right in.

Dianne starts off her post with a surprising revelation:

_“Looking at drawing inside of a window, you don’t necessarily need to do this in hardware to achieve full 60fps rendering. This depends very much on the number of pixels in your display and the speed of your CPU. For example, Nexus S has no trouble doing 60fps rendering of all the normal stuff you see in the Android UI like scrolling lists on its 800x480 screen.”_

Huh? How can this be the case? Anybody who’s used a Nexus S knows it slows down in all but the simplest of ListViews. And forget any semblance of decent performance if a background task is occurring, like installing an app or updating the UI from disk. On the other hand, iOS is 100% smooth even when installing apps. But we know Dianne isn’t lying about the potential CPU performance, so what’s going on?

*The Root Cause*

It’s not GC pauses. It’s not because Android runs bytecode and iOS runs native code. It’s because on iOS all UI rendering occurs in a dedicated UI thread with real-time priority. On the other hand, Android follows the traditional PC model of rendering occurring on the main thread with normal priority.

This is not an abstract or academic difference. You can see it for yourself. Grab your closest iPad or iPhone and open Safari. Start loading a complex web page like Facebook. Half way through loading, put your finger on the screen and move it around. All rendering instantly stops. The website will literally never load until you remove your finger. This is because the UI thread is intercepting all events and rendering the UI at real-time priority.

If you repeat this exercise on Android, you’ll notice that the browser will attempt to both animate the page and render the HTML, and do an ‘ok’ job at both. On Android, this is a case where an efficient dual core processor really helps, which is why the Galaxy S II is famous for its smoothness.

On iOS when an app is installing from the app store and you put your finger on the screen, the installation instantly pauses until all rendering is finished. Android tries to do both at the same priority, so the frame rate suffers. Once you notice this happening, you’ll see it everywhere on an Android phone. Why is scrolling in the Movies app slow? Because movie cover thumbnails are dynamically added to the movie list as you scroll down, while on iOS they are lazily added after all scrolling stops.
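The scheduling difference described above can be sketched as a toy single-core simulation. All the costs below are invented for illustration (real schedulers are far more complex); the point is only that when rendering shares the core at equal priority with background work, frames miss their deadline, while a render-first policy keeps the frame rate and lets the background backlog wait instead:

```python
FRAME_BUDGET_MS = 16   # one frame at ~60 fps
RENDER_COST_MS = 10    # hypothetical cost to draw one frame
BG_CHUNK_MS = 9        # hypothetical background work arriving each frame

def dropped_frames(prioritize_render, frames=100):
    """Count missed 16 ms deadlines on a simulated single CPU core."""
    dropped = 0
    backlog = 0.0  # queued background work, in ms
    for _ in range(frames):
        backlog += BG_CHUNK_MS
        if prioritize_render:
            # iOS-style: render first, background only gets leftover time
            leftover = FRAME_BUDGET_MS - RENDER_COST_MS
            backlog -= min(backlog, leftover)
        else:
            # Pre-ICS-Android-style: background chunk runs at equal priority
            work = min(backlog, BG_CHUNK_MS)
            backlog -= work
            if work + RENDER_COST_MS > FRAME_BUDGET_MS:
                dropped += 1
    return dropped

print(dropped_frames(prioritize_render=False))  # 100 -- every frame misses
print(dropped_frames(prioritize_render=True))   # 0  -- backlog waits instead
```

Note the trade-off the simulation makes visible: prioritizing rendering doesn't make the background work free, it just moves the cost from dropped frames to a growing backlog, which matches the iOS behavior of pausing installs while you scroll.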

*Other Reasons*

The fundamental reason Android is laggy is UI rendering threading and priority, but it’s not the only reason. First, hardware acceleration, despite Dianne’s reservations, does help. My Nexus S has never been snappier since upgrading to ICS. Hardware acceleration makes a huge difference in apps like the home screen and the Android Market. Offloading rendering to the GPU also increases battery life, because GPUs are fixed-function hardware, so they operate at a lower power envelope.

Second, contrary to what I claimed earlier, garbage collection is still a problem, even with the work on concurrent GC in Dalvik. For example, if you’ve ever used the photo gallery app in Honeycomb or ICS you may wonder why the frame rate is low. It turns out the frame rate is capped at 30 FPS because without the cap, swiping through photos proceeds at 60 FPS most of the time, but occasionally a GC pause causes a noticeable “hiccup”. Capping the frame rate at 30 fixes the hiccup problem at the expense of buttery smooth animations at all times.
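The trade-off behind the 30 FPS cap is simple arithmetic: a longer frame interval leaves more slack in which a GC pause can hide. A minimal sketch, with a hypothetical per-frame render cost:

```python
def max_pause_ms(fps, render_cost_ms):
    """Largest pause a frame can absorb without missing its deadline."""
    return 1000.0 / fps - render_cost_ms

RENDER_COST_MS = 8  # hypothetical time to draw one gallery frame

print(max_pause_ms(60, RENDER_COST_MS))  # ~8.7 ms of slack per frame
print(max_pause_ms(30, RENDER_COST_MS))  # ~25.3 ms of slack per frame
```

Under these assumed numbers, a GC pause in the 10-20 ms range blows the 60 FPS budget and causes a visible hiccup, but fits comfortably inside the 30 FPS budget, which is the behavior the gallery app's cap trades smoothness for.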

Third, there are the hardware problems that Dianne discussed. The Tegra 2, despite Nvidia’s grandiose marketing claims, is hurt by low memory bandwidth and no NEON instruction set support (NEON instructions are the ARM equivalent of Intel’s SSE, which allow for faster matrix math on CPUs). Honeycomb tablets would be better off with a different GPU, even if it was theoretically less powerful in some respects than the Tegra 2. For example, the Samsung Hummingbird in the Nexus S or Apple A4. It’s telling that the fastest released Honeycomb tablet, the Galaxy Tab 7.7, is running the Exynos CPU from the Galaxy S II.
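What SIMD instruction sets like NEON and SSE buy you can be shown in miniature: one "instruction" applies the same operation across every lane of a short vector, instead of issuing one scalar instruction per element. This is a pure-Python illustration of the idea, not actual intrinsics:

```python
def simd_madd(a, b, c):
    """One SIMD-style 'instruction': lane-wise multiply-add
    across a 4-wide vector (a*b + c per lane)."""
    return [ai * bi + ci for ai, bi, ci in zip(a, b, c)]

# Four multiply-adds in one step instead of four scalar instructions --
# the core operation of the matrix math used in UI transforms.
print(simd_madd([1, 2, 3, 4], [5, 6, 7, 8], [1, 1, 1, 1]))  # [6, 13, 22, 33]
```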

Fourth, Android has a ways to go toward more efficient UI compositing. On iOS, each UI view is rendered separately and stored in memory, so many animations only require the GPU to recomposite UI views. GPUs are extremely good at this. Unfortunately, on Android, the UI hierarchy is flattened before rendering, so animations require every animating section of the screen to be redrawn.
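The cost gap between recompositing cached layers and redrawing a flattened hierarchy can be illustrated with a toy pixel count. The layer sizes here are hypothetical, and real dirty-region tracking is more nuanced, but the order-of-magnitude difference is the point:

```python
SCREEN_PX = 800 * 480  # Nexus S-class display

def pixels_touched(moving_layer_px, layers_cached):
    """Pixels processed per animation frame.
    layers_cached=True  -> recomposite a pre-rendered layer (iOS-style)
    layers_cached=False -> flatten and redraw the screen (pre-ICS Android-style)"""
    return moving_layer_px if layers_cached else SCREEN_PX

SLIDING_BAR_PX = 800 * 48  # hypothetical animating toolbar layer

print(pixels_touched(SLIDING_BAR_PX, layers_cached=True))   # 38400
print(pixels_touched(SLIDING_BAR_PX, layers_cached=False))  # 384000
```

In this sketch the cached-layer model touches 10x fewer pixels for the same animation, and compositing is exactly the workload GPUs are built for.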

Fifth, the Dalvik VM is not as mature as a desktop class JVM. Java is notorious for terrible GUI performance on desktop. However, many of the issues don’t carry over to the Dalvik implementation. Swing was terrible because it was a cross platform layer on top of native APIs. It is interesting to note that Windows Phone 7’s core UI is built in native code, even though the original plan was to base it entirely on Silverlight. Microsoft ultimately decided that to get the kind of UI performance required, the code would have to be native. It’s easy to see the difference between native and bytecode on Windows Phone 7, because third party apps are written in Silverlight and have inferior performance (NoDo and Mango have alleviated this problem and the Silverlight UIs are generally very smooth now).

Thankfully, each of the five issues listed above is solvable without radical changes to Android. Hardware acceleration will be on all Android phones running ICS, Dalvik continues to improve GC efficiency, the Tegra 2 is finally obsolete, there are existing workarounds for the UI compositing problems, and Dalvik becomes a faster VM with every release. I recently asked +Jason Kincaid of +TechCrunch if his Galaxy Nexus was smooth, and he had this to say:

_“In general I've found ICS on the Galaxy Nexus to be quite smooth. There are occasional stutters — the one place where I can consistently get jitters on the Galaxy Nexus is when I hit the multitasking button, where it often will pause for a quarter second. That said, I find that the iPhone 4S also jitters more than I had expected, especially when I go to access the systemwide search (where you swipe left from the home screen).”_
So there you go, the Android lag problem is mostly solved, right? Not so fast.
*Going Forward*

Android UI will never be completely smooth because of the design constraints I discussed at the beginning:

UI rendering occurs on the main thread of an app
UI rendering has normal priority

Even with a Galaxy Nexus, or the quad-core EeePad Transformer Prime, there is no way to guarantee a smooth frame rate if these two design constraints remain true. It’s telling that it takes the power of a Galaxy Nexus to approach the smoothness of a three-year-old iPhone. So why did the Android team design the rendering framework like this?

Work on Android started before the release of the iPhone, and at the time Android was designed to be a competitor to the Blackberry. The original Android prototype wasn’t a touch screen device. Android’s rendering trade-offs make sense for a keyboard and trackball device. When the iPhone came out, the Android team rushed to release a competitor product, but unfortunately it was too late to rewrite the UI framework.

This is the same reason why Windows Mobile 6.5, Blackberry OS, and Symbian have terrible touch screen performance. Like Android, they were not designed to prioritise UI rendering. Since the iPhone’s release, RIM, Microsoft, and Nokia have abandoned their mobile OSes and started from scratch. Android is the only mobile OS left that existed pre-iPhone.

So, why doesn’t the Android team rewrite the rendering framework? I’ll let Romain Guy explain:

_“...a lot of the work we have to do today is because of certain choices made years ago... ...having the UI thread handle animations is the biggest problem. We are working on other solutions to try to improve this (schedule drawing on vsync instead of block on vsync after drawing, possible use a separate rendering thread, etc.) An easy solution would of course to create a new UI toolkit but there are many downsides to this also.”_

Romain doesn’t elaborate on what the downsides are, but it’s not difficult to speculate:

All apps would have to be re-written to support the new framework
Android would need a legacy support mode for old apps
Work on other Android features would be stalled while the new framework is developed

However, I believe the rewrite must happen, despite the downsides. As an aspiring product manager, I find Android’s lagginess absolutely unacceptable. It should be priority #1 for the Android team.

When the topic of Android comes up with both technical and non-technical friends, I hear over and over that Android is laggy and slow. The reality is that Android can open apps and render web pages as fast as or faster than iOS, but perception is everything. Fixing the UI lag will go a long way toward repairing Android’s image.

Beyond the perception issue, lag is a violation of one of Google’s core philosophies. Google believes that things should be fast. That’s a driving philosophy behind Google Search, Gmail, and Chrome. It’s why Google created SPDY to improve on HTTP. It’s why Google builds tools to help websites optimize their sites. It’s why Google runs its own CDN. It’s why Google Maps is rendered in WebGL. It’s why buffering on YouTube is something most of us remember, but rarely see anymore.

But perhaps the most salient reason why UI lag in Android is unacceptable comes from the field of Human-Computer Interaction (HCI). Modern touch screens imply an affordance language of 1 to 1 mapping between your finger and animations on the screen. This is why the iOS over-scroll (elastic band) effect is so cool, fun, and intuitive. And this is why the touch screens on Virgin America Flights are so frustrating: they are incredibly laggy, unresponsive, and imprecise.

A laggy UI breaks the core affordance language of a touch screen. The device no longer feels natural. It loses the magic. The user is pulled out of their interaction and must implicitly acknowledge they are using an imperfect computer simulation. I often get “lost” in an iPad, but I cringe when a Xoom stutters between home screens. The 200 million users of Android deserve better.

And I know they will have it eventually. The Android team is one of the most dedicated and talented development teams in the world. With stars like +Dianne Hackborn and +Romain Guy around, the Android rendering framework is in good hands.

I hope this post has reduced confusion surrounding Android lag. With some luck, Android 5.0 will bring the buttery-smooth Android we’ve all dreamed about since we first held an HTC G1. In the meantime, I’ll be in Redmond working my butt off trying to get a beautiful and smooth mobile OS some of the recognition it deserves.

*Credits*

Parts of this post were inspired by this reddit comment by ddtro, who explained the UI thread and real-time issue:
ddtron comments on Facts and fiction about Android graphics rendering & UI smoothness

This explanation of Android versus iOS UI compositing on Hacker News by Corun was illuminating:
Android's graphics problems are not due to a lack of hardware acceleration. They... | Hacker News

Information about Android’s historical roots is taken from _In the Plex_ by +Steven Levy and _Steve Jobs_ by Walter Isaacson


----------



## Neuron (Dec 6, 2011)

What about renaming this thread to 'Sygeek's Daily'?


----------



## Vyom (Dec 6, 2011)

@sygeek: That was a nice article about the lags in Android UI. As a recent owner of an Android device, I had started to wonder, because of those lags, whether the O1 is really the best device. But the article clears my suspicion! 

Thanks again, for the share.


----------



## Sarath (Dec 7, 2011)

Now I know why Symbian felt faster


----------



## sygeek (Dec 12, 2011)

*The Dos And Don'ts Of Time Travel ​*By Jim Behrle​


Spoiler



*www.theawl.com/wp-content/uploads/2011/10/TimeTravel_short-e1320084224895.jpg​
So you’ve hooked electrodes and power couplings to an old-fashioned carousel in an abandoned amusement park on the outskirts of town. Or you’ve outfitted a Harley-Davidson with a flux capacitor—a classic. Or, my personal favorite, you’re using depleted uranium to turn the underused freight elevator in your building into a time-ship. As a soon-to-be time traveler, the last thing you want is somebody telling you “Do this!” and “Don’t do that!” You're about to become a pirate on the open waves of the ocean of time. Good for you! It's sure to be a wonderful adventure. One no doubt filled with romance, knowledge and treasure. But here, humbly, are a few things to keep in mind.

*DO go forward in time first*. No matter how stable you think your time machine is, your first jump should always be into the future. It’s a mistake to visit President Lincoln on your maiden voyage. The past is loud, smelly and dangerous. And without at least one pit stop in the future, the road backwards is a million times more difficult. Imagine getting one good jump out of your device and then getting stuck in, say, 1861. You’d have to live out the rest of your life in the dark past. They didn’t even have a sun until the 1840s. Great, if you are some kind of wild history nerd. But you have no resources. You probably don’t have the right kind of money. Clothes, forget it. Even Civil War reenactors are flushed out within seconds in the past. It’s best, no matter how flushed with megalomaniacal power the creation of a time machine has made you, that you go first into the future to get all the latest updates and then start thinking about venturing into the past. The Future is Your Friend. Think of it as a great big safe house for time travelers filled with strangers who may not be thrilled to help you, but probably will point you in the right direction. After all, time traveling is no big deal there. You remember how cool you felt when you suffered under the illusion that you were the only one you knew who had the new iPhone? In the future, iPhones aren’t very cool. And time machines are a commonplace of everyday life. Like a blender or a teleporter. They’ll know how to hook you up and get you ready for your journey back in time.

*DO be wary of the past*. In fact, it’s probably best to avoid Going Back In Time your first few trips out. As enticing an idea as it might be to track down the Buddha or watch Jesus die on the cross, let’s work up to those, okay? Aramaic isn’t exactly going to be falling off your tongue as a beginner. And you’ll find it’s the little things that will cause the misunderstandings that will get you nailed to a cross right next to your pal Jesus. They have plenty of trees to nail you to in the past; it’s no problem to add one more crazy-talking future freak to the crucifixion party.

There are some things in the past you simply cannot prepare yourself for. The smell. The weird diseases. Everyone’s voice seems really squeaky for some reason. And people are really short. Also, this is probably the most surprising thing, it’s practically a 24/7 grab-ass in the past. Man, woman, child. You will get used to it, but it's initially pretty strange.

*www.theawl.com/wp-content/uploads/2011/10/TimeTravel_SmellOfPast-e1320083234276.jpg​*DO leave a note*. The key to time travel is to always let a friend know where you are. Chances are, you will be killed thousands of times in the past and have your time machine stolen a thousand more. It’s embarrassing, but it happens to us all. Your time machine itself will work against you here—it's tough to hide a red-and-white-striped carousel in The Real Jurassic Park. Do you want to be the time-travelling equivalent of James Franco having to chew his own arm off in order to escape the boulder in the canyon in that movie? No? Well, leave a note then. This holds true whether you’re setting off on a quest to alter the catastrophic course of history—or just taking a weekend off to hang out in the Nigerian countryside in 3 BC. Always leave a note. About where you are, what you did, what you think you changed and the changes you have to make in the future. Maybe even make appointments with your other time-traveling pals for Brunch in Paris in the '20s. If you don’t show up they’ll probably figure you’re dead or captured and will put it on their To Do List to track you down and help. Whenever they get around to it. Which brings us to...

*www.theawl.com/wp-content/uploads/2011/10/TimeTravel_note-e1320083172580.jpg​*DON'T be surprised that all your time-traveling friends are flakes*. You’ll find that time travelers are world-class procrastinators. And why not, right? They’ve got all the time in the world and a million chances to get everything just right. It's not surprising that such people would develop a leisurely sense of pace. “Oh, you have been captured by a Mongol Army? OK, I will definitely get over there after a few weeks on the beaches of Atlantis.” That kind of thing. Time travelers, although they need you to watch their backs, do not need to help you right away.

*DO get killed. DON’T Get Captured*! Being killed in the past is better than being captured. You know how every episode of “Dr. Who” would be greatly sped up if the Doctor simply carried a gun and refused to be taken prisoner? Being taken hostage is a generally unpleasant experience. And the problem is that even if your time-traveling friends warn you over brunch that you should not go to Ancient Rome because on this trip you will be fed to lions, you won’t listen. Instead, you'll think, “Well, knowing that I will be more careful and make sure not to get taken prisoner.” Which will, through some overly-cautious sidesteps you make in response to this knowledge, probably lead right to your capture. And you could be captured for a while. And just because you later erase the past it doesn’t mean you will forget it, what with all the being chewed on by rats and beaten with medieval wifflebats.

Your time-traveling friends may eventually get around to helping you out of captivity, but, as discussed above, they’re most likely flakes. Who knows if they’ll even show up the same day you got captured, or if they’ll leave you in there to rot? “Oh, I thought you said August 1901! Not August 1701!” You certainly could rely on yourself to help yourself. By sending yourself back to one moment in the past 40 or 50 times you will have a pretty good posse of yourself there to handle most problems. Some time travelers are able to do this with regularity and effectiveness. How do you think this whole Occupy Wall Street thing started in the first place? But what tends to happen to time travelers over the years is that they grow more aloof—and less tied to their firmest of beliefs. When you time travel a lot, you start to see all sides of most arguments. You become a bit of a flake. And with all the time in the world, you rarely feel like doing the little things you promised yourself you would do. “I’ll get to that trap door eventually.” And then next thing you know, you’re 99 years old, on a beach getting busy with the King of France, and you have one of those Should Have Had a V-8 Head Smacking moments.

*DON'T be too much of a perfectionist*. Here’s something lots of time travelers do: get trapped in a situation, say at Ford's Theatre, and think, well, I’ll make myself come back here earlier in the day and make sure that I have a weapon taped under my seat. It might take you 30 or 40 tries to get things just right. But even then it’s questionable whether even having a weapon made things easier or harder in the first place. With all the power of time travel and infinite amounts of do-overs, time travelers tend to get a little bit paranoid about every little thing. They want everything to turn out just right, with no awkward moments or embarrassing scenarios. Remember: no one really knows you in the past. They’re not going to tweet all their friends if your toga falls off in front of Caesar or whatever. You don’t need to get everything exactly right. It’s just never going to happen. Even after a million tries you’re still not going to impress that lady or dude with the perfect line. They either like you or they don’t. There are lots of fish in the sea. (This is even more true when you consider that all the people in the past will now be within reach.)

*www.theawl.com/wp-content/uploads/2011/10/TimeTravel_Right-e1320083364929.jpg​
*DO feel free to be*. Falling in love is OK. Don’t worry about knocking up people in the past or wonder if by impregnating someone you are changing the time-space continuum. It’s a mistake to think that you’re all that important to the flow of anything. Step on a butterfly in the past and maybe it gives the chance for another butterfly to land on a flower. Things tend to work out the same way, eventually. The Yankees won the World Series 40 times the first time through this current time narrative. Time travelers have all just compromised at 27 and left it at that. You know, whatever. I was originally shocked that time travelers had allowed many of the most heinous acts of human evil to go unchanged. I mean, imagine if Hitler had been stopped. Well, it has been imagined. Millions of times over. And it doesn’t mean that World War II and the Holocaust can't be averted. They just haven’t been yet. “Yet” is a very powerful concept to the time traveler, you’ll find. It has endless possibilities. Nothing is decided. And when they write the history books you’ll find even those are written in erasable ink.

*DON’T worry about creating alternate universes or destroying the timeline*. Really, don’t sweat it. No small thing you do—like, choosing the hashbrown casserole over grits at Ye Olde Historic Cracker Barrel—is going to set off a chain reaction that will unravel the present as we know it and threaten the very existence of everyone reading this article. That whole Gwyneth Paltrow misses a subway and opens up a wormhole which ruins her life thing is complete crap. Relax. You are here reading this. So, OK. If Einstein was wrong about the possibility of time travel in the first place (whether he was wrong or just flat-out lied about it for his own reasons, we may never know). He said that if you can’t travel faster than the speed of light then time travel is impossible. Well, roll over, Al. You apparently missed the whole neutrino thing. Alternative universes and broken timelines, well, let’s just say the science isn’t in. Yet. Get a hundred time travelers in a room together and you’ll have to listen to a lot of stories of how “in their experience” the past is this and not that. Getting time travelers to agree on anything is pretty pointless. They act even more entitled and righteous and professorial than elected officials.

*www.theawl.com/wp-content/uploads/2011/10/TimeTravel_ScrewingUpPresent-e1320083507863.jpg​
*DO take precautions*. In an emergency it’s really most important to keep cool. It’s a good idea to keep an apartment in a neutral place during a peaceful time. You’ll find time travel to be exhausting, and you will need a place you know you can chillax. Time travel is also pretty addictive, so you'll need to find a way to allow yourself enough of a rest. Food in the past is mostly disgusting and will make you pretty sick at first. There is no good coffee practically anywhere. And, if you’re a drinker, you may be prone to drunk time-dialing. Always know where you are. Always leave yourself a note. Have you ever woken up someplace and not known how you got there? Multiply that by any place in time, any where in the world. Protect yourself at all times. You never know if someone you meet is Jack the Ripper, so just assume they are. You don’t have to live your life as a time traveler in secret. There are plenty of people in the past who get that time travelers exist and will be interested in your travels. And there are many others who will want to use all the information you have for their own benefit.

*DO make money*. If you’re low on money, the best way to get more is to gamble. Knowing how sports events, gladiator fights or dice will fall is a big benefit. Don’t forget to sometimes lose; you'll attract less attention that way. Think of it as the price of doing business in the past. You can also rob banks. And, if you want, give the money back down the road. They’ll never know. You can travel to Macy’s Herald Square location on Christmas and take, say, $50,000 cash from the safe. And bring it back down the road when you’re flush. Take out stock in some crappy company like Google and then sell it just before it goes belly-up. But, like, if you’re going back in time to commit armed robbery and you leave behind a giant trail of dead bodies, the more cleaning up you’re going to have to do. If you’re going back in time to be a mass-murderer, you’re wasting your precious adventure time. Time flies, literally, when you’re time traveling. And you’ll never get to do all the things you want to do if you are wasting it cleaning up after your poor decisions.

*DON'T bring a friend*. It’s tough to bring people with you, even ones you completely trust. Never mind that Dr. Who and Companions thing he has going. He is not the time traveler to model yourself after. I mean, celery pinned to your lapel? He attracts way too much attention. And he has seemingly infinite lives to play with as he infinitely renews himself in new hot young actor bodies. You, on the other hand, can die lots of times and be saved by your pals, but you will always have just the same one non-actor body. You will continue to get old and frail and fat while the Doctor will transform into another hot young actor. Why hasn’t Dr. Who turned into a woman? Because there would be no show; most women are too smart to get themselves into the stupid predicaments that the Doctor does. You may think it will be impressive to some friend of yours to bring them back in time to meet, I don’t know, Napoleon? But you’ll find that adding pals to your traveling party increases the danger of someone doing something stupid. Like getting drunk and taking off with your time machine. Time travel tends to be kind of a solo thing. Let your friends get their own time machines and have their own adventures. Which you can discuss over brunches in 1920's Paris.

*DO be serene about what you can change and what you can't*. Nothing that is done cannot be undone. And the world, the past and the future is waiting for you. It’s OK to feel nervous and a little overanxious. The past and even the future will ultimately be a little disappointing in some ways. And breathtaking in others. Try to enjoy yourself—but quietly, without drawing too much attention to yourself. Some people, like me, might put our poo in plastic, go to the zoo and chuck it at the monkeys. But that’s only if you really like trouble. And most people can do without trouble entirely. Time travelers are around us all the time, seeing their favorite movies in theaters and maybe just riding the Q train for kicks. If you get in trouble in the past, they might even lend you a hand. Most Americans want to meet Lincoln, for some reason. Possibly the hat. If you see him, say hello. You could warn him about the play, but who hasn’t. He’s as stubborn as any time traveler. And some people prefer to let history ride.

*DON'T go looking for yourself*. Also, be careful about visiting yourself in the past. You’ll find arguing with who you used to be to be an incredibly unpleasant experience. Trust me, you won’t want to listen to your time-traveling ass. You, Old You that is, may see Future You and think it's important to stay the course so you can become a time traveler (alter it and you might go into stamp collecting instead). You can’t talk yourself out of dating certain people for the most part: the Past You will resent the Future You for interfering. It might even make Past You want to date Person You Shouldn't Date even more. And let's face it: Some people are just attracted to terrible people. Just because you can travel through time doesn’t mean you can control it. Some things just have to happen. Some mistakes need to be made. When the team you’re not rooting for is about to score the winning touchdown, you don’t jump on the field and tackle them. That would just make things worse. Time Travelers don’t have a Hippocratic oath, and “harm” is pretty relative, but the old adage holds true for time travelers and is generally just a good policy to have: “Don’t **** with what you don’t understand.” There’s a certain zen quality to letting things happen. And to figuring things out for yourself. Enjoy the time it takes you, and where time takes you!

*www.theawl.com/wp-content/uploads/2011/10/TimeTravel_BewareThePast-e1320083421567.jpg​


----------



## Vyom (Dec 12, 2011)

^^
You couldn't have posted this awesome article on Time Travel at a more appropriate time!
Cause the Men In Black 3 trailer released today!! 
And guess what....


Spoiler



It would be related to....


Spoiler



*Time Travel!*


Spoiler



 
I am excited as hell!


----------



## sygeek (Dec 16, 2011)

*Famous Last Words by Bosses I've Had​*


Spoiler



Sorry to say, none of these were made up...

Me, 124 Monday lunches in a row: We need an adequate disaster recovery plan.
Boss: We do. We back up every day.
Me:   What happens when we try to restore one of those backups?
Boss: I don't know. Why?

Me:   Where's the test plan?
Boss: Jerry will make sure Fred's program works.

Me:   Where's the "Expected Results" section on the test plan?
Boss: What?

Me:   I don't have access to the production server.
Boss: I already emailed you your password.
Me:   I know, but I don't know my login.
Boss: What's a login?

Me:   That doesn't make any sense. Have the auditors approved it?
Boss: No, but we can't have everything.

Boss: I'm really upset that no one has updated me on Project 127.
Me:   I cc'd you on all 9 Project 127 emails I sent this week.
Boss: I haven't had time to get caught up on my email.

Me:   You've been invited to a meeting with 3 department heads to hash out their differences on Project 249.
Boss: I hate meetings.

Boss: Why haven't you started the Accounts Receivable project yet?
Me:   Because management has not yet decided whether customer credit limits should be per division or companywide.
Boss: What difference does that make?

Boss: We have hundreds of past due orders.
Me:   No, we have 22 past due orders.
Boss: I'm not going to argue with you.
Me:   Good, because you'd lose.

Me:   We're meeting with the customer at 8:00 a.m. tomorrow.
Boss: I hate mornings.

Me:   The server crashed. IT Services is working to bring it back up.
Boss: Don't confuse me with all these technical details.

Me:   The customer didn't receive that information because that product is not on our computer.
Boss: Give me a list of all products not on our computer.

Boss: Why haven't you started Project 193 yet?
Me:   Because the customer has not yet committed to the specs.
Boss: What difference does that make?

Me:   The program was written with 3 SQL selects inside a loop. It ran OK when we had 500 parts. Now that we have 10,000 parts, it runs real slow.
Boss: I don't understand.

Boss: What are you working on?
Me:   Project 432, which you said was my top priority. Remember?
Boss: No.

Boss: Why aren't you working on Project 387?
Me:   Because you said not to work on anything else until Project 432 was complete. Remember?
Boss: No.

Boss: I'm giving you only enhancements. I'm outsourcing all of the bug fixes.
Me:   But this is a bug fix. It says so right here on the ticket.
Boss: Oh, I didn't have time to read the ticket.

Boss: Amazon is threatening to shut us down because we ship too many orders late. How do we fix this?
Me:   Ship every order on time.
Boss: No, I meant, "How do we fix this with software?"

Boss, on December 31: Write a program to close every work order so we make our year end numbers.
Boss, on January 3:   Why is the database so screwed up?

Boss: You did great this year. I'm giving you a 2% increase.
Me:   I hate you. I quit.
Boss: Then I'll give you a 4% increase.
Me:   I still hate you. I still quit.


----------



## sygeek (Dec 19, 2011)

*My Favorite Strange Number: Ω (classic repost)*
By Mark C. Chu-Carroll​


Spoiler



_I'm away on vacation this week, taking my kids to Disney World. Since I'm not likely to have time to write while I'm away, I'm taking the opportunity to re-run an old classic series of posts on numbers, which were first posted in the summer of 2006. These posts are mildly revised.
_
Ω is my own personal favorite transcendental number. Ω isn't really a specific number, but rather a family of related numbers with bizarre properties. It's the one real transcendental number that I know of that comes from the theory of computation, that is important, and that expresses meaningful fundamental mathematical properties. It's also deeply non-computable, meaning that not only is it non-computable, but even computing meta-information about it is non-computable. And yet, it's _almost _computable. It's just all around awfully cool.

So. What is Ω?

It's sometimes called the _halting probability_. The idea of it is that it encodes the _probability_ that a given infinitely long random bit string contains a prefix that represents a halting program.

It's a strange notion, and I'm going to take a few paragraphs to try to elaborate on what that means, before I go into detail about how the number is generated, and what sorts of bizarre properties it has.

Remember that in the theory of computation, one of the most fundamental results is the non-computability of _the halting problem_. The halting problem is the question "Given a program P and input I, if I run P on I, will it ever stop?" You cannot write a program that reads an arbitrary P and I and gives you the answer to the halting problem. It's impossible. And what's more, the statement that the halting problem is not computable is actually equivalent to the fundamental statement of Gödel's incompleteness theorem.

Ω is something related to the halting problem, but stranger. The fundamental question of Ω is: if you hand me a string of 0s and 1s, and I'm allowed to look at it one bit at a time, what's the probability that eventually the part that I've seen will be a program that _eventually_ stops?

When you look at this definition, your reaction should be "Huh? Who cares?"

The catch is that this number - this probability - is a number which is easy to define; it's not computable; it's completely _uncompressible_; it's _normal_.

Let's take a moment and look at those properties:

    Non-computable: no program can compute Ω. In fact, beyond a certain value N (which is itself non-computable!), you cannot compute the value of _any bit_ of Ω.
    Uncompressible: there is no way to represent Ω in a finite way; in fact, there is no way to represent _any_ _substring_ of Ω in fewer bits than there are in that substring.
    Normal: the digits of Ω are completely random and unpatterned; any digit of Ω is equally likely to be a zero or a one; any selected _pair_ of digits is equally likely to be any of the 4 possibilities 00, 01, 10, 11; and so on.

So, now we know a little bit about why Ω is so strange, but we still haven't really defined it precisely. What is Ω? How does it get these bizarre properties?

Well, as I said at the beginning, Ω isn't a single number; it's a family of numbers. The value of _an_ Ω is based on two things: an effective (that is, Turing equivalent) computing device; and a prefix-free encoding of programs for that computing device as strings of bits.

(The prefix-free bit is important, and it's also probably not familiar to most people, so let's take a moment to define it. A prefix-free encoding is an encoding where for any given string which is valid in the encoding, no prefix of that string is a valid string in the encoding. If you're familiar with data compression, Huffman codes are a common example of a prefix-free encoding.)
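Prefix-freeness is easy to check mechanically. Here is a small sketch (in Python; the helper name is my own invention, not from the article) that tests whether a set of codewords is prefix-free by sorting them, since any offending prefix pair must end up adjacent in sorted order:

```python
def is_prefix_free(codes):
    """Return True if no codeword in the set is a prefix of another codeword."""
    # After lexicographic sorting, if some codeword a is a prefix of some
    # codeword b, then a and b must appear as neighbours, so checking
    # adjacent pairs suffices.
    codes = sorted(codes)
    return all(not b.startswith(a) for a, b in zip(codes, codes[1:]))

print(is_prefix_free(["0", "10", "110", "111"]))  # Huffman-style code: True
print(is_prefix_free(["0", "01", "11"]))          # "0" is a prefix of "01": False
```

The adjacency trick works because any string sorted between a prefix and its extension must itself start with that prefix.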

So let's assume we have a computing device, which we'll call φ. We'll write the result of running φ on a program encoded as the binary number p as φ(p). And let's assume that we've set up φ so that it only accepts programs in a prefix-free encoding, which we'll call ε; and the set of programs coded in ε, we'll call Ε; and we'll write φ(p)↓ to mean φ(p) halts. Then we can define Ω as:
Ω[SUB]φ,ε[/SUB] = Σ[SUB]p ∈ Ε, φ(p)↓[/SUB] 2[SUP]-len(p)[/SUP]

So: for each program p in our prefix-free encoding, if it halts, we add 2[SUP]-len(p)[/SUP] to Ω.
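To make the sum concrete, here is a toy partial sum. The bit strings below are hypothetical stand-ins for halting programs (a real Ω would need a real universal machine, and is non-computable anyway); each halting program p contributes 2^(-len(p)):

```python
from fractions import Fraction

# Hypothetical prefix-free programs that we pretend have been observed to halt.
halting_programs = ["00", "010", "0110"]

# Each halting program p contributes 2**(-len(p)) to the (partial) sum.
omega_partial = sum(Fraction(1, 2 ** len(p)) for p in halting_programs)

print(omega_partial)         # 1/4 + 1/8 + 1/16 = 7/16
print(float(omega_partial))  # 0.4375
```

Enumerating more halting programs only ever raises this partial sum, which is why Ω can be approximated from below but never pinned down.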

Ω is an incredibly odd number. As I said before, it's random, uncompressible, and has a fully normal distribution of digits. But where it gets fascinatingly strange is when you start considering its computability properties.

Ω is _definable_. We can (and have) provided a specific, precise definition of it. We've even described a _procedure_ by which you can conceptually generate it. But despite that, it's deeply uncomputable. There are procedures where we can compute a finite prefix of it. But there's a limit: there's a point beyond which we cannot compute _any_ digits of it. _And_ there is no way to compute that point. So, for example, there's a very interesting paper where the authors computed the value of Ω for a Minsky machine; they were able to compute 84 bits of it; but the last 20 are _unreliable_, because it's uncertain whether or not they're actually beyond the limit, and so they may be wrong. But we can't tell!

What does Ω mean? It's actually something quite meaningful. It's a number that encodes some of the very deepest information about _what_ it's possible to compute. It gives a way to measure the probability of computability. In a very real sense, it represents the overall nature and structure of computability in terms of a discrete probability.

Ω is actually even the basis of a halting oracle - that is, if you knew the value of Ω, then you could easily write a program which solves the halting problem!

Ω is also an amazingly dense container of information - as an infinitely long number and so thoroughly non-compressible, it contains an unmeasurable quantity of information. And we can't even figure out what most of it is!


----------



## sygeek (Jan 26, 2012)

*Cars Kill Cities*

*Cars Kill Cities​*


Spoiler



OK, I’m finally getting a chance to make another post.  I have temporarily relocated to Mountain View, CA and have been up to my eyeballs in work, both ‘real’ work and research work.  It’s nice to get back to this blog.

Cars do not belong in cities.  A standard American sedan can comfortably hold 4+ adults w/ luggage, can travel in excess of 100 miles per hour, and can travel 300+ miles at a time without stopping to refuel.  These are all great things if you are traveling long distances between cities.  If you are going by yourself to pick up your dry cleaning, then cars are insanely over-engineered for the task.  It’s like hammering in a nail with a diesel-powered pile driver.  To achieve all these feats (high capacity, high speed, and long range driving), cars must be large and powered by fossil fuels.  So when you get a few hundred (or thousand) cars squeezed onto narrow city streets, you are left with snarled traffic and stifling smog.

Even if you ignore the pollution, cars simply take up too much space.  Next time you are stuck in traffic behind what seems like a million cars, try to imagine if all those cars were replaced by pedestrians or bike riders.  Suddenly, the congestion is gone.

*i.imgur.com/WmUbb.jpg​
But why am I complaining about traffic?  Traffic only affects those stuck in it, right?  Once all cars go electric, essentially eliminating inner-city air pollution, then there will be no more problems for pedestrians, right?  Wrong!!  Probably the biggest problem with cars in cities is that they require huge amounts of land for storage (a.k.a. parking).  Here is a photo of Midtown Atlanta between 5th street and 12th street.  This is one of the densest and most pedestrian-friendly areas in the entire state of Georgia.  The red blocks indicate parcels of land that are 100% dedicated to car storage.

*i.imgur.com/Uu4Qs.jpg​
Dedicating all this land to car storage basically reduces the density by about half, doubles the average distance between locations, and reduces walkability.  Throw in the 16-lane interstate and the 45+ mph traffic on most of these streets, and it becomes exceedingly hard to believe that this is one of the most walkable areas in the entire state.  Such is life for pedestrians in a car-dominated city.

It wasn’t always this way.  Atlanta, like all cities, used to be walkable and people actually lived IN the city instead of commuting 50 miles every day.  But the more people moved away from the city, the more Atlanta had to become like a suburb, retrofitted to handle all the automobile infrastructure required by a million 40-hour-a-week temporary citizens.  The result of this retrofit is a wasteland of asphalt and isolated neighborhoods, a slow decimation that has rolled along since the invention of the automobile.

Contrary to how it may sound, I do not want to rid the earth of cars.  I just want to use them smarter.  Do you really need a 2-ton vehicle to pick up your dry cleaning?  Probably not.  Although I do see the appeal in loading a family of 6 into an SUV and traveling to Florida for vacation.  That is a totally reasonable use of an automobile.  What I really want is clean, walkable, safe, affordable, and family-friendly cities and towns.  In a strange way, I kind of want to live in Mayberry.

In the next _post_*, I promise to discuss a few ideas that may get us a little closer to this goal.

_*The thread will be updated when the next post is up_


----------



## nisargshah95 (Jan 30, 2012)

*Re: Cars Kill Cities*



sygeek said:


> *Cars Kill Cities​*
> 
> 
> Spoiler
> ...


Long time since your last post dude. Looking for some more articles...


----------



## sekhar.mld (Feb 5, 2012)

The articles are good.
I read few.
It is always good to put some good facts and ideas into your brain, makes it more active.


----------



## sygeek (Feb 18, 2012)

*Did You Hear We Got Osama?*​By Roshan Choxi​


Spoiler



I think I got my first computer in 2004 when I went to a boarding school in Illinois. The thing that really blew my mind about it was the unlimited amount of information that I just wished I could absorb into my brain. “I’m going to know everything there is to know,” I told myself as I subscribed to The New York Times, The Economist, Wall Street Journal, Wired, and at least a dozen other blogs on Google Reader. Politics, economics, science, whatever your choice of topic was, I would never be caught uninformed.

It’s only in hindsight that I see what felt like self improvement at the time was actually the beginning of a terrible addiction. It got overwhelming really fast. I added filters, I tried a dozen different RSS readers, I even learned to speed read over the course of a few years just to keep pace with my reader. This was serious business. How would I keep up to date with the Human Genome Project and Bush’s latest folly if I started slacking on my Reader queue? Over time, I’d get better at reading quickly to the point where I was just skimming headlines. Technology would improve so I could consume news within a 140 character limit and stash the important items to “Read it Later”.

*roshfu.com/images/ConsumerWhore.jpg​_(Image from Don Hertzfeldt's "Rejected")​_
My freshman year of college, Obama was running for president and the murmurings of the debt crisis had begun. Despite the 20-30 articles I consumed every day and the catch phrases I learned to repeat in front of my friends to sound knowledgeable, I still had no idea what was actually going on.

I turned 18 and chose not to vote that year. I got less and less vocal about my political opinions. And, most importantly, I phased out the majority of my RSS reader. Goodbye to Wired, The Wall Street Journal, The New York Times, The Economist, and Google News. I was ready to see what would happen if I turned it all off and embraced not knowing anything about current events and the world at large.

What followed were the most productive three years of my life.

That’s when I learned an important truth about news. Whether it’s TechCrunch, The New York Times, Wired, or Fox News: their job isn’t to educate or inform you, it’s to entertain you. You’re not reading them because you think you’ll be more knowledgeable and informed, you’re reading them because you want to be distracted – because consuming has a more immediate reward than creating.

Did you guys hear about the Path scandal? Did you know Kickstarter is completely changing the fundraising landscape? “That’s just tech gossip, I don’t read the gossip,” you might say. Well what about the 30 articles that come up each day about how to do X better? It doesn’t even matter if each of those posts had a fantastic message; do you really think reading an article can replace the deliberate effort it takes to do anything better?

Say that you somehow didn’t know we found and killed Osama Bin Laden last year; I claim that your life would be virtually the same if you did. What if you didn’t even know who the current president was? Besides the social embarrassment, would your day-to-day be any different? If you followed the news daily, when it came time to cast your ballot could you convince yourself that you’re making an informed vote and not just one based on questionable media factoids and the dogma of your closest social circle?
*But this post isn’t about politics, it’s about noise.* I realize there’s some irony in the medium of this message being a blog post. I’m not advocating that you pack up canned beans and a Snuggie and go off the grid, just turn down the noise in your life. I’ve gotten my sources of consumption (I don’t even call it “news” anymore) down to just HackerNews, and I probably check it twice a day on average and only read one or two posts. You may not be impressed, but for me chopping my consumption down from 50 tweets, 10 blog posts, 15 news articles, and a couple dozen Facebook posts to just two HackerNews posts took effort and time.

Give it a try. You’ll be surprised how much of the world you can tune out with no negative side effects.


----------



## nisargshah95 (Dec 23, 2012)

Man, need some articles over here...



----------



## aaruni (Dec 17, 2014)

sygeek said:


> Spoiler
> 
> 
> 
> ...



Saw that on reddit sometime back. Didn't have the time to read it in full. Do you have the link to the solution?


----------



## Vyom (Dec 17, 2014)

Insert number of month in A1 in excel sheet and this formula in any other cell:

```
=28+MOD(A1+FLOOR(A1/8,1),2)+MOD(2,A1)+2*FLOOR(1/A1,1)
```

Source: A Formula for the Number of Days in Each Month · Curtis McEnroe


----------



## sygeek (Dec 17, 2014)

aaruni said:


> Saw that on reddit sometime back. Didn't have the time to read it in full. Do you have the link to the solution?


oops, sorry. I accidentally posted it. Post is still in draft.

- - - Updated - - -

*A Formula for the Number of Days in Each Month​*_Curtis McEnroe​_


Spoiler



Recently, after being awake for longer than I should have, I started thinking about methods of remembering the number of days in each month of the year. There is a rhyme for it, and a way to count on your knuckles, but these didn’t satisfy me. I wondered if there was a mathematical formula for the problem, and upon not immediately finding one, I challenged myself to create one.

Put more formally, the challenge was this: find a function f, such that f(x) is equal to the number of days in month x, represented by the integers 1 through 12. Or, as a table of values:[SUP]1[/SUP]

| x | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|----|----|----|
| f(x) | 31 | 28 | 31 | 30 | 31 | 30 | 31 | 31 | 30 | 31 | 30 | 31 |

If you want to give this challenge a go before reading my solution, now is your chance. If you’d rather see my complete solution right away, scroll to the bottom of the page. What follows is my process for solving the problem.

*The Tools*
Firstly, here’s a quick refresher on two operations I found vital to solving the problem: floor division and modulo.

Floor division is the operation performed by many programming languages when dividing two integer numbers, that is, the result of the division is truncated to the integer part. I will represent floor division as ⌊[SUP]a[/SUP]⁄[SUB]b[/SUB]⌋, for example:
*⌊[SUP]5[/SUP]⁄[SUB]3[/SUB]⌋ = 1*

Modulo is an operation that results in the remainder of a division. It is represented in many programming languages with the % operator. I will represent it as a mod b, for example:

*3 mod 2 = 1*

Note that modulo has the same precedence as division.

*The Basics*
With those tools in mind, let's get a basic pattern going.[SUP]2[/SUP] Months usually alternate between lengths of 30 and 31 days. We can use x mod 2 to get an alternating pattern of 1 and 0, then just add our constant base number of days:

*f(x) = 30 + x mod 2*

| x | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|----|----|----|
| f(x) | 31 | 30 | 31 | 30 | 31 | 30 | 31 | 30 | 31 | 30 | 31 | 30 |

That’s a pretty good start! We’ve already got January and March through July done. February is its own special problem we’ll deal with later. The problem after July is that the pattern should skip one, and the rest of the months should follow the alternating pattern inversely.

To obtain an inverse pattern of alternating 0 and 1, we can add 1 to our dividend:

*f(x) = 30 + (x + 1) mod 2*


| x | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|----|----|----|
| f(x) | 30 | 31 | 30 | 31 | 30 | 31 | 30 | 31 | 30 | 31 | 30 | 31 |
Now we have August through December right, but the rest of the year is wrong, as expected. Let's see how we can combine our two formulas.

*Masking*
What we need here is basically a piece-wise function, but that’s just no fun. This got me thinking of other ways to use a part of a function only over a certain domain.

I figured the easiest way to do this would be to find an expression equal to 1 over the desired domain and 0 otherwise. Multiplying a term by this expression will result in the term being cancelled out outside its domain. I’ve called this “masking,” since it involves creating a sort of bit-mask.

To mask the latter piece of our function, we need an expression equal to 1 where 8 ≤ x ≤ 12. Floor division by 8 is perfect for this, since all our values are less than 16:


| x | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|----|----|----|
| ⌊x/8⌋ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 |
Now if we substitute this expression in our x + 1 dividend, we can invert the pattern using our mask:

*f(x) = 30 + (x + ⌊[SUP]x[/SUP]⁄[SUB]8[/SUB]⌋) mod 2*


| x | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|----|----|----|
| f(x) | 31 | 30 | 31 | 30 | 31 | 30 | 31 | 31 | 30 | 31 | 30 | 31 |
Woot! Everything is correct except February. What a surprise.

*February*
Every month has either 30 or 31 days, but February has 28 (leap years are out of scope).[SUP]3[/SUP] February currently has 30 days according to our formula, so an expression equal to 2 when x = 2 would be ideal for subtraction.

The best I could come up with was 2 mod x, which gives us a sort of mask over every month after February.


| x | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|----|----|----|
| 2 mod x | 0 | 0 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
With this, we’ll need to change our base constant to 28 so that adding 2 to the rest of the months will still be correct.

*f(x) = 28 + (x + ⌊[SUP]x[/SUP]⁄[SUB]8[/SUB]⌋) mod 2 + 2 mod x*


| x | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|----|----|----|
| f(x) | 29 | 30 | 31 | 30 | 31 | 30 | 31 | 31 | 30 | 31 | 30 | 31 |

Unfortunately, January is now 2 days short. Luckily, finding an expression that will apply to only the first month is easy: floored inverse of x. Now just multiply that by 2 and we get the final formula:

*f(x) = 28 + (x + ⌊[SUP]x[/SUP]⁄[SUB]8[/SUB]⌋) mod 2 + 2 mod x + 2 ⌊1⁄x⌋*


| x | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|----|----|----|
| f(x) | 31 | 28 | 31 | 30 | 31 | 30 | 31 | 31 | 30 | 31 | 30 | 31 |

*Conclusion*
_2014-12-06: Hello, Internet. This is tongue-in-cheek. Why would anyone use this?
_
There you have it, a formula for the number of days in each month using simple arithmetic. So next time you find yourself wondering how many days are in September, just remember to apply f(9). For ease of use, here’s a JavaScript one-liner:


```
function f(x) { return 28 + (x + Math.floor(x/8)) % 2 + 2 % x + 2 * Math.floor(1/x); }
```
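As a quick sanity check, the same formula can be cross-checked against a known-good source of month lengths. Here is a Python sketch (my addition, not from the article) using the standard library's calendar module and a non-leap year:

```python
import calendar

def f(x):
    """Days in month x (1-12) for a non-leap year, via the closed-form formula."""
    return 28 + (x + x // 8) % 2 + 2 % x + 2 * (1 // x)

# Compare against the standard library for 2014 (not a leap year).
for month in range(1, 13):
    assert f(month) == calendar.monthrange(2014, month)[1]

print([f(x) for x in range(1, 13)])  # [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
```

Python's `//` is exactly the floor division the article writes as ⌊a⁄b⌋, so the translation from the JavaScript one-liner is direct.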


----------



## Vyom (Dec 17, 2014)

^^ OMG. We thought Sygeek was actually trying to find the formula. So I googled and came up with the exact article that Sygeek intended to post initially!


----------



## sygeek (Dec 18, 2014)

Vyom said:


> ^^ OMG. We thought Sygeek was actually trying to find the formula. So I googled and came up with the exact article that Sygeek intended to post initially!


Nah, I tried finding a solution but I just wanted to post the article first.


----------



## Anorion (Dec 19, 2014)

How to Automatically Back Up and Purge Your Gmail Every 30 Days


----------



## Vyom (Dec 27, 2014)

Wow! Much Awesome Articles! Linked on One Page!
Most Popular Features and Essays of 2014


----------



## sygeek (Apr 14, 2020)

*Your statement is 100% correct but misses the entire point*

_Jussi_​


Spoiler



Let's assume that there is a discussion going on on the Internet about programming languages. One of the design points that come up is a garbage collector. One participant mentions the advantages of garbage collection with something like this:


> Garbage collectors are nice and save a lot of work. If your application does not have strict latency requirements, not having to care about memory management is liberating and can improve developer efficiency by a lot.


This is a fairly neutral statement that most people would agree with, even if they work on code that has strict real time requirements. Yet, inevitably, someone will present this counterpoint.


> No! If you have dangling references memory is never freed and you have to fix that by doing manual memory management _anyway_. Garbage collectors do not magically fix all bugs.


If you read through the sentences carefully you'll notice that every asserted statement in it is true. That is what makes it so frustrating to argue against. Most people with engineering backgrounds are quite willing to admit they are wrong when presented with evidence that their statements are not correct. This does not cover everyone, of course, as some people are quite willing to violently disagree with any and all facts that are in conflict with their pre-held beliefs. We'll ignore those people for the purpose of this post.

While true, that single sentence ignores all of the larger context of the issue, which contains points like the following:


- Dangling-reference out-of-memory errors are rare (maybe 1 in 10 programs?) whereas regular memory bugs like use-after-free, double free, off-by-one errors etc. are very common (100-1000 in every program).
- Modern GCs have very good profilers, so finding dangling references is a lot easier than debugging stack corruptions.
- Being able to create things on a whim and just drop them to the floor makes programmers a lot more productive than forcing them to micromanage the complete life cycle of every single resource.
- Even if you encounter a dangling reference issue, fixing it probably takes less time than would have gone to fixing memory corruption issues in a GC-less version of the same app.

In brief, the actual sentence is true but misses the entire point of the comment it is replying to. This is sadly common in Internet debates. Let's see some examples.

*Computer security*
A statement like this:


> Using HTTPS on all web traffic is good for security and anonymity.


 might be countered with something like this:


> That provides no real security, if the NSA want your data they will break into your apartment and get it.


This statement is again absolutely true. On the other hand if you are not the leader of a nation state or do regular business with international drug cartels, you are unlikely to be the target of a directed NSA offensive.

If you think that this is a stupid point that nobody would ever make, I agree with you completely. I have also seen it used in the real world. I wish I hadn't.

*Bugs are caused by incompetents*
High level programming languages are nice.


> Programming languages that guard against buffer overruns are great for security and ease of development.


But not for everyone.


> You can achieve the exact same thing in C, you just have to be careful.


This is again true. If every single developer on a code base is being 100% focused and 100% careful 100% of the time, then bug-free code _is_ possible. Reality has shown time and time again that this is not achievable; human beings are simply not capable of operating flawlessly for extended periods of time.

*Yagni? What Yagni?*
There's the simple.


> Processing text files with Python is really nice and simple.


And not so simple.


> Python is a complete joke, it will fail hard when you need to process ten million files a second on an embedded microcontroller using at most 2 k of RAM.


Yes. Yes it does. In that use case it would be the wrong choice. You are absolutely correct. Thank you for your insight, good sir, here is a shiny solid gold medal to commemorate your important contribution to this discussion.

*What could be the cause of this?*
The one thing that school trains you for is that being right is what matters. If you get the answers right in your test, then you get a good grade. Get them wrong and you don't. Maybe this frame of mind "sticks on" once you leave school, especially given that most people who post these kinds of comments seem to be from the "smarter" end of the spectrum (personal opinion, not based on any actual research). In the real world being right is not a merit by itself. In any debate being right is important, of course, but the much more important feature is being _relevant_. That requires understanding the wider context and possibly admitting that something that is the most important thing in the world to you personally, might be completely irrelevant for the issue at hand.

Being right is easy. Being relevant is extremely difficult.


----------



## whitestar_999 (Apr 14, 2020)

^^Good one.


----------



## sygeek (May 12, 2020)

*The Cyclops Child*
_Fredric Neuman M.D._​


Spoiler



Probably every physician can think of one patient who affected him more than any other. The patient who has haunted me through the years was a child that I saw for only a little time at the very beginning of my career. I was an intern at a Catholic institution. I mention that because it seems to me relevant to the ethical considerations that swirled about the care of this infant. When this child was born, the obstetrician looking at it was horrified. It was a “monster.” That was the medical term used to describe a grossly misshapen baby. The doctor was concerned, then, first of all, about the effect on its mother of seeing the child. Therefore, he told the parents that it was born dead; and that the body had been disposed of. But the child was alive. This particular “monster” had deformities that were not consistent with it living for any length of time. The obstetrician must have recognized that immediately and chose to spare the parents the special anguish of looking at and knowing about this abnormal birth. But did he have the right to tell them a lie about such a critical matter? I’m not sure that there is a law to deal with such a strange situation, but I am sure the obstetrician violated medical canons. He short-circuited the parents’ wishes and concerns. Plainly, they had the right to know the truth. If a medical malpractice action had been instituted, the doctor would have been liable. By telling this lie, he was risking his career. The other people in the delivery suite were also complicit and also liable. As far as I was concerned, however, he had done the right thing.​​There are ethical rules that govern our behavior. Sometimes, they are unspoken. They go without saying. Thou shalt not lie. Thou shalt not murder. Even those peoples who have not heard of the Ten Commandments know these rules. But there are not just ten rules or commandments. As social situations change and develop, so do these rules. 
There are rules, sometimes codified, sometimes not, that govern how we deal with fellow-workers, elderly parents, strangers, people we communicate with over the internet, and so on. In an important sense, all the rules of courtesy are ethical rules. They grow out of a fundamental idea: that we are responsible for and answerable to other people. There are some, of course, who regard these rules as God-given and embodied in various religious texts such as the Bible or the Koran. But even those who have no religious beliefs would find themselves usually in agreement with the ethical rules embodied in these texts. Not without exception, but for the most part.​​​In my opinion, these ethical rules sum to one principle: unethical behavior is behavior that hurts, or has the potential to hurt, other people. There is only one good: kindness, and one evil, cruelty. Ordinarily one does not lie, for instance, but might ethically do so if it served the purpose of helping someone, rather than hurting someone as it usually does. By this admittedly vague criterion any particular act, thievery, deceit, even murder, could be ethical. There are extraordinary circumstances when rules break down, and even the rules that govern when it is proper to break other rules, break down. At such times, an ethically driven person might entertain the idea of doing something that in almost all circumstances is forbidden. He does it usually by himself. He presumes to act even though he knows other people might condemn his actions. Doctors confront these situations sometimes. For instance, a different obstetrician, finding himself delivering a baby such as the one described above, might smother the baby before anyone had the chance to see it. Such things happen. They are not publicized because it is important to keep the rules in place. No woman wants to deliver a child thinking that the obstetrician, on his own initiative, might choose to kill the child. 
Most people like to think that there are no exceptions to these rules, but they are people who have not had to confront these choices themselves. It suits them to be definite. They think, what’s to stop some arrogant and idiotic person from taking it upon himself to do awful things? In that respect, they are right. I like to think that there are some who have the courage to make wise and selfless decisions, but there are others who take it upon themselves to violate these rules for no reason at all.​​​For instance, earlier that same year I was making evening rounds and discovered that one woman, who was 70 years old, had not eaten or been given fluids for two days. I had a pleasant conversation with her, and then I started an I.V. The woman was a private patient of one of the attending physicians. He called me in a huff the following morning.​​​“Why did you give that woman fluids? I hadn’t ordered anything.”​​“She hadn’t had anything in two days.”​​“She’s 70 years old, for God’s sake. It’s time for her to go!”​​I knew, of course, of doctors who hastened the demise of painfully, and fatally, ill patients; but this woman was not suffering. She was not senile, and she didn’t even have a fatal illness. This guy decided for whatever reason that she was old enough to die!​​When I was in medical school, the medical service at Bellevue Hospital would fill up with elderly patients who could not, for one reason or another, be placed in nursing homes promptly. They took up space that sicker patients, and more instructive patients, could be using. Mondays, following weekends when a particular resident was on call, it was sometimes discovered that one or more of these patients had died. The medical staff joked that this resident had conducted “death rounds,” meaning that he had killed them. I have no reason to believe that was so, but the fact that it could be the subject of a joke indicated that no one thought it was impossible.​​​But, these situations, thankfully, are rare. 
How to handle them cannot be squeezed into a comfortable formula. These are situations when the conventional thing is to do one thing, and the morally correct thing is to do something different. I can tell you from personal experience that at such times the person deciding these matters feels he is the wise and enlightened person described above, and not the arbitrary and arrogant person that someone else might be.

Let me describe the Cyclops child. It had a single fused eye in the middle of its forehead. The irises pointed to the sides. There seemed to be four lids surrounding the eye like a box. It was blind, of course. A large part of the brain and head were missing. There was no nose. On investigation, it turned out that the baby's esophagus and trachea had not separated, so that feeding the child was impossible. The food would go directly into the lungs. Also, the child had extra fingers. It did not look like a baby. It did not even look like a doll. It was unworldly. Alien. It was, someone said, "one of God's little jokes."

As an intern, I was very busy; but I looked in briefly to see this very unusual child before it died. Everyone expected its death to be imminent. In the meantime it existed in some kind of legal limbo, no name, no family. As far as the hospital went, it did not exist. But there it was.

I rotated onto pediatrics a few days later, and the baby was still there. Still alive. Because it did not look like a human being, most of the time no one was disturbed by it; until it cried! Then it sounded like any other baby. It was hungry, and it could not be fed. Picking it up would not stop the crying. After a while, the staff spent as much time as possible on the other end of the ward. It was agonizing to me. Human beings are not constructed to listen to a crying baby and do nothing. And I felt sorry for the nurses and the rest of the staff.
As the days went by without the baby dying, I began to wonder, just how long can a baby live without being fed? I did not know. Every day, when I went to the ward I hoped the baby would be dead, but it lived on.

The resident told me during rounds that he wanted me to treat the baby's extra fingers.

"Why?" I said. "The baby is going to die."

"Well, you might as well use this opportunity as a learning experience."

That sort of made sense to me. I was planning to be a psychiatrist, and I did not envision ever having to treat someone's extra fingers; but much of what I did as an intern had very little to do with psychiatry.

The way you treat a baby's extra fingers is to tie a ligature, a string, as tight as you can around the base of the finger. The blood supply is cut off, and after a while the finger falls off.

When I went over to the baby, it was lying quietly in its bed. It did not object when I picked up its hand. But when I tied the ligature around its finger and pulled tightly, it screamed.

My God, what was I doing, I suddenly thought. My hands began to shake. The kid was in pain. It could feel pain. I should have realized that, but somehow I did not. It was because the baby did not really look like a baby, I thought. I put the child down and retreated out of earshot.

Later that day, I went to the library to look up this particular kind of birth defect. To my surprise, a number of cases had been reported previously. Most of them died within a relatively short period of time, but one Cyclops child lived for a year! I knew this baby wasn't going to live for a year without being fed; but it was possible somebody might decide to pass a stomach tube, for the same reason I was asked to amputate its finger, for the experience. I found myself suddenly in a rage. What was the point of taking care of this baby? There was a price to be paid.
Dying though it might be, the staff still had to tend to it, to change it, to clean it, to hold it in repeated attempts to comfort it. The baby was suffering, and so was everyone else. Earlier, I had caught an aide crying. A couple of nurses had stayed home that day. It was at that point that I began to think about killing the baby.

I realized right away that there were some problems involved in killing someone, some practical problems and some psychological problems. The practical problems, in this case, involved finding a way to be alone with the child. It lay in some kind of crib off to one side on the ward, where visiting parents were not likely to see it. But it was always in plain view of the nursing station. Some of the nurses were nuns. I thought they might object on principle to my killing one of the patients. My best opportunity would have been when I was amputating its finger, but the thought did not occur to me then.

The psychological difficulties were obvious. I did not know how anyone managed to kill anyone else. I was always afraid of hurting my patients. For that reason, I had trouble drawing blood or passing tubes. The only way I could imagine killing this baby was by putting my hand over its mouth and smothering it. Could I possibly do that? Besides, smothering leaves tell-tale signs, small petechial hemorrhages on the skin and ruptured blood vessels in the eyes, or eye. I could not imagine anyone doing a pathological examination on this baby; but I definitely did not want to put myself at risk to save the staff from having a bad time for another indeterminate period of time. Still, they were having a really bad time.

I went to the ward that night even though I was feeling a little sick and discovered that the baby had died. It was gone. Someone had beat me to it, I thought. But that was unlikely.
Probably the baby starved to death, like it was supposed to.

The next few days, I found myself thinking obsessively about how I would have placed my hand on the baby's mouth. Could I have really done that? Probably not. But maybe. The scene played out in my mind over and over.

Over all the years that followed, I found myself thinking from time to time of that picture, my hand over the baby's mouth. I knew then, and I still think now, that the right thing to do would have been to kill that baby. It wasn't really a baby; it just sounded like a baby—that's what I tell myself. But I would like to stop thinking about it. After all, the whole thing happened over fifty years ago.

© Fredric Neuman 2012. Follow Dr. Neuman's blog at fredricneumanmd.com/blog


----------



## sygeek (Mar 5, 2021)

That Is Not How Your Brain Works


----------



## sygeek (Jan 30, 2022)

The internet changed my life


----------



## Desmond (Jan 30, 2022)

sygeek said:


> The internet changed my life


Nice article. But still, the internet back then wasn't the dumpster fire it is today.


----------

