# Why is so much of Silicon Valley obsessed with small ideas that don’t solve a problem?

My answer to the above question on Quora:

Compare two photo companies: Instagram and Lytro.

The technology in Instagram is mainly off-the-shelf (not to say it’s trivial, but well within reach of a good team). It’s about marketing, UX, iteration, luck and community building. Again, not trivial, but the risks are well understood, and starting the company is relatively cheap.

Lytro is a highly innovative technology transfer company creating quite literally a new kind of photography: “light field photography”. It’s an amazing idea, which will probably reshape large swathes of the industry for decades to come. And yet, they have quite high expenses and risks. They need to design, test, manufacture and market an actual digital camera, which is probably much more expensive for them than for Canon. And the camera itself is high risk: the resolution is fairly low for a very expensive device that is oddly shaped and unlike anything else on the market. It could be the right technology, but people won’t buy it until they can get at least 6MP, or until it says “Canon” on the camera. Lytro might be driven into bankruptcy only to have its patents bought by someone else who successfully commercializes the technology. Or maybe the technology is wrong: the tech world is littered with technologies that looked like they were the next big thing until they suddenly weren’t: zip drives, laser discs, the Segway, etc. Very high risk. Maybe a huge return, but probably not.

Personally, I think that Instagram and Lytro are both great companies and wish them the best. But if I had to invest my money in one of them, as impressive as Lytro’s technology is, I would go with Instagram. Wouldn’t you?

Which is to say that I am relatively risk-averse, but so are most VCs. If I had an enormous pile of money, I would change my evaluation of which company is more interesting.

# Kindle Fire and Kindle Touch Review

### Kindle Fire

I bought my Kindle Fire to read lying on my back in the dark (I like to read in bed). It is much better than the original iPad for that, since it is smaller and lighter. The reading experience is generally worse than the Kindle app on the iPad: it is clumsy and poorly thought out. There is no two-column layout in landscape mode, and the menus appear for no apparent reason and are slow to disappear. Managing your queue of books is somewhat annoying, since it isn’t clear what’s on the device and what’s in the cloud. Syncing seems less reliable than it did in the iOS app. It is missing some key apps: the native Twitter client and Instapaper. I might switch to ReadItLater.

The hardware seems good, and the onscreen keyboard is generally reliable. The auto-correction/completion, however, is very much inferior to iOS’s.

# Siri & Star Trek

The iPhone announcement yesterday prompted a lot of criticism from technology pundits. Some of it was childish, some of it intelligent. Perhaps the new features are not enough to compete with Android phones; perhaps the hardware upgrades were not really as big as they could have been; maybe it needs a bigger screen, etc. But the pundits who accuse Apple of lacking ambition don’t really understand what Apple is claiming with Siri.

I’ve been a big fan of Star Trek since I was about six years old. Say what you will about the writing, acting and special effects (all of which were frequently awful), Star Trek did not lack for technological vision. Warp drive and transporters haven’t arrived, but I think the LCARS touch interface on their computers was a risky and stunningly accurate prediction. I remember thinking how unrealistic it looked, that an interface like that could never work (this was mainly because there was never anything like a keyboard). Well, Apple created something along those lines with the iOS touch interface. They built an intuitive, easy to use touch device that included always-on connectivity to a massive store of easily searchable information. They actually built something out of Star Trek. In fact, in many ways, the iPad is much better than the PADDs depicted in Star Trek 20 years ago.

There was another (and older) way of interacting with computers in Star Trek: the very intuitive (and sometimes unrealistically psychic) voice interface. That made intuitive sense the instant you saw it used: you ask the computer for what you want it to do, and it does it for you. Simple, easy, and impossibly hard to actually build. With Siri, Apple is making the claim that they are building something else from Star Trek: you talk to the computer, and it does what you want. That is not unambitious. That is not small. If they succeed (which I’m somewhat skeptical of), it will be an enormous step forward, and a fitting memorial to Steve Jobs, whose company would then have revolutionized the way we interact with computers three times, instead of the already unbelievable two times (touch and GUI).*

Personally, I’m a bit skeptical that Siri will work as well as it does in their demo and advertisement. AI is something that is quite easy to demo beautifully, but it can fail in an enormous number of little and highly visible ways. Frequently, once you step outside a narrow set of potential queries, it can completely fail.

There are two ways I’ve seen of getting around this. The first is to just work diligently for years or decades until the technology becomes good enough to use in restricted (though still impressive) domains. That’s the tactic that voice recognition and OCR have taken so far. A lot of call-center menus are handled with voice recognition. It works OK, and it’s certainly easier than sitting through a list of six slightly different choices trying to keep track of which number was closest to why you’re calling. With OCR and handwriting recognition, I now no longer have to fill out deposit slips at ATMs. The ATM scans the check, enters the values, and all I have to do is push OK. You can even do it with your phone’s camera at some banks. I’ve never seen it fail to work correctly, but it’s a pretty restricted domain.

The other way to get around fundamental limitations in current AI/machine learning technology is with clever UI design and expectation lowering. For example, Google’s search engine now achieves comparatively high precision, but it’s still extremely far from perfect. They’ve been so successful because (1) expectations were low** (AltaVista was awful) and (2) they show you a long list of results, some of which are hopefully relevant to your query. If you see an irrelevant result, it’s pretty quick to skip it and move on. Now with Google Instant, they show you even more search results per query, and I generally stop typing once it shows me a relevant result. It looks as though their precision is high because what I want is usually the top result, but they may have flashed 5-10 top results before I stopped looking. Then they have 10 more tries after I’m done typing the query before I hit the “boy, this isn’t working” stage and either refine the query or click to page 2. Similarly with StumbleUpon, you can “stumble” to a new link. The recommendation is likely to be good even if it isn’t very topical, because otherwise nobody would have recommended it, and if you don’t like it, the next page is a click of a button. The whole process takes a few seconds at most, and you only remember the hits, not the misses.

I don’t think speech recognition is good enough yet for the “just be good” solution, at least not without investing a lot of time training the recognizer for the specific user. Apple’s recognizer is further limited since it runs on the phone instead of on beefier servers like Google’s and Microsoft’s, and Siri is probably not a limited enough domain to truly succeed often enough.

Apple has struggled with the second solution in the past with the text correction interface. Indeed, it’s so bad that it’s a meme. I get frustrated by it at least once a day (almost every time I type something), even though it does objectively make typing easier on the device. It feels like it makes it harder. Similarly, unless Siri is extremely good at returning control to the user when it makes a mistake, users will feel like it does a bad job, even if it is actually quite helpful.

Still, even if it works poorly, so did the first iPhone, and that was revolutionary.

* Yes, these were developed by others first. Apple built them for consumers (GUI), or built them right (touch).
** They also introduced “I’m Feeling Lucky”, which implies that by going with their top result, you are throwing the dice. It might work out; it might not.

Update: Hijinks Ensue, as usual, nails the idiocy of the technopunditocracy.

# BBC’s Genius of Design

I recently watched the excellent BBC program The Genius of Design. (Many thanks to the inimitable @uxgorilla for the recommendation.) Two parts struck me as particularly interesting. The excerpts are my own transcription. The first concerns the design and manufacture of the reputedly excellent Nazi Tiger tank. After the British captured one in the North African desert in 1943, they commissioned a report on its design, including commentary from an experienced tank commander.

Tank commander: Well, it is beautifully engineered all the way through: the sheer detailed design they put into the steering gear, the transmission, the gearbox, the suspension, all the torsion bars, the way they’d machined all the armor faces before they welded them together. Beautifully engineered, very solid job, but a waste of time.

Expert 1: There was a tradition in German engineering and design where the most sophisticated was the best. And they just. Could. Not. Stop it. And the problem for the Germans was this: they came up with ideas for ever more powerful, ever more sophisticated tanks. But to do that, they required ever more time to think them through, ever more engineering skill, ever greater cost.

Narrator: At a cost of 250,000 Reichsmarks, the Tiger was more than twice the price of any German tank to date. It boasted power steering, sophisticated transmission, even a fully illustrated owner’s manual.* But in the grinding war of attrition that had developed since Hitler’s invasion of Russia, it was a luxury item.

Expert 2: The design philosophy behind The Tiger is very much “Let’s have the best most powerful vehicle.” The downside of that philosophy is the amount of resources the Tiger takes to make. Would you be better off building something in larger quantities that’s a lot cheaper?

They then proceed to describe how the Soviets built a much cheaper tank whose production levels reached many times the number of Tiger tanks, despite being technologically inferior in every way. I’ve seen this kind of attitude on both sides as a software engineer.

On one side, pride and perfectionism are difficult to shake: “this function is a little awkward, better refactor”, “this method is an unprincipled heuristic, try something more complex but more principled”, “this code that someone else wrote sucks, I should rewrite the whole thing”, and so on. It’s very hard to stop yourself and say “let’s do the simplest thing that works.” So I have to try very hard to step back and ask “What’s my goal? Is it to get something that basically works done on time, or to spend a lot longer getting something that’s slightly better?” Sometimes the answer is definitely the latter. Writing slow code is sometimes the right thing to do for a process that takes almost no time to run, or only needs to run once a day. Sometimes code that looks slow is actually the best you can do, and sometimes slow code is just slow, and slowness hurts both debugging and end users.
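To make the tradeoff concrete, here’s a toy sketch of my own (a hypothetical example, not from the program): a quadratic duplicate check that is perfectly fine for a once-a-day job on small inputs, next to the “proper” linear version you might be tempted to refactor toward.

```python
def has_duplicates_simple(items):
    # O(n^2): the "simplest thing that works". Entirely adequate for
    # small inputs, or a job that only runs once a day.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicates_fast(items):
    # O(n): the "better" version. Worth the (small, here) extra thought
    # only when the input is large or the check is on a hot path.
    return len(set(items)) != len(items)
```

Both pass the same tests; the only question is whether the difference ever matters to a user or a deadline.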

On the opposite side, I’ve seen managers who want to build an impressive-sounding system that ignores the immediate business goals (though sometimes the bigger business goals, like generating sellable IP, can be hard to see from the trenches). In short, over-optimization is hard to detect, but its effects can be very costly.

The second section that seemed especially interesting concerned the design of the graphical user interface, in particular the development of the first word processor at Xerox PARC.

Narrator: If computers were going to be used by ordinary people, it suggested an obvious question: “Why not ask them what they wanted?” And when asked to design a computer editing system for a publishing company, that’s exactly what this guy did.

Larry Tesler (Xerox PARC): So we had just hired a secretary and the day that she started, I put her in front of a blank screen and said “Imagine that there’s a page on the screen, and here’s a page of markups or proofreader’s marks of what needs to be changed. Imagine that you have a way to point at the screen and a keyboard, and tell me what you would do.” And she said “well, I have to delete that. I would point at it and cross it out. I have to insert some text. I would point at the place I want it to go, and then I would type it.” So she just made it up as she went along–what was intuitive to her.

And so was born the first incarnation of the modern word processor, with text selection and a mouse-controlled cursor. What’s startling about this isn’t what a genius Tesler or the secretary was, but rather that the very best designs are based on what’s intuitive a priori, rather than on creating an interface that’s easy to learn a posteriori. A big part of building a good user interface is building it out of elements the user already understands, whether intuitively or from training elsewhere.

This point is emphasized repeatedly in a book I’ve been reading recently: Steve Krug’s excellent Don’t Make Me Think. He repeatedly emphasizes that designers of all stripes should look at conventions that have evolved over time and that users are already familiar with (a home page link in the upper left corner, visible navigation that emphasizes where you currently are in the hierarchy, search buttons that say “Search” or “Go”, etc.). Conventions should only be violated if you have an absolutely excellent reason: the site just won’t function by following them. Summing up his views, he says that a user interface should be self-evident, but if it can’t be made that simple, it should at least be self-explanatory. In this case, Tesler decided that the conventions he should follow were the intuitions of the secretaries who would actually be using the product.

(*) This is perhaps not such a luxury. Preventing the loss of even a single tank due to operator error would probably cover the cost of generating a clearly written operator’s manual.

# Ads blending into organic results

Over at SEOBook, Aaron Wall has a prescient quote from the Google founders in 1998: “We expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of consumers.” He then goes on to chronicle how AdWords ads are expanding in both complexity and size on organic search results pages, while the cues indicating that the listings are ads are becoming increasingly subtle. I would argue that the cues are already essentially zero for most users.

For most monitors, the default color, contrast, and saturation settings are pretty messed up. Fortunately, we humans tend to be pretty good at overlooking this. Indeed, until I started doing some front-end web development projects, I had no idea how off the colors were on my HP monitor. I’ve probably spent hundreds of hours using this monitor and hadn’t noticed the issue. At Google, I’m sure they buy expensive monitors, and lots of Googlers have Macs with precalibrated, integrated screens of very high quality. Additionally, most designers spend time calibrating their displays and are trained to be very sensitive to subtle color variation. To the end user who never increased the contrast from its default setting, has a white point that’s a bit yellow, etc., the ads may appear white or almost white: indistinguishable from the now identically styled organic links below them. In fact, it wouldn’t surprise me if their A/B testing on the background color of ads is inadvertently finding colors that look particularly close to white on the average miscalibrated monitor.

In fairness to Google, Bing is just as bad if not worse. I spent about half an hour doing everything in my power to calibrate my HP monitor’s colors, and still their extremely light blue background for ads was almost invisible. (And only somewhat noticeable on my iMac’s built in display.)

# Felix Salmon on the Post office

How to solve the Post Office’s problems:

It seems to me that a significant part of the problem here lies with Congress and that a massive bout of deregulation could be just the solution that the Post Office is looking for. Congress is micromanaging the Post Office, telling it how much it can raise postage rates, telling it that it can’t offer financial services (despite its huge business in money orders), telling it that it can’t get into all manner of other businesses either and telling it that it has to deliver mail on Saturdays. Astonishingly, amid all these rules and regulations, the Post Office is losing billions of dollars.

Totally agree. I check the mailbox once a week, and throw away most of the mail directly into the convenient recycle bin my apartment complex has put next to the mailboxes. I couldn’t care less if they stopped delivering on Saturday, or if, on the rare occasions when I do have to mail a letter, it cost me \$1 to do so.

# On Reddit’s success

Reddit has a post detailing how their site works and speaking about their success:

“Over the past 15 months, reddit has tripled in size. Since last May, we’ve grown from 7 million monthly unique visitors to 21.5 million. Our pageviews have exploded 4x to a staggering 1.6 billion pages served per month.

“This growth brings new diversity, new opportunities, and new challenges to our communities. There are now over 6,500 subreddits with over 100 subscribers. As we welcome new members into our communities, I’d like to take this opportunity to clarify how reddit works and what role moderators and admins play in the process.

“The most important fact is that reddit is not a single community; it’s an engine for creating communities.”

The last sentence is what’s fundamental to Reddit’s success. They’ve managed to succeed despite awful uptime/reliability, financial limitations, serious turnover, and difficulty selling ad space. The fact that new subreddits are popping up all the time allows the site to fill niches even as some subreddits become too popular to be particularly useful. It’s happened before (Digg, Slashdot, and slowly Hacker News), but with Reddit it can only happen locally, within a subreddit. In the subreddits I’ve followed, I’ve seen the deterioration as Reddit has gotten more popular. The CS subreddit now has a ton of high school students and undergraduates asking for career advice (Should I go to graduate school? What books should I read to learn about [thing that this exact question has been asked about 50 times]? etc.)

I still don’t think they’ve solved the problem of deterioration. Would someone start a new CS subreddit because the old one kinda sucks? Well, yes, but though there are many attempts, they don’t seem to take hold. However, this formula of letting people build their own communities rather than keeping things monolithic obviously has promise, and has kept the site from deteriorating as quickly as it could.

# Test post

$e^{i \pi} + 1 = 0$

```python
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```
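(An aside, not part of the original test: the naive recursion above recomputes the same subproblems exponentially often; a memoized sketch of the same recurrence runs in linear time.)

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def fib_memo(n):
    # Same recurrence as the naive version, but each value is
    # computed only once, so fib_memo(50) is instant.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
```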