Saturday, July 23, 2011

The Latest from TechCrunch


I Don’t Want To Be A Diversity Candidate

Posted: 23 Jul 2011 07:31 AM PDT

Editor's note: Guest author Bindu Reddy is the CEO of MyLikes, a word-of-mouth ad network funded by former Googlers

When we were raising our angel round, I had a phone conversation with a prominent Silicon Valley investor who did not have time to meet me face-to-face but was interested in investing in MyLikes because I was a female entrepreneur—aka the “diversity candidate.”

While it is difficult to say no to money, especially when someone is giving it to you without even listening to what it is that you are doing, I felt insulted and unhappy.  I felt that I was competent enough to raise money and build a successful business regardless of my gender, not because of it.

In all fairness, this angel and many other supporters of women in technology have good intentions. However, they don't realize that by calling out someone's gender they make the system less meritocratic.

Coming from India, I have a personal perspective on the unintended consequences of such policies. My alma mater (the Indian Institute of Technology) is a highly meritocratic institution—admission is based on a completely objective criterion: rank in a single entrance examination taken by students all across the country.

However, there is one exception: a certain number of seats are reserved for students from castes who have been historically discriminated against. It helps in some cases by providing opportunities to people who could really use them, but in most instances it simply does not work. It undermines the really good people who would have been admitted without the quota and causes a lot of insecurity and stress amongst people who don't have the ability to cope in a highly competitive environment. There is also a lot of anger and resentment from others who just missed getting admitted.

Stepping back, at a more fundamental level, I am not really sure we should worry about the lack of women in tech any more than worrying about why there are not more female truck drivers or more male nurses.

Women and men are different.  Even in an ideal world, where women and men have the freedom to choose what they want to do, without any prejudices or social bias, we will continue to have male and female dominated professions.

Fundamentally, people will gravitate towards professions and careers that they are good at or have an innate advantage at.

That said, we are still far from that perfect world. Women tend to get paid less than men even if they perform equally well and there is no denying that there are still many biases against women even in professions that they are likely to be better at. While I do think we should do what we can to foster gender equality, I don't believe preferential treatment or having diversity quotas is the answer.

Quotas always tend to be bad for everyone concerned in the long run—the female candidate who got the job because she was a woman, the hiring manager who may have settled for a B player, and the rest of the team who will always harbor the thought—"she is where she is because she is a woman." Worst of all, it does a real disservice to the women who are simply better at their jobs.



Rethinking Lists, Groups and Circles

Posted: 23 Jul 2011 06:30 AM PDT

Editor’s note: Yoav Shoham is professor of computer science at Stanford University and co-founder of Katango, which organizes Facebook friends into groups

The recent introduction of Google+ has been fodder for much Google-versus-Facebook discussion. At the center of the discussion has been the Circles component of Google+, which allows users to arrange their contacts in meaningful clusters (for example, "family" and "work") and share different content with different clusters. Circles play a role that's almost entirely analogous to Facebook's lists, which have been around (if somewhat buried in the Facebook UI) for a long time. Facebook of course also has the notion of groups, separate from (and more recent than) lists. Here are some basic observations on lists, groups and circles that seem to have been glossed over in the various recent articles.

  1. This recent discussion has focused on the differences between the Facebook and Google offerings, but misses what I think is a more basic common – and striking – feature. They both ask the user to create groups/lists/circles manually. This works fine for groups with a small and stable membership, such as family. But it's a non-solution for large and/or fluid groups.
  2. About six months ago I went through the exercise of sorting my then-321 Facebook friends into lists. It was excruciating. It took me over an hour to do a halfway decent job, and I wasn't fun to be around when I was done. I'm now up to 388 friends; you couldn't pay me to go through that exercise again. Facebook statistics confirm that I'm not alone (only 5% of the users have created lists, for example).
  3. Facebook has had a creative solution—switch from lists to groups. The idea was that whereas only you can create and maintain your lists, a group is maintained collaboratively by all its members. In a twist on the familiar newsgroup self-subscription, in Facebook groups one needs an invitation; only existing members can add new members. (Both lists and groups are still supported on Facebook, though most users are not aware of these subtleties.)
  4. But this does not solve the problem. Groups are not lists. My lists define me socially; they are my social mirror. They are mine alone and I'll be damned if I let you touch them. I'll decide who's in my family circle, my AI cohorts, my tech-guru list, my college-friends cluster.
  5. Of course, my lists overlap with yours. Indeed some are pretty darn similar. My wife's family list has an 80% overlap with mine. This can be confusing, especially when we start using these lists to communicate. When my wife shares a photo with her family list and I with mine it can be hard to tell the two lists apart; both because the names of the lists may be identical ("Shoham family", say, with apologies to the Eliasaf brand), and because the two sets of people are almost identical. For this reason we see a natural dynamic in which people with fairly similar lists tend over time to "standardize" on the same list. An informal rule of thumb I use – not verified in any way – is that lists will merge if their overlap is 75% or greater.
  6. This doesn't mean that groups aren't important. This is especially true when there is an objectively-defined membership criterion. Membership in "Stanford class of '02" is not a matter of taste or opinion; you either were there or not (I'm sidestepping subtle issues, such as do we mean you graduated in '02 or started in '99). This group may (or may not) be part of my social mirror, but it can safely be constructed by others. Even when there's not a pre-defined membership set, a group makes sense when there is an objective, impersonal concept defining it. Anyone can add themselves (as in newsgroups) or their friends (as in Facebook groups) to the "cat lovers" group, if it's not meant to include only my cat-loving friends; it's fine for it to grow organically. Finally, as multiple similar lists coalesce over time around one list (perhaps following the 75% rule), at that point the list in effect has also been "untethered" and become a member-maintained group.
  7. (As an aside, philosophers and logicians have a lot to say about these issues, involving concepts such as "sense" and "reference", "intension" and "extension", "notation" and "denotation", but you need to like that sort of thing to spend more time on it. I do, but it's a lonely hobby.)
  8. Ok, so if groups are not the solution, how do you avoid the pain of list creation and maintenance? I believe the answer is algorithms; they do 95% of the work, and the remaining 5% is manageable and even fun. There are many algorithms that produce sets of people; clustering algorithms, of the kind powering Katango's first product release, are an example (yes, I am completely biased here). The output of the algorithm is "almost right". Almost always, each emergent cluster makes sense, but you need to hone it: name the cluster (reliable auto-naming still eludes even the best algorithms), add and remove a few names (usually more removing than adding; by design, the system usually over-includes friends since removing is a simple mouse-click away), create a sub-cluster (for example, create an "immediate family cluster" from an "extended family" one), merge clusters, and so on.  This final honing is fun rather than a chore, for several reasons. First, it's quick; a matter of minutes. Second, it's rewarding to see your social mirror emerge; one user described examining a new emergent cluster as "unwrapping a present". And third, it's your social universe, and at the end of the day you know it best. The algorithm did the heavy lifting, but like an expert surgeon, you stepped in and made it perfect; you feel in control.
  9. In an interesting recent TechCrunch piece, Tom Anderson lauded the fact that Google+ hands the user control over his/her social experience, and lamented Facebook's over-reliance on EdgeRank-style algorithms to decide what information the user ought to see. I think that's only half justified. One cannot master the social torrent without some algorithmic assistance, any more than one can navigate the web without algorithmic assistance. There's a reason we use Google more often than Yahoo to find relevant web pages. But as I say above, I agree with Tom that at the end of the day the user must be given final control of his/her social interaction.

So, the main takeaways:  Don't confuse lists or circles with groups; and let algorithms do the heavy lifting.
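
To make the "let algorithms do the heavy lifting" point concrete, here is a toy sketch of the informal 75% rule from point 5. It is purely illustrative: the overlap measure (intersection over the smaller list) and the greedy merge are my own assumptions, not Katango's actual algorithm, which does the harder clustering work of proposing candidate lists in the first place.

    # Hypothetical sketch of the "75% rule": merge lists whose membership
    # overlap is at or above a threshold. Names and data are made up.

    def overlap(a, b):
        """Fraction of the smaller list that also appears in the other one."""
        a, b = set(a), set(b)
        return len(a & b) / min(len(a), len(b))

    def merge_similar_lists(lists, threshold=0.75):
        """Greedily fold each list into an existing one if they overlap enough."""
        merged = []
        for members in lists:
            for existing in merged:
                if overlap(existing, members) >= threshold:
                    existing.update(members)
                    break
            else:
                merged.append(set(members))
        return merged

    # My family list and my wife's mostly-overlapping one coalesce into a
    # single member-maintained group; the work list stays separate.
    mine = ["mom", "dad", "sister", "uncle_bob", "cousin_ana"]
    hers = ["mom", "dad", "sister", "uncle_bob", "her_brother"]
    work = ["alice", "bob", "carol"]
    print(merge_similar_lists([mine, hers, work]))  # two lists remain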



Daily Crunch: High Chair

Posted: 23 Jul 2011 01:00 AM PDT

7 Ways Twitter Could Be Winning Local

Posted: 22 Jul 2011 11:24 PM PDT

Editor’s note: The following guest post is written by Victor Wong, the CEO of PaperG, a local advertising technology company.

Conquering "local" remains one of the largest opportunities on the Internet today, and it seems as
though Twitter's unique position has gone largely unnoticed. Today, Twitter is an amazing tool for
connecting people to the world, but it hasn’t yet successfully connected people to places they care
about. If Twitter chose to bridge that gap, though, higher user engagement and even monetization
would likely follow.

1. Twitter Places
What happened to Twitter Places? In 2010, Twitter created place pages for local businesses (called Twitter Places), but they were lost in the new redesign. The initial concept, while lacking visibility and utility, provided a good blueprint for how Twitter could better serve its local businesses by differentiating them from regular users.

The first step for Twitter will be figuring out how to distinguish personal accounts from business accounts. Done right, Twitter Places could become the go-to source for information about any business (after all, businesses are more likely to update their Twitter page than their website). A new Twitter Places could link place pages with corresponding business accounts, as well as aggregate content from other local sources, similar to what Google does with its Place Pages.

2. Where To Follow
In addition to “Who to follow,” Twitter should create a “Where to follow” section which would surface suggested Twitter place pages, thereby increasing the visibility of Twitter Places. Not only would it increase user engagement, this feature would also generate possible ad inventory for a “Promoted Places” product which would be the local equivalent of the “Promoted Accounts” already being sold.

3. Place Trends
Twitter should be the ultimate federator of check-ins, both explicit (Foursquare, geo-tagged tweets) and implicit (Instagram). Most location-based services already broadcast information to Twitter, which remains impartial because, unlike Facebook, it has no competing service of its own. By aggregating broadcasted location data, Twitter can actually organize what’s going on in a given neighborhood or venue and show trending places. This useful tool also creates yet another natural ad opportunity — Promoted Place Trends.
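
As a rough illustration of that aggregation, here is a hypothetical sketch (the data shapes, windows and scoring are my own assumptions, not anything Twitter has built) that counts recent check-in broadcasts per venue and flags places whose activity is spiking against a longer baseline:

    from collections import Counter

    def trending_places(checkins, now, recent_s=3600, baseline_s=86400, min_ratio=3.0):
        """checkins: list of (unix_timestamp, place_name) pairs aggregated from
        Foursquare check-ins, geotagged tweets, Instagram posts, and so on."""
        recent = Counter(p for t, p in checkins if now - t <= recent_s)
        baseline = Counter(p for t, p in checkins if now - t <= baseline_s)
        trending = []
        for place, count in recent.items():
            # Expected check-ins in the recent window, based on the daily rate,
            # with a floor of 1 so brand-new venues can still trend.
            expected = max(baseline[place] * recent_s / baseline_s, 1.0)
            if count / expected >= min_ratio:
                trending.append((place, count))
        return sorted(trending, key=lambda pc: -pc[1])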

4. Geotargeted Tweets
For big chains, one dilemma is how to use Twitter to run local, store-specific promotions. Twitter should enable paying advertisers to geo-target tweets to followers in a particular location. By doing so, national-local businesses with multiple locations, such as Whole Foods or Best Buy, can send out weekly specials specific to certain regions without fear of alienating users in other areas.

5. Local Alerts
Anyone who has used Twitter can tell you it's a goldmine of information about your community — if you know how to search correctly. To help people find local content, Twitter could offer its own take on Google Alerts, whereby users get notified when keywords (places, people, etc.) occur nearby. People can then be immediately notified when a school, child, or important local issue makes the news.
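
A hypothetical sketch of how such an alert might be matched, assuming geotagged tweets and a simple keyword-plus-radius rule (the field names and the haversine check are my own, not a Twitter feature):

    from math import radians, sin, cos, asin, sqrt

    def km_between(lat1, lon1, lat2, lon2):
        """Great-circle (haversine) distance between two points, in kilometers."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    def matches_alert(tweet, alert):
        """tweet: {'text', 'lat', 'lon'}; alert: {'keyword', 'lat', 'lon', 'radius_km'}."""
        if alert["keyword"].lower() not in tweet["text"].lower():
            return False
        return km_between(tweet["lat"], tweet["lon"], alert["lat"], alert["lon"]) <= alert["radius_km"]

    # A parent gets pinged when the school's name shows up within 5 km of home.
    alert = {"keyword": "lincoln elementary", "lat": 37.77, "lon": -122.42, "radius_km": 5}
    tweet = {"text": "Bake sale at Lincoln Elementary this Saturday!", "lat": 37.78, "lon": -122.41}
    print(matches_alert(tweet, alert))  # True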

6. Loyalty Rewards
Any business wants to reward its best customers, and Twitter should make it dead-simple to reward new followers or loyal advocates. Many business owners would love to say: "Thanks for following my restaurant, here's a voucher for a free appetizer!" Third-party efforts, such as PaperG's Polly.IM, have already begun doing something similar with great success.

7. Promoted Retweets for Local Commerce
Imagine combining the group-buying craze with a platform as fundamentally social as Twitter. Twitter has a commerce opportunity to enter the local deals space by making its ads truly social and offering a group-buying experience. A local deal like this would spread like wildfire: "$25 for 2 Tickets to Kings of Leon. Deal is on at 50 retweets w/ 23 more to go. Spread the word with a retweet to get DM with coupon." The coupon or actual purchasing experience could be displayed within Twitter to lower friction in transacting.

Although Twitter wasn't founded to promote local commerce, it's interesting how many pieces have fallen into place that could allow it to become a major player in local. For Twitter to win local, it needs to create more chances to engage users on a local level, increase usage of Twitter by local businesses, and find natural monetization opportunities.



Lulz? The ‘Murdoch Leaks Project’ Gets A Landing Page

Posted: 22 Jul 2011 09:41 PM PDT

Over the last week, there’s been quite a bit of news swirling around Rupert Murdoch’s empire, including, most recently, the now infamous LulzSec’s pwnage of The Sun, News Corp’s daily tabloid newspaper.

On Monday, the network of merry hacktivists hacked into The Sun, pinned a fake news story about Murdoch’s supposed death on the homepage, redirected the site to its Twitter page, and brought down a number of other News Corp and News International websites — all in one fell swoop.

If that weren’t enough, on Thursday, the hacker known as “Sabu” (who is reportedly affiliated with LulzSec and Anonymous) claimed to have 4GB worth of emails, or “sun mails,” lifted during the hack, which might “explode this entire case.” Sabu was, of course, referring to the ongoing News Corp/News Of The World scandal, in which top executives have been accused of, and some arrested (and on trial) for, the illegal phone tapping of everyone from celebrities to murder victims.

It remains unclear whether LulzSec will release some or all of those emails to the public; AnonymousIRC, for one, indicated via Twitter that it may not. Either way, we’ve just discovered this site: “MurdochLeaks.org“, which appears to be the landing page where LulzSec and/or Anonymous may dump none, some — or all — of its News International email loot.

As of right now, the site is blank, with only a “Murdoch Leaks” heading, accompanied by the following text: “Coming soon … To volunteer with the Project contact us at 18009275@hush.com”. And, of course, a link to a Twitter account, inscribed with: “Launching soon… Making Rupert Murdoch, News Corp and News International accountable.”

These hackers sure love Twitter.

Again, to be clear, at this point it’s not evident who owns the site, but we’re looking into it. (Probably Louise Boat.) And, with “leaks” in the headline, all signs point toward this being a LulzSec/Anonymous production.

Should the site go live, we will of course update with more.



Ouch: The Netflix Price Change Hangover

Posted: 22 Jul 2011 08:51 PM PDT

It’s been pretty fascinating to watch Netflix’s growth from a company that Blockbuster laughed at in 2000 (when founder and CEO Reed Hastings and former CFO Barry McCarthy proposed to Blockbuster management that Netflix run its online brand) to the single largest source of web traffic in North America in 2011.

There have been quite a few hiccups and ups and downs along the way, as the on-demand video provider has struggled with Hollywood studios, succeeded as leadership has pushed its service onto TVs, game systems, and mobile devices — and more recently, re-focused on its streaming business.

Last week, that tweak to the business model saw a very public revision of the service’s pricing structure, a result of Netflix eagerly dividing its DVD rental and streaming services into two distinct businesses. Netflix also created a whole separate management team for its DVD business, and announced that it would offer its streaming plan at $7.99 a month and its DVD plan at $7.99 a month, so customers who want both will now pay about $16 a month — a 60 percent price increase over the previous $9.99 plan that bundled streaming with one DVD out at a time.

And, as you may have heard, customers were not happy. No, they were not happy at all. In fact, on the blog post in which Netflix announced said pricing changes, over 12,000 comments were posted (and that’s using Facebook’s commenting system, something TechCrunch readers are unhappily familiar with), most of them angry, and many in turn did their own announcing, saying they would be tendering their resignations, effective immediately.

Of course. But so what? Well, according to YouGov’s BrandIndex, in the ten days since Netflix made its price changes, the national perception of Netflix’s brand among adults dropped precipitously from 39.1 on July 12th to -14.1 on July 18th, and currently sits at -6, putting Netflix in a virtual tie with Blockbuster. With a margin of error of 5, that’s no tiny aberration.

BrandIndex calculated its scores by asking Netflix, Redbox, DirecTV and Blockbuster customers about their impressions of each brand, and what they’ve heard about the brand via word of mouth, advertising, etc. BrandIndex Global Managing Director Ted Marzili told me that the scores reflect a sample size of about 15,000 respondents. See the graph below.

Of course, it wasn’t long before Blockbuster was courting potential Netflix defectors with a 30-day free trial. And though it doesn’t seem that Blockbuster has been reaping rich rewards from Netflix’s change, Netflix’s stock, which has performed very well over the last two years (and was at a six-month high before the announcement on July 13th), has since dropped over 20 points. Some of that is natural — the stock was due for a slow-down — and of course some of it’s not.

According to GigaOM, Morgan Stanley has also stepped in with its own Netflix survey, which found that 26 percent of Netflix customers said they would cancel their subscriptions altogether. Those numbers have since come down as the initial emotional angst wears off, but either way it seems likely that Netflix’s subscription revenue will suffer in the near future.

Netflix has now forced many of its customers to make a choice between streaming and DVDs, because, after all, as Reed Hastings himself told Erick Schonfeld back in May, the future is in streaming, not in them plastic discs. I mean who uses CDs anymore, ya know?

There is always a backlash when a major service hikes prices, but it seems Netflix could have used a bit more market research beforehand. Giving current users some form of incentive over new users, such as a discount on a year-long plan, might have been smart, and would have shown a little consideration for loyal customers.

What about a discount on a 2-for-1-type deal? After all, we’re living in the age of the daily deal, when consumers seem to expect a discount. Not to mention the fact that many wallets have undergone a significant squeeze over the last two years. Money is tighter than it was during the company’s early days, and Netflix might benefit from acknowledging that.

We’ll see how this all plays out. I expect Netflix’s brand perception and stock will be right back on track before long, but there’s always the chance that the hangover continues. And, perhaps more importantly — don’t laugh — will Blockbuster truly benefit as a result? How about Redbox? The local library?

Your thoughts?



More Americans Are On Facebook Than Have A Passport

Posted: 22 Jul 2011 08:27 PM PDT

To celebrate the fact that my vacation during the last two weeks of August has been officially confirmed (!), I am posting the most massive infographic I have ever seen: “The Social Travel Revolution” brought to you by the folks at still-in-beta travel startup Tripl.

Most shocking statistic: 50% of all Americans are on Facebook (155 million) while only 37% of Americans have a passport (115 million). To its credit, the Facebook onboarding process is a lot more streamlined.




VideoInbox, Another Google/Slide Production, Brings Viral Videos To Your Inbox

Posted: 22 Jul 2011 07:33 PM PDT

We’ve come across the latest in Slide’s series of projects developed within Google, VideoInbox – a combination daily newsletter/Facebook app that basically centers around the viewing, sharing and cataloguing of viral videos (proof that it’s from Slide here). Sign up for VideoInbox with Facebook Connect and you’ll get a daily email with “hand selected” viral YouTube videos like “Slow Loris With a Tiny Umbrella,” ”Rubik’s Cube Robot Is Smarter Than You” or “Bollywood Pizza Hut”.

Again exhibiting the autonomy we’ve now come to expect from the Google-owned Slide, the app uses, amazingly enough, the Facebook API to allow you to share videos with individual friends on Facebook or post them to your Facebook Wall. A Twitter button is there too, but its OAuth integration doesn’t seem to be implemented yet. The app also allows you to watch the top 5 viral videos from yesterday, as well as “Favorite” videos for watching later.

While VideoInbox is still very much a work in progress, and despite its rough design, it’s kind of delightful. I mean, I am so lucky to have had the experience of “Accidental Convertible” added to my life, and yes, I just shared it with a Facebook friend I thought might like it.

Slide has been super productive since Google acquired it for $182 million back in August, coming out with a series of iOS apps in recent months, including Photovine, Pool Party and the group messaging app Disco. Like VideoInbox, Prizes.org, a Slide-backed platform that lets you create contests for money, leans heavily on Facebook Connect.

However, it’s still unclear how Slide’s churn of products is contributing to Google’s overall ambitions and strategy. Also: Why aren’t they formally pitching the tech press with this stuff? Honestly, some of it is actually pretty cool. And it’s getting to the point where it’s hard to keep track of them all.



Obvious Already Ramping Up With Two New Founding Team Hires

Posted: 22 Jul 2011 07:24 PM PDT

Back in January of 2009, we noted that a “superstar team” was about to launch in the MMO space, with a startup called Ohai. A few weeks ago, Ohai was sold, as VentureBeat’s Dean Takahashi first reported. And at least two of those rockstars have now moved on. Susan Wu and Don Neufeld are the newest members of The Obvious Corporation, the idea incubator that was just re-started by the former Twitter guys, Evan Williams, Biz Stone, and Jason Goldman.

Stone makes the announcement in a post today on the Obvious blog. “The most important part of creating this work culture and building these meaningful products is people — but not just any people. People that are often smarter than us, different from us, passionate like us, and dedicated to the idea that the whole is greater than the sum of its parts,” he writes, stating that Wu and Neufeld, employees number four and five at Obvious, are exactly that kind of people.

Like everything else with the re-launch of Obvious, this move also extends from the past. Stone writes:

Many years ago, when Ev and I were working on Odeo, we met Susan as part of Charles River Ventures, and we knew then that we wanted to work with her. We know Susan to be incredibly smart, talented, thoughtful, and driven to make a lasting, positive impact on the world. Through Susan, we met Don and quickly realized he was a rare sort of affable technical genius—an obvious fit!

They sure love those obvious plays on words.

Stone goes on to note that while both most recently worked in the gaming space (with Ohai), Wu and Neufeld bring a range of knowledge. This seems to imply that whatever Obvious is building right now, it won’t be in the gaming space.

The situation surrounding the Ohai exit is still a bit odd. While the company has been sold, at first the buyer was unknown. Then, in a separate story, Takahashi reported that the buyer was EA. Then Ohai denied this. Then they said they were “in the process of completing a transaction”. Then Takahashi heard that EA had interviewed Ohai employees and did not make a purchase at that time.

Okay, that was 11 days ago, and now most (if not all) of the founding team is gone. Something clearly happened. Regardless, Wu and Neufeld are now with Obvious.

Meanwhile, while not much is known about what Obvious will actually work on, we do hear the team already has a first product in mind and has started on it. More to come, I’m sure.



Doubts About Lytro’s “Focus Later” Camera

Posted: 22 Jul 2011 06:36 PM PDT

I’ve been meaning to address this Lytro thing since it hit a few weeks ago. I wrote about omnifocus cameras as far back as 2008, and more recently in 2010, though at the time I was more interested in the science behind the systems, and it appears that Lytro uses a different method than either of those.

Lytro has been close-lipped about its camera, to say the least, though that’s understandable when your entire business revolves around proprietary hardware and processes. Some of it can be gleaned from Lytro founder Ren Ng’s dissertation (which is both interesting and readable), but in the meantime it remains to be seen whether these “living pictures” are truly compelling or something that will be forgotten instantly by consumers. A recent fashion shoot with model Coco Rocha, the first real-world demonstration of the device, is dubious evidence at best.

A prototype camera was loaned for an afternoon to photographer Eric Chen, and while the hardware itself has been carefully edited or blurred out of the making-of video, it’s clear that the device is no larger than a regular point-and-shoot, and it seems to function more or less normally, with an LCD of some sort on the back, and the usual framing techniques. No tripod required, etc. It’s worth noting that they did this in broad daylight with a gold reflector for lighting, so low light capability isn’t really addressed — but I’m getting ahead of myself.

Speaking from the perspective of a tech writer and someone interested in cameras, optics, and this sort of thing in general, I have to say the technology is absolutely amazing. But from the perspective of a photographer, I’m troubled. To start with, a large portion of the photography process has been removed — and not simply a technical part, but a creative part. There’s a reason focus is called focus and not something like “optical optimum” or “sharpness.” Focus is about making a decision as a photographer about what you’re taking a picture of. It’s clear that Ng is not of the same opinion: he describes focusing as “a chore,” and believes removing it simplifies the process. In a way, it does — the way hot dogs simplify meat. Without focus, it’s just the record of a bunch of photons. And saying it’s a revolution in photography is like saying dioramas are a revolution in sculpture.

I’m also concerned about image quality. The camera seems to be fundamentally limited to a low resolution — and by resolution I mean true definition, not just pixel count. I say fundamentally because of the way the device works. Let me get technical here for a second, though there’s a good chance I’m wrong in the particulars.

The way the device works is more or less the way I imagined it did before I read Ng’s dissertation. To be brief, the image from the main lens is broken up by a microlens array over the image sensor, and by analyzing (a complex and elegant process) how the light enters various pixel wells underneath the many microlenses (which each see a slightly different picture due to their different placements), a depth map is created along with the color and luminance maps that make up traditional digital images. Afterwards, an image can be rendered with only the objects at a selected depth level rendered in maximum clarity. The rest is shown with increasing blur, probably according to some standard curve governing depth of field falloff.
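
For the curious, here is a toy sketch of the shift-and-add refocusing idea from Ng’s dissertation. It is only illustrative: the array shapes, the integer-pixel shifts and the slope parameter are my simplifications, not Lytro’s actual pipeline. The gist is that the light field can be treated as a grid of sub-aperture views, each shifted in proportion to its position under the main lens and then averaged; different shift slopes correspond to different virtual focal planes.

    import numpy as np

    def refocus(light_field, slope):
        """Synthetic refocusing by shift-and-add.

        light_field: array of shape (U, V, H, W), a grid of sub-aperture
            views, one per (u, v) position across the main lens aperture.
        slope: pixels of shift per unit of (u, v) offset; each value picks
            out a different virtual focal plane.
        """
        U, V, H, W = light_field.shape
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                # Shift each view according to its distance from the center
                # of the aperture, then accumulate.
                dy = int(round((u - cu) * slope))
                dx = int(round((v - cv) * slope))
                out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
        return out / (U * V)

    # Toy usage: a 9x9 grid of 296x296 sub-aperture views, refocused twice.
    lf = np.random.rand(9, 9, 296, 296)
    near = refocus(lf, slope=1.0)    # pull a nearer plane into focus
    far = refocus(lf, slope=-1.0)    # pull a farther plane into focus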

It should be immediately apparent that an enormous amount of detail is lost, not just because you are interposing an extra optical element between the light and the sensor (one which must be extremely low in faults and yet is very difficult to make so), but also because the system fundamentally relies on creating semi-redundant views to compare with one another, meaning pixels are yielding less data for a final image than they would be in a traditional system. They are of course providing information of a different kind, but as far as producing a sharp, accurate image, they are doing less. Ng acknowledges this in his paper, and the reduction of a 16-megapixel sensor to a 296×296 image in the prototype (296 × 296 is roughly 87,600 pixels, about 0.5 percent of the original count, a reduction of some 99.5%) is testament to this reducing factor.

The process has no doubt been improved along the lines he suggests are possible: square pixels have likely been replaced with hexagonal, the lenses and pixel widths made complementary, and so on. But the limitation still means trouble, especially on the microscopic sensors being deployed to camera phones and compact point and shoots. I’ve complained before that these micro-cameras already have terrible image quality, smearing, noise, limited exposure options, and so on. The Lytro approach solves some of these problems and exacerbates others. On the whole downsampling might be an improvement, now that I think of it (the resolutions of cheap cameras exceed their resolving power immensely), but I’m worried that the cheap lenses and small size will limit Lytro’s ability to make that image as versatile as their samples — at least, for a decent price. There’s a whole chapter in Ng’s paper about correcting for micro-optical aberrations, though, so it’s not like they’re unaware of this issue. I’m also worried about the quality of the blur or bokeh, but that’s an artistic scruple unlikely to be shared by casual shooters.

The limitation of the aperture to a single opening simplifies the mechanics but also leaves control of the image to ISO and exposure length. These are both especially limited in smaller sensors, since the tiny, densely-packed photosensors can’t be relied on for high ISOs, and consequently the exposure times tend to be longer than is practical for handheld shots. Can the Lytro camera possibly gain back in post-processing what it loses in initial definition?
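
To put numbers on that squeeze, here is a back-of-the-envelope sketch using the standard reflected-light exposure equation N²/t = L·S/K (with the usual calibration constant K ≈ 12.5); the specific f-number and luminance values are just illustrative assumptions about a Lytro-like fixed aperture, not the real camera’s specs:

    def shutter_time(luminance_cd_m2, iso, f_number=2.0, k=12.5):
        """Exposure time required by N^2 / t = L * S / K for a fixed aperture."""
        return f_number ** 2 * k / (luminance_cd_m2 * iso)

    # Bright daylight is easy; a dim interior forces long exposures or high ISO.
    print(shutter_time(4000, 100))   # ~1/8000 s in sunlight at f/2, ISO 100
    print(shutter_time(10, 100))     # ~1/20 s indoors at ISO 100 -- blur territory
    print(shutter_time(10, 800))     # ~1/160 s indoors at ISO 800 -- noisy on a tiny sensor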

Lastly, and this is more of a question, I’m wondering whether these images can be made to be all the way in focus, the way a narrow aperture would show it. My guess is no; there’s a section in the paper on extending the depth of field, but I’m not sure the effect will stand scrutiny in normal-sized images. It seems to me (though I may be mistaken) that the optical inconsistencies (which, to be fair, generate parallax data and enable the 3D effect) between the different “exposures” mean that only slices can be shown at a time, or at the very least there are limitations to which slices can be selected. The fixed aperture may also put a floor on how narrow your depth of field can be. Could the effect achieved in this picture be replicated, for instance? Or would I have been unable to isolate just that quarter-inch slice of the world?

All right, I’m done being technical. My simplified objections are two in number: first, is it really possible to reliably make decent photos with this kind of camera, as it’s intended to be implemented (i.e. as an affordable compact camera)? And second, is it really adding something that people will find worthwhile?

As to the first: designing and launching a device is no joke, and I wonder whether Ng, coming from an academic background, is prepared for the harsh realities of product development. Will the team be able to make the compromises necessary to bring it to shelves, and will those compromises harm the device? They’re a smart, driven group, so I don’t want to underestimate them, but what they’re attempting really is a technical feat. Distribution and presentation of these photos will have to be streamlined as well. When you think about it, a ton of the “living photo” is junk data, with the “wrong” focus or none at all. Storage space isn’t so much a problem these days, but it’s still something that needs to be looked at.

The second gives me more pause. As a photographer I’m strangely unexcited by the ostensibly revolutionary ability to change the focus. The fashion shoot, a professional production, leaves me cold. The “living photos” seem lifeless to me because they lack artistic direction. I’m afraid that people will find that most photos they want to take are in fact of the traditional type, because the opportunities presented by multiple focus points are simply few and far between. Ng thinks it simplifies the picture-taking process, but it really doesn’t. It removes the need to focus, but the problem is that we, as human beings, focus. Usually on either one thing or the whole scene. Lytro photos don’t seem to capture either of those things. They present the information from a visual experience in a way that is unfamiliar and unnatural except in very specific circumstances. A “focused” Lytro photo will never be as good as its equivalent from a traditional camera, and a “whole scene” view presents no more than you would see if the camera was stopped down. Like the compound insect eye it partially mimics, it’s amazing that it works in the first place, and its foreignness by its nature makes it intriguing, but I wouldn’t call it a step up.

“Gimmick” is far too harsh a word to use on a truly innovative and exciting technology such as Lytro’s. But I fear that will be the perception when the tools they’ve created are finally put to use. It’s new, and it’s powerful, yes, but is it something people will actually want to use? I think that, like so many high-tech toys these days, it’s more fun in theory than it is in practice.

That’s just my opinion, though. Whether I’m right or wrong will of course be determined later this year, when Lytro’s device is actually delivered, assuming they ship on time. We’ll be sure to update then (if not before; I have a feeling Ng may want to respond to this article) and get our own hands-on impressions of this interesting device.



How MySpace Tom May Have Inadvertently Triggered The Google/Facebook War

Posted: 22 Jul 2011 05:53 PM PDT

Gotta love Tom Anderson. Newly reinvigorated by the launch of Google+, “MySpace Tom” has become a social power user (and regular TechCrunch contributor!). As a man at the forefront of the early days of the social wars, he’s obviously full of information. And today he decided to share a bit more. This time, it’s a fascinating story about the time Microsoft, not Google, was about to land the MySpace ad deal.

In a comment on (where else) Google+, Anderson tells the story in response to my most recent post about the Google/Facebook war before Google+. Based on a Quora thread, I noted that the 2006 search/ad deal Google signed with MySpace (Fox Interactive Media) may have been the true kick-off of hostilities between Google and Facebook. As a result, Microsoft signed Facebook — which later led to the famous investment.

But as Anderson tells it, it almost didn’t happen that way. In fact, it was Microsoft that was just about to sign the MySpace search/ad deal. “The reason we ended up going with Google search is because I ran into John Doerr and told him we were about to close with Microsoft. Within an hour, Google brass helicoptered out to a News Corp. shindig at Pebble Beach,” Anderson says, noting that he wasn’t allowed in the closed-door meeting where negotiations took place. This resulted in the billion-dollar deal.

“The terms were so screwed up, that it had a big impact (a negative one) on MySpace’s future,” Anderson writes. “Things would have been quite different if that deal hadn’t happened,” he goes on to say.

A few more awesome things about this info:

1) Again, Anderson is leaving this comment on Google+ — the new service by the company whose ad deal way back when helped seal the fate of his company.

2) Anderson says this was actually the first and only time he had ever met Doerr.

3) Vic Gundotra, now the man in charge of the Google+ project, was on the other side at the time, trying to get the ad deal done for Microsoft (Gundotra left Microsoft for Google shortly before the MySpace deal was finalized). This is how Anderson met Gundotra, in fact.

4) Anderson says he had forgotten all of this info until my post.

Indulge me here for a second.

Just think about what would have happened had Anderson not run into Doerr. Microsoft would likely have closed the MySpace deal, perhaps with better terms for MySpace. Google, presumably, would then have gone after a similar deal with Facebook, which perhaps would have given it a leg up a year later to make the Facebook investment instead of Microsoft.

If my wild speculation holds, the Internet would have been a very different place right now. It may have been a place for Google and Facebook to be friends. In a relationship, even.



Google Acquires Facial Recognition Software Company PittPatt

Posted: 22 Jul 2011 04:37 PM PDT

Google has just acquired facial recognition software company PittPatt (Pittsburgh Pattern Recognition), according to an announcement on the startup’s site.

PittPatt, a project spawned from Carnegie Mellon University, develops a facial recognition technology that can match people across photos, videos, and more. The company has created a number of algorithms in face detection, face tracking and face recognition. PittPatt’s face detection and tracking SDK locates human faces in photographs and tracks the motion of human faces in video.

Here’s the notice PittPatt has up on its site:

Joining Google is the next thrilling step in a journey that began with research at Carnegie Mellon University’s Robotics Institute in the 1990s and continued with the launching of Pittsburgh Pattern Recognition (PittPatt) in 2004. We’ve worked hard to advance the research and technology in many important ways and have seen our technology come to life in some very interesting products. At Google, computer vision technology is already at the core of many existing products (such as Image Search, YouTube, Picasa, and Goggles), so it’s a natural fit to join Google and bring the benefits of our research and technology to a wider audience. We will continue to tap the potential of computer vision in applications that range from simple photo organization to complex video and mobile applications.

Google has reportedly been exploring adding facial recognition to its products (e.g. Google Goggles) more seriously but has held back because of privacy concerns. As the company told Search Engine Land in March, Google wouldn't put facial recognition into a mobile app unless there were very strict privacy controls in place.

But in May, Google Chairman Eric Schmidt said the company is “unlikely to employ facial recognition programs.”

Google issued this statement confirming the acquisition:

"The Pittsburgh Pattern Recognition team has developed innovative technology in the area of pattern recognition and computer vision. We think their research and technology can benefit our users in many ways, and we look forward to working with them."



Long Before Google+, Google Declared War On Facebook With OpenSocial

Posted: 22 Jul 2011 04:09 PM PDT

Google and Facebook are at war. We’ve known this for a while. Of course, neither side will admit to it, but they are. Winner takes the Internet.

After months of Facebook owning Google in just about every way imaginable (well, except search, of course — but the rise of social is slowly making search less important), Google has finally been able to strike back with Google+. And now a full-on social sharing race is getting underway. It may not be a winner-take-all race, but it will eventually be winner-take-most. We simply can’t share everything across 5 or even 3 networks. Google is fighting an uphill battle in this regard, but at least they finally have a weapon.

But how did we get to this point where the two biggest names on the Internet are involved in a full-scale war? It all goes back to 2007, and perhaps even 2006.

This question was recently posed on Quora: What specific actions led to the massive rift between Facebook and Google? No less than Adam D’Angelo, the co-founder of Quora and very early Facebook employee, chimed in.

“To me, the biggest increase in tension was Google’s launch of OpenSocial in 2007. After seeing the success of Facebook Platform, Google went and got all the other social networks committed to OpenSocial under NDA without telling Facebook, then broke the news to Facebook and tried to force them to participate,” D’Angelo writes, pointing to this TechCrunch post from the time.

Facebook, as you might expect, did not take kindly to that action. “This was particularly offensive to Facebook because Google had no direct interest in social networking at the time and Facebook Platform had no direct impact on Google’s search or ads businesses. They didn’t care about Orkut and they didn’t build any applications,” D’Angelo notes.

A few months later, Facebook banned Google Friend Connect (a part of OpenSocial), further escalating matters. Facebook then went on to dominate social (remember, MySpace was still technically the leader at that time). On top of Platform, we got Connect, Open Graph, the Like button, etc. Facebook seized control, and we began to enter the Age of Facebook.

We’ll see if Google+ can stop that. Certainly, no one talks about OpenSocial or Friend Connect any more.

D’Angelo says that he can’t remember “any adversarial actions of that magnitude” before the OpenSocial announcement. And he says that before that, there was just the regular competition over engineering hires (which continues today). But there may have been something right before OpenSocial that triggered it.

As another Facebook employee (though not at the time), Jinghao Yan, remembers, the Microsoft investment in Facebook may have also contributed heavily to the increase in tensions. While talks had been going on for weeks, if not months, on October 24, 2007 — just a week before the OpenSocial announcement — Facebook formally accepted a $240 million investment from Microsoft for less than 2 percent of the social network.

Humorously, at the time, people were all up in arms over the $15 billion valuation this gave Facebook. Now it looks like one of the smarter investments Microsoft has made in recent years — though it was clearly always more about the strategic positioning. And that’s the key. Microsoft outbid Google for the right to secure this investment (and thus, strategic partnership) in the rising social network.

“I feel that this event is what made Google so antagonistic against Facebook–because it actively rejected Google’s embrace for Microsoft’s purse. As a result, it labeled Facebook more as a threat to its online dominance than as a potential partner,” Yan writes.

Below that, another Facebooker, Yishan Wong, points out that the 2006 advertising deal Facebook signed with Microsoft instead of Google may have kicked all of this off. And why did Microsoft go so hard after Facebook for this deal? Because earlier that same month, Google signed a similar $1 billion deal with Fox Interactive Media to run the ads on MySpace.

In other words, Google made a bet — a good one at the time, but one that was potentially very costly long-term.

And now the two sides are giants. At war.

More: How MySpace Tom May Have Inadvertently Triggered The Google/Facebook War



Festo’s SmartBird Robot Flies Through The Air At TED

Posted: 22 Jul 2011 03:58 PM PDT

You may recall the SmartBird, a robot we saw back in March that mimics the flight of birds, flapping its wings like the real thing. The video we saw then was a bit too edited to get a feel for the bot, but luckily one of the inventors was invited to do a TED talk, and of course they had to set the thing free in the auditorium.

Check out the video:

Markus Fischer, the speaker, describes a few finer points and demonstrates the simplicity of their motor and wing system on a skeletal model. It’s really very cool. Unfortunately they are likely limited by the capacity of the batteries they can take on board, which, being heavy, increase the power required to stay aloft, which means more battery capacity is needed… and so on. The bird flies for around 50 seconds in the demonstration, but much longer in these other videos (outside, with curious real birds).

I’m curious as to whether they’ve considered alternative energy sources; they seem to be well-provided with space inside the bird chassis, and a strong but lightweight coil or spring might provide a better power to weight ratio. Batteries are optimized for volume, not weight, so if there’s room to expand, they can take a hit on joules per cm3 but shave a few grams off the total.

[via Reddit]



Founder Office Hours With Chris Dixon And Josh Kopelman: Profitably

Posted: 22 Jul 2011 03:21 PM PDT

Today, we are trying a special edition of Founder Stories that we are calling Founder Office Hours. Inspired by Paul Graham’s Office Hours onstage at our last TechCrunch Disrupt, we brought together a group of startup founders in our NYC studio to get feedback and advice. Joining regular host Chris Dixon is Josh Kopelman, managing partner of First Round Capital.

In this first video above, Adam Neary, founder of Profitably, asks whether he should charge for a new product or go freemium. Profitably is a business dashboard for small businesses that pulls accounting data from QuickBooks and helps visualize it. The company is developing a new product around business planning and modeling that traditionally is only available to larger corporations. Should he charge a monthly fee for the new product, or go freemium—give it away for free and upsell to premium features?

It depends on what his immediate goals are: getting big or getting profitable. “Customer acquisition for small- to mid-sized businesses is the hardest thing,” notes Kopelman. “You have to market to them as consumers.” If the product has broad appeal, you can consider giving it away for free as a way to subsidize the cost of acquiring new customers. But you need to have something to upsell. “You don’t want to have too much free and not enough -emium,” he says.

What about building a white-label version for a large customer as a way to hit quarterly targets? Both Dixon and Kopelman agree that if Neary wants to raise more money down the line, investors are more likely to put a higher value on the business if it has a direct relationship with the end customer.

Watch previous Founder Stories here.



Enhanced eBooks: Valuable Sales Tool or Just a Gimmick? (TCTV)

Posted: 22 Jul 2011 02:33 PM PDT

New technologies usually allow for more. In the move from print media to the Web the “more” was comments, slideshows and of course rapid-fire content. In the move from VHS to DVDs the “more” was all sorts of behind the scenes footage and director commentaries. In the move from Blackberries to iPhones, the “more” was a wonderland of new apps and a browser experience that didn’t make your eyes bleed.

In a world of eBook readers, more is starting to creep in, but it’s unclear whether this is a more that will actually sell books, or a more that only a handful of superfans care about. A lot of people still attach a high-art aesthetic to books, and decry anything that makes their content more accessible to readers. Case in point: A gorgeous version of Alice in Wonderland came out on the iPad and some parents were furious that the animated images took away from kids having to imagine, say, Alice growing and shrinking on their own.

Novelist Kitty Pilgrim is betting that more is more with her new book The Explorer’s Code. A longtime broadcast journalist, she has included several highly produced videos showing the real places that inspired her fictional thriller. But does that take something away from the magic of fiction? We caught up with Pilgrim over Skype to discuss.



Leaked LG Roadmap Points To Five Android Smartphones And One Mango Fantasy

Posted: 22 Jul 2011 02:21 PM PDT

The only thing better than a leak is six leaks, which is exactly what we have for you today. Bundled nicely in the form of a 2011 LG Roadmap (discovered by PocketNow), five Android smartphones and one Mango-powered handset have found their way to the web.

Along with the recently announced Optimus Pro and Optimus Net, LG has quite a bit more in store for the rest of the year. However, we don't expect that this is the entirety of LG's 2011 smartphone lineup, so if you can't find something you like here, fret not, more are sure to follow.

The second-half flagship has been dubbed the LG Prada K2. We're not sure what "Prada-inspired texture on the casing" means, but other specs on this fashion-forward phone are pretty impressive: Android 2.3 Gingerbread, dual-core processing, 4.3-inch Nova LCD display (the power-saving extra-bright screen seen on the Optimus Black), 8-megapixel rear shooter, 1.3-megapixel front-facing camera, and 16GB of internal storage all wrapped up in an 8.8mm thin handset.

Other roadmap highlights include the LG Univa, successor to the Optimus One, and a mysterious Windows Phone 7 handset called the LG Fantasy. Little is known about either of these handsets, although it is expected that the Univa will launch alongside the Optimus Net. Despite the popularity of the Optimus One, I have a sneaking suspicion that the upgrade to an 800MHz processor, 3.5-inch HVGA display, and five-megapixel camera may put this phone ahead of its big brother in initial sales.

The Fantasy, on the other hand, should hit shelves in Q4, claims Pocket Now, with Windows Phone 7.5 Mango in tow. The leaked roadmap also points to another upper-midrange smartphone called the Victor and a low-end Android handset called the LG E2, which you can check out in PocketNow’s coverage.

[via Unwired View]



Porsche’s Sport And Rennsport Bikes, For The Car-Loving Cyclist

Posted: 22 Jul 2011 02:16 PM PDT

We’ve already seen bikes from both Audi and McLaren in the last year, so I suppose it’s no surprise to see competition from Porsche. The German sports car giant has actually had a bike for quite a while now, but I believe the new Sport and Rennsport are its first attempts at road-going bikes rather than the mountain variety.

These “Driver’s Selection” bikes are of the refined and sexy type, taking more after Audi’s wood-framed models than McLaren’s highly-tuned racing bikes. The aluminum Sport or S has an 11-gear belt drive and weighs 12kg (~26 lbs), which is light but… not that light. The Rennsport (RS) is much lighter at 9kg, due no doubt to its carbon frame and forks. It’s got a 20-gear Shimano derailleur with a traditional chain, and comes with clip-in pedals. Both have Magura ceramic disc brakes.

Nice bikes to be sure, but let’s talk turkey. What’s the damage on these things? The Sport costs a massive €3300 (~$4750) and will be available in September. The Rennsport… well. Got a spare €5900? That’s $8500 of your puny American dollars. What, you thought Porsche was going downmarket?

I’ll tell you, though, if someone put ten grand in my pocket and a gun to my head and told me to buy one of these luxury bikes, I’d probably go with that McLaren. I’d be too afraid to ride it in the city, but I think I’d prefer it over these status symbols, though I have no doubt they’d be nice rides as well.

[via Born Rich]



Apple’s iOS 5 Beta 4 Update Now Available, First To Be Released Over-The-Air

Posted: 22 Jul 2011 02:05 PM PDT

It’s been just 11 days since Apple released Beta 3 of iOS 5 to developers, but a new Beta is already up in the air — literally. iOS 5 Beta 4 has just gone live, and it appears to be the first update to support installation via iOS 5’s new over-the-air update system.

We can’t actually get the update to work over the air right now, but the patch notes specifically define it as an option. To quote:

“If you are doing a OTA software update from beta 3 to beta 4, you will need to re-sync your photos with iTunes.”

If you’re not already on the iOS 5 Developer Beta to give it a shot yourself, you’re not missing out on much:

Fortunately, as shown in the image below, the update can still be downloaded manually and installed through iTunes. It’s not 100% clear whether or not Apple plans to release the OTA update today (just a day after Lion, which is currently distributed exclusively through the App Store. Way to stress test that new cloud server, Apple!), but it certainly looks like it shouldn’t be long.

Update: Readers in comments and folks on Twitter are reporting that they got the OTA update to work. Here’s what it looks like when it actually, you know, works (Thanks @FungBlog)!

So What’s New?

As any late-stage Beta should be, it’s mostly bug fixes and little tweaks — but here’s some of the bigger stuff we’re hearing:

  • The aforementioned OTA installation support
  • Video content in all applications and websites should now be AirPlay-enabled by default
  • Wireless syncing now works with Windows

This list will be updated as new reports come in.



For The Geek Who Has Everything: A Gold-Plated Atari 2600

Posted: 22 Jul 2011 01:49 PM PDT

One thing most 30-something people in tech have in common is video gaming nostalgia. Generation X (and Generation i) can go on for hours discussing the merits of our favorite Nintendo games, our programming experience in school, and of course our beloved Ataris. Sure there were C64s and Amigas and such, but Atari’s 2600 and its successors were truly groundbreaking in the gaming world.

You can still find a few here and there, working even, but to be honest the machine is a little more humble-looking than my memory has it. But Urchin Associates had the brilliant idea to preserve this piece of computing history forever… in 24-karat gold.

Look at it. Is it not beautiful? Now, whether it works or not, I’m not prepared to say. That gold-plated cartridge (I wonder what game it is?) looks removable, and I doubt they plated over the I/O ports, so unless the system they used was bricked to begin with, it probably works just fine. The controllers, however, may have lost a little functionality in the gilding process.

The whereabouts of this art project are unknown, and no, I don’t think you can buy one. But it’s nice to know that it’s out there somewhere — like Eldorado, or Bigfoot.

[via Technabob]


