Monday, April 1, 2013

The Real Story Behind Google’s Street View Program

This morning Google signed a consent decree with the US Federal Trade Commission to head off a lawsuit over privacy violations by its Street View program, which covertly collected electronic data including the locations of unsecured WiFi access points and the contents of users’ web access logs. As part of the settlement, the FTC released a huge collection of sworn depositions taken from Google employees during the investigation. I’ve been wading through the depositions, and they give tantalizing hints of a deeper data collection plan by Google. Here’s what I found...

–We shouldn’t be surprised that there was an Android angle to the data collection. Most smartphones contain motion sensors which, when filtered to remove background noise, can detect the user’s pulse. That pulse data was being combined with the phone’s location and a timestamp, and automatically reported to a centralized Google server.
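Pulse detection from a phone’s motion sensors is a real signal-processing trick: the body shakes very slightly with each heartbeat, and a band-pass filter can pull that rhythm out of the accelerometer stream. The depositions don’t include any of Google’s code, so the sketch below is only my own illustration of the idea, assuming a 100 Hz accelerometer feed and function names I made up.

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def estimate_pulse_bpm(accel_xyz, fs=100.0):
        """Rough heart-rate estimate from raw accelerometer samples.
        accel_xyz: (N, 3) array of accelerometer readings; fs: sample rate in Hz."""
        # Overall motion magnitude, with gravity and slow drift removed
        magnitude = np.linalg.norm(accel_xyz, axis=1)
        magnitude -= magnitude.mean()

        # Keep only frequencies in the plausible heart-rate band
        # (0.8 to 3 Hz, roughly 48 to 180 beats per minute); this is the
        # "filtered to remove background noise" step.
        nyquist = fs / 2.0
        b, a = butter(2, [0.8 / nyquist, 3.0 / nyquist], btype="band")
        filtered = filtfilt(b, a, magnitude)

        # Each surviving peak is, with luck, one heartbeat.
        peaks, _ = find_peaks(filtered, distance=fs / 3.0)
        duration_s = len(magnitude) / fs
        return 60.0 * len(peaks) / duration_s

Shipping that estimate to a server alongside the phone’s location and a timestamp is then a routine network call, which would be the “automatically reported” part.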

Using this data, Google could map the excitement of crowds of people at any place and time. This was intended for use in an automated competitor to Yelp: by measuring the biometric arousal of people at various locations, Google could automatically identify the most interesting restaurants, movies, and sporting events worldwide. The system was put into secret testing in Kansas City, but it had trouble distinguishing among the possible causes of arousal in a crowd. This resulted in several unfortunate incidents in which the Google system routed adventurous diners into 24-hour fitness gyms and knife fights at biker bars. According to the papers I saw, Google is now planning to kill the project, a process that will involve announcing it worldwide with a big wave of publicity and then terminating it nine months later.

–TechCrunch reported about six months ago that Google was renting time on NASA’s network of Earth-observation satellites. This was assumed to be a way to increase the accuracy of Google Maps. What wasn’t reported at the time was that Google was also renting time on the National Security Agency’s high-resolution photography satellites, the ones that can read a newspaper from low Earth orbit. Apparently the NSA needed Google’s money to get through the federal sequester, and Google wanted a boost for Google+ in its endless battle with Facebook.

Google’s apparent plan was to automate the drudgery of creating status posts for Google+ users. Instead of using your cameraphone to photograph your lunch or something cute you saw on the street, Google would track your smartphone’s location and use the spy satellites to automatically capture and post photographs of any plate-shaped object in front of you, and any dog, cat, or squirrel that passed within ten feet of you. An additional feature would enable your friends to automatically reply “looks yummy” or “awww so cute.” (An advanced option would also insert random comments about Taylor Swift.) Google estimated that automating these functions would add an extra hour and 23 minutes to the average user’s work day, increasing world GDP by three points if everyone switched from Facebook to Google+.

–The other big news to me was the project’s tie-in to Google Glass, the company’s intelligent glasses. Glass doesn’t just monitor everything the user looks at and says; a sensor in Glass also measures pupil dilation, which can be correlated with what the wearer is viewing to gauge their emotional response to everything around them. This has obvious value to advertisers, who can automatically track brand affinity and reactions to advertisements. What isn’t widely known is that Glass can also feed ideas and emotions into the user’s brain. By carefully modulating the signals from Glass’s wireless transceiver, Google can directly stimulate targeted parts of the brainstem. This can be used, for example, to make you feel a wave of love when you see a Buick, or a wave of nausea when you look at the wrong brand of beer.
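The pupil-dilation half of that, at least, is standard pupillometry: pupils swell slightly with emotional arousal, and advertising researchers really do measure it. The depositions don’t show how Google computes its brand-affinity numbers, so the following is just a minimal sketch of the measurement step, assuming an eye-tracking stream of pupil diameters and a list of moments when a brand entered the wearer’s view (both inputs are my invention, not Google’s).

    import numpy as np

    def brand_arousal_score(pupil_mm, fs, exposure_times_s, window_s=2.0):
        """Average pupil-diameter change in the seconds after each brand exposure.
        pupil_mm: 1-D array of pupil diameters (mm); fs: eye-tracker sample rate (Hz);
        exposure_times_s: times (s) when the brand appeared; window_s: response window."""
        scores = []
        for t in exposure_times_s:
            start = int(t * fs)
            end = start + int(window_s * fs)
            if start < int(fs) or end > len(pupil_mm):
                continue  # need 1 s of baseline before and a full window after
            baseline = pupil_mm[start - int(fs):start].mean()  # second before exposure
            response = pupil_mm[start:end].mean()              # window after exposure
            scores.append(response - baseline)                 # positive = dilation = arousal
        return float(np.mean(scores)) if scores else 0.0

A score near zero means the wearer felt nothing for the Buick, which is presumably when the more drastic measures described below kick in.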

Forcing emotions this way can sometimes cause cognitive problems. For example, during early tests Google found that force-fitting the concepts of “love” and “Buick” caused potentially fatal neurological damage to people under age 40. The papers said Google was working on age filters to overcome the problem.

Although today’s Glass products can only crudely affect emotions, the depositions gave vague hints that Google plans to upgrade the interface to enable full two-way communication with the minds of Glass users. (This explains Google's acquisition of the startup Spitr in 2010, which had been puzzling me.) The Glass-based thought transfer system could enable people to telepathically control Google’s planned fleet of moon-exploring robots. It may also be used to incorporate Glass users into the Singularity overmind when it emerges from Google’s server farms, which is apparently scheduled for sometime in March of 2017.

Posted April 1, 2013

The ghosts of April Firsts past:
2012: Twitter at Gettysburg
2011: The microwave hairdryer, and four other colossal tech failures you've never heard of
2010: The Yahoo-New York Times merger
2009: The US government's tech industry bailout
2008: Survey: 27% of early iPhone adopters wear it attached to a body piercing
2007: Twitter + telepathy = Spitr, the ultimate social network
2006: Google buys Sprint
