
Sunday, February 24, 2019

Hidden Functionality

Consumer surveillance was in the news again this week. Apparently Google forgot to tell consumers that there was a cuckoo microphone in the Nest.

So what's new? A few years ago, people were getting worried about a microphone inside the Samsung Smart TV that would eavesdrop on your conversations. (HT @Parker Higgins)

But at least in those cases we think we know which corporation is responsible. In other cases, this may not be so clear-cut. For example, who decided to install a camera into the seat-back entertainment systems used by several airlines?

And there is a much more general problem here. It is usually cheaper to use general-purpose hardware than to design special-purpose hardware. For this reason, most IoT devices have far more processing power and functionality than they strictly need. This extra functionality carries two dangers. Firstly, if the device is hacked, the functionality can be coopted for covert or malicious purposes. (For example, IoT devices with weak or non-existent security can be recruited into a global botnet.) Secondly, sooner or later someone will think of a justification for switching the functionality on. (In the case of the Nest microphone, Google already did, which is what alerted people to the microphone's existence.)

So who is responsible for the failure of a component to act properly, who is responsible for the limitation of purpose, and how can this responsibility be transparently enforced?

Some US politicians have started talking about a technology version of "food labelling" - so that people can avoid products and services if they are sensitive to a particular "ingredient". With physical products, this information would presumably be added to the safety leaflet that you find in the box whenever you buy anything electrical. With online services, this information should be included in the Privacy Notice, which again nobody reads. (There are various estimates about the number of weeks it would take you to read all these notices.) So clearly it is unreasonable to expect the consumer to police this kind of thing.

Just as the supermarkets have a "free from" aisle where they sell all the overpriced gluten-free food, perhaps we can ask electronics retailers to have a "connectivity-free" section, where the products can be guaranteed safe from Ray Ozzie's latest initiative, which is to build devices that connect automatically by default, rather than wait for the user to switch the connectivity on. (Hasn't he heard of privacy and security by default?)

And of course high-tech functionality is no longer limited to products that are obviously electrical. The RFID tags in your clothes may not always be deactivated when you leave the store. And for other examples of SmartClothing, check out my posts on Wearable Tech.




Nick Bastone, Google says the built-in microphone it never told Nest users about was 'never supposed to be a secret' (Business Insider, 19 February 2019)

Nick Bastone, Democratic presidential candidates are tearing into Google for the hidden Nest microphone, and calling for tech gadget 'ingredients' labels (Business Insider, 21 February 2019)

Ina Fried, Exclusive: Ray Ozzie wants to wirelessly connect the world (Axios, 22 February 2019)

Melissa Locker, Someone found cameras in Singapore Airlines’ in-flight entertainment system (Fast Company, 20 February 2019)

Ben Schoon, Nest Secure can now be turned into another Google Assistant speaker for your home (9 to 5 Google, 4 February 2019)

Related posts: Have you got Big Data in your Underwear? (December 2014), Towards the Internet of Underthings (November 2015), Pax Technica - On Risk and Security (November 2017), Outdated Assumptions - Connectivity Hunger (June 2018), Shoshana Zuboff on Surveillance Capitalism (February 2019)

Monday, April 02, 2018

Blockchain and the Edge of Obfuscation - Privacy

According to Wikipedia,
a blockchain is a decentralized, distributed and public digital ledger that is used to record transactions across many computers so that the record cannot be altered retroactively without the alteration of all subsequent blocks and the collusion of the network. (Wikipedia, retrieved 31 March 2018)

Some people are concerned that the essential architecture of blockchain conflicts with the requirements of privacy, especially as represented by the EU General Data Protection Regulation (GDPR), which comes into force on 25th May 2018. In particular, it is not obvious how an immutable blockchain can cope with the requirement to allow data subjects to amend and erase personal data.
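The source of the tension can be seen in a minimal sketch of a hash chain (a toy illustration, not any production blockchain): each block records the hash of its predecessor, so retroactively amending or erasing a record invalidates every subsequent block unless they are all recomputed with the network's collusion.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def is_valid(chain):
    # Every block must reference the actual hash of its predecessor.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "alice paid bob")
add_block(chain, "bob paid carol")
add_block(chain, "carol paid dave")
assert is_valid(chain)

# Retroactively "erasing" personal data breaks the chain:
chain[0]["data"] = "REDACTED"
assert not is_valid(chain)
```

This is exactly the property that makes a blockchain tamper-evident, and exactly the property that collides with a right to rectification or erasure.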


Optimists have suggested a number of compromises.

Firstly, the data may be divided between the Blockchain and another data store, known as the Offchain. If the personal data isn't actually held on the blockchain, then it's easier to amend and delete.
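A sketch of how the offchain pattern might work (the names and details are illustrative assumptions, not any particular product): the ledger holds only a salted hash of the personal data, while the data itself sits in an ordinary mutable store. Erasure then means deleting the off-chain record; the on-chain hash remains, but without the data and salt it is just an opaque value.

```python
import hashlib
import os

onchain = []    # immutable ledger: stores only commitments (hashes)
offchain = {}   # ordinary mutable database: stores the personal data

def record(personal_data):
    # Commit to the data with a random salt, so the bare hash
    # cannot be brute-forced back to the underlying data.
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + personal_data.encode()).hexdigest()
    onchain.append(digest)
    offchain[digest] = (salt, personal_data)
    return digest

def erase(digest):
    # GDPR-style erasure: delete the off-chain record (and its salt).
    # The on-chain hash stays, but is now an unlinkable opaque value.
    offchain.pop(digest, None)

ref = record("alice@example.com")   # hypothetical data subject
erase(ref)
assert ref in onchain       # ledger unchanged
assert ref not in offchain  # personal data gone
```

Whether a leftover salted hash still counts as personal data under the GDPR is, of course, one of the open legal questions.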

Secondly, the underlying meaning of the information can be "completely obfuscated". Researchers at MIT are inventing a 21st-century Enigma machine, which will store "secret contracts" instead of the normal "smart contracts".

    Historical note: In the English-speaking world, Alan Turing is often credited with cracking the original Enigma machine, but it was Polish mathematicians who cracked it first.

Thirdly, there may be some wriggle-room in how the word "erasure" is interpreted. Irish entrepreneur Shane Brett thinks that this term may be transposed differently in different EU member states. (This sounds like a recipe for bureaucratic confusion.) It has been suggested that personal data could be "blacklisted" rather than actually deleted.
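One concrete version of "blacklisting" rather than deleting is what is sometimes called crypto-shredding (not discussed in the articles cited here, but a commonly proposed workaround): write only encrypted data to the chain, hold the key off-chain, and "erase" by destroying the key. A toy sketch, using a one-time pad as a stand-in for real encryption:

```python
import os

keystore = {}   # off-chain key store, under the data controller's control
ledger = []     # immutable chain: holds only ciphertext

def write_encrypted(record_id, plaintext):
    data = plaintext.encode()
    key = os.urandom(len(data))   # one-time pad; a real system would use proper ciphers
    ciphertext = bytes(a ^ b for a, b in zip(data, key))
    keystore[record_id] = key
    ledger.append((record_id, ciphertext))

def read(record_id):
    key = keystore.get(record_id)
    if key is None:
        return None               # key shredded: ciphertext is unrecoverable
    ciphertext = dict(ledger)[record_id]
    return bytes(a ^ b for a, b in zip(ciphertext, key)).decode()

def shred(record_id):
    # "Erasure" = destroy the key; the on-chain ciphertext stays put.
    keystore.pop(record_id, None)

write_encrypted("subject-42", "date of birth: 1980-01-01")
assert read("subject-42") == "date of birth: 1980-01-01"
shred("subject-42")
assert read("subject-42") is None
```

Again, whether regulators will accept "the data is still there, but nobody can read it" as erasure remains to be seen.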

Finally, as reported by David Meyer, blockchain experts can just argue that GDPR is "already out of date" and hope regulators won't be too "stubborn" to "adjust" the regulation.


But the problem with these compromises is that once you dilute the pure blockchain concept, some of the supposed benefits of blockchain evaporate, and it just becomes another (resource-hungry) data store. Perhaps it is blockchain that is "already out of date".



Vitalik Buterin, Privacy on the Blockchain (Ethereum Blog, 15 January 2016)

Michèle Finck, Blockchains and the GDPR (Oxford Business Law Blog, 13 February 2018)

Josh Hall, How Blockchain could help us take back control of our privacy (The Guardian, 21 March 2018)

David Meyer, Blockchain is on a collision course with EU privacy law (IAPP, 27 February 2018) via The Next Web

Dean Steinbeck, How New EU Privacy Laws Will Impact Blockchain (Coin Telegraph, 30 March 2018)

Wikipedia: Blockchain, Enigma machine


Sunday, December 03, 2017

IOT is coming to town

You better watch out



#WatchOut Analysis of smartwatches for children (Norwegian Consumer Council, October 2017). BoingBoing comments that
Kids' smart watches are a security/privacy dumpster-fire.

Charlie Osborne, Smartwatch security fails to impress: Top devices vulnerable to cyberattack (ZDNet, 22 July 2015)

A new study into the security of smartwatches found that 100 percent of popular device models contain severe vulnerabilities.

Matt Hamblen, As smartwatches gain traction, personal data privacy worries mount (Computerworld, 22 May 2015)
Companies could use wearables to track employees' fitness, or even their whereabouts. 


You better not cry

Source: Affectiva


Rana el Kaliouby, The Mood-Aware Internet of Things (Affectiva, 24 July 2015)

Six Wearables to Track Your Emotions (A Plan For Living)

Soon it might be just as common to track your emotions with a wearable device as it is to monitor your physical health. 

Anna Umanenko, Emotion-sensing technology in the Internet of Things (Onyx Systems)


Better not pout


Shaun Moore, Fooling Facial Recognition (Medium, 26 October 2017)

Mingzhe Jiang et al, IoT-based Remote Facial Expression Monitoring System with sEMG Signal (IEEE 2016)

Facial expression recognition is studied across several fields such as human emotional intelligence in human-computer interaction to help improving machine intelligence, patient monitoring and diagnosis in clinical treatment. 


I'm telling you why


Maria Korolov, Report: Surveillance cameras most dangerous IoT devices in enterprise (CSO, 17 November 2016)

Networked security cameras are the most likely to have vulnerabilities. 

Leor Grebler, Why do IOT devices die (Medium, 3 December 2017)

IOT is coming to town


Nick Ismail, The role of the Internet of Things in developing Smart Cities (Information Age, 18 November 2016)


It's making a list
And checking it twice


Daan Pepijn, Is blockchain tech the missing link for the success of IoT? (TNW, 21 September 2017)



Gonna find out
Who's naughty and nice


Police Using IoT To Detect Crime (Cyber Security Intelligence, 14 Feb 2017)

James Pallister, Will the Internet of Things set family life back 100 years? (Design Council, 3 September 2015)


It sees you when you're sleeping
It knows when you're awake


But don't just monitor your sleep. Understand it. The Sense app gives you instant access to everything you could want to know about your sleep. View a detailed breakdown of your sleep cycles, see what happened during your night, discover trends in your sleep quality, and more. (Hello)

Octav G, Samsung’s SLEEPsense is an IoT-enabled sleep tracker (SAM Mobile, 2 September 2015)



It knows if you've been bad or good
So be good for goodness sake!


US intelligence chief: we might use the internet of things to spy on you (The Guardian, 9 Feb 2016)

Ben Rossi, IoT and free will: how artificial intelligence will trigger a new nanny state (Information Age, 7 June 2016)





Twitter Version


Related Posts

Pax Technica - The Book (November 2017)
Pax Technica - The Conference (November 2017)
Pax Technica - On Risk and Security (November 2017)
The Smell of Data (December 2017)

Updated 10 December 2017

Saturday, November 25, 2017

Pax Technica - On Risk and Security

#paxtechnica Some further thoughts arising from the @CRASSHlive conference in Cambridge on The Implications of the Internet of Things. (For a comprehensive account, see @LaurieJ's livenotes.)

Many people are worried about the security implications of the Internet of Things. The world is being swamped with cheap internet-enabled devices. As the manufacturing costs, size and power consumption of these devices are being driven down, most producers have neither the expertise nor the capacity to build any kind of security into them.

One of the reasons why this problem is increasing is that it is cheaper to use a general-purpose chip than to design a special-purpose chip. So most IoT devices have far more processing power and functionality than they strictly need. This extra functionality can then be coopted for covert or malicious purposes. IoT devices may easily be recruited into a global botnet, and devices from some sources may even have been covertly designed for this purpose.

Sensors are bad enough: think of compromised baby monitors and sex toys. Additional concerns apply to IoT actuators, devices that can produce physical effects. For example, lightbulbs that can flash (triggering epileptic fits), thermostats that can switch on simultaneously across a city (melting the grid), or centrifuges that can spin out of control (as in the sabotage of Iran's nuclear programme).

Jon Crowcroft proposed that some of this could be addressed in terms of safety and liability. Safety is a useful driver for increased regulation, and insurance companies will be looking for ways to protect themselves and their corporate customers. While driverless cars generate much discussion, similar questions of safety and liability arise from any cars containing significant quantities of new technology. What if the brake algorithm fails? And given the recent history of cheat software by car manufacturers, can we trust the car not to alter the driver logs in order to evade liability for an accident?

In many cases, the consumer can be persuaded that there are benefits from internet-enabled devices, and these benefits may depend on some level of interoperability between multiple devices. But we aren't equipped to reason about the trade-off between accessibility/usability and security/privacy.

For comparison's sake, consider a retailer who has to decide whether to place the merchandise in locked glass cases or on open shelves. Open shelves will result in more sales, but also more shoplifting. So the retailer locks up the jewelry but not the pencils or the furniture, and this is based on a common-sense balance of value and risk.

But with the Internet of Things, people generally don't have a good enough understanding of value and risk to be able to reason intelligently about this kind of trade-off. Philip Howard advises users to appreciate that devices "have an immediate function that is useful to you and an indirect function that is useful to others" (p255). But just knowing this is not enough. True security will only arise when we have the kind of transparency (or visibility or unconcealment) that I referenced in my previous post.


Related Posts

Defeating the Device Paradigm (October 2015)
Pax Technica - The Book (November 2017)
Pax Technica - The Conference (November 2017)
The Smell of Data (December 2017)
Outdated Assumptions - Connectivity Hunger (June 2018)


References

Cory Doctorow, The Coming War on General Computation (2011)

Carl Herberger, How hackers will exploit the Internet of Things in 2017 (HelpNet Security, 14 November 2016)

Philip Howard, Pax Technica: How The Internet of Things May Set Us Free or Lock Us Up (Yale 2015)

Laura James, Pax Technica Notes (Session 1, Session 2, Session 3, Session 4)

Holly Robbins, The Path for Transparency for IoT Technologies (ThingsCon, June 2017)

Jack Wallen, Five nightmarish attacks that show the risks of IoT security (ZDNet, 1 June 2017)

Saturday, November 14, 2015

Towards the Internet of Underthings

#WearableTech #InternetOfThings Once upon a time, the wires in an undergarment merely provided structural support. Now, people may have all sorts of wires and wireless devices hidden under their clothing. Here are some interesting examples.

  • The Foxleaf Bra delivers cancer-fighting drugs through the wearer's skin.
  • An aunt’s death led Kemisola Bolarinwa to develop a wearable device that can pick up Nigeria’s most common cancer much earlier.
  • The @tweetingbra reminds women to examine themselves. (?)
  • The Lumo Lift helps improve posture through app-enabled coaching.
  • Various manufacturers (including Clothing+, OMsignal and SmartLife) produce health vests and sportswear packed with monitors to track your heart rate, breathing rate and the amount of calories you've burnt.

We are now encouraged to account for everything we do: footsteps, heartbeats, posture. Until recently this kind of micro-attention to oneself was regarded as slightly obsessional, nowadays it seems to be perfectly normal. And of course these data are collected, and sent to the cloud, and turned into someone else's big data. (Good luck with those privacy settings, by the way.)

If a device is classed as a medical device, it will be subject to various forms of accreditation and regulation. For this reason, many device makers will be careful to avoid any specific medical claims, but devices that offer some health advice are considered a borderline area.

Another borderline area is hi-tech underpants that protect men from the evil rays allegedly produced by all those wireless devices. Especially the radiation from mobile phones. (Including the Bluetooth that links your underwear to your smartphone.) One brand of underpants that claims to use a mesh of pure silver to create a Faraday cage around the genitals has been banned by the UK Advertising Standards Authority from making any medical claims.

Or maybe you could just switch the whole lot off.



The Wearable Medical Device in Your Future…Is Now! (Marketing Research Association, 28 April 2015)

Jennie Agg, The hi-tech bra that helps you beat breast cancer - and other clothes that can treat or prevent illness (Daily Mail, 10 March 2015)

Valentine Benjamin, Can a bra detect breast cancer? This Nigerian entrepreneur thinks so (Guardian, 9 Aug 2023)

Sarah Blackman, Student designs cancer-fighting bra (Lingerie Insight, 10 Feb 2015)

Britta O'Boyle, SmartLife clothing claims to make sure you never miss a beat (Pocket-Lint, 12 March 2015) 

Rob Crilly, Hi-tech pants "protect sperm from phone waves" (Telegraph 22 October 2014)

Julie Papanek, How Wearable Startups Can Win Big In The Medical Industry (TechCrunch, 19 Feb 2015)

Hannah Jane Parkinson, Lumo Lift review: posture-tracking gadget is a straight shooter (Guardian, 14 November 2014) 

Helen Popkin, Tweeting bra exposed: Genuine support or publicity lift? (NBC News 25 October 2013)

Meera Senthilingam, How a high-tech bra could be your next doctor (CNN, 11 May 2015)

Brendan Seibel, High-Tech Underwear for Adventurous Geeks (21 April 2010)

Mark Sweney, Hi-tech underwear advert banned (Guardian 13 August 2014)

Dan Sung, World Cancer Day - The Real Wonderbra (Wearable, 14 Feb 2015)  


Related posts: Have you got Big Data in your Underwear? (December 2014)

Thursday, May 20, 2010

Google and Received Opinion

Brilliant satire from @newsbiscuit: New Google only searches for sites that match your preconceived opinions.

now so much easier to find exactly what you want to see

I have long complained that Google provides a systematically distorted way of finding out what is going on, and encourages what A.A. Milne called Thinking with the Majority. This is because Google's page ranking algorithms are basically designed for people who want to ask the same questions as everyone else, and get the same answers. Consequently, Google helps to amplify the circulation of Received Opinion.
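The complaint can be made concrete with a toy version of the PageRank idea (Google's real ranking is vastly more complex and largely secret; the graph below is an invented example): rank flows along links, so the page that everyone already links to accumulates still more rank, and the majority view dominates the results.

```python
# Toy link graph: pages 1-3 all link to page 0 (the "received opinion");
# a dissenting page 3 is linked to only by page 2.
links = {
    0: [1],
    1: [0],
    2: [0, 3],
    3: [0],
}

def pagerank(links, damping=0.85, iterations=50):
    n = len(links)
    rank = {p: 1 / n for p in links}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in links}
        for page, outlinks in links.items():
            # Each page shares its rank equally among the pages it links to.
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += share
        rank = new
    return rank

rank = pagerank(links)
# The heavily linked-to page accumulates most of the rank.
assert rank[0] == max(rank.values())
```

Self-reinforcement is built in: the more a page is linked to, the higher it ranks, and the higher it ranks, the more it gets seen and linked to.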

The distortion is further amplified by massive duplication of material from a common source. If you search for a topical story, you will often find hundreds of popular websites repeating exactly the same version of events in slightly different words, and unless you are extraordinarily persistent you may never find a website that gets its information from a different source. See my posts on Google and Spin (1, 2).


@roygrubb raises a related concern - that Google remembers and is influenced by our previous searches. This is not just a privacy issue but also a context issue - our interests may switch from one project to the next. For example, let's say I'm working on a project using SAP, so when I'm searching for technical information on this project I may concentrate on material that is relevant to SAP. But I certainly don't want Google to put an implicit SAP filter on my searches, even if some Google engineer thought this would be helpful to me, because that could seriously prejudice my view of the available technology. Worse, this bias might persist (without my knowledge) when I'm working on a completely different assignment.

I can imagine that Google could build some kind of context-awareness into its search algorithms, so it somehow detects when I move to another project. And (to take a more controversial example) if I search for information about some deadly disease, it can try and work out whether I'm suffering from the disease myself (in which case it can sell me health insurance before it's too late) or enquiring on behalf of a friend or client, or whatever. But that's not the point. The point is the increasingly complicated relationship between our tools and our knowledge, which even many technologically literate people seem touchingly naive about.

Sunday, May 18, 2008

Guardian Angel

From a recent US patent application
An intelligent personalized agent monitors, regulates, and advises a user in decision-making processes for efficiency or safety concerns. The agent monitors an environment and present characteristics of a user and analyzes such information in view of stored preferences specific to one of multiple profiles of the user. Based on the analysis, the agent can suggest or automatically implement a solution to a given issue or problem. In addition, the agent can identify another potential issue that requires attention and suggests or implements action accordingly. Furthermore, the agent can communicate with other users or devices by providing and acquiring information to assist in future decisions. All aspects of environment observation, decision assistance, and external communication can be flexibly limited or allowed as desired by the user.
Twenty inventors are listed, including Gates, William H. (Medina, WA) and Ozzie, Raymond E. (Seattle, WA). The presence of these two names on the patent application is attracting some attention from the blogosphere.
  • a most unusual Microsoft patent application that should intrigue privacy advocates [TheoDP]
  • interesting and frightening at the same time [Dennis Kudin on security]
  • This sounds interesting at first glance, but also a little creepy. ... I'm not so sure I'd be terribly keen on having my device capable of some of those functions. [PDAPro.info]

There is some discussion in the comments to Bruce Schneier's blog about the extent of Bill's and Ray's contribution to this invention. Maybe it's true that Bill and Ray can attach their names to pretty much any Microsoft patent application if they choose. In which case the interesting question is what it was about this particular invention that attracted their interest. 

The name Guardian Angel is leading some commentators to view this as a security mechanism, but it is clearly intended to provide much more than security: a comprehensive mechanism to provide presence and context, which are key elements of some of the things both Bill and Ray have talked about in the past.

There is also some discussion on Bruce's blog about the originality of the invention and the possibility of prior art. You really can't tell this from the summary though; to assess this properly, you would need to look at the whole application including the diagrams, but I haven't managed to access the diagrams. Clearly there are other companies working on mechanisms for presence and context, including the telecoms companies. I had a briefing on this very topic with Avaya recently. See my notes on Presence 2.0.

 

See also: What does a patent say? (February 2023)