Who’s watching you?

Every time we go online, we leave behind a digital footprint. But how worried should we be about the vast amount of information we share in this way, and about how companies might be using it, asks Sam De Vere.

By The Corporate Communications Team - 9 July 2019

The data protection regulations introduced in the 1990s were struggling to cope with modern technologies and the digital footprints we were leaving whenever we used the internet. Now, thanks to the General Data Protection Regulation (GDPR) and the Data Protection Act (DPA) 2018, we enjoy more protections than ever before from companies collecting our data.

“This new legislation looks to protect individual and consumer data and our fundamental right to privacy,” says James Eaglesfield, Head of IT Governance and Portfolio Delivery at the University of Derby.

“A key principle of the legislation is that companies need to handle, or process, personal data fairly, lawfully and in a transparent manner. Instead of using our data until we opt out, companies must be specific in how they will use the data and, where necessary, get us to opt into this.”

But companies are increasingly blurring these lines, as Richard Self, Senior Lecturer in Analytics and Governance at the University, explains: “Over time, companies learnt to be as broad as possible with their terms so they could reuse our data for all sorts of things, using data sources like Facebook or Twitter alongside their internal CRM data to see what else they could learn about us.

“They have become very good at reusing our data in ways that weren’t originally envisaged, and are using every last trace of information about us. What’s different now is that they’re not just using our data to improve their websites and our experience. They are monetising our ‘digital exhaust’ – the information we leave lying around when we go online – to sell to us.”

Using data to predict human behaviour

In her book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (1), Professor Shoshana Zuboff, the Charles Edward Wilson Professor of Business Administration, Emerita at Harvard Business School, shows how the whole basis of online capitalism is changing:

'Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioural data. Although some of these data are applied to product or service improvement, the rest are declared as a proprietary behavioural surplus, fed into advanced manufacturing processes known as ‘machine intelligence’, and fabricated into prediction products that anticipate what you will do now, soon, and later.

Finally, these prediction products are traded in a new kind of marketplace for behavioural predictions that I call behavioural futures markets. Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are eager to lay bets on our future behaviour.'

One example of a company using data to predict human behaviour is US retailer Target, which faced widespread backlash in 2012 when the New York Times (2) reported that its marketing analytics team had been using customers’ purchase data to predict whether they were pregnant and at what stage.

Target reportedly used these predictions to send money-off coupons for baby products at key points in each woman’s pregnancy, alongside items from its wider range. Things came to a head when a father complained that Target was marketing baby items to his teenage daughter, only to find out that she was in fact pregnant but hadn’t told her family.

Data and our right to be forgotten

“Another of the individual privacy enhancements that GDPR brings is the right to be forgotten,” says James. “This means that when data is no longer accurate, or consent is withdrawn, individuals can request it is deleted or removed. In an age when our digital footprint helps companies like Amazon, Google and Facebook power ads from our browsing history, this protects our individual privacy, but means a lot of work for these companies.”

This is the challenge facing IBM after it created a dataset of a million photos taken from Flickr to train its facial recognition systems to recognise black female faces more accurately. Many of the images were taken by professional photographers who had given permission for non-commercial use, but who now argue that training artificial intelligence is a commercial use.

Although Flickr users can, in theory, opt out of the database, doing so is proving difficult: photographers must email links to the photos they want removed, yet IBM has not publicly shared which photos were included. In any case, the dataset has already been shared with numerous academic organisations for research purposes.

The ethics of facial recognition

In 2017, researchers from Stanford University (3) took photos from social media where people, male and female, had self-identified as straight or gay. Using neural network technology, they found they could identify gay men with up to 91% accuracy from their faces alone, and gay women with slightly lower accuracy.

This poses wider questions about the ethics of facial detection technology, and its potential to violate people’s privacy. As Richard explains: “It’s very clever to be able to do that, but should they? I can think of various political regimes around the world which would like the ability to identify gay males.

“It raises an interesting debate, particularly for analytics for academic research purposes, as to where the question of ethics actually lies. Is it unethical to use such technology at immigration, for example, or was it unethical for the researchers to have done the research in the first place?”

Facebook and Cambridge Analytica

One of the biggest data scandals of recent years involved Cambridge Analytica and Facebook, with the companies accused of harvesting people’s personal profiles without their consent and using them for political campaigning.

“Cambridge Analytica used analyses of people’s profiles of Facebook likes and associations to customise messages to them, reflecting their political allegiances or the issues they cared about,” explains Richard.

“Humans are mostly looking for things they can agree with – they weren’t aware that this was political advertising, and thought it was part of the newsfeed, coming from someone who thought like they did.”

Testing the protective framework

Speaking at the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) hearing on the case (4), Information Commissioner Elizabeth Denham CBE said: “…Online platforms are data controllers under data protection law.

“These organisations have control over what happens with an individual’s personal data and how it is used to filter content – they control what we see, the order in which we see it, and the algorithms that are used to determine this. Online platforms can no longer say that they are merely a platform for content; they must take responsibility for the provenance of the information that is provided to users.”

This high-profile case has been described as a watershed moment in the debate on ethical standards in online media, and the Information Commissioner has acknowledged there is still a long way to go: “There are still bigger questions to be asked and broader conversations to be had about how technology and democracy interact and whether the legal, ethical and regulatory frameworks we have in place are adequate to protect the principles on which our society is based (5).”

1 Zuboff, S. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, Hachette Book Group, New York, USA.
2 New York Times, How Companies Learn Your Secrets, 16 February 2012.
3 Wang, Y. and Kosinski, M. 2018. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images.
4 ICO opening remarks, The Committee on Civil Liberties, Justice and Home Affairs (LIBE) of the European Parliament – Hearing on the Facebook/Cambridge Analytica case, 4 June 2018. https://ico.org.uk/media/about-the-ico/documents/2259093/ico-opening-remarks-ep-libe-facebook-cambridge-analytica-20180604.pdf
5 ICO News, ICO issues maximum £500,000 fine to Facebook for failing to protect users’ personal information, 25 October 2018.

For further information contact the press office at pressoffice@derby.ac.uk.

About the author

The Corporate Communications Team
University Press and PR

The Corporate Communications Team manage the University's Press and PR, putting forward academics, support staff and student representatives for 'expert comment' on different topics to local and national broadcast media. The team is highly experienced in communications and journalism - locally, regionally and nationally - as well as in-house and agency public relations.

Email
pressoffice@derby.ac.uk