Data privacy is a hot topic. And if 2017 taught us anything, it’s that it’s here to stay. Concerns over data privacy underpinned many of the year’s dominant conversations, from politics and Brexit to the advertising industry and the way we navigate our lives on social media.
Marketers will be familiar with the new GDPR legislation, which promises to give greater rights and protection to individuals while clamping down on rogue practices. Others will be familiar with government calls to invest more in the UK’s profitable data economy. The rise of cyber-attacks – from the 2016 American election to the NHS earlier this year, to name a few – has highlighted how vulnerable everyone is and how easily our data can be exposed. A recent survey from ComRes showed that only one fifth of UK citizens have trust and confidence in organisations storing their personal information.
To build more trust, do we also need to educate ourselves on where our data goes?
Nowadays, we’re so accustomed to signing up for apps and websites with our social media credentials that it’s second nature: we don’t bat an eyelid at the pre-checked boxes detailing what we’re agreeing to give away — whether that be our age, gender, name or location. And even if we did, getting access to so much content now relies exclusively on us giving away these seemingly impersonal trinkets of information.
Claims that “data is the new oil” have been around for over a decade. While the analogy is perhaps over-used, it perfectly captures how valuable data is as a resource and currency. According to The Economist, in the first quarter of 2017, the five most valuable firms listed globally earned over $25 billion in net profit. These were Alphabet (Google’s parent company), Amazon, Apple, Facebook and Microsoft. It’s no coincidence that these companies are also those that hold the most user data.
The current landscape is dominated by a few tech giants, whose use of our data is not always transparent despite it being used for their commercial gain. It’s unclear to most people how much data they’ve historically given away, how it is used, and how much money they have implicitly created for these corporations.
It’s almost impossible to get through a day without leaving a data trail: every time we open our phone, search Google, pay with contactless and swipe loyalty cards, we’re giving something away. This data isn’t stored on a centralised system, instead being continually traded for different purposes. It’s hard to keep tabs on where our data ends up, and even harder to enforce ethical standards.
We don’t tend to lose sleep over how this data is used, nor feel too uncomfortable about those specifically targeted ads trying to sell us the same products we deliberated over days earlier. It’s convenient when our phone geo-tags our photographs, when our credit card details are pre-filled and when our travel apps guess where we’re travelling to, and map our route home.
Increasingly, companies are offering content in exchange for data. On the surface, this seems like a good trade-off: there’s no monetary price for the user, they get to unlock content and companies can use the data to optimise their product. This improves the user journey, while generating more company revenue.
But, as this practice becomes more commonplace and symptomatic of a larger cultural shift, should we think about who we’re giving our data to and how they’re using it? Our personal data is increasingly owned by thousands of stakeholders and we’re unaware of exactly how they get it, and to what end. Unconsciously giving away our data serves to help big corporations grow even bigger without any discernible, personal benefits.
Signing up to a website with our name, gender and city seems innocuous but when this data is paired with that from another website or connected product, such as our interests and where we eat out, it becomes invaluable to marketers. An in-depth picture can be created, and our private data becomes public. Facebook can purportedly tell when relationships are doomed, and your car insurance company knows when and where you drive. Suddenly websites can detect behaviour we’re not even conscious of ourselves.
According to the American computer software company Domo, 90% of all data today was created in the last two years. It is hard to conceive how the next decade may pan out, with an increasingly connected world reliant on technology and the internet. Domo’s 2017 report also found that the world internet population has grown by 7.5% since 2016 and now stands at 3.7 billion people, each of whom is trading their information every day.
It makes sense, therefore, that Artificial Intelligence (AI) has boomed over the last two years, too. The sweeping advancements in AI are only made possible by the increased amounts of data available. Early applications of AI are being used to predict behaviour both on and offline. When Google autocompletes our search queries, it is drawing on the swathes of past searches it has stored.
Data is important for marketing and product innovation, but it’s also the backbone of AI. AI needs data to exist, and to learn from. From home devices to transport systems and buildings, products are getting smarter and more intuitive based on the information being put into them. AI techniques such as machine learning can interpret data and extract meaning from it in a way that is almost impossible for humans to do.
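To make this concrete, here is a minimal sketch of the kind of pattern-matching a machine-learning system performs on behavioural data: given a handful of known user profiles, it predicts how a new user will behave by finding the most similar past examples. Everything here — the profiles, the numbers and the “clicked an ad” label — is invented purely for illustration, and real systems use far richer data and far more sophisticated models.

```python
from math import dist

# Invented toy data: (age, weekly site visits) -> did this user click a targeted ad?
training = [
    ((22, 14), True),
    ((25, 11), True),
    ((30, 9),  True),
    ((41, 2),  False),
    ((47, 3),  False),
    ((52, 1),  False),
]

def predict(profile, k=3):
    """Majority vote among the k most similar known profiles (nearest neighbours)."""
    nearest = sorted(training, key=lambda item: dist(item[0], profile))[:k]
    votes = [label for _, label in nearest]
    return votes.count(True) > votes.count(False)

print(predict((24, 12)))  # a young, frequent visitor
print(predict((50, 2)))   # an older, infrequent visitor
```

Even this crude sketch shows why data volume matters: the more labelled examples the system has seen, the finer the behavioural distinctions it can draw — which is precisely what makes large-scale data collection so valuable to the companies doing it.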
The arduous task of making huge repositories of data meaningful now sits with automated systems, which can do so at a speed previously thought impossible. The result is technology which can predict our behaviour and make decisions on our behalf. Rather than fearing the popular narrative of AI robots, should we instead be afraid of these intelligent virtual systems, which learn from, and subsequently inform, our behaviour?
The topic of data management and AI arose at our 2017 Huxley Summit, which brought together business leaders, scientists and policy-makers to discuss ‘science and innovation in a post-truth world.’ Amongst the panel there was agreement that there must be more transparency on how citizens’ data is used, and that data ethics is a cloudy area which lacks regulatory framework. There’s ambiguity over who ‘owns’ data, so it’s hard to apply consistent standards to it.
There is a clear acknowledgement of this, now enshrined in law, as the new General Data Protection Regulation (GDPR) comes into force in May 2018. The legislation intends to strengthen and unify data protection across the EU. In short, users must be made aware of exactly what their data will be used for and must give consent to its usage. Those annoying daily emails from unknown websites you signed up for ten years ago will cease to be. This is a stepping stone, and reflects a larger concern that data ethics and regulation must come to the fore. However, is it enough? Many people, including companies, have voiced concern that the new legislation is too little, too late and too complex to be understood by the people it serves to protect.
One of the key speakers at the Huxley Summit, Chi Onwurah, Shadow Minister for Industrial Strategy, Science & Innovation, presented the view that people are merely sources of the data which is then used to control and coerce them. Speaking at the Summit she said: “Citizens and consumers should have control and ownership of their own data. This shouldn’t mean they’re carrying it around on USB drives. We don’t have the technical and regulatory infrastructure in place to make it possible, but there are third parties looking at ways consumers can store data, that they choose to have, in certain places and have complete control of it.”
Logistically, most people wouldn’t be able to manage their own data. However, we should own it and control where and how it’s used. The panel at the Summit repeatedly spoke of the importance of creating a regulatory framework, underlining the current absence of such structures.
It’s important to note that data brings enormous opportunities. When used effectively and teamed with machine learning and AI, it can yield endless possibilities for improvement, from healthcare to driverless cars. At the Summit, Kenneth Cukier, Senior Editor for Data and Digital at The Economist, cited research from Harvard and Stanford that tested whether a machine learning algorithm could be better than a pathologist at diagnosing cancer. Three of the signals the algorithm spotted were ones humans didn’t even know to look for.
You got the power to let power go
So, the question remains: what happens to our data in the future? This conversation has picked up considerable pace in 2017 and will continue for years to come. Under the new GDPR legislation, companies will have to be more transparent about how user data is used for marketing purposes, but will this simply result in people ticking off long and unintelligible T&Cs? And will companies actually observe the new laws? Finally, does there need to be a wider education in how we give away our data and what we can do to avoid it being misused? The value of data and its benefits are widely known, but with data comes power, and in the immortal words of Kanye West, “no one man should have all that power”.