Data: facts and statistics collated and prepared for analysis. A simple concept that has become one of the most lucrative assets at our disposal. Leveraging this information through artificial intelligence (AI) and machine learning (ML) opens up a plethora of possibilities for what we can achieve with it – whether in the name of empowerment or manipulation, however, is up for debate.

At the current rate of technological development, an endless game of cat and mouse is being played between legislators acting in the public interest and the tech giants that harvest our information for profit and power.

Companies like Google entered our lives as big, shiny, seemingly altruistic organisations, providing high-quality services at little to no cost, and as a result have ingrained themselves into our day-to-day lives. However, there’s an underlying cost to all of this; if there weren’t, the model simply wouldn’t be viable as a business. As the saying goes: if you’re not paying for the product, then you are the product.

Google is a pioneer of the big data model, having figured out the most efficient way to monetise the information we’re so happy to throw at it. Every move we make in our phones, search bars, and social media is tracked and stored on servers in discreet, high-security locations. As alarming as this sounds, no human is actually monitoring any of this information, so you needn’t worry about what you’re searching in your incognito tab when no one’s looking. The reality, though, is far more sinister.

The trouble with models such as Google’s and Facebook’s is that the ML and AI used to manipulate all of this information are designed around just three key objectives: keeping us clicking, keeping us sharing and keeping us buying – none of which looks out for our wellbeing as users, or for society as a whole.

This is a system that constantly evolves to be sneakier, more manipulative and more deceptive, continually tweaking the content we’re shown to get better and better at achieving these three objectives, regardless of the impact it may be having on us.
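To make those objectives concrete, here is a deliberately simplified, hypothetical sketch of how a feed ranker might score posts purely on predicted engagement. Every name, weight and number is my own illustration – none of it is taken from any real platform – but the key point mirrors the argument above: wellbeing is a field the scoring function simply never reads.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    p_click: float    # predicted probability the user clicks
    p_share: float    # predicted probability the user shares
    p_buy: float      # predicted probability of a purchase
    wellbeing: float  # -1 (distressing) .. +1 (uplifting)

def engagement_score(post: Post) -> float:
    # Only the three commercial objectives are optimised:
    # clicking, sharing, buying. 'wellbeing' is never consulted.
    return 0.5 * post.p_click + 0.3 * post.p_share + 0.2 * post.p_buy

feed = [
    Post("Outrageous scandal!", p_click=0.9, p_share=0.8, p_buy=0.1, wellbeing=-0.9),
    Post("Calm long-read essay", p_click=0.3, p_share=0.1, p_buy=0.0, wellbeing=0.8),
]

ranked = sorted(feed, key=engagement_score, reverse=True)
# The distressing but clickable post ranks first.
print([p.title for p in ranked])
```

Nothing stops the ranker from surfacing harmful content; its objective function was simply never asked to care.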

By comparison, the thought of a computer geek sitting behind a screen, watching what you do and analysing your behaviour, doesn’t seem so bad – at least there’s a shred of humanity in that process. In the words of the great Mark Zuckerberg himself: ‘What’s good for the world isn’t necessarily what’s good for Facebook’.

So, what’s the actual product of all of this content manipulation? Well, it shapes how we perceive reality. Twitter, Facebook, Instagram and Google have, for a large portion of us – Gen Z especially – become the main portal into current affairs and a source of news. So, if the information we’re consuming is all fed to us based on what we want to see, or at least what we want to see from the perspective of an inhuman algorithm, our vision of the world outside of our immediate vicinity is being skewed and distorted.

We’ve already begun to see the effects of all of this in the growing divide between the political left and right. Each side has its own arguments, drawn from its own sources, all curated by personalised algorithms. This means each side lives in its own bubble of reality, where its information isn’t necessarily wrong, but is biased and blind to content that could provide a counter-argument and help shape a more rounded opinion.
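The feedback loop behind those bubbles can be sketched in a few lines. This is a toy model of my own, not any real recommender: once a user leans one way, a system tuned purely to past engagement stops surfacing the other side entirely.

```python
import random

random.seed(0)

def recommend(history: list[str], catalogue: list[str]) -> str:
    """Naive personalised recommender: favour whichever leaning the
    user has engaged with most; counter-arguments are never surfaced."""
    if not history:
        return random.choice(catalogue)
    favourite = max(set(history), key=history.count)
    matches = [item for item in catalogue if item == favourite]
    return random.choice(matches)

catalogue = ["left"] * 5 + ["right"] * 5  # a perfectly balanced catalogue
history: list[str] = ["left"]             # one initial click

for _ in range(10):
    history.append(recommend(history, catalogue))

# After a single initial 'left' click, the feed collapses into a bubble.
print(history.count("left"), history.count("right"))
```

The catalogue itself is balanced; it is the curation, not the content, that produces the one-sided diet.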

Setting the societal argument aside, there’s a lot to be said about how this affects us on an individual level. The content that grabs the most attention tends to be controversial and shocking – in other words, negative. This frames the world in a much darker light, which can ultimately take a toll on our mental health – in addition to all the aspirational representations of other people’s lives that continue to distort our view of the world.

Despite all this, it isn’t all doom and gloom – there’s enormous potential for good in data, social media, ML and AI, and there are ways we can protect ourselves from the manipulation and deception that have come with them. One of these is a concept called a MID (Mediator of Individual Data). Formulated by tech guru Jaron Lanier, a MID works much like a trade union: you join one that suits your needs, and it looks after who uses your data and how it’s being used. Companies like Facebook then have to come to the MIDs to buy the data they want, and the profits are fed back to you in a number of ways: dividends, one-off payouts or distributions. Sounds good, right?
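The revenue flow of a MID can be sketched very simply. This is a minimal illustration under assumptions of my own – an even split among members after a small operating fee – and the class and method names are invented for the example, not part of Lanier’s proposal.

```python
class MID:
    """Toy Mediator of Individual Data: brokers members' data
    collectively, much like a trade union, and returns the proceeds."""

    def __init__(self, fee_rate: float = 0.05):
        self.fee_rate = fee_rate             # cut kept to run the mediator
        self.members: dict[str, float] = {}  # member -> accrued payout

    def join(self, member: str) -> None:
        self.members.setdefault(member, 0.0)

    def sell_data(self, buyer: str, price: float) -> float:
        """A company buys access via the MID; proceeds are split
        evenly among members after the mediator's fee."""
        net = price * (1 - self.fee_rate)
        share = net / len(self.members)
        for m in self.members:
            self.members[m] += share
        return net

mid = MID()
for person in ["alice", "bob", "carol"]:
    mid.join(person)

mid.sell_data(buyer="BigSocialCo", price=300.0)
# Each member accrues an equal dividend: 300 * 0.95 / 3 = 95.0
print(mid.members["alice"])
```

The essential shift is that the buyer negotiates with a collective that has bargaining power, rather than extracting data from individuals one by one for free.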

Even if this concept doesn’t take off, we should at least have the ability to see where and how our data is being used. Not only would this make the use of data more open, it would also make data more accessible to other, smaller organisations – potentially ones with a more socially positive agenda.

My point is that our data has a great deal of value and should belong to us. If full control isn’t achievable, then full visibility and transparency should be, so that when we engage with the services our lives now depend on, we can do so in full knowledge of the risks and rewards that come with them.