Connecting is one of my favourite case studies on the importance of interaction design in an increasingly connected digital age. Produced by Microsoft Design, it features thought leaders from a number of organizations, including Twitter, Method, Nokia, and Arduino. It is now more than 5 years old, but the content is as relevant today as it was when it was created, and that's saying something considering how fast technology changes.
When I first watched Connecting, I found it truly inspiring and I was excited about the future of technology it presented. It opened my eyes to the potential we have to use technology to improve the health and well-being of people around the world. But five years on, I find myself viewing the same narrative through a slightly different lens, one that is a bit more wary of data collection and artificial intelligence.
In the wake of the Cambridge Analytica scandal, I am certainly more careful about the information I choose to share, and I am less likely to take anything I read at face value. I love the convenience of storing everything in the cloud so that I can access all of my files from any device, anytime, anywhere. But at the same time, I increasingly wonder what will be done with all the data being stored online and what implications it will have down the road. At this point in time, there is no single gatekeeper ensuring that all data is stored and shared in an ethical way. Luckily, governments are finally catching on to this fact and are beginning to mandate that every citizen has the right to know what personal information of theirs is being stored and, more importantly, the right to have it deleted. The EU is certainly ahead of the rest of the world on this front with the new General Data Protection Regulation (GDPR); and while this is a major step in the right direction, we must acknowledge that it will still be extraordinarily difficult to have information deleted once it has been shared.
What is perhaps more concerning is that much of the personal data being exploited has more to do with our preferences and habits than with the actual content of what we are sharing. This information is being fed into artificial intelligence programs to make suggestions on everything from what we should buy, to who we should include on email threads, to where we should go and what we should read. I must admit that I have a bit of a love/hate relationship with this advancement, as I genuinely do find some of these suggestions helpful. But it is a slippery slope once we'd rather trust a bot's judgement than spend a few minutes doing our own research, and we need to be mindful of the ethical implications. Personally, I have noticed that increasingly the "news" presented to me on my mobile device is a very small subset of what is actually newsworthy, and is largely based on topics I have shown interest in previously. With my news feed feeling more and more like an echo chamber, I am now actively trying to work around the built-in automation on my device to ensure I am accessing a more balanced worldview. Considering that most people would rather read what's being served to them than do the legwork of finding other news, this should be concerning to all of us.
To be clear, I'm not against technology or the individuals creating it; after all, I am one of them. I think that, generally speaking, the people in this field have good intentions and believe that their work will be helpful to the end-user. That being said, the majority of software product teams are tasked with executing a plan, not with considering the social and ethical implications of how the end product will be used. Mark Zuckerberg certainly never expected that by creating a platform to connect friends online, his software would sway the outcome of a presidential election. But it did, and there will be many more examples like this as artificial intelligence plays a bigger role in our lives. Perhaps Younghee Jung said it best in Connecting: "You cannot necessarily foresee the consequences when people adopt what you have designed." And it is precisely for that reason that those of us in the field of creating and distributing technology have a responsibility to consider what could go wrong, to ask questions, to play devil's advocate, and to push back when we have concerns.
I feel that this is the great conundrum of our time. Technology is here to stay, of that there is no question. So how do we create technology that serves to improve and enrich the lives of people around the world, while ensuring that there are safeguards in place to protect us from ourselves?
I'd love to hear your thoughts on this topic. Please feel free to share in the comments below, and let's keep this conversation open-minded and respectful.