Interest in digital ethics has gradually grown over the last few decades, but accelerated in recent years with the introduction of Australia’s Notifiable Data Breach Scheme, the GDPR and similar legislation around the world.
Ample regulatory attention is now given to the responsible use of data, artificial intelligence (AI) and other digital technologies. This is both a blessing and a curse.
It’s a blessing because it moves digital ethics from being something voluntary, considered only by visionary organisations, to being a mandatory topic for all. On the other hand, it’s a curse because ethics is about so much more than regulatory compliance. Ethics then runs the risk of becoming a tactical box to tick, instead of an opportunity to create value.
Data and analytics leaders must move digital ethics beyond reactive checklist exercises toward proactive ethics by design for innovations. Protecting privacy, deploying responsible AI and ethically using other emerging technologies are key success factors for business and societal value creation.
Move beyond compliance-only ethics
Gartner’s discussions with clients reveal that attention toward digital ethics remains mainly reactive and compliance driven. The most interest originates from heavily regulated sectors such as banking, insurance and government.
Digital ethics has also expanded from topics such as social media and big data, to AI and machine learning. The need to be more proactive raises the question of which newer technological innovation trends digital ethicists should anticipate and prepare for.
Obviously, there’s nothing wrong with being compliant with laws, regulations and corporate policies. After all, depending on the specific legislation, non-compliance may result in significant fines, brand damage, personal accountability and other risks, not to mention its misalignment with corporate social responsibility.
According to Gartner, 35 per cent of the top 50 technology firms by revenue will face a consumer liability claim by 2021 due to safety and privacy lapses in their digital products or services.
Limiting efforts to only compliance, however, is a missed opportunity. Instead of doing only the bare minimum in a superficial, reactive approach, be progressive and make digital ethics an intrinsic part of your corporate culture. This will help you become effective in proactively applying digital ethics to adopt and absorb innovations more quickly.
By making digital ethics part of your business DNA, you’ll gain the awareness and practical experience to use new technologies in a way that creates the most value for customers and other stakeholders. In addition, addressing digital ethics early on in the innovation process by design helps avoid lingering discussions on what is and isn’t allowed.
Lead digital transformation with digital ethics
Ensure that digital ethics is leading, not following, digital transformation by continuously monitoring new digital trends, such as augmentation and autonomous systems. Anticipate and enable these trends with updated ethical guidelines and tools, minimising the unintentional and undesired consequences of new digital applications.
Digital ethics will continue to evolve as the next waves of digital technologies become mainstream.
Wave 1 — Social media behaviour
This comprises awareness and concern about the impact of the internet, and in particular social media, on human communication, social relations and emerging group behaviour.
Social media increasingly plays a key role in public opinion, elections and even revolutions. In addition, fake news, misinformation, bots and viral events all illustrate the need for a digital ethics perspective.
Wave 2 — Big data and privacy
With companies and governments increasingly able to collect huge amounts of data about customers and citizens, the protection of privacy has become an ever more important topic in most parts of the world. In addition, privacy has become a hot topic during COVID-19 in the context of using mobile phone location data for contact tracing.
Wave 3 — Responsible AI
Automating decisions on the basis of machine learning amplifies risks such as bias, discrimination and unfairness. The diversity and representativeness of training data, as well as the explainability of “black box” models such as deep learning networks, are of key concern.
Over the last two years, the discussion of digital ethics has centred on AI, increasing the stakes yet again. The societal debate around the ethics of AI and big data versus privacy has raised awareness of the importance of digital ethics among people, companies and governments, particularly since COVID-19 began.
It’s important to keep momentum. Don’t think that it’s done once the current discussions lead to a first good implementation of digital ethics around your AI initiatives. New waves of digital technologies are on the horizon that have significant ethical impact.
Wave 4 — Impact of augmentation
This is where the physical and virtual worlds blend. Augmentation has a significant impact on people’s identity, as well as people’s grasp on reality and truth.
The many virtual meetings over the past 12 months, for example, are likely to have significant social and organisational impacts, including changing the way people perceive and identify with their corporate culture or communicate and collaborate.
Wave 5 — Autonomous system risks
Technologies have become actors in society, making autonomous decisions for or on behalf of people that affect their environment. There are ethical questions around how to recognise that agency, and how autonomous systems affect people’s free will and freedom.
The challenge for digital ethics is to stay ahead of the curve: maximising the positive impacts and minimising the negative ones in future digital innovations. Ultimately, ethics and the responsible use of an innovation should be an integral part of its development, an approach known as “ethics by design.”