Every year, the world’s digital transformation efforts bring us better, more effective AI tools. They’re capable of wonderful, increasingly complex tasks, and they can be game-changing for organisations.

But AI has also caused great controversy, and the more we use it, the more controversy it seems to bring.

Using AI without thinking about its impact carries real risk. In fact, you might not even realise that your use of AI is unethical – and here’s why.

Why is it important to apply an ethical lens to AI?

Let’s consider the context around artificial intelligence, as this helps explain why its ethics are increasingly talked about.

  1. The AI industry is expected to reach US$228.3 billion by 2026 (Global Industry Analysts).
  2. AI is now in use by the majority of businesses in either a full or trial capacity – 57% of companies, to be exact, up from 44% just three years ago (Boston Consulting Group/MIT Sloan).
  3. The Australian government has invested AU$124.1 million into the technology to help the nation’s businesses develop and adopt more AI (Australian Government).

More AI can mean more risks

The thing is, while AI is growing in scope and intelligence, it’s not smart enough to make its own decisions about what is right and wrong. It must be programmed with a predefined set of ideas, notions and morals – without this conditioning, many tasks are simply too complex for today’s limited AI.

This is where some of the ethical risk starts to take shape.

  • Example: Let’s say you program software to find pictures of cute cats. This should be a simple task, and for a human it would be easy. But who defines ‘cute’? The answer is different for everyone. It may be a simplistic example, but it shows that someone, at some point, has to program the AI with their own bias – otherwise the algorithm simply couldn’t define ‘cute’. Because of this, there’s no such thing as a neutral AI: it always carries a bias (see the sketch below).
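
To make this concrete, here’s a minimal, hypothetical sketch in Python using scikit-learn. The toy numbers stand in for real image features, and the labels are one person’s judgement of ‘cute’ – which is exactly where the bias enters.

```python
# A hypothetical sketch: one labeler's taste becomes the model's taste.
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for image features extracted from cat photos.
photos = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8]]

# One person's yes/no verdicts on 'cute' - a different labeler
# would produce different labels, and therefore a different model.
labels = [1, 1, 0, 0]

model = LogisticRegression().fit(photos, labels)

# The model now reproduces that single labeler's definition of 'cute' at scale.
print(model.predict([[0.85, 0.2]]))  # -> [1], 'cute' by this labeler's standard
```

Swap in a different labeler and you get a different model – the algorithm never had a neutral definition of ‘cute’ to begin with.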

Biased AI causes trouble

With AI infiltrating so much of our daily personal and professional lives, biased AI becomes a real problem.

In a perfect world, these algorithms and robots would be programmed to be as fair as possible to as many people as possible. But we know they’re not, which means that just as we humans sometimes marginalise and judge certain groups, our AI copies us.

This has already happened. AI algorithms in the US justice system that autonomously estimate a person’s risk of reoffending have been shown – most famously in ProPublica’s investigation of the COMPAS tool – to be harsher on Black Americans and more lenient on white Americans, even when those white Americans are seasoned criminals.

While some groups are taking action to produce better, more ethical AI, there’s a lot of work yet to be done and that means businesses must choose carefully when adopting AI.

Examples of how AI could be used unethically

Given that most of us aren’t in the business of assessing criminal reoffending risk, what are more common examples of potentially unethical AI use?

Decision-making is the big one: Decisions made by a computer can quickly become unethical. Think financial decisions, hiring and firing of staff, or the ultimate call – life or death (e.g. how does an autonomous car handle the famous trolley problem?).

Data collection and surveillance/tracking: The more we know about our customers, theoretically the better we can serve them. But the more data we collect on customers, the more intrusive it gets. Every company that deploys any kind of big data must stop and ask itself: at what point does the collection and use of data breach our company values?

How to consciously apply ethics to AI

  1. Write new policies: Develop a company AI ethics policy that governs how your company will and won’t use AI, based on your values and mission. Revisit this annually, as technology will of course keep changing.
  2. Hire a chief AI ethics officer: Appoint an AI ethics officer at the C-level of the business – someone who will study the regulations and the debates, guide the company’s ethical use of AI, and contribute expertise to decisions about future AI investments. This could be a standalone role, or a responsibility folded into someone else’s.
  3. Build human oversight: There should always be humans overseeing an AI’s performance and checking the quality of its output, as well as human review of any processes that govern how AI is used in the business. This is especially true for any task where empathy is required (a minimal routing sketch follows this list).
  4. Sweep your company for bias: Even with all of the above, you may still develop or buy an algorithm that comes with bias. Your technology teams can use a couple of tools to help sweep AI for bias, which you can read more about on VentureBeat and in this research paper (a hedged example using one such open-source tool also follows this list).
  5. Put the right building blocks in place before investing in AI: These blocks consist of people, processes and strategy. With all three thoughtfully considered and updated to cover responsible AI use, you can help ensure that every software purchase aligns with the business, its mission and values, and that staff will use it correctly.
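
On point 3, one common way to build human oversight is a confidence threshold: the model acts alone only when it is sure, and everything else is routed to a person. The sketch below is a minimal, hypothetical Python illustration – the threshold, case names and review queue are placeholders, not a prescribed standard.

```python
# A minimal human-in-the-loop sketch. The 0.90 threshold and the review
# queue are hypothetical design choices used to illustrate the pattern.
CONFIDENCE_THRESHOLD = 0.90  # below this, a person decides, not the model

review_queue = []  # cases awaiting human judgement

def route_decision(case_id, prediction, confidence):
    """Accept the model's output only when it is confident enough;
    otherwise escalate the case to a human reviewer."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"{case_id}: auto-processed as '{prediction}'"
    review_queue.append({"case": case_id, "suggested": prediction})
    return f"{case_id}: escalated to human review"

print(route_decision("loan-001", "approve", 0.97))  # the model decides
print(route_decision("loan-002", "decline", 0.61))  # a human decides
```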
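
And on point 4, here’s a hedged example of what a bias sweep can look like in practice, using the open-source Fairlearn library (one of several such tools – not necessarily the ones described in the articles above). The hiring data below is entirely made up for illustration.

```python
# A hypothetical bias sweep with Fairlearn: compare model behaviour
# across groups defined by a sensitive attribute.
import pandas as pd
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

# Made-up hiring-model outputs alongside a sensitive attribute.
df = pd.DataFrame({
    "actual":    [1, 0, 1, 1, 0, 1, 0, 0],
    "predicted": [1, 0, 1, 0, 0, 1, 0, 0],
    "gender":    ["F", "F", "M", "F", "M", "M", "F", "M"],
})

# Accuracy broken down per group - large gaps are a red flag.
by_group = MetricFrame(
    metrics=accuracy_score,
    y_true=df["actual"],
    y_pred=df["predicted"],
    sensitive_features=df["gender"],
)
print(by_group.by_group)

# Difference in selection rates between groups: 0.0 means parity.
dpd = demographic_parity_difference(
    df["actual"], df["predicted"], sensitive_features=df["gender"]
)
print(f"Demographic parity difference: {dpd:.2f}")
```

A large accuracy gap between groups, or a demographic parity difference well above zero, is a signal to investigate before the model goes anywhere near production.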

Need help digitally transforming with humans in mind? We’re here for you

At Ko-Lab8 we’re experts in digital transformation that focuses on human enablement and good ethics. We understand the power of technology, but know that it needs smart people practices behind it to be a force for good in the modern world.

To learn more about how we might be able to help your organisation, contact us today.