Artificial intelligence (AI) adoption has accelerated in recent years. In 2018, companies actively using AI devoted only 5% of their digital budgets to it; by 2023, that figure had jumped to 52%, according to Vention. The technology is clearly evolving rapidly, but against that backdrop, many have expressed ethical concerns about AI.
“This question is misplaced. Arguing AI’s inherent ethicality can be compared to arguing for the morality of a calculator. AI is merely a tool, and the true question is whether the humans who build or use it have nefarious intentions. The ethicality of AI is, therefore, an extension of the ethics of its creators,” says Jonah Kollenberg, senior AI engineer at 4C Predictions.
For its part, 4C Predictions, an innovative AI-driven sports predictions platform, ensures ethical principles are integrated into the design process from the outset.
“We build our systems with transparency and accountability. This is not only the right thing to do but it is good for business. Our customers need to trust that our predictions are accurate and based on reliable data. Building dishonest systems would only undermine that trust – something no company that intends to last would risk,” says Kollenberg.
The most common cause for concern is models trained on artists' images or journalists' writing without the original authors' permission. These breaches of intellectual property rights have prompted many proponents of AI to implement internal processes ensuring that training data is sourced ethically and that the original creators are credited and compensated. Large language models (LLMs) have also been implicated in the spread of misinformation and disinformation.
A legislative approach
Vignesh Iyer, senior AI engineer at 4C Predictions, believes legislation is crucial for ensuring the ethical development of AI.
“Key legislative areas should include defining AI-related terms, establishing principles like fairness and privacy and ensuring sector-specific regulations in fields such as healthcare and finance. Enforcement would rely on regulatory bodies, regular audits, penalties for non-compliance and protection for whistleblowers,” says Iyer.
While some companies disclose their AI practices, especially in high-stakes areas like hiring and credit scoring, others operate with less transparency. That lack of openness can erode stakeholder trust. As AI becomes more deeply integrated into business models, transparent use and strong ethical governance are no longer just regulatory concerns – they are key to long-term success.
A force for good
“AI is not only a powerful tool but also one that can be used for positive change. Already, AI has been used to improve lives and contribute to the common good. For example, AI chatbots and virtual therapists are used to offer mental health support, providing therapy guidance to individuals who may otherwise lack access to professional help,” says Iyer.
“Ethics must be considered from the beginning of a project with governance measures in place throughout its lifecycle. AI will inevitably continue to shape the world but its long-term success depends on building and maintaining systems rooted in ethical principles,” states Iyer.
4C Predictions is an AI-driven sports predictions platform, dedicated to providing accurate and reliable sports forecasts.