The latest PwC research shows that a large number of CEOs have embraced the efficiencies that generative AI can bring to their businesses, with 51% of respondents saying they had already integrated AI into business operations. However, software development specialist Global Kinetic has warned that IT leaders should still exercise some caution, especially while global regulators are still working towards consensus.
PwC's 27th Annual Global CEO Survey shows that 70% of the 4,702 respondents expect generative AI (genAI) to significantly change the way their companies create value over the next three years. However, their enthusiasm was tempered by longer-term concerns about upskilling their workforces and dealing with security, with a considerable 52% saying they had concerns about legal liabilities and reputational risk.
“Software leaders are experimenting with AI, but it’s abundantly clear that not all genAI is ready to take centre stage just yet. As more companies begin to roll the technology out into their daily operations, there is still room for some serious missteps, especially for software development teams, both in-house and contracted. This becomes more complicated when operating in an opaque regulatory environment,” warns Dewald Mienie, Head of Architecture and Technology at Global Kinetic.
Regulators are playing catch-up
As with most new technology regulation, the EU is at the vanguard when it comes to AI. The Union proposed the AI Act in 2021 to protect citizens and ensure better conditions for the development and use of AI technologies.
While 31 countries have already passed individual AI laws and a further 13 are debating them, South Africa has yet to pass any. The country does, however, have solid IP law and robust privacy laws, and in November 2022 the Department of Communications and Digital Technologies established the AI Institute of South Africa, founded on the vision set out by the Presidential Commission on the Fourth Industrial Revolution (PC4IR).
However, with the majority of Southern Africa’s CEOs (71%, according to KPMG's 2023 Southern Africa CEO Outlook) saying genAI is a top investment priority, Mienie says the country’s tech leaders must proceed with caution.
Can you trust the source?
When it comes to software development, AI has already made itself incredibly useful. Many software leaders are leaning into the efficiencies that can be gained easily and relatively cheaply by using AI to write simple code. However, Mienie says the lure of this cheap labour can trip some up.
“While we all have to worry about protecting our IP and where it is being stored and with whom it is being shared, we often forget to question the source of the data the machine has learned from as it starts producing our code. Companies must be able to produce code that clearly does not infringe on their competitors’ IP, and this can be very difficult if developers aren’t sure where the genAI is sourcing its inspiration,” Mienie says.

He goes on to explain that genAI can also create a host of security challenges. When a software developer generates code with ChatGPT, Mienie says, the programme will also tell you which libraries it uses. But because ChatGPT also infers certain things, it can, for example, make use of, or even invent, a library that does not exist. Bad actors simply have to find these hallucinated library names and publish malicious packages under them; once those packages are pulled into a company’s system, they open up any number of attack vectors.
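To make the risk concrete, here is a minimal sketch of one possible defence: checking every AI-suggested dependency against PyPI's public JSON API before installing it. The package names in the example are hypothetical placeholders, and the check itself illustrates the principle rather than a complete supply-chain control.

# A minimal sketch: verify AI-suggested dependencies against PyPI before
# installing them. The package names below are hypothetical placeholders.
import json
import urllib.error
import urllib.request

PYPI_URL = "https://pypi.org/pypi/{name}/json"  # PyPI's public JSON API

def package_exists(name: str) -> bool:
    """Return True if `name` resolves to a real package on PyPI."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10) as resp:
            json.load(resp)  # a valid JSON body confirms the package exists
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unknown name: possibly an AI hallucination
            return False
        raise  # other HTTP errors are lookup failures, not proof of absence

# Screen a dependency list produced by a genAI assistant before installing.
suggested = ["requests", "some-hallucinated-utils"]  # hypothetical names
for pkg in suggested:
    verdict = "found on PyPI" if package_exists(pkg) else "NOT on PyPI, do not install"
    print(f"{pkg}: {verdict}")

Note that a check like this only proves a name exists, not that the package is trustworthy; a squatted malicious package would pass it, so unfamiliar dependencies still need human review.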
Take a risk-based approach
Mienie adds that companies should take a risk-based approach to privacy when using programmes such as Copilot, which plug into a company’s integrated development environment (IDE). While AI vendors may offer privacy guarantees, he says a development company or department can never be certain how much of its code is being stored on remote servers.
“While there is no reason to assume that AI companies are maliciously holding onto your data, there is an expectation that software leaders will do all that is necessary to protect their unique IP. This is especially the case for smaller companies who would be hard pressed to go up against any of the big AI companies to prove in court that their code was being unfairly used,” Mienie says.
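What a risk-based approach might look like in practice is a policy question, but a hypothetical sketch helps illustrate the idea: route low-risk files to cloud-based assistants and reserve high-risk code for on-premises tooling. The directory markers and the policy below are assumptions for illustration only.

# A hypothetical sketch of a risk-based gate: decide which source files may be
# shared with cloud-based AI tooling. Markers and policy are illustrative.
from pathlib import Path

HIGH_RISK_MARKERS = ("secrets", "crypto", "licensing")  # assumed sensitive areas

def cloud_ai_allowed(path: Path) -> bool:
    """Return False for files the policy reserves for on-prem tooling only."""
    return not any(marker in path.as_posix().lower() for marker in HIGH_RISK_MARKERS)

# Example: classify a handful of files before enabling AI assistance on them.
for f in (Path("src/ui/button.py"), Path("src/crypto/keys.py")):
    tool = "cloud AI assistant permitted" if cloud_ai_allowed(f) else "on-prem tooling only"
    print(f"{f}: {tool}")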
Mienie also points out that Copyright Shield, which gives enterprise clients using OpenAI’s software legal cover in copyright infringement cases, could actually work against smaller companies when it comes to protecting their work.
Given the lack of consistent global regulatory clarity, Mienie suggests that software leaders be transparent with clients: declare that they are using AI tooling for efficiency gains, acknowledge the small risk that IP could be leaked, and explain the difference between on-premises and cloud-based AI tooling. Including an AI opt-out clause in the client contract protects the software developer and keeps the client informed.
Despite the risks of genAI, Mienie says companies don’t have the luxury of waiting to see how the technology matures.
“The best approach is to identify low-risk products that can quickly and easily help improve your productivity. Once there is more certainty around regulations, you can roll the technology out more broadly within your organisation. For now, the most important lesson for software developers is that they should not assume genAI will always act in their best interests, and they should ensure both they and their clients are properly informed and protected,” he says.