
South Africa: POPIA considerations in the age of ChatGPT

12 June 2023
– 6 Minute Read


Overview

  • Data protection has become a major concern for businesses of all sizes in South Africa. With the use of artificial intelligence (AI) platforms such as ChatGPT becoming increasingly widespread, organisations need to be aware that this innovation comes with the responsibility of ensuring compliance with the Protection of Personal Information Act (POPIA).
  • Although not specifically regulated in South Africa, the use of ChatGPT has legal implications for the way that personal information is processed.

The use of Artificial Intelligence (AI) by businesses and individuals is fast gaining traction, with new platforms such as ChatGPT emerging, each seeming to be an improvement on those already in the market. From a legal perspective, legislation generally does not evolve rapidly enough to keep up with technological changes, which are occurring with ever-increasing pace and scope. This raises important questions about the implications of the use of AI, particularly ChatGPT, for data protection.

There are currently no regulations dealing specifically with the use of ChatGPT in South Africa, whether in a data protection context or otherwise. The Protection of Personal Information Act 4 of 2013 (POPIA) only regulates the processing of personal information using automated means and does not address the full extent of the capabilities of AI systems such as ChatGPT.

ChatGPT is built on a set of techniques referred to as deep learning, in which enormous amounts of data are used to train an AI system to perform a task. ChatGPT was trained on text databases from the internet, including some 570 GB of data drawn from books, Wikipedia, articles, and other online sources. It performs a wide range of activities, such as sourcing and preparing data collected from the internet, receiving human feedback, and answering questions, all with the goal of having machines and systems perform tasks that have historically been performed by people. It has been praised for its continuous learning capabilities: it refines its understanding of questions based on the inputs submitted by users and the outputs generated for them, some of which include personal information.

The Information Regulator (South Africa’s data protection authority) has held internal discussions on the regulation of AI technologies to ensure that the personal information of data subjects is not compromised. The chairperson of the Information Regulator, Advocate Pansy Tlakula, highlighted in an interview with ITWeb that there are growing concerns about the use of AI platforms. However, she has acknowledged that the Information Regulator still needs to fully appreciate the technical issues surrounding the use of AI platforms such as ChatGPT. Advocate Tlakula has highlighted that because the use of AI software platforms is uncharted territory in South Africa, technical information would have to be collected prior to the introduction of guidelines, or the development of a framework.

Examples of the possible guidelines that the Information Regulator could consider implementing include:

  • requiring responsible parties (the businesses that determine the purpose of and means for processing data subjects' personal information) to develop and maintain internal AI policies that regulate the types of processes for which businesses may use AI;
  • limiting the types of information that businesses and their employees may upload onto AI platforms such as ChatGPT; and
  • requiring responsible parties to inform data subjects when their identifiable personal information has been uploaded onto an AI platform (similar to the notification requirements for security breaches).

Even with the research that the Information Regulator has to undertake in relation to how AI platforms operate, there are still concerns about whether regulations would be sufficient to deal with the risks posed by AI software platforms for the protection of personal information. The question also arises whether such regulations will be aligned with continuing technological innovation, which could render such regulations inadequate, if not obsolete, almost as soon as they are introduced.

ChatGPT and the rights of data subjects

The rights of data subjects remain enforceable even in the age of AI. Responsible parties have the same general obligations under POPIA when they elect to use ChatGPT in the ordinary running of their businesses. Responsible parties have a duty to safeguard the personal information that they process, to adhere to the conditions for the lawful processing of information under POPIA, and to provide data subjects with access to their personal information. It is advisable that the use of ChatGPT by all employees is regulated through internal AI usage policies to ensure compliance with POPIA.

Data subjects have the right to be notified that their personal information is being collected, and have the right to request the correction, destruction, or deletion of their personal information. POPIA also places a limitation on the further processing of personal information belonging to data subjects by responsible parties.

Organisations and businesses are the responsible parties in relation to the personal information of their customers and employees (the data subjects). They therefore have an obligation to ensure that their processing of personal information does not in any way limit the rights of data subjects under POPIA.

When employees in an organisation use AI platforms in the performance of their tasks and personal information belonging to data subjects is deliberately or inadvertently uploaded onto such a platform, the data subjects are generally unaware that their personal information has been collected and (further) processed by the platform. This deprives data subjects of the opportunity to exercise their right to consent to the processing of their personal information.

In the event that data subjects become aware of the processing of their personal information on ChatGPT and elect to exercise their right to have their personal information deleted from the platform, they will struggle to do so, because any personal information uploaded onto ChatGPT is held by OpenAI (the company that developed ChatGPT), which is based in the United States of America. The United States does not have comprehensive federal data protection legislation similar to POPIA and does not offer an equivalent level of protection for personal information.

Liability under POPIA

Should a responsible party fail to adhere to the provisions of POPIA and fail to ensure the proper handling of personal information, the Information Regulator may, through conducting its own investigation or acting upon a complaint received from a data subject, issue an enforcement notice to the responsible party. Failure to comply with an enforcement notice is a criminal offence. On conviction, the responsible party would be liable for a fine. The Information Regulator can impose an administrative fine of up to ZAR 10 million.

Conclusion

It is recommended that businesses ensure that personal information is collected, processed, and stored in compliance with POPIA and its regulations, even when ChatGPT is used. This can be achieved through the regular monitoring and auditing of the use of personal information within organisations to ensure that it is being used for legitimate purposes and in accordance with any consent or permissions that have been granted.

Internal frameworks should be developed within businesses to regulate the use of ChatGPT, coupled with regular assessments to monitor the potential risks and impacts on employees and clients. The AI framework policy should be put into practice by ensuring that all employees who access and use ChatGPT in the performance of their duties are trained on the AI system and the policy regularly and in line with any updates to the system. Employees should also be alerted to the importance of only submitting de-identified personal information, to ensure that client and employee personal information is not unlawfully processed.
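The de-identification step described above can be illustrated with a minimal Python sketch that redacts recognisable identifiers from text before it is submitted to an external AI platform. The patterns, placeholder labels, and example prompt below are purely illustrative assumptions; robust PII detection in practice requires far more sophisticated tooling than simple pattern matching.

```python
import re

# Illustrative patterns for a simple pre-submission redaction pass.
# These are assumptions for demonstration only; real de-identification
# would need more robust detection (names, addresses, context, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SA_ID": re.compile(r"\b\d{13}\b"),            # 13-digit South African ID number
    "PHONE": re.compile(r"\b(?:\+27|0)\d{9}\b"),   # common SA phone number formats
}

def deidentify(text: str) -> str:
    """Replace recognisable personal identifiers with placeholder labels."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Hypothetical example: redact a prompt before it leaves the organisation.
prompt = "Client query from jane@example.co.za, phone 0821234567, ID 9001015009087."
safe_prompt = deidentify(prompt)
# safe_prompt no longer contains the email address, phone number, or ID number
```

A policy built around a step like this would require employees to pass any client or employee data through such a filter (or a purpose-built de-identification tool) before using it with an AI platform.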