Ensuring cybersecurity when using AI

2023-08-15

The increased usage, availability, and popularity of AI tools such as ChatGPT, DALL-E, and Bard have stimulated great interest and created many opportunities for academic use. Alongside these opportunities, however, AI also presents significant cyber threats and risks that need to be considered when using these tools.

Among the major risks is the sharing of sensitive information, including personal data, which must not leave the CU environment and must not be made publicly accessible.
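As a practical illustration of this risk, a prompt can be screened for obvious personal-data formats before it is submitted to an external AI service. The sketch below is a minimal Python example under assumed conditions; the flag_sensitive helper, the regular-expression patterns, and the sample prompt are illustrative assumptions, not an official CU tool, and such a check catches only obvious formats rather than all sensitive content.

```python
import re

# Illustrative patterns for common personal-data formats; a real deployment
# would need institution-specific rules (e.g. student IDs, internal codes).
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone number": re.compile(r"\+?\d[\d ()-]{7,}\d"),
    "Czech birth number": re.compile(r"\b\d{6}/?\d{3,4}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any personal-data patterns found in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    # Hypothetical prompt a staff member might be tempted to submit.
    prompt = "Summarise the thesis review for jan.novak@cuni.cz, born 900101/1234."
    findings = flag_sensitive(prompt)
    if findings:
        print("Do not submit - prompt contains:", ", ".join(findings))
    else:
        print("No obvious personal data found; still review manually before submitting.")
```

A filter like this is only a safety net: the absence of a match does not make a prompt safe to share, so manual judgement about what leaves the CU environment remains the primary control.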

These guidelines apply to all Large Language Models (LLMs) and AI chatbots, including ChatGPT, GPT-n/x, Bard, LLaMA, and BLOOM.

Users should be aware that all queries and prompts entered into an AI tool are visible to the tool's operators outside CU. Employees should be familiar with and adhere to basic rules when using AI tools; from a cybersecurity standpoint, the following guidelines should therefore be observed:

This text was taken from the document "Artificial intelligence (AI) Recommendations for the academic members of staff of Charles University".