The use of text and conversation tools like ChatGPT and similar services has become widespread since their launch in 2022. Tools based on generative artificial intelligence (AI) bring many opportunities but also some challenges.
As a student, you must ensure you know the rules that apply to your subject. Each faculty and department may have its own guidelines for the use of AI-based tools in different subjects. In general, good citation practice applies, including to the use of digital tools such as those based on AI.
What do we mean by AI-based tools?
Generative Artificial Intelligence (AI)
This is a general term for tools that can generate text, images, sound, video, and more. Generative AI can also create useful structures such as proteins, enzymes, and building designs, as well as proposed solutions, thereby contributing to scientific advances. On this page, we refer only to text and conversation tools.
ChatGPT is an example of a tool that generates text based on user input, and tools like Microsoft Copilot and Google Gemini work in a similar way.
Training the model
To create content, the AI tool must be trained by learning from data. The model on which the tool is based finds patterns in the data that allow it to provide good overviews and summaries, and often create content that appears to be new.
Reliability
It is important to be aware that these tools are not reliable sources of information. They generate answers based on probabilities and do not work like a regular search engine. They can incorrectly combine pieces of information that are not actually related, producing so-called "hallucinations."
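The idea that such tools "generate answers based on probabilities" can be illustrated with a deliberately simplified sketch: a tiny bigram model that picks each next word according to how often it followed the previous word in its training text. This is only a toy illustration of probabilistic text generation, not how ChatGPT or similar services actually work internally; the corpus and function names here are invented for the example.

```python
import random
from collections import defaultdict

# A toy corpus standing in for the (unknown) training data of a real model.
corpus = ("the model learns patterns from data and "
          "the model generates text from patterns").split()

# Count which words follow which: a simple bigram table.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a word sequence by repeatedly picking a plausible next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break  # no known continuation; stop early
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Because the output is sampled rather than looked up, the sketch can chain together words that never appeared in that order in the corpus, which is the small-scale analogue of a "hallucination."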
What potential do AI tools have?
Methods and tools based on artificial intelligence bring opportunities for significant scientific advancements, especially in natural sciences, technology, and medicine. Examples of such advancements are continuously showcased and discussed in the UiB AI seminar series, which are open to all students. Text and conversation tools were first discussed at UiB AI's seminar on ChatGPT's role in research and education.
Which AI-based tools can I use?
- All students have access to Microsoft Copilot for web as part of UiB’s Microsoft package. Microsoft’s conversational assistant uses OpenAI’s GPT-4 model. This is the same model used in the paid version of ChatGPT, and it allows you to use both text and images as input, or “prompts,” when using the service.
- Because UiB has a data processing agreement with Microsoft, you can use their AI-based tools safely. However, the solution is not approved for confidential or strictly confidential information.
- Read more about Microsoft Copilot for web (formerly Bing Chat Enterprise) in the news article Trygg ChatGPT-tjeneste for alle (Norwegian only).
What distinguishes UiBChat from ChatGPT?
- Data is not stored anywhere other than on your own computer when you use UiBChat.
- Chat logs are stored in the browser you use and are not accessible from other devices.
- When using UiBChat, you can be confident that what you write to the chatbot will not be shared or used to further train the model.
- Data you enter into the chat is processed in Microsoft’s data center in Stockholm but is not stored by Microsoft after the GPT response is delivered. This is in accordance with the data processing agreement.
Please note that the concerns about how much you can trust the tool, and all questions about academic guidelines, apply regardless of which tool you use, including those offered by UiB. UiBChat may only be used with green data.
For more information on how to use UiBChat, see the user documentation.
Pitfalls of using text and conversation tools based on AI
You never know what data the model is trained on
Any generative AI model that a text and conversation tool is based on must be trained on a set of data. The data used during training can greatly influence the results we get, and this is something we must be aware of when receiving a response from the generative model.
Just as our opinions and attitudes can be shaped by the information we have, generative models will produce content in line with the data they are trained on. As a user of these services, you can never be sure what data has been used in the training, unless the services have made this available in a way that can be reviewed.
Although AI can provide us with useful tools, knowledge and expertise in the relevant field are absolutely necessary to assess the reliability of the content these tools produce.
For more information on how training data can lead, and has led, artificial intelligence to make errors, see for example the Wall Street Journal article "Rise of AI Puts Spotlight on Bias in Algorithms".
The model is a result of the data it is trained on
Even if we know what data a model is trained on, it does not necessarily mean that it produces sensible results. Simply put, we can say that a model that uses high-quality data will most often produce high-quality content. Similarly, a model trained on low-quality data will most often produce low-quality content.
As users of these services, we do not know the quality of the data or how it has been processed. If we blindly trust the output of a generative model, we risk using unreliable answers. For example, if you ask ChatGPT who was the rector at the University of Bergen in 1955, it will tell you that it was Asbjørn Øverås, when in fact it was Erik Waaler. This is an example of factual errors, which you can also read more about in Professor Jill Walker Rettberg's article on NRK.
You do not have control over the data you submit
By using generative models on the internet, you send information to servers whose locations you may not know. The EU has very strict regulations governing what companies can and cannot do with your information, but for online services, there is no guarantee that they operate within the EU/EEA in accordance with these regulations.
Unless the company providing the model you use has committed to handling your data legally and ethically, you risk that the data may be used to train other models or may be misused for various reasons. Therefore, it is very important to be aware of what data we send, similar to most other internet services.
For example, student work such as exam answers may not be sent to generative models, as exam answers are considered personal data. In all cases where the University of Bergen (UiB) sends personal data to a third party, a data processing agreement must be in place.
Language models have a limited scope
Digitalization, technology, and artificial intelligence are transforming our fields and the workplace across industries. Artificial intelligence, in forms other than generative models, is being used in many contexts. In the future, the combination of specialized expertise and digital understanding, including knowledge of artificial intelligence, will be crucial in both work and social life.
The emergence of new and accessible tools has different implications for various fields and disciplines, and different faculties and departments may therefore have different approaches to their use. Regardless of this, there are legal frameworks, regulations for plagiarism and cheating, and for citation that must be respected.
Legal frameworks
AI tools and GDPR
One of the challenges with AI tools like ChatGPT under the General Data Protection Regulation (GDPR) is that the data used to train these tools contains personal information. Personal information is defined as any information that can be linked, directly or indirectly, to an individual. This personal information is gathered into the tool from various sources, both from the internet and from users, with or without their knowledge.
It is very uncertain whether this personal information has been collected legally, and data protection authorities from several countries have been skeptical of such services for this reason. Under GDPR, it is very difficult to consider that such collection of personal information, known as data scraping, can be legal. This is especially true for sensitive personal information such as information about race, ethnicity, religion, health, political opinions, or sexual orientation.
Users must be aware that solutions based on artificial intelligence may use what is written or uploaded to improve the service, and that this content may be shared with third parties such as commercial entities and authorities. This means that personal information you enter is stored and used further; it is not possible to correct or delete it from the model. Be aware that requests submitted to the service may contain sensitive personal information, and that the model can infer sensitive information about you from what you provide.
Plagiarism and cheating
Academic integrity is an overarching norm that governs what is expected of, among others, UiB students. We expect students to trust their own skills, make independent assessments, and take a stand. Text and conversation tools based on AI and other digital tools raise questions related to both source use and the requirement for independence. As a general rule, the answer to an exam must be an independently produced text.
This means that submitting a text that is wholly or partially automatically generated will be considered cheating, unless it is properly cited.
Read more on the website Academic integrity and Cheating.
On cheating and integrity in the Ph.D. education
Academic integrity is an overarching norm that governs what is expected of UiB's Ph.D. candidates as well. The Ph.D. program consists of two parts: the training component and the dissertation work. Exams in the training component are subject to the same legal frameworks, plagiarism and cheating regulations, and citation requirements as exams for students. The dissertation is research work subject to the same frameworks for scientific integrity as all research at UiB.
For more information on scientific integrity, see Research ethics.
Referencing
When software influences results, analyses, or findings in academic work, you should reference it. AI tools are considered software and not co-authors because the software cannot take responsibility for the work.
AI-generated text cannot be reproduced by others. Therefore, in academic work, it should be stated how the AI tool was used, including the time and extent, and how the result was included in the text. It is often useful to show what was entered into the chat. Long responses from the AI tool can be included as an appendix. Be aware that there may be subject-specific rules for documenting AI use.
Example of a reference in APA 7 style:
- In the text: text (OpenAI, 2025)
- In the reference list: OpenAI. (2025). ChatGPT (April 20 version) [Large language model]. https://chat.openai.com/.
DIGI-courses
For those who want more digital understanding, knowledge, and competence, we offer the DIGI course package. It consists of nine courses, each worth 2.5 credits, and the courses cover various skills and topics within digital technologies. The courses in the DIGI package are available to all students regardless of the field of study, and they can be taken independently of each other.