Digitalisation and education
Thomas Hardwig: Introduction of digital technology in schools as a challenge for socio-technical system design
Anita Wagner: Integration of AI in education: systematic implementation at Austrian AI-pilot schools
Nelson Bruno Martins Marques Costa: Never try to build a house from the roof – fair use of AI in education
Anja Gerlmaier & Paul-Fiete Kramer: How human friendly is ChatGPT for knowledge workers? Analyzing opportunities and risks of generative AI with the FriendlyTechCheck (FTC)
Ulrike Bollmann: Artificial intelligence and education – a teacher-centred approach to safety and health
Introduction of digital technology in schools as a challenge for socio-technical system design
Thomas Hardwig
Cooperation Centre for Universities and Trade Unions, Georg-August University of Göttingen, Germany
In 2020, the year of the coronavirus pandemic, there was a surge in digitalization in German schools. While 39% of teachers stated that they used digital media in class every day immediately before the pandemic, this figure had risen to 68% one year later. It had previously taken seven years to achieve comparable growth. Unsurprisingly, 70% of teachers reported experiencing a higher (54%) or much higher (16%) level of stress due to digitalization. Only 8% experienced a reduction in workload. It could therefore be assumed that the more intensive use of digital technology is also associated with a higher health risk. However, this is not the case; neither psychological exhaustion (burnout) nor wellbeing is directly linked to the intensity of technology use. The relationship between digitalization and teachers' health is proving to be somewhat more complex. This is shown by a study in which 2,750 teachers from 233 schools in Germany were surveyed (Mußmann et al. 2021).
The results underline ENETOSH’s approach of integrating human, organizational, technical and environmental factors into occupational safety and health.
The survey results showed that it is not the level of digitalization itself that determines health risks, but rather the quality of work and technology design during the introduction of digital tools. This is in line with the socio-technical systems approach, which focuses on the interaction between people (social aspects) and technology within an organizational system, aiming to optimize both human well-being and system performance. How does this manifest itself in the survey data?
First of all, there are major differences in how well schools manage to realize digital teaching and learning. Schools must not only acquire technology, but also develop a strategy for digital teaching and learning and build a digital infrastructure that can effectively support teachers in digitally assisted teaching. When we ask teachers how "mature" their school is in this respect, we get very different answers. The assessments of all teachers in a school are summarized and assigned to one of four maturity types. A digital divide is emerging between Germany's schools: digital latecomer schools (33%) have no digital strategy, and their digital infrastructure is prone to failure and does little to support digital teaching. In more digitally mature schools the situation is significantly better, but only 12% of schools are digital forerunners with the highest digital maturity.
Correlation analyses show that the wellbeing (WHO-5) of teachers in digital forerunner schools is significantly higher than in less mature schools, even though the intensity of use of digital technology in digitally mature schools is higher than in digital latecomer schools. This can be explained by the fact that the technology works more smoothly and the work is better supported, so that there is less additional stress.
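To make this analysis logic concrete, here is a minimal Python sketch of how teacher-level survey ratings could be aggregated to school level, grouped into four maturity types, and correlated with WHO-5 wellbeing scores. The file name, column names and the quartile-based grouping are illustrative assumptions; Mußmann et al. (2021) derive their maturity typology with their own instruments.

```python
# Minimal sketch of the school-level aggregation and correlation logic
# described above. All names ("teacher_survey.csv", "school_id",
# "maturity_rating", "who5_score") and the quartile split into four
# maturity types are illustrative assumptions, not the study's method.
import pandas as pd
from scipy.stats import spearmanr

teachers = pd.read_csv("teacher_survey.csv")  # one row per teacher

# Summarize the assessments of all teachers in a school (cf. the text).
schools = teachers.groupby("school_id").agg(
    maturity=("maturity_rating", "mean"),
    wellbeing=("who5_score", "mean"),
)

# Assign each school to one of four maturity types, from
# "digital latecomer" to "digital forerunner".
labels = ["latecomer", "transition", "advanced", "forerunner"]
schools["maturity_type"] = pd.qcut(schools["maturity"], q=4, labels=labels)

# Correlate digital maturity with teacher wellbeing (WHO-5).
rho, p = spearmanr(schools["maturity"], schools["wellbeing"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
print(schools.groupby("maturity_type", observed=True)["wellbeing"].mean())
```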
Additional stress caused by digital technology is understood as technostress: for example, when the technology does not function smoothly and forces teachers to solve technical problems instead of teaching (role ambiguity), or when they need to develop a backup plan during lesson planning so that they have an alternative in the event of technical failures. Technostress also arises from an increased workload when the technology is too complex (overload) or requires additional learning (complexity) for which the work allows no time. It is caused by technical deficits as well as by a lack of training for teachers and a lack of technical support in the event of problems. Technostress has proven to be a risk factor for health, as it is significantly associated with higher mental exhaustion (burnout). Teachers in less digitally mature schools experience higher levels of technostress and therefore greater health risks.
What can be done to prevent health risks from digitalization in schools?
Even if expectations that digitalization would help to reduce the workload have so far been disappointed, it would be wrong to reject digitalization for health reasons. Schools must implement a digital strategy in order to prepare their students for a future in an increasingly digitalized world. To protect the health of teachers, it is therefore important to optimize the interplay of social, organizational and technical conditions in the implementation of digitally supported teaching and learning. This can effectively reduce technostress and improve the wellbeing of teachers. The health risks of digitalization should be tackled much more consistently. In schools, the implementation of digitally supported teaching and learning should be realized as part of participatory school development processes, and teachers must be involved in shaping the use of technology.
If digital strategies and technical infrastructures can be improved, there is a chance for a healthier school. However, traditional occupational safety and health measures remain essential. Teachers' working time systems need to be reformed – at least in Germany – to reduce workloads. Mental health risk assessments must be carried out regularly, and these should include participatory processes. Systematic training remains crucial.
References
ENETOSH (2023). The impact of digitalisation on the health and safety of teachers (accessed 21.10.2024). https://www.enetosh.net/enetosh-events-reader/the-impact-of-digitalisation-on-the-health-and-safety-of-teachers.html
Hardwig, T & Mußmann, F (2021). Enforced digitalisation in Germany's schools - results of a survey of 2,750 teachers from all over Germany. Presentation at the online conference WORK2021 part III, Turku (Finland), 9 December 2021 (accessed 21.10.2024). https://kooperationsstelle.uni-goettingen.de/fileadmin/digitalisierungsschub_ein_europaweites_thema/dokumentation/kooperationsstelle/21_12_09_Presentation_Enforced_Digitalisation_-Turku_WORK2021_III.pdf
Mußmann, F; Hardwig, T; Riethmüller, M & Klötzer, S (2021). Digitalisierung im Schulsystem: Arbeitszeit, Arbeitsbedingungen, Rahmenbedingungen und Perspektiven von Lehrkräften in Deutschland; Ergebnisbericht [Digitalisation in the school system: working hours, working conditions, framework conditions and prospects for teachers in Germany; results report]. Göttingen: Kooperationsstelle Hochschulen und Gewerkschaften der Georg-August-Universität Göttingen. https://doi.org/10.3249/ugoe-publ-10
Integration of AI in education: systematic implementation at Austrian AI-pilot schools
Anita Wagner
Austrian Workers' Compensation Board (AUVA)
The UNESCO Global Education Monitoring Report 2023 emphasises the importance of the use of technology in education and praises countries that proactively use AI and other digital technologies to improve the education system. Austria is cited as a positive example in this context, particularly with regard to the implementation of initiatives such as the ‘AI pilot schools’. As part of a large-scale education initiative, eEducation Austria has selected 100 schools as so-called ‘AI pilot schools’ in recent years. These schools have the task of testing, evaluating and improving the use of AI technologies in the classroom. The results of these pilot projects could be groundbreaking for the entire education system.
The integration of AI in education not only brings potential for improved teaching methods and individualized support, but also raises important questions about ethical use and data security. It is crucial that we thoroughly evaluate these new technologies not only from an educational perspective, but also with regard to the safety and protection of those involved. The experiences and findings from the AI pilot schools could therefore also point the way forward for the work of the Austrian Workers' Compensation Board (AUVA), particularly with regard to prevention measures and ensuring a safe learning environment.
Objectives of the AI Pilot Schools
The introduction of AI in schools aims to achieve several key objectives. On one hand, it seeks to support teachers by providing them with tools that ease the preparation, delivery, and follow-up of lessons. On the other hand, students should be given the opportunity to learn independently and in a personalized manner. AI-driven systems can analyze learning progress, provide tailored recommendations, and specifically target students' needs to close learning gaps.
Additionally, a significant focus is placed on ensuring that students learn to handle AI technologies critically and competently from an early age. In an increasingly digital world, understanding how AI works and what its implications are is essential for becoming informed and responsible citizens.
Practical Implementation
Various technologies are being used at the AI pilot schools in Austria. Examples include:
- Adaptive Learning Platforms: These platforms analyze students' learning progress in real time and automatically adjust the curriculum to their needs. Weaknesses can be identified early and addressed effectively (a minimal sketch of such an adaptive loop follows this list).
- Speech Assistance Systems: In foreign language classes, AI-powered speech assistants are used to improve students’ listening comprehension and pronunciation. These systems provide personalized feedback and help make lessons more interactive.
- Automated Grading Systems: Homework and tests are analyzed and graded by AI systems. This relieves teachers and allows for quick feedback to students. Particularly in subjects like mathematics and science, such systems have proven to be reliable.
- Data Analysis for Lesson Planning: Teachers use AI-driven analytical tools to optimize their teaching materials and methods. These systems can identify patterns in learning behavior and provide recommendations based on them.
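As a purely illustrative complement to the first bullet point above, the following Python sketch shows the kind of feedback loop an adaptive learning platform runs: estimate mastery, pick a task of matching difficulty, and update the estimate after each answer. The thresholds, update rule and item pool are invented for illustration and do not describe any specific platform used at the pilot schools.

```python
# Illustrative sketch of the core loop behind an adaptive learning
# platform. The thresholds, the mastery update rule and the item pool
# are hypothetical; real platforms use far more elaborate models.
import random

ITEM_POOL = {1: ["easy task A", "easy task B"],
             2: ["medium task A", "medium task B"],
             3: ["hard task A", "hard task B"]}

def next_item(mastery: float) -> tuple[int, str]:
    """Pick a task whose difficulty matches the current mastery estimate."""
    level = 1 if mastery < 0.4 else 2 if mastery < 0.7 else 3
    return level, random.choice(ITEM_POOL[level])

def update_mastery(mastery: float, correct: bool, rate: float = 0.2) -> float:
    """Simple exponential update of the mastery estimate after each answer."""
    target = 1.0 if correct else 0.0
    return (1 - rate) * mastery + rate * target

mastery = 0.5  # neutral starting estimate for a new student
for answer_correct in [True, False, True, True]:  # simulated answers
    level, task = next_item(mastery)
    print(f"mastery={mastery:.2f} -> level {level}: {task}")
    mastery = update_mastery(mastery, answer_correct)
```

In this simple scheme, correct answers push the mastery estimate up and unlock harder tasks, while mistakes pull it down, so weaknesses surface early, which is the behaviour the platforms described above aim for.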
Challenges and Ethical Considerations
The introduction of AI in education brings not only benefits but also raises critical questions. Data protection and the security of students' information are of particular concern. It is essential to ensure that sensitive data is not misused. Furthermore, there is the question of how extensively AI should influence the classroom. Critics warn against over-technologizing education, where human interaction and the pedagogical aspect might be neglected.
Teachers also need extensive training to effectively and meaningfully implement the new technologies. The technological shift should not lead to additional burdens but should be seen as support in the daily teaching routine.
Initial Results and Prospects
Experience to date at the AI pilot schools in Austria has been largely positive. Both teachers and pupils report increased motivation and better learning outcomes. The adaptive adjustment of lessons to individual learning needs is particularly emphasized. It has also been shown that AI systems are a valuable addition to conventional teaching methods, provided they are used in a targeted manner.
In the long term, it will be crucial to systematically evaluate the findings from the pilot projects and integrate them into the entire school system. As part of the AUVA digitalization campaign “Together safely digital”, companies, employees and young people are informed about the safe use of digital technologies in the workplace.
The initiative provides information, training and practical tools to minimize risks in the digital world of work, focusing on preventing digital overload, dealing safely with cyber risks and protecting sensitive data. The aim is to make the digital transformation safe and healthy.
In summary, it can be said that the AI pilot schools in Austria represent an exciting development that has the potential to change the education system in the long term. The intelligent use of AI could increase equal opportunities, improve individual support and create new freedom for teachers - provided that the ethical and pedagogical challenges continue to be carefully considered.
Further links
List of AI pilot schools in Austria: Gesamt_short.pdf (eeducation.at)
Homepage of the AUVA campaign “Together safely digital”: https://auva.at/praevention/kampagnen/gemeinsam-sicher-digital/
UNESCO Global Education Monitoring Report 2023: Technology in education: a tool on whose terms? (UNESCO Digital Library)
Never try to build a house from the roof – fair use of AI in education
Nelson Bruno Martins Marques Costa
Ergonomics and Human Factors Professor at the University of Minho, Portugal
Introduction
Safety is often defined as the study, assessment, and control of operating risks, focussing on accident prevention.
Teachers / trainers are frequently faced with the need to lecture about specific topics, like accident and incident investigations as key ingredients of safety management. Erik Hollnagel (2002) addressed the need for a characterization / systematization of accident models and proposed three different types, as presented in Table 1.
TABLE 1 - Main types of Accident Models (adapted from Hollnagel, E. 2002)

| Model type | Search principle | Analysis goals | Example |
| --- | --- | --- | --- |
| Sequential models | Specific causes and well-defined links | Eliminate or contain causes | Domino theory (Heinrich) |
| Epidemiological models | Carriers, barriers, and latent conditions | Make defences and barriers stronger | Swiss cheese (Reason) |
| Systemic models | Tight couplings and complex interactions | Monitor and control performance variability | Functional Resonance Accident Model (Hollnagel) |
Systematization of information is essential for the teaching / learning process, as students / learners can easily become overwhelmed by the new information.
We know that AI-based technologies offer tools to collect, select and summarize large information sets and, for that reason, can be an interesting tool to use in classes / training sessions.
Nevertheless, the use of AI-based technologies in education and training poses a new and demanding challenge to teachers / trainers.
Build a house from the roof
The new and demanding challenges are closely related to the fact that information seems to be within arm's reach. If teachers / trainers propose a task, like writing a short essay or summarizing information on a specific topic, students / learners will, most of the time, choose the easy way: prompting an AI-based tool to do the work for them, building the house from the roof without considering the foundations, the walls, and so on.
One may ask: is this a bad thing? We believe that the first question should be: are these AI-based tools equally accessible to all our students / learners?
When it comes to access to AI-based technologies, we must ensure that all students / learners have the same opportunities.
Taking the example of OpenAI's popular ChatGPT, Ayesha Saleem (2023) presents an insightful comparison between GPT-3.5 and GPT-4. In Table 2, we can easily discern the differences between the two AI tools.
TABLE 2 - Comparative analysis GPT-3.5 versus GPT-4 (adapted from Saleem A. 2023)

| Feature | GPT-3.5 | GPT-4 |
| --- | --- | --- |
| Database Size | Large, but smaller than GPT-4 | 10 times larger than GPT-3.5, substantially larger database |
| Understanding Context | Capable, but with limitations | Better comprehension of context |
| Factual Accuracy | Good, but less accurate compared to GPT-4 | 40% more likely to produce factual responses than GPT-3.5 |
| Multimodality | Primarily text-based | Multimodal - can accept and produce text and image inputs and outputs |
| Task Complexity | Handles complex tasks, but less so than GPT-4 | Capable of more complex tasks like writing essays, creating art and music |
Figure 1 presents additional performance differences between GPT-3.5 and GPT-4. A quick analysis of the provided information reveals a significant performance difference between the two AI tools.
Figure 1 - Screenshot from the OpenAI official page (accessed September 5, 2024: https://openai.com/index/gpt-4/)
On a “level playing field”, we would be able to provide our students / learners with access to both tools. On OpenAI's official web page, however, we can see that while the GPT-3.5 version is free to use, access to the GPT-4 version is costly.
Levelling the field
If we ask ourselves the question “is the playing field level?”, the answer would be: probably not for all the students / learners in our classes. Special attention should therefore be paid to this point, and asking students / learners to use AI-based technologies should be framed in a fair way.
One way to “level the field” would be to place a clear frame around the use of AI-based technologies: proposing a clear research question; specifying the tool to be used (like GPT, Gemini, or another); and requiring the complementary use of peer-reviewed literature (such as books or research papers). Figure 2 presents the framework schematics.
Figure 2 - Schematics of the proposed framework
This approach would promote critical judgment of the outputs of the AI-based technology and raise awareness of the incorrect predictions (a.k.a. hallucinations) that may occur in response to users' prompts. This is particularly important because incorrect answers are presented as if they were factual and correct, and, if the reader is not proficient in that specific topic, incorrect answers can be learned and later used to solve real-world problems.
Conclusions
To sum up, cross-checking AI-generated answers against sound literature is of the utmost importance and can be a task included in the challenge presented to the students / learners.
This hybrid approach may lead to more significant learning outcomes, also avoiding “building a house from the roof”.
References
Hollnagel, E. (2002). Understanding accidents - from root causes to performance variability. Proceedings of the IEEE 7th Conference on Human Factors and Power Plants, Scottsdale, AZ, USA. https://ieeexplore.ieee.org/document/1042821
Saleem, A. (2023, November 30). GPT-3.5 and GPT-4 comparative analysis. Data Science Dojo. https://datasciencedojo.com/blog/gpt-3-5-and-gpt-4-comparative-analysis/
How human friendly is ChatGPT for knowledge workers? Analyzing opportunities and risks of generative AI with the FriendlyTechCheck (FTC)
Anja Gerlmaier & Paul-Fiete Kramer
Institute for Work, Skills and Training, University of Duisburg-Essen, Germany
Introduction
Since the launch of ChatGPT at the end of 2022, large language models (LLMs) have rapidly become a prominent technological phenomenon. The model's ability to process and generate human-like text responses makes it an innovative working tool for use in several professional settings, especially knowledge-based professions (Ali et al., 2024). Studies based on American data suggest that occupations in the areas of sales, education and research, the judiciary and administration may face more exposure to advances in generative AI (Felten et al., 2023). In this capacity, generative AI like ChatGPT can provide on-demand explanations and translations, offer guidance on various academic topics, or generate source code and content.
The GPT in the chatbot's name stands for Generative Pre-trained Transformer: a dialogical language system, trained on an extremely large amount of data, that anticipates linguistic patterns in response to the questions posed (Schönbächler et al., 2023). Generative AI such as ChatGPT and similar systems, e.g. Gemini, Copilot or Perplexity, are trained on the basis of immense amounts of data. For their use, it is important to know that the common language models encode probable word sequences in context: the results are based on the linguistic probabilities of word sequences found in the training material. Current GPT applications are susceptible to so-called hallucinations, i.e. the fabrication of non-existent facts or quotes. In addition, the quality of the training material has an enormous influence on the content output: discrimination or prejudices can appear in the training data (biases), and this is not necessarily recognizable for the user.
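The following toy Python sketch illustrates the principle just described: a language model encodes probable word sequences from its training material and continues a prompt with statistically likely words. A bigram counter is a deliberately crude stand-in; it is not how ChatGPT is implemented, but it makes the idea of probabilities of word sequences found in the training material tangible.

```python
# Toy illustration of the principle above: count which word follows
# which in a tiny "training corpus", then continue a prompt with the
# statistically most likely next words. A deliberately crude stand-in
# for an LLM, not an implementation of one.
from collections import Counter, defaultdict

training_text = ("the teacher prepares the lesson . "
                 "the teacher grades the test . "
                 "the student writes the test .").split()

# Bigram statistics: for each word, count its observed successors.
bigrams = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    bigrams[current_word][next_word] += 1

def continue_text(word: str, length: int = 5) -> str:
    """Greedily continue a prompt with the most probable next words."""
    out = [word]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the"))  # -> "the teacher prepares the teacher prepares"
```

The output is fluent-looking but says nothing true or false about the world; it merely reproduces likely word sequences, which is exactly why plausible-sounding hallucinations can arise at scale.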
However, large language models like ChatGPT do not have a model of meaning or factual knowledge like expert systems. As a consequence, the usage of large language models as a working tool in knowledge-based occupations is currently the subject of controversial discussion (Mogavi et al., 2023). Some authors point out that ChatGPT as a working tool can strengthen the productivity of knowledge workers (e.g. researchers, see Khlaif et al., 2023) and helps reduce high workloads (e.g. teachers, see Ali et al., 2024). Other authors reject the use of GPT-based applications in academic positions due to problems of academic integrity, the risk of automation of academic tasks or overreliance on technology (Mijwil, 2023; Gmyrek et al., 2023; Hagendorff & Fabi, 2023).
In view of the described risks, responsible persons are also called upon to create policies and good practices for a human-friendly and responsible use of AI in their organizational units. To achieve this, it is crucial to involve employees in AI adoption at an early stage: on the one hand, it is easy and comparatively inexpensive for most employees to experiment with generative AI tools, so responsible persons can obtain crucial information about the potential and risks of AI tools from these ‘early adopters’. On the other hand, it must be considered that unauthorized use of (free) AI tools can have negative outcomes for both organizations and workers because of unreliable outputs generated by these chatbots. Research about GPT usage in knowledge-based jobs and its consequences for work quality and training is at a starting point (Gmyrek et al., 2023). For this reason, further research is necessary to identify job-specific risks and opportunities of generative AI at work (Hosseini et al., 2023).
This is where our study comes in. We ask what opportunities and risks employees in the field of highly qualified knowledge work (i.e. a field of work that is particularly exposed to the possible use of AI) perceive when using generative AI tools. From a socio-technical system perspective, it is important to bear in mind that the use of new technologies does not necessarily lead to improvements or deteriorations in work quality and well-being (Parker & Grote, 2020). Improvements in productivity and work quality (‘joint optimization’) can only be achieved when new technologies fit with workflows, clients' needs, and workers' job identification and qualification (Winby & Mohrman, 2018; Appelbaum, 1997). With this aim, the “HUMAINE” project started in 2021 to develop methods for the human-centred deployment of AI in workplaces. In this context, the University of Duisburg-Essen developed the dialogue tool ‘FriendlyTechCheck’ (FTC). This tool supports organizations in identifying the psycho-social risks and opportunities of AI-based technologies at workplaces. Its use should be considered whenever technological changes such as the implementation of robotics or other AI-based systems are planned.
In this article we report on first experiences using the FTC to identify psycho-social risks and opportunities of ChatGPT in high-level knowledge work. To this end, we present the method of the FTC and its theoretical framework. Following that, we present findings on the psycho-social risks and opportunities of GPT deployment that we observed in a case study with a research and development team. Finally, we present the requirements the team identified for a human-friendly and responsible AI usage in their research institute.
The complete article has been published as ENETOSH Factsheet 07.
Artificial intelligence and education – a teacher-centred approach to safety and health
Ulrike Bollmann
German Social Accident Insurance (DGUV) - ENETOSH
For a long time, the topic of digitization in schools was mainly viewed from the perspective of the students. It was only the ad hoc digitization during the global Covid-19 pandemic that shifted the focus more towards the teachers.
The pandemic caught the education system, teachers and students completely unprepared. Apparently, no one had expected that the conditions for education and upbringing could be so fundamentally challenged. At the same time, the pandemic has been a significant catalyst for questions of safety, health and well-being, particularly in the education system.
Until now, teacher health has been addressed in several separate discourses: in the discourse on teacher health, which focuses on mental health and early retirement; as part of the concept of teacher well-being (TWB); and through initial analyses of the effects of the pandemic and digitalization on teachers' working conditions.
Thanks to the European Agency for Safety and Health at Work (EU-OSHA), the impact of digitalization on the safety and health of teachers is now a central topic of the Europe-wide campaign 'Safe and Healthy Work in the Digital Age 2023-2025'.
When the project manager responsible for the content of the campaign approached me in November 2022 about creating an expert paper on this topic, the role of artificial intelligence (AI) and the significance of the future European AI Act for the integration of safety, health and well-being in the education sector were not yet very much in focus.
Between June 2023 and February 2024, the EU-OSHA report “Artificial Intelligence and Education - A Teacher-Centred Approach to Safety and Health” (OSHA/DC/8421), which was published in August 2024, was produced in parallel with current developments.
A key message of the report is that it is not digitalization itself that leads to higher levels of stress and strain for teachers, but rather the lack of a strategy for integrating digitalization into the school system. If digitalization is integrated systematically and with consideration for both the potential and the risks for the safety, health and well-being of teachers, stress can be reduced.
Another key message of the report aims to refocus the debate on the impact of digitalization on teachers: the critical issue is not only how the school system recovers from disruptive changes such as the Covid-19 pandemic or the public availability of generative AI (e.g., ChatGPT), but also how it emerges strengthened from such crises and continues to evolve in the face of further disruptive developments.
The full report can be found here.