The risks of using ChatGPT to obtain common safety-related information and advice

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

The risks of using ChatGPT to obtain common safety-related information and advice. / Oviedo-Trespalacios, Oscar; Peden, Amy E.; Cole-Hunter, Thomas; Costantini, Arianna; Haghani, Milad; Rod, J. E.; Kelly, Sage; Torkamaan, Helma; Tariq, Amina; David Albert Newton, James; Gallagher, Timothy; Steinert, Steffen; Filtness, Ashleigh J.; Reniers, Genserik.

In: Safety Science, Vol. 167, 106244, 2023.

Harvard

Oviedo-Trespalacios, O, Peden, AE, Cole-Hunter, T, Costantini, A, Haghani, M, Rod, JE, Kelly, S, Torkamaan, H, Tariq, A, David Albert Newton, J, Gallagher, T, Steinert, S, Filtness, AJ & Reniers, G 2023, 'The risks of using ChatGPT to obtain common safety-related information and advice', Safety Science, vol. 167, 106244. https://doi.org/10.1016/j.ssci.2023.106244

APA

Oviedo-Trespalacios, O., Peden, A. E., Cole-Hunter, T., Costantini, A., Haghani, M., Rod, J. E., Kelly, S., Torkamaan, H., Tariq, A., David Albert Newton, J., Gallagher, T., Steinert, S., Filtness, A. J., & Reniers, G. (2023). The risks of using ChatGPT to obtain common safety-related information and advice. Safety Science, 167, [106244]. https://doi.org/10.1016/j.ssci.2023.106244

Vancouver

Oviedo-Trespalacios O, Peden AE, Cole-Hunter T, Costantini A, Haghani M, Rod JE et al. The risks of using ChatGPT to obtain common safety-related information and advice. Safety Science. 2023;167. 106244. https://doi.org/10.1016/j.ssci.2023.106244

Author

Oviedo-Trespalacios, Oscar ; Peden, Amy E. ; Cole-Hunter, Thomas ; Costantini, Arianna ; Haghani, Milad ; Rod, J. E. ; Kelly, Sage ; Torkamaan, Helma ; Tariq, Amina ; David Albert Newton, James ; Gallagher, Timothy ; Steinert, Steffen ; Filtness, Ashleigh J. ; Reniers, Genserik. / The risks of using ChatGPT to obtain common safety-related information and advice. In: Safety Science. 2023 ; Vol. 167.

Bibtex

@article{796f3f8ae34d46a5a0ef34a536263867,
title = "The risks of using ChatGPT to obtain common safety-related information and advice",
abstract = "ChatGPT is a highly advanced AI language model that has gained widespread popularity. It is trained to understand and generate human language and is used in various applications, including automated customer service, chatbots, and content generation. While it has the potential to offer many benefits, there are also concerns about its potential for misuse, particularly in relation to providing inappropriate or harmful safety-related information. To explore ChatGPT's (specifically version 3.5) capabilities in providing safety-related advice, a multidisciplinary consortium of experts was formed to analyse nine cases across different safety domains: using mobile phones while driving, supervising children around water, crowd management guidelines, precautions to prevent falls in older people, air pollution when exercising, intervening when a colleague is distressed, managing job demands to prevent burnout, protecting personal data in fitness apps, and fatigue when operating heavy machinery. The experts concluded that there is potential for significant risks when using ChatGPT as a source of information and advice for safety-related issues. ChatGPT provided incorrect or potentially harmful statements and emphasised individual responsibility, potentially leading to ecological fallacy. The study highlights the need for caution when using ChatGPT for safety-related information and expert verification, as well as the need for ethical considerations and safeguards to ensure users understand the limitations and receive appropriate advice, especially in low- and middle-income countries. The results of this investigation serve as a reminder that while AI technology continues to advance, caution must be exercised to ensure that its applications do not pose a threat to public safety.",
keywords = "Artificial intelligence, Chatbot, Human-AI interaction, Responsible risk management, Risk communication, Safety science",
author = "Oscar Oviedo-Trespalacios and Peden, {Amy E.} and Thomas Cole-Hunter and Arianna Costantini and Milad Haghani and Rod, {J. E.} and Sage Kelly and Helma Torkamaan and Amina Tariq and {David Albert Newton}, James and Timothy Gallagher and Steffen Steinert and Filtness, {Ashleigh J.} and Genserik Reniers",
note = "Publisher Copyright: {\textcopyright} 2023 The Author(s)",
year = "2023",
doi = "10.1016/j.ssci.2023.106244",
language = "English",
volume = "167",
journal = "Safety Science",
issn = "0925-7535",
publisher = "Elsevier",
}

RIS

TY - JOUR

T1 - The risks of using ChatGPT to obtain common safety-related information and advice

AU - Oviedo-Trespalacios, Oscar

AU - Peden, Amy E.

AU - Cole-Hunter, Thomas

AU - Costantini, Arianna

AU - Haghani, Milad

AU - Rod, J. E.

AU - Kelly, Sage

AU - Torkamaan, Helma

AU - Tariq, Amina

AU - David Albert Newton, James

AU - Gallagher, Timothy

AU - Steinert, Steffen

AU - Filtness, Ashleigh J.

AU - Reniers, Genserik

N1 - Publisher Copyright: © 2023 The Author(s)

PY - 2023

Y1 - 2023

N2 - ChatGPT is a highly advanced AI language model that has gained widespread popularity. It is trained to understand and generate human language and is used in various applications, including automated customer service, chatbots, and content generation. While it has the potential to offer many benefits, there are also concerns about its potential for misuse, particularly in relation to providing inappropriate or harmful safety-related information. To explore ChatGPT's (specifically version 3.5) capabilities in providing safety-related advice, a multidisciplinary consortium of experts was formed to analyse nine cases across different safety domains: using mobile phones while driving, supervising children around water, crowd management guidelines, precautions to prevent falls in older people, air pollution when exercising, intervening when a colleague is distressed, managing job demands to prevent burnout, protecting personal data in fitness apps, and fatigue when operating heavy machinery. The experts concluded that there is potential for significant risks when using ChatGPT as a source of information and advice for safety-related issues. ChatGPT provided incorrect or potentially harmful statements and emphasised individual responsibility, potentially leading to ecological fallacy. The study highlights the need for caution when using ChatGPT for safety-related information and expert verification, as well as the need for ethical considerations and safeguards to ensure users understand the limitations and receive appropriate advice, especially in low- and middle-income countries. The results of this investigation serve as a reminder that while AI technology continues to advance, caution must be exercised to ensure that its applications do not pose a threat to public safety.

AB - ChatGPT is a highly advanced AI language model that has gained widespread popularity. It is trained to understand and generate human language and is used in various applications, including automated customer service, chatbots, and content generation. While it has the potential to offer many benefits, there are also concerns about its potential for misuse, particularly in relation to providing inappropriate or harmful safety-related information. To explore ChatGPT's (specifically version 3.5) capabilities in providing safety-related advice, a multidisciplinary consortium of experts was formed to analyse nine cases across different safety domains: using mobile phones while driving, supervising children around water, crowd management guidelines, precautions to prevent falls in older people, air pollution when exercising, intervening when a colleague is distressed, managing job demands to prevent burnout, protecting personal data in fitness apps, and fatigue when operating heavy machinery. The experts concluded that there is potential for significant risks when using ChatGPT as a source of information and advice for safety-related issues. ChatGPT provided incorrect or potentially harmful statements and emphasised individual responsibility, potentially leading to ecological fallacy. The study highlights the need for caution when using ChatGPT for safety-related information and expert verification, as well as the need for ethical considerations and safeguards to ensure users understand the limitations and receive appropriate advice, especially in low- and middle-income countries. The results of this investigation serve as a reminder that while AI technology continues to advance, caution must be exercised to ensure that its applications do not pose a threat to public safety.

KW - Artificial intelligence

KW - Chatbot

KW - Human-AI interaction

KW - Responsible risk management

KW - Risk communication

KW - Safety science

U2 - 10.1016/j.ssci.2023.106244

DO - 10.1016/j.ssci.2023.106244

M3 - Journal article

AN - SCOPUS:85166335301

VL - 167

JO - Safety Science

JF - Safety Science

SN - 0925-7535

M1 - 106244

ER -