1. Record fields: Article Title; Authors; Year; Source title; Document Type; DOI; Abstract
2. A systematic review on trends in using Moodle for teaching and learning. Gamage S.H.P.W., Ayres J.R., Behrend M.B. (2022). International Journal of STEM Education. Review. DOI: 10.1186/s40594-021-00323-x
Background: The Moodle Learning Management System (LMS) is widely used in online teaching and learning, especially in STEM education. However, educational research on using Moodle is scattered throughout the literature. Therefore, this review aims to summarise this research to assist three sets of stakeholders—educators, researchers, and software developers. It identifies: (a) how and where Moodle has been adopted
3. eSurgery—digital transformation in surgery, surgical education and training: survey analysis of the status quo in Germany. Kröplin J., Huber T., Geis C., Braun B., Fritz T. (2022). European Surgery - Acta Chirurgica Austriaca. Article. DOI: 10.1007/s10353-022-00747-x
Background: In surgery, electronic healthcare systems offer numerous options to improve patient care. The aim of this study was to analyse the current status of digitalisation and its influence in surgery, with a special focus on surgical education and training. Methods: An individually created questionnaire was used to analyse the subjective assessment of the digitalisation processes in clinical surgery. The online questionnaire consisted of 16 questions regarding the importance and the corresponding implementation of the teaching content: big data, health apps, messenger apps, telemedicine, data protection/IT security, ethics, simulator training, economics and e‑Learning were included. The participation link was sent to members of the German Society of Surgery via the e‑Mail distribution list. Results: In total, 119 surgeons (response rate = 19.8%) took part in the survey: 18.5% of them were trainees (TR), and 81.5% had already completed specialist training (SP). Overall, 66.4% confirmed a positive influence of digitalisation on the quality of patient care. The presence of a surgical robot was confirmed by 47.9% of the participants. A further 22.0% (n = 26) of the participants confirmed the possibility of using virtual simulators. According to 79.0% of the participants, the integration of digital technologies in surgical education for basic and advanced stage surgeons should be aimed for. Data protection (1.7) and e‑Learning (1.7) were rated as the most important teaching content. The greatest discrepancy between importance and implementation was seen in the teaching content of big data (mean: 2.2–3.8). Conclusion: The results of the survey reveal the particular importance of digitalisation content for surgery, surgical education and training. At the same time, the results underline the desire for increased integration of digital competence teaching. The data also show an overall more progressive and optimistic perception of TR. In order to meet the challenges of the digital transformation, the implementation of suitable curricula, including virtual simulation-based training and blended-learning teaching concepts, should be emphasised. © 2022, The Author(s), under exclusive licence to Springer-Verlag GmbH Austria, part of Springer Nature.
4. Learning and career development in neurosurgery: Values-based medical education. Ammar A. (2022). Learning and Career Development in Neurosurgery: Values-Based Medical Education. Book. DOI: 10.1007/978-3-031-02078-0
Neurosurgical, surgical and medical training and practice models have to keep up with the technological revolution of the 21st century, as our lives are changing at a swift pace. Making bioethics and metacognition a cornerstone of medical education and practice will help our humane societies flourish. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. All rights reserved.
5. International Prospects and Trends of Artificial Intelligence Education: A Content Analysis of Top-level AI Curriculum across Countries. Zhou Y., Zhan Z., Liu L., Wan J., Liu S., Zou X. (2022). ACM International Conference Proceeding Series. Conference Paper. DOI: 10.1145/3568739.3568796
This study investigates the present situation of AI curricula offered for grades K-12. We screened 11 representative countries and regions from six continents and assessed the content of their top K-12 AI courses in terms of teaching content and teaching implementation, in order to understand the current state of K-12 AI courses in diverse nations and to provide ideas and suggestions for the development of AI courses for students in grades K-12. (1) Countries may choose AI applications, AI influences in various aspects, AI ethics, machine learning, data, classification, reasoning, identification, and other content to establish independent AI teaching content standards
6. MOOC 5.0: A Roadmap to the Future of Learning. Ahmad I., Sharma S., Singh R., Gehlot A., Priyadarshi N., Twala B. (2022). Sustainability (Switzerland). Review. DOI: 10.3390/su141811199
Industry 4.0 has created a whole new world for us to explore, and its effects can be seen in every facet of our lives, especially in the workplace, where it calls for technology-driven employment. There is a growing need to teach individuals and assist them in transitioning to longer-term employment prospects to execute Industry 4.0 effectively. Although MOOCs revolutionized the way learners study, it is critical to investigate teaching techniques using Education 4.0 at this time. This article explores how the technologies of Industry 4.0 can be incorporated into MOOCs. The paper proposes MOOC 5.0, whose features include better universal access, greater learner engagement, adaptive learning, greater collaboration, security, and curiosity. MOOC 5.0 is being developed using the Industry 4.0 technologies of the Internet of Things, cloud computing, big data, artificial intelligence/machine learning, blockchain, gamification technologies, and the metaverse, and would incorporate the zones of ethics and humanism while providing learners with a richer and more individualized experience. © 2022 by the authors.
7. Exploring Teachers’ Perceptions of Artificial Intelligence as a Tool to Support their Practice in Estonian K-12 Education. Chounta I.-A., Bardone E., Raudsep A., Pedaste M. (2022). International Journal of Artificial Intelligence in Education. Article. DOI: 10.1007/s40593-021-00243-5
In this article, we present a study on teachers’ perceptions about Artificial Intelligence (AI) as a tool to support teaching in Estonian K-12 education. Estonia is promoting technological innovation in education. According to the Index of Readiness for Digital Lifelong Learning (IRDLL), Estonia was ranked first among 27 European countries. In this context, our goal was to explore teachers’ perceptions about cutting-edge technologies (in this case, AI) and to contextualize our results in the scope of Fairness, Accountability, Transparency and Ethics (FATE). We carried out a survey with 140 Estonian K-12 teachers and we asked them about their understanding and concerns regarding the use of AI in education and the challenges they face. The analysis of the survey responses suggests that teachers have limited knowledge about AI and how it could support them in practice. Nonetheless, they perceive it as an opportunity for education. The results indicate that teachers need support in order to be efficient and effective in their work practice
8. Educating Software and AI Stakeholders About Algorithmic Fairness, Accountability, Transparency and Ethics. Bogina V., Hartman A., Kuflik T., Shulner-Tal A. (2022). International Journal of Artificial Intelligence in Education. Article. DOI: 10.1007/s40593-021-00248-0
This paper discusses educating stakeholders of algorithmic systems (systems that apply Artificial Intelligence/Machine learning algorithms) in the areas of algorithmic fairness, accountability, transparency and ethics (FATE). We begin by establishing the need for such education and identifying the intended consumers of educational materials on the topic. We discuss the topics of greatest concern and in need of educational resources
9. The Green Escape Room: Part 2 - Teaching Students Professional Engineering Ethics by Applying Environmental Engineering Principles and Deciphering Clues and Puzzles. Newhart K.B., Pfluger A.R., Butkus M.A. (2022). ASEE Annual Conference and Exposition, Conference Proceedings. Conference Paper.
Escape rooms use a sequence of related clues and puzzles to lead participants to a final answer. While escape rooms have been used in technical aspects of engineering education as an active learning exercise, very few have been applied to ethics and, as reported in the literature, none to engineering ethics. Conventional ethics education is often taught by lecture and passive analysis of case studies, which does not actively engage students with ethical principles or codes such as the National Society of Professional Engineers (NSPE) Code of Ethics. The objective of this work is to evaluate escape rooms as a tool to improve students' understanding of professional engineering ethics. The escape room exercise in this study is geared towards environmental engineering students, engaging them with relevant subject-matter problems including water treatment, wastewater treatment, and solid waste management in the developing world. Each technical problem is compounded by an ethical dilemma, and participants must justify their final action to resolve each problem by using the NSPE Code of Ethics. To measure student learning, an NSPE-developed, 25-question, true-false quiz designed for professional engineers is administered immediately before and after the escape room exercise. Across the 17 participants, the ethics escape room improved the average participant's grade on the NSPE quiz by 7.8% (p = 0.002). All participants agreed or strongly agreed that the ethics escape room was “effective as a learning tool,” “should become a regular part of ethics education,” and “encouraged team building,” on a feedback form administered prior to the post-quiz. This work demonstrates the effectiveness of the escape room as a format for active learning in engineering ethics education and provides an outline for ethics education in a wide range of professional disciplines. © American Society for Engineering Education, 2022.
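For readers reproducing this kind of pre/post comparison: the abstract does not state which statistical test produced p = 0.002, so the sketch below simply illustrates one common choice, a paired t-test on invented quiz scores for 17 participants (the values are not the study's data).

```python
# Paired pre/post comparison sketch; scores are invented for illustration only,
# the paper does not report raw scores or name the test used here.
import numpy as np
from scipy import stats

pre = np.array([68, 72, 60, 80, 76, 64, 72, 76, 64, 80, 72, 60, 68, 68, 76, 76, 64])
post = pre + np.array([8, 4, 12, 4, 8, 12, 4, 8, 8, 4, 8, 12, 4, 8, 4, 4, 12])

t_stat, p_value = stats.ttest_rel(post, pre)  # paired t-test on the same participants
mean_gain = (post - pre).mean()
print(f"mean gain = {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```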
10. Artificial Intelligence Potential in Higher Education Institutions Enhanced Learning Environment in Romania and Serbia. Bucea-Manea-țoniş R., Kuleto V., Gudei S.C.D., Lianu C., Lianu C., Ilić M.P., Păun D. (2022). Sustainability (Switzerland). Article. DOI: 10.3390/su14105842
In their struggle to offer a sustainable educational system and transversal competencies matching market demands, the higher education systems in Serbia and Romania are characterised by significant transformations. According to EU policy, these transformations are related to educational reforms and the introduction of new technology and methodologies in teaching and learning. They are expected to answer the PISA requirements and to increase the DESI (Digital Economy and Society Index). They are also likely to mitigate the inequity of HEIs (higher education institutions), empowered by a structured, goal-oriented strategy towards agile management in HEIs that is also appropriate for new market demands. Our study is based on an exploratory survey of 139 Romanian and Serbian teachers from the Information Technology School—ITS, Belgrade, and Spiru Haret University, Romania. The survey let them report their knowledge of AI and their perceptions of the difficulties and opportunities of these technologies in HEIs. Our study examined how the difficulties and opportunities associated with AI impact HEIs, and it aims to see how AI might assist higher education in Romania and Serbia. We also considered how these technologies might be integrated with the educational system, and whether instructors would use them. Developing creative and transversal skills is required to anticipate future breakthroughs and technological possibilities. The new methods of education focus on ethics, values, problem-solving, and daily activities. Students’ learning material, how they might achieve critical abilities, and their educational changes must be addressed in the future. In this environment, colleges must create new digital skills in AI, machine learning, IoT, 5G, the cloud, big data, blockchain, data analysis, the use of MS Office and other applications, MOOCs, simulation applications, VR/AR, and gamification. They must also develop cross-disciplinary skills and a long-term mindset. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
11. Understanding the possibilities and conditions for instructor-AI collaboration in entrepreneurship education. Ala M., Robin M., Rasul T., Wegner D. (2022). Technology and Entrepreneurship Education: Adopting Creative Digital Approaches to Learning and Teaching. Book Chapter. DOI: 10.1007/978-3-030-84292-5_7
This chapter discusses the expected learning outcomes in entrepreneurship education, including creativity, innovation, industry-specific knowledge, decision-making, risk-taking, problem-solving, leadership qualities, ethics, and social responsibility. It examines whether the conventional entrepreneurial curriculums successfully contribute to the academic and social goals. It also presents a discussion on why the recent socio-cultural, technological, and pandemic-related changes, including mass digitalization, working remotely, asynchronicity, and global communities of practice, demand new approaches to post-secondary entrepreneurship education. It further explores artificial intelligence (AI), such as the virtual classroom, AI tutor, interactive smart boards, augmented reality (AR), virtual reality (VR), simulation, and big data systems, as a disruptive technology in education. While computer systems with 'intelligence' are already performing many tasks that were commonly associated with humans, there is growing interest, concern, and uncertainty regarding the wider application of AI in education. The chapter considers trends in AI adoption in education and how AI is likely to reshape curriculums, teaching, and assessment, as well as its impacts on teaching and learning. Furthermore, it explores the enormous potential of AI specifically in entrepreneurship education. A rich discussion is presented on the possibilities and conditions for an effective instructor-AI collaboration that can make an important contribution to entrepreneurship education, such as the curriculums, instruction, assessment, and feedback. An instructor-AI collaboration has the potential to improve pedagogical practices, learner motivation, and engagement, which are critical to achieving learning outcomes. The chapter concludes with the argument that while integrating AI in entrepreneurship education is capital intensive, it is worth investing in AI as it facilitates the progress of learners by providing them with customized learning support without unduly limiting individual choice. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022. All rights reserved.
12. SIGCSE 2022 - Proceedings of the 53rd ACM Technical Symposium on Computer Science Education V. 2. [No author name available] (2022). SIGCSE 2022 - Proceedings of the 53rd ACM Technical Symposium on Computer Science Education V.2. Conference Review.
The proceedings contain 120 papers. The topics discussed include: Labtool: a command-line interface lab assistant and assessment tool
13. A Novel Machine Learning and Artificial Intelligence Course for Secondary School Students. Mahon J., Quille K., Mac Namee B., Becker B.A. (2022). SIGCSE 2022 - Proceedings of the 53rd ACM Technical Symposium on Computer Science Education V.2. Conference Paper. DOI: 10.1145/3478432.3499073
We present an overview of a "Machine Learning and Artificial Intelligence" course that is part of a large online course platform for upper second level students. We take a novel approach to teaching fundamental AI concepts that does not require code and assumes little prior knowledge, requiring only basic mathematics. The design ethos is for students to gain an understanding of how algorithms can "learn". Many misconceptions exist about this term with respect to AI, and these can lead to confusion and more serious misunderstandings, particularly for students who engage with AI-enabled tools regularly. This approach aims to provide insights into how AI actually works, to demystify and remove barriers to more advanced learning, and to emphasize the important roles of ethics and bias in AI. We took several steps to engage students, including videos narrated by a final-year second-level student (US 12th grade). We present design and logistics particulars for this course, which is currently being taken by ∼7,000 students in Ireland. We believe this will be of value to other educators and the wider community. © 2022 Owner/Author.
14. Autonomous Ferries and Cargo Ships: Discovering Ethical Issues via a Challenge-Based Learning Approach in Higher Education. Herzog C., Leinweber N.-A., Engelhard S.A., Engelhard L.H. (2022). International Symposium on Technology and Society, Proceedings. Conference Paper. DOI: 10.1109/ISTAS55053.2022.10227124
The ethics of autonomous vehicles continue to be discussed at length both in academia and among the general public. Even though there is ample fruitful discourse at the level of principles, what is often missing is a discussion targeted at implementation-specific ethical challenges. However, the outcomes of such discussions could guide developers and stakeholders in advancing towards a thoroughly responsible design of these autonomous and intelligent systems. This contribution reports on an investigation of the ethics of autonomous, zero-emission ferries and cargo ships in such a practical way, carried out by engaging university students with the issue during a challenge-based learning engineering ethics course at the University of Lübeck. Within this course, a three-way discourse has unfolded between student groups, supervisors and Unleash Future Boats, a company active in the field of autonomous, hydrogen-powered ferries and cargo ships. We not only present a framework for teaching engineering ethics that strives to equip future engineers with a working knowledge and methodology for using ethics as a productive and integrated tool for decision-making during business and engineering development; we also share preliminary insights into relevant and specific ethical challenges to be met when implementing autonomous ferries and cargo ships for inland navigation. © 2022 IEEE.
15. Practical Ethical Issues for Artificial Intelligence in Education. Córdova P.R., Vicari R.M. (2022). Communications in Computer and Information Science. Conference Paper. DOI: 10.1007/978-3-031-22918-3_34
Due to the increasing use of Artificial Intelligence (AI) in Education, as well as in other areas, different ethical questions have been raised in recent years. Despite this, only a few practical proposals related to ethics in AI for Education can be found in scientific databases. For this reason, aiming to help fill this gap, this work proposes an ethics-by-design solution for teaching and learning processes, using a top-down approach for Artificial Moral Agents (AMA) and following the assumptions defended by Values Alignment (VA) in the AI area. Therefore, using the classic Beliefs, Desires, and Intentions (BDI) model, we propose an architecture that implements a hybrid solution applying both the utilitarian and the deontological ethical frameworks. Thus, while the deontological dimension of the agent will guide its behavior by means of ethical principles, its utilitarian dimension will help the AMA to solve ethical dilemmas. With this, it is expected to contribute to the development of a safer and more reliable AI for the Education area. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
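The hybrid deontological/utilitarian idea described in this abstract can be pictured with a deliberately simplified sketch: deontological principles act as hard constraints that veto actions, and a utilitarian score ranks whatever remains. The class names, rule set, and scoring below are hypothetical illustrations, not the authors' BDI architecture.

```python
# Toy sketch of a hybrid moral filter over an agent's candidate actions:
# deontological rules veto actions, a utilitarian score ranks the rest.
# Rules, utilities, and names are hypothetical illustrations only.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    expected_benefit: float                      # crude utilitarian proxy
    violates: set = field(default_factory=set)   # principles this action breaks

PRINCIPLES = {"privacy", "fairness", "transparency"}

def choose_action(candidates):
    # Deontological dimension: discard any action that violates a principle.
    permitted = [a for a in candidates if not (a.violates & PRINCIPLES)]
    if not permitted:
        return None  # no ethically permissible action; defer to a human
    # Utilitarian dimension: among permitted actions, maximise expected benefit.
    return max(permitted, key=lambda a: a.expected_benefit)

actions = [
    Action("share full learner profile with tutor", 0.9, {"privacy"}),
    Action("share anonymised progress summary", 0.7),
    Action("do nothing", 0.1),
]
print(choose_action(actions).name)  # -> share anonymised progress summary
```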
16. Gender knowledge and Artificial Intelligence. Badaloni S., Rodà A. (2022). CEUR Workshop Proceedings. Conference Paper.
Among the various types of biases that can be recognised in the behaviour of algorithms learning from data, gender-related biases assume particular importance in certain contexts, such as the Italian one, traditionally linked to a patriarchal vision of society. This becomes even more true in the context of university education, where there is a strong under-representation of female students in STEM Faculties and, particularly, in Computer Science Courses. After a brief review of gender biases reported in Machine Learning-based systems, the experience of the course “Gender Knowledge and Ethics in Artificial Intelligence”, active since A.Y. 2021-22 at the School of Engineering of the University of Padova, is presented. © 2022 Copyright for this paper by its authors.
17. Artificial intelligence and clinical anatomical education: Promises and perils. Lazarus M.D., Truong M., Douglas P., Selwyn N. (2022). Anatomical Sciences Education. Article. DOI: 10.1002/ase.2221
Anatomy educators are often at the forefront of adopting innovative and advanced technologies for teaching, such as artificial intelligence (AI). While AI offers potential new opportunities for anatomical education, hard lessons learned from the deployment of AI tools in other domains (e.g., criminal justice, healthcare, and finance) suggest that these opportunities are likely to be tempered by disadvantages for at least some learners and within certain educational contexts. From the perspectives of an anatomy educator, public health researcher, medical ethicist, and an educational technology expert, this article examines five tensions between the promises and the perils of integrating AI into anatomy education. These tensions highlight the ways in which AI is currently ill-suited for incorporating the uncertainties intrinsic to anatomy education in the areas of (1) human variations, (2) healthcare practice, (3) diversity and social justice, (4) student support, and (5) student learning. Practical recommendations for a considered approach to working alongside AI in the contemporary (and future) anatomy education learning environment are provided, including enhanced transparency about how AI is integrated, AI developer diversity, inclusion of uncertainty and anatomical variations within deployed AI, provisions made for educator awareness of AI benefits and limitations, building in curricular “AI-free” time, and engaging AI to extend human capacities. These recommendations serve as a guiding framework for how the clinical anatomy discipline, and anatomy educators, can work alongside AI, and develop a more nuanced and considered approach to the role of AI in healthcare education. © 2022 The Authors. Anatomical Sciences Education published by Wiley Periodicals LLC on behalf of American Association for Anatomy.
18. The cyclical ethical effects of using artificial intelligence in education. Dieterle E., Dede C., Walker M. (2022). AI and Society. Article. DOI: 10.1007/s00146-022-01497-w
Our synthetic review of the relevant and related literatures on the ethics and effects of using AI in education reveals five qualitatively distinct and interrelated divides associated with access, representation, algorithms, interpretations, and citizenship. We open our analysis by probing the ethical effects of algorithms and how teams of humans can plan for and mitigate bias when using AI tools and techniques to model and inform instructional decisions and predict learning outcomes. We then analyze the upstream divides that feed into and fuel the algorithmic divide, first investigating access (who does and does not have access to the hardware, software, and connectivity necessary to engage with AI-enhanced digital learning tools and platforms) and then representation (the factors making data either representative of the total population or over-representative of a subpopulation’s preferences, thereby preventing objectivity and biasing understandings and outcomes). After that, we analyze the divides that are downstream of the algorithmic divide associated with interpretation (how learners, educators, and others understand the outputs of algorithms and use them to make decisions) and citizenship (how the other divides accumulate to impact interpretations of data by learners, educators, and others, in turn influencing behaviors and, over time, skills, culture, economic, health, and civic outcomes). At present, lacking ongoing reflection and action by learners, educators, educational leaders, designers, scholars, and policymakers, the five divides collectively create a vicious cycle and perpetuate structural biases in teaching and learning. However, increasing human responsibility and control over these divides can create a virtuous cycle that improves diversity, equity, and inclusion in education. We conclude the article by looking forward and discussing ways to increase educational opportunity and effectiveness for all by mitigating bias through a cycle of progressive improvement. © 2022, Educational Testing Service, under exclusive license to Springer-Verlag London Ltd., part of Springer Nature.
19. Where Is the AI? AI Literacy for Educators. Wilton L., Ip S., Sharma M., Fan F. (2022). Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Conference Paper. DOI: 10.1007/978-3-031-11647-6_31
This paper responds to the emerging call from researchers of many disciplines (computer science, engineering, learning sciences, HCI community, education) to address the need for fostering AI literacy in those with or without technical backgrounds. There is an urgent need for research to support educators’ understandings of the potential challenges and opportunities surrounding the appropriate and responsible use of AI tools in formal education spaces. This contribution to the scholarly literature is based on three years of reflective data gathered from an author-instructor’s experiences of working with graduate students who identify and analyze AI applications in an introductory AIED course. The course was designed by the author-instructor to critically examine ethics, bias, privacy, inclusion, data collection and explainability in popular AIED tools. The emerging scholarship on AIED is reviewed to identify common understandings and justifications of AI literacy. Reflective data is shared to highlight the need for educators to better understand the implications of integrating AI applications into teaching. This article is intended to inspire the promotion of AI Literacy for educators (AILE) and to contribute to the development of meaningful AI literacy frameworks and guidelines. © 2022, Springer Nature Switzerland AG.
20. Ethical Considerations of Artificial Intelligence in Learning Analytics in Distance Education Contexts. Ungerer L., Slade S. (2022). SpringerBriefs in Open and Distance Education. Book Chapter. DOI: 10.1007/978-981-19-0786-9_8
AI is seen as the future engine of education and many expect AI to significantly transform education and drastically alter teaching tools, learning approaches, access to knowledge and teacher training. It is increasingly impossible to think of learning analytics without AI. There are, however, several concerns surrounding the ethics of using AI in education. This chapter addresses selected ethical issues emerging from a range of examples pertaining to the use of AI in learning analytics such as (1) profiling and prediction
21. An Insight into Cultural Competence and Ethics in K-12 Artificial Intelligence Education. Sanusi I.T., Olaleye S.A. (2022). IEEE Global Engineering Education Conference, EDUCON. Conference Paper. DOI: 10.1109/EDUCON52537.2022.9766818
As artificial intelligence (AI) education continues to be integrated into mainstream educational systems across countries, cultural competence and ethical considerations should be emphasized to ensure effective AI learning. The literature has established that integrating elements of cultural competence within technology mediums has helped students understand difficult topics from computer science concepts learned in class. It is also argued that a student with an ethical orientation toward AI education is more likely to learn about the impacts and implications of AI. Hence, this study was conducted to understand how students' cultural competence and ethics combine to influence AI content. We surveyed Nigerian high school students after an experimental teaching session. A total of 596 students provided useful responses for the analysis, which was done using WarpLS software. We performed structural equation modelling to understand the relationships among the variables utilized in the study. The results show that cultural competence and ethics significantly influence AI content. The results further show that the association between ethics of AI and AI content has the highest predictive value, which emphasises the vital role of ethics in AI learning. This study also tested school-location differences in the research model and discovered that urban students' perception of the adopted variables in relation to AI content is higher than that of their rural counterparts. Overall, the results suggest that stakeholders and educators should emphasize cultural elements and humanistic thinking as well as ethical considerations in the design of AI content and instructional materials. We discuss the findings and propose future directions. © 2022 IEEE.
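The abstract reports a structural equation model estimated with dedicated PLS software. As a loose illustration of the structural idea only (composite scores for cultural competence and ethics predicting AI content), the sketch below fits an ordinary least-squares regression on invented data; it is not PLS-SEM and not the authors' measurement model.

```python
# Loose illustration of the structural idea behind the reported model:
# composite scores for "cultural competence" and "AI ethics" predicting
# "AI content". Plain OLS on invented data, not PLS-SEM/WarpLS, and not
# the authors' measurement model.
import numpy as np

rng = np.random.default_rng(0)
n = 596  # matches the reported sample size, data are simulated
cultural_competence = rng.normal(size=n)
ai_ethics = 0.4 * cultural_competence + rng.normal(scale=0.9, size=n)
ai_content = 0.3 * cultural_competence + 0.5 * ai_ethics + rng.normal(scale=0.8, size=n)

X = np.column_stack([np.ones(n), cultural_competence, ai_ethics])
beta, *_ = np.linalg.lstsq(X, ai_content, rcond=None)
print(f"paths: cultural competence -> AI content {beta[1]:.2f}, "
      f"ethics -> AI content {beta[2]:.2f}")
```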
22. Diversified Integration and Integrated Application of Ideological and Political Teaching Resources Based on Artificial Intelligence Model Algorithm. Ding P., Huang X. (2022). Wireless Communications and Mobile Computing. Article. DOI: 10.1155/2022/2149733
In today's society, China's social market continues to develop, which has led the whole of society to attach great importance to the professional ethics of Chinese citizens and to put forward new requirements and expectations for it. This paper aims to research and discuss the diversified integration and integrated application of ideological and political teaching resources based on artificial intelligence model algorithms. The paper first analyzes and discusses the feasibility of integrating and applying moral education resources in ideological and political textbooks: political classrooms provide favorable support for the integration and application of moral education resources and have become a practical carrier for their integration and application. It then introduces the artificial intelligence model algorithm used, the artificial neural network, which can produce intelligent responses similar to human actions. Finally, it studies the use of social teaching resources for ideological and political education in city A and conducts a comprehensive analysis of the integration and utilization of moral education teaching resources for ideological and political courses in colleges and universities. In the survey on "whether the introduction of local social resources in the political class of city A is helpful to students' learning," 84 teachers thought it was very helpful, 12 teachers thought it was helpful, and only 3 teachers thought it was not helpful. It can be seen that most teachers think that introducing social resources in political lessons is helpful for students. At the same time, in order to better improve the reliability and validity of moral resources and to internalize social moral norms in students' hearts, moral resources must be combined with students' moral growth. © 2022 Peifen Ding and Xiang Huang.
23. Integrating Ethics and Career Futures with Technical Learning to Promote AI Literacy for Middle School Students: An Exploratory Study. Zhang H., Lee I., Ali S., DiPaola D., Cheng Y., Breazeal C. (2022). International Journal of Artificial Intelligence in Education. Article. DOI: 10.1007/s40593-022-00293-3
The rapid expansion of artificial intelligence (AI) necessitates promoting AI education at the K-12 level. However, educating young learners to become AI literate citizens poses several challenges. The components of AI literacy are ill-defined and it is unclear to what extent middle school students can engage in learning about AI as a sociotechnical system with socio-political implications. In this paper we posit that students must learn three core domains of AI: technical concepts and processes, ethical and societal implications, and career futures in the AI era. This paper describes the design and implementation of the Developing AI Literacy (DAILy) workshop that aimed to integrate middle school students’ learning of the three domains. We found that after the workshop, most students developed a general understanding of AI concepts and processes (e.g., supervised learning and logic systems). More importantly, they were able to identify bias, describe ways to mitigate bias in machine learning, and start to consider how AI may impact their future lives and careers. At exit, nearly half of the students explained AI as not just a technical subject, but one that has personal, career, and societal implications. Overall, this finding suggests that the approach of incorporating ethics and career futures into AI education is age appropriate and effective for developing AI literacy among middle school students. This study contributes to the field of AI Education by presenting a model of integrating ethics into the teaching of AI that is appropriate for middle school students. © 2022, International Artificial Intelligence in Education Society.
24. Artificial intelligence and medical education: A global mixed-methods study of medical students’ perspectives. Ejaz H., McGrath H., Wong B.L.H., Guise A., Vercauteren T., Shapey J. (2022). Digital Health. Article. DOI: 10.1177/20552076221089099
Objective: Medical students, as clinicians and healthcare leaders of the future, are key stakeholders in the clinical roll-out of artificial intelligence-driven technologies. The authors aim to provide the first report on the state of artificial intelligence in medical education globally by exploring the perspectives of medical students. Methods: The authors carried out a mixed-methods study of focus groups and surveys with 128 medical students from 48 countries. The study explored knowledge around artificial intelligence as well as what students wished to learn about artificial intelligence and how they wished to learn this. A combined qualitative and quantitative analysis was used. Results: Support for incorporating teaching on artificial intelligence into core curricula was ubiquitous across the globe, but few students had received teaching on artificial intelligence. Students showed knowledge on the applications of artificial intelligence in clinical medicine as well as on artificial intelligence ethics. They were interested in learning about clinical applications, algorithm development, coding and algorithm appraisal. Hackathon-style projects and multidisciplinary education involving computer science students were suggested for incorporation into the curriculum. Conclusions: Medical students from all countries should be provided teaching on artificial intelligence as part of their curriculum to develop skills and knowledge around artificial intelligence to ensure a patient-centred digital future in medicine. This teaching should focus on the applications of artificial intelligence in clinical medicine. Students should also be given the opportunity to be involved in algorithm development. Students in low- and middle-income countries require the foundational technology as well as robust teaching on artificial intelligence to ensure that they can drive innovation in their healthcare settings. © The Author(s) 2022.
25. Get out of the BAG! Silos in AI Ethics Education: Unsupervised Topic Modeling Analysis of Global AI Curricula. Javed R.T., Nasir O., Borit M., Vanhée L., Zea E., Gupta S., Vinuesa R., Qadir J. (2022). Journal of Artificial Intelligence Research. Article. DOI: 10.1613/jair.1.13550
The domain of Artificial Intelligence (AI) ethics is not new, with discussions going back at least 40 years. Teaching the principles and requirements of ethical AI to students is considered an essential part of this domain, with an increasing number of technical AI courses taught at several higher-education institutions around the globe including content related to ethics. By using Latent Dirichlet Allocation (LDA), a generative probabilistic topic model, this study uncovers topics in teaching ethics in AI courses and their trends related to where the courses are taught, by whom, and at what level of cognitive complexity and specificity according to Bloom's taxonomy. In this exploratory study based on unsupervised machine learning, we analyzed a total of 166 courses: 116 from North American universities, 11 from Asia, 36 from Europe, and 10 from other regions. Based on this analysis, we were able to synthesize a model of teaching approaches, which we call BAG (Build, Assess, and Govern), that combines specific cognitive levels, course content topics, and disciplines affiliated with the department(s) in charge of the course. We critically assess the implications of this teaching paradigm and provide suggestions about how to move away from these practices. We challenge teaching practitioners and program coordinators to reflect on their usual procedures so that they may expand their methodology beyond the confines of stereotypical thought and traditional biases regarding what disciplines should teach and how. ©2022 AI Access Foundation. All rights reserved.
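For readers unfamiliar with the method, a minimal sketch of LDA topic modelling over course descriptions is given below; the toy corpus, topic count, and preprocessing choices are assumptions for illustration and do not reflect the study's actual configuration.

```python
# Minimal sketch of LDA topic modelling over course descriptions (illustrative
# only; the corpus, topic count, and preprocessing are hypothetical, not the
# study's setup, which analysed 166 course descriptions).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

course_descriptions = [
    "Introduction to machine learning, fairness and accountability in AI systems",
    "Governance and regulation of autonomous systems, policy and law",
    "Building neural networks, optimisation and model evaluation",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(course_descriptions)

lda = LatentDirichletAllocation(n_components=3, random_state=0)  # topic count is a guess
doc_topics = lda.fit_transform(doc_term)

# Print the top words per topic, roughly how topics such as "Build", "Assess",
# and "Govern" could be inspected and labelled by hand.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```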
26. DIA4K12: Framework for Managing the Teaching-Learning of Artificial Intelligence at Early Ages. Labanda-Jaramillo M., Chamba-Eras L., Erreyes-Pinzon D., Chamba-Eras I., Orellana-Malla A. (2022). Lecture Notes in Networks and Systems. Conference Paper. DOI: 10.1007/978-3-030-96293-7_36
Artificial Intelligence (AI) is intervening positively in education. UNESCO considers it a new vision to involve AI not only as a didactic medium but also as a science through which children can develop their intellect, via workshops, courses, and curricula focused on the fundamentals of AI, allowing them to develop skills such as computational and critical thinking. This research aims to design the DIA4K12 framework, which proposes a structure to support the teaching-learning process of AI in primary and secondary education. The core of the framework consists of four phases: planning, execution, process, and development
27. Power to the Teachers: An Exploratory Review on Artificial Intelligence in Education. Lameras P., Arnab S. (2022). Information (Switzerland). Article. DOI: 10.3390/info13010014
This exploratory review attempted to gather evidence from the literature by shedding light on the emerging phenomenon of conceptualising the impact of artificial intelligence in education. The review utilised the PRISMA framework to guide the analysis and synthesis process, encompassing the search, screening, coding, and data analysis strategy for the 141 items included in the corpus. Key findings extracted from the review include a taxonomy of artificial intelligence applications with associated teaching and learning practice and a framework for helping teachers to develop and self-reflect on the skills and capabilities envisioned for employing artificial intelligence in education. Implications for ethical use and a set of propositions for enacting teaching and learning using artificial intelligence are demarcated. The findings of this review contribute to developing a better understanding of how artificial intelligence may enhance teachers’ roles as catalysts in designing, visualising, and orchestrating AI-enabled teaching and learning, and this will, in turn, help to proliferate AI systems that render computational representations based on meaningful data-driven inferences of the pedagogy, domain, and learner models. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
28. Business methodology for the application in university environments of predictive machine learning models based on an ethical taxonomy of the student’s digital twin. Gallastegui L.M.G., Forradellas R.F.R. (2021). Administrative Sciences. Article. DOI: 10.3390/admsci11040118
Educational institutions are undergoing an internal process of strategic transformation to adapt to the challenges caused by the growing impact of digitization and the continuous development of student and labor market expectations. Consequently, it is essential to obtain more accurate knowledge of students to improve their learning experience and their relationship with the educational institution, and in this way also contribute to evolving those students’ skills that will be useful in their next professional future. For this to happen, the entire academic community faces obstacles related to data capture, analysis, and subsequent activation. This article establishes a methodology to design, from a business point of view, the application in educational environments of predictive machine learning models based on Artificial Intelligence (AI), focusing on the student and their experience when interacting physically and emotionally with the educational ecosystem. This methodology focuses on the educational offer, relying on a taxonomy based on learning objects to automate the construction of analytical models. This methodology serves as a motivating backdrop to several challenges facing educational institutions, such as the exciting crossroads of data fusion and the ethics of data use. Our ultimate goal is to encourage education experts and practitioners to take full advantage of applying this methodology to make data-driven decisions without any preconceived bias due to the lack of contrasting information. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
29. Attitudes of medical workers in China toward artificial intelligence in ophthalmology: a comparative survey. Zheng B., Wu M.-N., Zhu S.-J., Zhou H.-X., Hao X.-L., Fei F.-Q., Jia Y., Wu J., Yang W.-H., Pan X.-P. (2021). BMC Health Services Research. Article. DOI: 10.1186/s12913-021-07044-5
Background: In the development of artificial intelligence in ophthalmology, issues around the recognition of ophthalmic AI are prominent, but there is a lack of research into people’s familiarity with and their attitudes toward ophthalmic AI. This survey aims to assess medical workers’ and other professional technicians’ familiarity with, attitudes toward, and concerns about AI in ophthalmology. Methods: This is a cross-sectional study. An electronic questionnaire was designed through the app Questionnaire Star and sent to respondents through WeChat, China’s version of Facebook or WhatsApp. Participation was voluntary and anonymous. The questionnaire consisted of four parts, namely the respondents’ background, their basic understanding of AI, their attitudes toward AI, and their concerns about AI. A total of 562 respondents were counted, with 562 valid questionnaires returned. The results of the questionnaires are displayed in an Excel 2003 form. Results: A total of 291 medical workers and 271 other professional technicians completed the questionnaire. About one third of the respondents understood AI and ophthalmic AI. The percentages of people who understood ophthalmic AI among medical workers and other professional technicians were about 42.6% and 15.6%, respectively. About 66.0% of the respondents thought that AI in ophthalmology would partly replace doctors, and about 59.07% had a relatively high acceptance level of ophthalmic AI. Meanwhile, among those with experience of AI applications in ophthalmology (30.6%), above 70% of respondents held a fully accepting attitude toward AI in ophthalmology. The respondents expressed medical ethics concerns about AI in ophthalmology. Among the respondents who understood AI in ophthalmology, almost all said that there was a need to increase the study of medical ethics issues in the ophthalmic AI field. Conclusions: The survey results revealed that medical workers had a higher level of understanding of AI in ophthalmology than other professional technicians, making it necessary to popularize ophthalmic AI education among other professional technicians. Most of the respondents did not have any experience with ophthalmic AI but generally had a relatively high acceptance level of AI in ophthalmology, and there is a need to strengthen research into medical ethics issues. © 2021, The Author(s).
30. Good Proctor or “Big Brother”? Ethics of Online Exam Supervision Technologies. Coghlan S., Miller T., Paterson J. (2021). Philosophy and Technology. Article. DOI: 10.1007/s13347-021-00476-1
Online exam supervision technologies have recently generated significant controversy and concern. Their use is now booming due to growing demand for online courses and for off-campus assessment options amid COVID-19 lockdowns. Online proctoring technologies purport to effectively oversee students sitting online exams by using artificial intelligence (AI) systems supplemented by human invigilators. Such technologies have alarmed some students who see them as a “Big Brother-like” threat to liberty and privacy, and as potentially unfair and discriminatory. However, some universities and educators defend their judicious use. Critical ethical appraisal of online proctoring technologies is overdue. This essay provides one of the first sustained moral philosophical analyses of these technologies, focusing on ethical notions of academic integrity, fairness, non-maleficence, transparency, privacy, autonomy, liberty, and trust. Most of these concepts are prominent in the new field of AI ethics, and all are relevant to education. The essay discusses these ethical issues. It also offers suggestions for educational institutions and educators interested in the technologies about the kinds of inquiries they need to make and the governance and review processes they might need to adopt to justify and remain accountable for using online proctoring technologies. The rapid and contentious rise of proctoring software provides a fruitful ethical case study of how AI is infiltrating all areas of life. The social impacts and moral consequences of this digital technology warrant ongoing scrutiny and study. © 2021, The Author(s), under exclusive licence to Springer Nature B.V.
31. Moral Awareness of College Students Regarding Artificial Intelligence. Ghotbi N., Ho M.T. (2021). Asian Bioethics Review. Article. DOI: 10.1007/s41649-021-00182-2
To evaluate the moral awareness of college students regarding artificial intelligence (AI) systems, we have examined 467 surveys collected from 152 Japanese and 315 non-Japanese students in an international university in Japan. The students were asked to choose a most significant moral problem of AI applications in the future from a list of ten ethical issues and to write an essay about it. The results show that most of the students (n = 269, 58%) considered unemployment to be the major ethical issue related to AI. The second largest group of students (n = 54, 12%) was concerned with ethical issues related to emotional AI, including the impact of AI on human behavior and emotion and robots’ rights and emotions. A relatively small number of students referred to the risk of social control by AI (6%), AI discrimination (6%), increasing inequality (5%), loss of privacy (4%), AI mistakes (3%), malicious AI (3%), and AI security breaches (3%). Calculation of the z score for two population proportions shows that Japanese students were much less concerned about AI control of society (− 3.1276, p < 0.01) than non-Japanese students, but more concerned about discrimination (2.2757, p < 0.05). Female students were less concerned about unemployment (− 2.6108, p < 0.01) than males, but more concerned about discrimination (2.4333, p < 0.05). The study concludes that the moral awareness of college students regarding AI technologies is quite limited and recommends including the ethics of AI in the curriculum. © 2021, National University of Singapore and Springer Nature Singapore Pte Ltd.
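The group comparisons above rely on the standard two-proportion z statistic, z = (p1 - p2) / sqrt(p_pool(1 - p_pool)(1/n1 + 1/n2)); the sketch below shows that computation on invented counts rather than the study's data.

```python
# Two-proportion z-test sketch (the counts below are invented for illustration,
# not the paper's data).
from math import sqrt
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for H0: p1 == p2 (pooled variance)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    z = (p1 - p2) / sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return z, 2 * norm.sf(abs(z))

# e.g. hypothetical counts of students in two groups naming a given issue as most significant
z, p = two_proportion_z(x1=5, n1=152, x2=40, n2=315)
print(f"z = {z:.4f}, p = {p:.4f}")
```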
32. The PICASO cloud platform for improved holistic care in rheumatoid arthritis treatment—experiences of patients and clinicians. Richter J.G., Chehab G., Schwartz C., Ricken E., Tomczak M., Acar H., Gappa H., Velasco C.A., Rosengren P., Povilionis A., Schneider M., Thestrup J. (2021). Arthritis Research and Therapy. Article. DOI: 10.1186/s13075-021-02526-7
Background: Multimorbidity increases the amount of essential information needed for the delivery of high-quality care in patients with chronic diseases like rheumatoid arthritis (RA). We evaluated an innovative ICT platform for integrated care which orchestrates data from various health care providers to optimize care management processes. Methods: The Horizon2020-funded research project PICASO (picaso-project.eu) established an ICT platform that offers integration of care services across providers and supports patients’ management along the continuum of care, leaving the data with the owner. Strict conformity with ethical and legal requirements was complemented by a usability-driven engineering process; user requirements gathered from relevant stakeholders and expert walkthroughs guided development, which was based on the HL7/FHIR standard to grant interoperability. The platform’s applicability in clinical routine was an essential aim. Thus, we evaluated the platform according to an evaluation framework in an observational 6-month proof-of-concept study with RA patients affected by cardiovascular comorbidities, using questionnaires, interviews, and platform data. Results: Thirty RA patients (80% female) participated, with a mean age of 59 years, a disease duration of 13 years, and an average of 2.9 comorbidities. Home monitoring data demonstrated high platform adherence. Evaluations yielded predominantly positive feedback: the innovative dashboard-like design offering time-efficient data visualization, comprehension, and personalization was well accepted, i.e., patients rated the platform “overall” as 2.3 (1.1) (mean (SD), Likert scales 1–6) and clinicians recommended further platform use for 93% of their patients. They managed 86% of patients’ visits using the clinician dashboard. Dashboards were valued for a broader view of health status and patient-physician interactions. Platform use contributed to improved disease and comorbidity management (i.e., in 70% physicians reported usefulness in assessing patients’ diseases and in 33% a potential influence on treatment decisions
33. Educational Interventions for Children and Youth with Autism: A 40-Year Perspective. Odom S.L., Hall L.J., Morin K.L., Kraemer B.R., Hume K.A., McIntyre N.S., Nowell S.W., Steinbrenner J.R., Tomaszewski B., Sam A.M., DaWalt L. (2021). Journal of Autism and Developmental Disorders. Review. DOI: 10.1007/s10803-021-04990-1
Commemorating the 40th anniversary of the Diagnostic and Statistical Manual (DSM) III, the purpose of this commentary is to describe school-based and school-relevant interventions and instructional approaches for children and youth with autism that have been developed and employed during that time period. The commentary begins with a brief description of foundational research that provides an historical context. Research themes shaped by science, ethics, social policy, and the changes in the DSM provide an organization for describing the evolution of intervention and instructional practices over the four previous decades. The commentary concludes with a discussion of school-contextual variables that influence implementation and the promise of the “iSciences” for closing the research to practice gap in the future. © 2021, The Author(s).
34. Medical artificial intelligence readiness scale for medical students (MAIRS-MS) – development. Karaca O., Çalışkan S.A., Demir K. (2021). BMC Medical Education. Article. DOI: 10.1186/s12909-021-02546-6
Background: It is unlikely that applications of artificial intelligence (AI) will completely replace physicians. However, it is very likely that AI applications will take over many of their roles and generate new tasks in medical care. To be ready for these new roles and tasks, medical students and physicians will need to understand the fundamentals of AI and data science, mathematical concepts, and related ethical and medico-legal issues, in addition to the standard medical principles. Nevertheless, no valid and reliable instrument is available in the literature to measure medical AI readiness. In this study, we describe the development of a valid and reliable psychometric measurement tool for the assessment of medical students’ perceived readiness regarding AI technologies and their applications in medicine. Methods: To define medical students’ required competencies on AI, a diverse set of experts’ opinions was obtained by a qualitative method and used as a theoretical framework while creating the item pool of the scale. Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) were applied. Results: A total of 568 medical students during the EFA phase and 329 medical students during the CFA phase, enrolled in two different public universities in Turkey, participated in this study. The initial 27-item pool was finalized as a 22-item scale with a four-factor structure (cognition, ability, vision, and ethics), explaining 50.9% of cumulative variance in the EFA. Cronbach’s alpha reliability coefficient was 0.87. CFA indicated an appropriate fit of the four-factor model (χ2/df = 3.81, RMSEA = 0.094, SRMR = 0.057, CFI = 0.938, and NNFI (TLI) = 0.928). These values show that the four-factor model has construct validity. Conclusions: The newly developed Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS) was found to be a valid and reliable tool for evaluating and monitoring medical students’ perceived readiness regarding AI technologies and applications. Medical schools may use MAIRS-MS to bring ‘a physician training perspective that is compatible with AI in medicine’ to their curricula. The scale could also benefit medical and health science education institutions as a valuable curriculum development tool, supporting learner needs assessment and end-of-course measurement of participants’ perceived readiness. © 2021, The Author(s).
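As a small illustration of one step in this kind of scale validation, the sketch below computes Cronbach's alpha for an invented item-response matrix; the EFA/CFA stages and fit indices reported in the abstract are not reproduced here.

```python
# Cronbach's alpha for a Likert-item response matrix (rows = respondents,
# columns = scale items). The data are invented for illustration only.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]                           # number of items
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([
    [4, 5, 4, 3],
    [3, 3, 4, 2],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 3],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```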
35. Artificial Intelligence in Undergraduate Medical Education: A Scoping Review. Lee J., Wu A.S., Li D., Kulasegaram K.M. (2021). Academic medicine: journal of the Association of American Medical Colleges. Review. DOI: 10.1097/ACM.0000000000004291
PURPOSE: Artificial intelligence (AI) is a rapidly growing phenomenon poised to instigate large-scale changes in medicine. However, medical education has not kept pace with the rapid advancements of AI. Despite several calls to action, the adoption of teaching on AI in undergraduate medical education (UME) has been limited. This scoping review aims to identify gaps and key themes in the peer-reviewed literature on AI training in UME. METHOD: The scoping review was informed by Arksey and O'Malley's methodology. Seven electronic databases including MEDLINE and EMBASE were searched for articles discussing the inclusion of AI in UME between January 2000 and July 2020. A total of 4,299 articles were independently screened by 3 co-investigators and 22 full-text articles were included. Data were extracted using a standardized checklist. Themes were identified using iterative thematic analysis. RESULTS: The literature addressed: (1) a need for an AI curriculum in UME, (2) recommendations for AI curricular content including machine learning literacy and AI ethics, (3) suggestions for curriculum delivery, (4) an emphasis on cultivating "uniquely human skills" such as empathy in response to AI-driven changes, and (5) challenges with introducing an AI curriculum in UME. However, there was considerable heterogeneity and poor consensus across studies regarding AI curricular content and delivery. CONCLUSIONS: Despite the large volume of literature, there is little consensus on what and how to teach AI in UME. Further research is needed to address these discrepancies and create a standardized framework of competencies that can facilitate greater adoption and implementation of a standardized AI curriculum in UME. Copyright © 2021 by the Association of American Medical Colleges.
36. An international survey on AI in radiology in 1041 radiologists and radiology residents part 2: expectations. Huisman M., Ranschaert E., Parker W., Mastrodicasa D., Koci M., Pinto de Santos D., Coppola F., Morozov S., Zins M., Bohyn C., Koç U., Wu J., Veean S., Fleischmann D., Leiner T., Willemink M.J. (2021). European Radiology. Article. DOI: 10.1007/s00330-021-07782-4
Objectives: Currently, hurdles to implementation of artificial intelligence (AI) in radiology are a much-debated topic but have not been investigated in the community at large. Also, controversy exists if and to what extent AI should be incorporated into radiology residency programs. Methods: Between April and July 2019, an international survey took place on AI regarding its impact on the profession and training. The survey was accessible for radiologists and residents and distributed through several radiological societies. Relationships of independent variables with opinions, hurdles, and education were assessed using multivariable logistic regression. Results: The survey was completed by 1041 respondents from 54 countries. A majority (n = 855, 82%) expects that AI will cause a change to the radiology field within 10 years. Most frequently, expected roles of AI in clinical practice were second reader (n = 829, 78%) and work-flow optimization (n = 802, 77%). Ethical and legal issues (n = 630, 62%) and lack of knowledge (n = 584, 57%) were mentioned most often as hurdles to implementation. Expert respondents added lack of labelled images and generalizability issues. A majority (n = 819, 79%) indicated that AI should be incorporated in residency programs, while less support for imaging informatics and AI as a subspecialty was found (n = 241, 23%). Conclusions: Broad community demand exists for incorporation of AI into residency programs. Based on the results of the current study, integration of AI education seems advisable for radiology residents, including issues related to data management, ethics, and legislation. Key Points: • There is broad demand from the radiological community to incorporate AI into residency programs, but there is less support to recognize imaging informatics as a radiological subspecialty. • Ethical and legal issues and lack of knowledge are recognized as major bottlenecks for AI implementation by the radiological community, while the shortage in labeled data and IT-infrastructure issues are less often recognized as hurdles. • Integrating AI education in radiology curricula including technical aspects of data management, risk of bias, and ethical and legal issues may aid successful integration of AI into diagnostic radiology. © 2021, The Author(s).
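The abstract notes that relationships of independent variables with opinions were assessed using multivariable logistic regression; the sketch below shows that kind of model on invented survey data, where the predictors, coding, and effect sizes are assumptions for illustration only.

```python
# Minimal multivariable logistic regression sketch on invented survey data
# (predictors, coding, and effect sizes are illustrative assumptions only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1041  # matches the reported number of respondents, data are simulated
years_experience = rng.integers(0, 35, size=n)
is_resident = (years_experience < 5).astype(float)
prior_ai_exposure = rng.binomial(1, 0.3, size=n)

# Binary outcome: supports incorporating AI into residency programmes.
logit = -0.5 + 0.8 * prior_ai_exposure + 0.4 * is_resident
supports_ai_training = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([is_resident, prior_ai_exposure, years_experience]))
model = sm.Logit(supports_ai_training, X).fit(disp=0)
print(np.exp(model.params[1:]))  # odds ratios for the three predictors
```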
37
Blending machinesMouta A., Pinto Llorente A.M., Torrecilla Sánchez E.M.,2021ACM International Conference Proceeding SeriesConference Paper10.1145/3486011.3486545
The latest technological advancements emerging as daily commodities are so far-reaching that our ways of thinking, feeling, acting, and relate to others may be transformed at a very silent and fast pace. Even if we restrict the context to Artificial Intelligence in Education (AIEd), what once were just fictional displays are rapidly becoming a reality. But which society layers and ethical frameworks are being considered in the process of conceiving AIEd scope? Given this context, this research aims at exploring ethical challenges of AIEd in terms of sense of agency development across formal education. It uses a mixed method approach, starting with an expert consultation though the Delphi Method. The results of its iterations are presented in this paper. Further on, its conclusions will enable the implementation of a focus group with teachers that will be the basis for the selection of a syllabus for an eLearning course on AIEd. The final insights of this research are expected to mainly reinforce understanding on AI applied to Education, contributing to the public interest, debate, understanding, and further research on AIEd, through the development of theoretical frameworks to analyze and incorporate its critical dimensions into deliberate pedagogical practices. This paper presents the context and motivation that drives the dissertation research, it follows with a state-of-the-art, the problem statement, the research goals and methods, the results to date, and finally some of its expected contributions. © 2021 ACM.
38
Comment on Starke et al.: 'Computing schizophrenia: Ethical challenges for machine learning in psychiatry': From machine learning to student learning: Pedagogical challenges for psychiatryGauld C., Micoulaud-Franchi J.-A., Dumas G.,2021Psychological MedicineLetter10.1017/S0033291720003906[No abstract available]
39
A model of symbiomemesis: Machine education and communication as pillars for human-autonomy symbiosisAbbass H., Petraki E., Hussein A., McCall F., Elsawah S.,2021Philosophical Transactions of the Royal Society A: MathematicalArticle
Symbiosis is a physiological phenomenon where organisms of different species develop social interdependencies through partnerships. Artificial agents need mechanisms to build their capacity to develop symbiotic relationships. In this paper, we discuss two pillars for these mechanisms: machine education (ME) and bi-directional communication. ME is a new revolution in artificial intelligence (AI) which aims at structuring the learning journey of AI-enabled autonomous systems. In addition to the design of a systematic curriculum, ME embeds the body of knowledge necessary for the social integration of AI, such as ethics, moral values and trust, into the evolutionary design and learning of the AI. ME promises to equip AI with skills to be ready to develop logic-based symbiosis with humans and in a manner that leads to a trustworthy and effective steady-state through the mental interaction between humans and autonomy
40
AI and Ethics: Ethical and Educational Perspectives for LISHuang C., Samek T., Shiri A.,2021Journal of Education for Library and Information ScienceArticle10.3138/jelis-62-4-2020-0106
The growth of artificial intelligence (AI) technologies has affected higher education in a dramatic way, shifting the norms of teaching and learning. With these shifts come major ethical questions relating to surveillance, exacerbated social inequality, and threats to job security. This article overviews some of the discourses that are developing on the integration of AI into the higher education setting, with focus on LIS and librarianship, considers the role of LIS and librarianship in intervening in the trajectory of AI in learning and teaching, and weighs in on the place of professional LIS ethics in relation to confronting AI-led technological transformations. © Journal of Education for Library and Information Science 2021
41
Chatbot for Health Care and Oncology Applications Using Artificial Intelligence and Machine Learning: Systematic ReviewXu L., Sanders L., Li K., Chow J.C.L.,2021JMIR CancerArticle10.2196/27850
Background: Chatbot is a timely topic applied in various fields, including medicine and health care, for human-like knowledge transfer and communication. Machine learning, a subset of artificial intelligence, has been proven particularly applicable in health care, with the ability for complex dialog management and conversational flexibility. Objective: This review article aims to report on the recent advances and current trends in chatbot technology in medicine. A brief historical overview, along with the developmental progress and design characteristics, is first introduced. The focus will be on cancer therapy, with in-depth discussions and examples of diagnosis, treatment, monitoring, patient support, workflow efficiency, and health promotion. In addition, this paper will explore the limitations and areas of concern, highlighting ethical, moral, security, technical, and regulatory standards and evaluation issues to explain the hesitancy in implementation. Methods: A search of the literature published in the past 20 years was conducted using the IEEE Xplore, PubMed, Web of Science, Scopus, and OVID databases. The screening of chatbots was guided by the open-access Botlist directory for health care components and further divided according to the following criteria: diagnosis, treatment, monitoring, support, workflow, and health promotion. Results: Even after addressing these issues and establishing the safety or efficacy of chatbots, human elements in health care will not be replaceable. Therefore, chatbots have the potential to be integrated into clinical practice by working alongside health practitioners to reduce costs, refine workflow efficiencies, and improve patient outcomes. Other applications in pandemic support, global health, and education are yet to be fully explored. Conclusions: Further research and interdisciplinary collaboration could advance this technology to dramatically improve the quality of care for patients, rebalance the workload for clinicians, and revolutionize the practice of medicine. © Lu Xu, Leslie Sanders, Kay Li, James C L Chow.
42
Accelerating the appropriate adoption of artificial intelligence in health care: Protocol for a multistepped approachWiljer D., Salhia M., Dolatabadi E., Dhalla A., Gillan C., Al-Mouaswas D., Jackson E., Waldorf J., Mattson J., Clare M., Lalani N., Charow R., Balakumar S., Younus S., Jeyakumar T., Peteanu W., Tavares W.,2021JMIR Research ProtocolsReview10.2196/30940
Background: Significant investments and advances in health care technologies and practices have created a need for digital and data-literate health care providers. Artificial intelligence (AI) algorithms transform the analysis, diagnosis, and treatment of medical conditions. Complex and massive data sets are informing significant health care decisions and clinical practices. The ability to read, manage, and interpret large data sets to provide data-driven care and to protect patient privacy are increasingly critical skills for today's health care providers. Objective: The aim of this study is to accelerate the appropriate adoption of data-driven and AI-enhanced care by focusing on the mindsets, skillsets, and toolsets of point-of-care health providers and their leaders in the health system. Methods: To accelerate the adoption of AI and the need for organizational change at a national level, our multistepped approach includes creating awareness and capacity building, learning through innovation and adoption, developing appropriate and strategic partnerships, and building effective knowledge exchange initiatives. Education interventions designed to adapt knowledge to the local context and address any challenges to knowledge use include engagement activities to increase awareness, educational curricula for health care providers and leaders, and the development of a coaching and practice-based innovation hub. Framed by the Knowledge-to-Action framework, we are currently in the knowledge creation stage to inform the curricula for each deliverable. An environmental scan and scoping review were conducted to understand the current state of AI education programs as reported in the academic literature. Results: The environmental scan identified 24 AI-accredited programs specific to health providers, of which 11 were from the United States, 6 from Canada, 4 from the United Kingdom, and 3 from Asian countries. The most common curriculum topics across the environmental scan and scoping review included AI fundamentals, applications of AI, applied machine learning in health care, ethics, data science, and challenges to and opportunities for using AI. Conclusions: Technologies are advancing more rapidly than organizations and professionals can adopt and adapt to them. To help shape AI practices, health care providers must have the skills and abilities to initiate change and shape the future of their discipline and practices for advancing high-quality care within the digital ecosystem. © 2021 Fundacion Instituto de Historia Social. All rights reserved.
43
The need for health AI ethics in medical school educationKatznelson G., Gerke S.,2021Advances in Health Sciences EducationArticle10.1007/s10459-021-10040-3
Health Artificial Intelligence (AI) has the potential to improve health care, but at the same time, raises many ethical challenges. Within the field of health AI ethics, the solutions to the questions posed by ethical issues such as informed consent, bias, safety, transparency, patient privacy, and allocation are complex and difficult to navigate. The increasing amount of data, market forces, and changing landscape of health care suggest that medical students may be faced with a workplace in which understanding how to safely and effectively interact with health AIs will be essential. Here we argue that there is a need to teach health AI ethics in medical schools. Real events in health AI already pose ethical challenges to the medical community. We discuss key ethical issues requiring medical school education and suggest that case studies based on recent real-life examples are useful tools to teach the ethical issues raised by health AIs. © 2021, The Author(s), under exclusive licence to Springer Nature B.V. part of Springer Nature.
44
Identifying RolesBarclay I., Abramson W.,2021UbiComp/ISWC 2021 - Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable ComputersConference Paper10.1145/3460418.3479344
Artificial Intelligence (AI) systems are being deployed around the globe in critical fields such as healthcare and education. In some cases, expert practitioners in these domains are being tasked with introducing or using such systems, but have little or no insight into what data these complex systems are based on, or how they are put together. In this paper, we consider an AI system from the domain practitioner's perspective and identify key roles that are involved in system deployment. We consider the differing requirements and responsibilities of each role, and identify tensions between transparency and confidentiality that need to be addressed so that domain practitioners are able to intelligently assess whether a particular AI system is appropriate for use in their domain. © 2021 ACM.
45
A high-level overview of AI ethicsKazim E., Koshiyama A.S.,2021PatternsReview10.1016/j.patter.2021.100314
Artificial intelligence (AI) ethics is a field that has emerged as a response to the growing concern regarding the impact of AI. It can be read as a nascent field and as a subset of the wider field of digital ethics, which addresses concerns raised by the development and deployment of new digital technologies, such as AI, big data analytics, and blockchain technologies. The principle aim of this article is to provide a high-level conceptual discussion of the field by way of introducing basic concepts and sketching approaches and central themes in AI ethics. The first part introduces concepts by noting what is being referred to by “AI” and “ethics”, etc.
46
Assessment of the quality management system for clinical nutrition in jiangsu: Survey studyWang J., Pan C., Ma X.,2021JMIR Formative ResearchArticle10.2196/27285
Background: An electronic system that automatically collects medical information can realize timely monitoring of patient health and improve the effectiveness and accuracy of medical treatment. To our knowledge, the application of artificial intelligence (AI) in medical service quality assessment has been minimally evaluated, especially for clinical nutrition departments in China. From the perspective of medical ethics, patient safety comes before any other factors within health science, and this responsibility belongs to the quality management system (QMS) within medical institutions. Objective: This study aims to evaluate the QMS for clinical nutrition in Jiangsu, monitor its performance in quality assessment and human resource management from a nutrition aspect, and investigate the application and development of AI in medical quality control. Methods: The participants for this study were the staff of 70 clinical nutrition departments of the tertiary hospitals in Jiangsu Province, China. These departments are all members of the Quality Management System of Clinical Nutrition in Jiangsu (QMSNJ). An online survey was conducted on all 341 employees within all clinical nutrition departments based on the staff information from the surveyed medical institutions. The questionnaire contains five sections, and the data analysis and AI evaluation were focused on human resource information. Results: A total of 330 questionnaires were collected, with a response rate of 96.77% (330/341). A QMS for clinical nutrition was built for clinical nutrition departments in Jiangsu and achieved its target of human resource improvements, especially among dietitians. The growing number of participating departments (an increase of 42.8% from 2018 to 2020) and the significant growth of dietitians (t93.4= 0.42
47
AI ethicsNourbakhsh I.R.,2021Communications of the ACMReview10.1145/3478516
Integrating ethics into artificial intelligence (AI) education and development is key to its widespread use and further improvement. In working groups and meetings spanning IEEE, ACM, UN and the World Economic Forum as well as a few governmental advisory committees, more intimate breakout sessions afford an opportunity to observe how we, as robotics and AI researchers, communicate our own relationship to ethics within a field teeming with possibilities of both benefit and harm. The IEEE Global Initiative on the Ethics of Autonomous and Intelligent Systems, led by John Havens, continues to make progress on international standards regarding the ethical application of robotics and AI. These are two of dozens of ongoing international efforts. Curricular experiments have also garnered successful publication, ranging from single-course pilots to whole-curricular interventions across required course sequences.
48
Poor quality dataBellini V., Montomoli J., Bignami E.,2021Intensive Care MedicineLetter10.1007/s00134-021-06473-4[No abstract available]
49
Effectiveness of technology-based interventions in detectionMartínez-Soldevila J., Pastells-Peiró R., Climent-Sanz C., Piñol-Ripoll G., Rocaspana-García M., Gea-Sánchez M.,2021BMJ OpenReview10.1136/bmjopen-2020-045978
Introduction The gradual changes over the decades in the longevity and ageing of European society as a whole can be directly related to the prolonged decline in the birth rate and increase in the life expectancy. According to the WHO, there is an increased risk of dementia or other cognitive disorders as the population ages, which have a major impact on public health. Mild cognitive impairment (MCI) is described as a greater than expected cognitive decline for an individual's age and level of education, but that does not significantly interfere with activities of daily living. Patients with MCI exhibit a higher risk of dementia compared with others in the same age group, but without a cognitive decline, have impaired walking and a 50% greater risk of falling. The urban lifestyle and advent of smartphones, mobility and immediate access to all information via the internet, including health information, has led to a totally disruptive change in most general aspects. This systematic review protocol is aimed at evaluating the effectiveness of technology-based interventions in the detection, prevention, monitoring and treatment of patients at risk or diagnosed with MCI. Methods and analysis This review protocol follows the recommendations of the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols reporting guidelines. The search will be performed on MEDLINE (PubMed), CENTRAL, CINAHL Plus, ISI Web of Science and Scopus databases from 2010 to 2020. Studies of interventions either randomised clinical trials or pre-post non-randomised quasi-experimental designs, published in English and Spanish will be included. Articles that provide relevant information on the use of technology and its effectiveness in interventions that assess improvements in early detection, prevention, follow-up and treatment of the patients at risk or diagnosed with MCI will be included. Ethics and dissemination Ethics committee approval not required. The results will be disseminated in publications and congresses. © 2021 Institute of Electrical and Electronics Engineers Inc.. All rights reserved.
50
The integration development of artificial intelligence and educationLi Y., Li S., Wang L.,2021ICCSE 2021 - IEEE 16th International Conference on Computer Science and EducationConference Paper10.1109/ICCSE51940.2021.9569551
With the rapid progress and development of modern information science and technology, artificial intelligence technology has become more and more extensive in many fields. How to incorporate artificial intelligence into education has become a hot topic of the whole society. In this paper, analysis of artificial intelligence used to extract application potential and value of intelligent correction, real-time monitoring, education fairness and campus safety. But there are also challenges in personality education, safety ethics, teaching efficiency, etc. In order to make artificial intelligence better serve the education industry, it is necessary to increase the infrastructure construction and environment configuration of artificial intelligence equipment. And then improving the education practitioners' awareness and correct cognition of the relationship between intelligent machine safety ethics and artificial intelligence. © 2021 IEEE.
51
Digital Health during COVID-19: Informatics Dialogue with the World Health OrganizationKoch S., Hersh W.R., Bellazzi R., Leong T.Y., Yedaly M., Al-Shorbaji N.,2021Yearbook of medical informaticsConference Paper10.1055/s-0041-1726480
BACKGROUND: On December 16, 2020 representatives of the International Medical Informatics Association (IMIA), a Non-Governmental Organization in official relations with the World Health Organization (WHO), along with its International Academy for Health Sciences Informatics (IAHSI), held an open dialogue with WHO Director General (WHO DG) Tedros Adhanom Ghebreyesus about the opportunities and challenges of digital health during the COVID-19 global pandemic. OBJECTIVES: The aim of this paper is to report the outcomes of the dialogue and discussions with more than 200 participants representing different civil society organizations (CSOs). METHODS: The dialogue was held in form of a webinar. After an initial address of the WHO DG, short presentations by the panelists, and live discussions between panelists, the WHO DG and WHO representatives took place. The audience was able to post questions in written. These written discussions were saved with participants' consent and summarized in this paper. RESULTS: The main themes that were brought up by the audience for discussion were: (a) opportunities and challenges in general
52
An AI Ethics Course Highlighting Explicit Ethical AgentsGreen N.,2021AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AIConference Paper
This is an experience report describing a pilot AI Ethics course for undergraduate computer science majors. In addition to teaching students about different ethical approaches and using them to analyze ethical issues, the course covered how ethics has been incorporated into the implementation of explicit ethical agents, and required students to implement an explicit ethical agent for a simple application. This report describes the course objectives and design, the topics covered, and a qualitative evaluation with suggestions for future offerings of the courses. © 2021 ACM.
53
54
Disparate Impact of Artificial Intelligence Bias in Ridehailing Economy's Price Discrimination AlgorithmsPandey A., Caliskan A.,2021AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AIConference Paper
Ridehailing applications that collect mobility data from individuals to inform smart city planning predict each trip's fare pricing with automated algorithms that rely on artificial intelligence (AI). This type of AI algorithm, namely a price discrimination algorithm, is widely used in the industry's black box systems for dynamic individualized pricing. Lacking transparency, studying such AI systems for fairness and disparate impact has not been possible without access to data used in generating the outcomes of price discrimination algorithms. Recently, in an effort to enhance transparency in city planning, the city of Chicago regulation mandated that transportation providers publish anonymized data on ridehailing. As a result, we present the first large-scale measurement of the disparate impact of price discrimination algorithms used by ridehailing applications. The application of random effects models from the meta-analysis literature combines the city-level effects of AI bias on fare pricing from census tract attributes, aggregated from the American Community Survey. An analysis of 100 million ridehailing samples from the city of Chicago indicates a significant disparate impact in fare pricing of neighborhoods due to AI bias learned from ridehailing utilization patterns associated with demographic attributes. Neighborhoods with larger non-white populations, higher poverty levels, younger residents, and high education levels are significantly associated with higher fare prices, with combined effect sizes, measured in Cohen's d, of -0.32, -0.28, 0.69, and 0.24 for each demographic, respectively. Further, our methods hold promise for identifying and addressing the sources of disparate impact in AI algorithms learning from datasets that contain U.S. geolocations. © 2021 Owner/Author.
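As a worked illustration of the effect-size measure the abstract reports (Cohen's d with a pooled standard deviation), here is a minimal sketch; the fare values and group labels below are invented and do not reproduce the paper's Chicago data.

```python
# Pooled-standard-deviation Cohen's d between two hypothetical fare distributions.
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Standardized mean difference (a - b) using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * group_a.var(ddof=1) + (nb - 1) * group_b.var(ddof=1)) / (na + nb - 2)
    return (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
fares_group_a = rng.normal(14.0, 4.0, 5000)  # hypothetical per-trip fares, group A tracts
fares_group_b = rng.normal(12.5, 4.0, 5000)  # hypothetical per-trip fares, group B tracts
print(f"Cohen's d = {cohens_d(fares_group_a, fares_group_b):.2f}")
```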
55
Digital Scientist 2035—An Outlook on Innovation and EducationBarbazzeni B., Friebe M.,2021Frontiers in Computer ScienceArticle10.3389/fcomp.2021.710972
With the advent of the fourth industrial revolution accompanied by the Internet of Things, the implementation of smart technologies and digitalization already had a great impact in our society, especially when considering exponential innovation and human development. In this context, some types of employment have already been replaced or have been enhanced by the use of robots, human-machines interfaces and Artificial Intelligence systems. And there is likely more to come. If innovation can be viewed as a direct or indirect outcome of scientific research, which role will a scientist play in 2035? We developed a survey to investigate the opinions of scientists with respect to the possible future implementation of disruptive technologies, their feelings and approaches to digitalization, and particularly the impact of digital transformation on scientific education. In a futuristic scenario, we can imagine that scientists will be supported by technologies, carrying out numerous experiments, managing big datasets, producing accurate results, increasing communication, openness and collaboration among the worldwide scientific community, where ethics, regulations and social norms will always be observed. The new era of Digital Science is coming, in which humans will start to incorporate more disruptive and advanced technologies into their daily life
56
The routledge social science handbook of AIElliott A.,2021The Routledge Social Science Handbook of AIBook10.4324/9780429198533
The Routledge Social Science Handbook of AI is a landmark volume providing students and teachers with a comprehensive and accessible guide to the major topics and trends of research in the social sciences of artificial intelligence (AI), as well as surveying how the digital revolution - from supercomputers and social media to advanced automation and robotics - is transforming society, culture, politics and economy. The Handbook provides representative coverage of the full range of social science engagements with the AI revolution, from employment and jobs to education and new digital skills to automated technologies of military warfare and the future of ethics. The reference work is introduced by editor Anthony Elliott, who addresses the question of relationship of social sciences to artificial intelligence, and who surveys various convergences and divergences between contemporary social theory and the digital revolution. The Handbook is exceptionally wide-ranging in span, covering topics all the way from AI technologies in everyday life to single-purpose robots throughout home and work life, and from the mainstreaming of human-machine interfaces to the latest advances in AI, such as the ability to mimic (and improve on) many aspects of human brain function. A unique integration of social science on the one hand and new technologies of artificial intelligence on the other, this Handbook offers readers new ways of understanding the rise of AI and its associated global transformations. Written in a clear and direct style, the Handbook will appeal to a wide undergraduate audience. © 2022 selection and editorial matter, Anthony Elliott
57
Transhumanism: From Julian Huxley to UNESCO what objective for international action?Byk C.,2021JahrReview10.21860/J.12.1.8
Julian Huxley, founder and the first Director-General of UNESCO, is at the heart of contemporary debates on the nature and objectives of the concept of transhumanism, which he first used in the early 1950s. Therefore, the analysis of his idea of transhumanism - a tool to improve the quality of life and the condition of man - should lead us to question his heritage in terms of philosophy that inspires UNESCO's action as it seeks to build a comprehensive approach to artificial intelligence that takes into account, among other things, the values and principles of universal ethics and aims to derive the best from the use of this technology. This title where the British biologist, the elder brother of the famous science fiction writer, Aldous Huxley, author of the Brave New World1, coexists with the United Nations Organization in charge of Education of Science and Culture is obvious for those who know the history of this international organization or who like radio games: Julian Huxley was appointed as the first Director-General of UNESCO in 1946. But, beyond this evidence, there is a deeper link that highlights the history of the renewal of the idea of transhumanism (I) and questions about the role that UNESCO has, among the other international organizations (II). © 2021 University of Rijeka, Faculty of Medicine. All rights reserved.
58
Misattribution of Error Origination: The Impact of Preconceived Expectations in Co-Operative Online GamesMilanovic K., Pitt J.,2021DIS 2021 - Proceedings of the 2021 ACM Designing Interactive Systems Conference: Nowhere and EverywhereConference Paper10.1145/3461778.3462043
As artificial intelligence and smart devices increasingly infiltrate everyday life, cooperative interactions between humans and computers are correspondingly becoming more common. Errors are an inevitability in these interactions and can destabilise long-term working relationships. In this work, three online games of increasing difficulty (N=2037) were designed where participants played a cooperative game with an artificial AI player and encountered an unexpected error. Different training methods were used to establish a rapport between the human and AI players. Overall, despite answering correctly, participants were increasingly more likely to say they had made a mistake and that they were to blame as the difficulty of the game increased. Since participants were also unaware of the extent of their exposure to AI, this study shows that there is a tendency to apply preconceived expectations of AI and misattribute error origination which, if not addressed, could lead to critical breakdowns of trust. © 2021 ACM.
59
New technologies in educationLarchenko V., Barynikova O.,2021E3S Web of ConferencesConference Paper10.1051/e3sconf/202127312145
This article contains a study on the topic "Digitalization and education: the current state and prospects." It analyzes such aspects of digital education as digital literacy, mobile digital education, the ethics of artificial intelligence in education, VR and AR technologies in digital education, and robotics training in educational institutions. Electronic textbooks and digital education software are currently very useful in the development of foreign language learning. The purpose of this article is to present the results of the study on the introduction of digitalization into the educational process. Most attention is paid to its growing role during the pandemic, when the processes of learning and teaching had to change in how new material is delivered and found. As it turned out, students obtained better knowledge thanks to the new digital approaches in the educational process. As for teachers, they became more effective in monitoring student knowledge while allowing students more independence. Conclusions have been drawn on the effectiveness of each aspect and of digital education in general. Statistics on access to digital technologies, frequency of software use, and percentage of digital technology ownership have also been identified. © The Authors, published by EDP Sciences, 2021.
60
AI in My Life: AIBendechache M., Tal I., Wall P., Grehan L., Clarke E., Odriscoll A., Der Haegen L.V., Leong B., Kearns A., Brennan R.,2021ACM International Conference Proceeding SeriesConference Paper10.1145/3462741.3466664
The 'AI in My Life' project will engage 500 Dublin teenagers from disadvantaged backgrounds in a 15-week (20-hour) co-created, interactive workshop series encouraging them to reflect on their experiences in a world shaped by Artificial Intelligence (AI), personal data processing and digital transformation. Students will be empowered to evaluate the ethical and privacy implications of AI in their lives, to protect their digital privacy and to activate STEM careers and university awareness. It extends the 'DCU TY' programme for innovative educational opportunities for Transition Year students from underrepresented communities in higher education. Privacy and cybersecurity researchers and public engagement professionals from the SFI Centres ADAPT and Lero will join experts from the Future of Privacy Forum and the INTEGRITY H2020 project to deliver the programme to the DCU Access 22-school network. DCU Access has a mission of creating equality of access to third-level education for students from groups currently underrepresented in higher education. Each partner brings proven training activities in AI, ethics and privacy. A novel blending of material into a youth-driven narrative will be the subject of initial co-creation workshops and supported by pilot material delivery by undergraduate DCU Student Ambassadors. Train-The-Trainer workshops and a toolkit for teachers will enable delivery. The material will use a blended approach (in person and online) for delivery during COVID-19. It will also enable wider use of the material developed. An external study of programme effectiveness will report on participants': enhanced understanding of AI and its impact, improved data literacy skills in terms of their understanding of data privacy and security, empowerment to protect privacy, growth in confidence in participating in public discourse about STEM, increased propensity to consider STEM subjects at all levels, and greater capacity of teachers to facilitate STEM interventions. This paper introduces the project, presents more details about the co-creation workshops, which are a particular step in the proposed methodology, and reports some preliminary results. © 2021 Owner/Author.
61
The responsibility of social media in times of societal and political manipulationReisach U.,2021European Journal of Operational ResearchArticle10.1016/j.ejor.2020.09.020
The way electorates were influenced to vote for the Brexit referendum, and in presidential elections both in Brazil and the USA, has accelerated a debate about whether and how machine learning techniques can influence citizens’ decisions. The access to balanced information is endangered if digital political manipulation can influence voters. The techniques of profiling and targeting on social media platforms can be used for advertising as well as for propaganda: Through tracking of a person's online behaviour, algorithms of social media platforms can create profiles of users. These can be used for the provision of recommendations or pieces of information to specific target groups. As a result, propaganda and disinformation can influence the opinions and (election) decisions of voters much more powerfully than previously. In order to counter disinformation and societal polarization, the paper proposes a responsibility-based approach for social media platforms in diverse political contexts. Based on the implementation requirements of the “Ethics Guidelines for Trustworthy Artificial Intelligence” of the European Commission, the ethical principles will be operationalized, as far as they are directly relevant for the safeguarding of democratic societies. The resulting suggestions show how the social media platform providers can minimize risks for societies through responsible action in the fields of human rights, education and transparency of algorithmic decisions. © 2020 The Author
62
Status quo and future prospects of artificial neural network from the perspective of gastroenterologistsCao B., Zhang K.-C., Wei B., Chen L.,2021World Journal of GastroenterologyReview10.3748/wjg.v27.i21.2681
Artificial neural networks (ANNs) are one of the primary types of artificial intelligence and have been rapidly developed and used in many fields. In recent years, there has been a sharp increase in research concerning ANNs in gastrointestinal (GI) diseases. This state-of-the-art technique exhibits excellent performance in diagnosis, prognostic prediction, and treatment. Competitions between ANNs and GI experts suggest that efficiency and accuracy might be compatible in virtue of technique advancements. However, the shortcomings of ANNs are not negligible and may induce alterations in many aspects of medical practice. In this review, we introduce basic knowledge about ANNs and summarize the current achievements of ANNs in GI diseases from the perspective of gastroenterologists. Existing limitations and future directions are also proposed to optimize ANN’s clinical potential. In consideration of barriers to interdisciplinary knowledge, sophisticated concepts are discussed using plain words and metaphors to make this review more easily understood by medical practitioners and the general public. © The Author(s) 2021. Published by Baishideng Publishing Group Inc. All rights reserved.
63
Ideological and Political Reform of International Business Major in the Era of Artificial Intelligence: Take the course Introduction to International Business as an exampleYan C.,2021Proceedings - 2021 2nd International Conference on Artificial Intelligence and EducationConference Paper10.1109/ICAIE53562.2021.00112
With the help of professional course teaching to carry out ideological and political education has become a hot topic in the new era. In the era of artificial intelligence, the application of new media and new technology can make the traditional advantages of Ideological and political education highly integrated with information technology, enhance the sense of the times and attraction of Ideological and political education, innovate teaching mode, promote the seamless docking and deep integration of artificial intelligence technology and ideological and political education, and promote ideological and political education into a new era of intelligence. This paper takes 'Introduction to international business' as an example, discusses the teaching reform of Ideological and political education in the era of artificial intelligence, and discusses the elements of Ideological and political education. By improving the quality and ability of teachers' ethics, adopting intelligent education methods and establishing diversified evaluation system of artificial intelligence, the course of introduction to international business is deeply integrated with ideological and political education. © 2021 IEEE.
64
Academic Mindtrek 2021 - Proceedings of the 24th International Academic Mindtrek Conference[No author name available],2021ACM International Conference Proceeding SeriesConference Review
The proceedings contain 24 papers. The topics discussed include: researchers’ toolbox for the future: understanding and designing accessible and inclusive artificial intelligence (AIAI)
65
Overcoming barriers to implementation of artificial intelligence in gastroenterologySutton R.A., Sharma P.,2021Best Practice and Research: Clinical GastroenterologyReview10.1016/j.bpg.2021.101732
Artificial intelligence is poised to revolutionize the field of medicine, however significant questions must be answered prior to its implementation on a regular basis. Many artificial intelligence algorithms remain limited by isolated datasets which may cause selection bias and truncated learning for the program. While a central database may solve this issue, several barriers such as security, patient consent, and management structure prevent this from being implemented. An additional barrier to daily use is device approval by the Food and Drug Administration. In order for this to occur, clinical studies must address new endpoints, including and beyond the traditional bio- and medical statistics. These must showcase artificial intelligence's benefit and answer key questions, including challenges posed in the field of medical ethics. © 2021 Elsevier Ltd
66
Artificially Intelligent Technology for the Margins: A Multidisciplinary Design AgendaTachtler F., Aal K., Ertl T., Diethei D., Niess J., Khwaja M., Talhouk R., Vilaza G.N., Lazem S., Singh A., Barry M., Wulf V., Fitzpatrick G.,2021Conference on Human Factors in Computing Systems - ProceedingsConference Paper10.1145/3411763.3441333
There has been increasing interest in socially just use of Artificial Intelligence (AI) and Machine Learning (ML) in the development of technology that may be extended to marginalized people. However, the exploration of such technologies entails the development of an understanding of how they may increase and/or counter marginalization. The use of AI/ML algorithms can lead to several challenges, such as privacy and security concerns, biases, unfairness, and lack of cultural awareness, which especially affect marginalized people. This workshop will provide a forum to share experiences and challenges of developing AI/ML health and social wellbeing technologies with/for marginalized people and will work towards developing design methods to engage in the re-envisioning of AI/ML technologies for and with marginalized people. In doing so we will create cross-research area dialogues and collaborations. These discussions build a basis to (1) explore potential tools to support designing AI/ML systems with marginalized people, and (2) develop a design agenda for future research and AI/ML technology for and with marginalized people. © 2021 Owner/Author.
67
Information and communication technology use in suicide prevention: Scoping reviewRassy J., Bardon C., Dargis L., Côté L.-P., Corthésy-Blondin L., Mörch C.-M., Labelle R.,2021Journal of Medical Internet ResearchReview10.2196/25288
Background: The use of information and communication technology (ICT) in suicide prevention has progressed rapidly over the past decade. ICT plays a major role in suicide prevention, but research on best and promising practices has been slow. Objective: This paper aims to explore the existing literature on ICT use in suicide prevention to answer the following question: what are the best and most promising ICT practices for suicide prevention? Methods: A scoping search was conducted using the following databases: PubMed, PsycINFO, Sociological Abstracts, and IEEE Xplore. These databases were searched for articles published between January 1, 2013, and December 31, 2018. The five stages of the scoping review process were as follows: identifying research questions
68
Challenging issues in rheumatology: thoughts and perspectivesLim N., Wise L., Panush R.S.,2021Clinical RheumatologyEditorial10.1007/s10067-021-05709-4[No abstract available]
69
Machine Learning in Clinical Psychology and Psychotherapy Education: A Mixed Methods Pilot Survey of Postgraduate Students at a Swiss UniversityBlease C., Kharko A., Annoni M., Gaab J., Locher C.,2021Frontiers in Public HealthArticle10.3389/fpubh.2021.623088
Background: There is increasing use of psychotherapy apps in mental health care. Objective: This mixed methods pilot study aimed to explore postgraduate clinical psychology students' familiarity and formal exposure to topics related to artificial intelligence and machine learning (AI/ML) during their studies. Methods: In April-June 2020, we conducted a mixed-methods online survey using a convenience sample of 120 clinical psychology students enrolled in a two-year Masters' program at a Swiss University. Results: In total 37 students responded (response rate: 37/120, 31%). Among respondents, 73% (n = 27) intended to enter a mental health profession, and 97% reported that they had heard of the term “machine learning.” Students estimated 0.52% of their program would be spent on AI/ML education. Around half (46%) reported that they intended to learn about AI/ML as it pertained to mental health care. On a 5-point Likert scale, students “moderately agreed” (median = 4) that AI/ML should be part of clinical psychology/psychotherapy education. Qualitative analysis of students' comments resulted in four major themes on the impact of AI/ML on mental healthcare: (1) Changes in the quality and understanding of psychotherapy care
70
The Human in the Middle: Artificial Intelligence in Health Care Summary Proceedings Symposium Presentation and Reactor Panel of Experts Thomas Jefferson University December 10Clarke J., Skoufalos A., Klasko S.K.,2021Population Health ManagementConference Paper10.1089/pop.2020.0030[No abstract available]
71
Learning ethics in AI-teaching non-engineering undergraduates through situated learningShih P.-K., Lin C.-H., Wu L.Y., Yu C.-C.,2021Sustainability (Switzerland)Article10.3390/su13073718
Learning about artificial intelligence (AI) has become one of the most discussed topics in the field of education. However, it has become an equally important learning approach in contemporary education to propose a “general education” agenda that conveys instructional messages about AI basics and ethics, especially for those students without an engineering background. The current study proposes a situated learning design for education on this topic. Through a three-week lesson session and accompanying learning activities, the participants undertook hands-on tasks relating to AI. They were also afforded the opportunity to learn about the current attributes of AI and how these may apply to understanding AI-related ethical issues or problems in daily life. A preand post-test design was used to compare the learning effects with respect to different aspects of AI (e.g., AI understanding, cross-domain teamwork, AI attitudes, and AI ethics) among the participants. The study found a positive correlation among all the factors, as well as a strong link between AI understanding and attitudes on the one hand and AI ethics on the other. The implications of these findings are discussed, and suggestions are made for possible future revisions to current instructional design and for future research. © 2021 by the authors.
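For readers unfamiliar with the pre- and post-test comparison this abstract describes, the sketch below shows one conventional way such data are analyzed: a paired t-test on before/after scores plus a Pearson correlation between two post-test constructs. All scores and construct names are hypothetical and not the study's data.

```python
# Hedged sketch: paired pre/post comparison and a correlation between constructs,
# using synthetic 5-point-scale scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pre_understanding = rng.normal(3.0, 0.6, 60)                    # hypothetical pre-test scores
post_understanding = pre_understanding + rng.normal(0.5, 0.4, 60)  # hypothetical post-test gain
post_ethics_awareness = 0.6 * post_understanding + rng.normal(1.2, 0.3, 60)

t_stat, p_value = stats.ttest_rel(post_understanding, pre_understanding)
r, r_p = stats.pearsonr(post_understanding, post_ethics_awareness)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"understanding vs. ethics awareness: r = {r:.2f}, p = {r_p:.4f}")
```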
72
Optimal Conflict in Team-Based Laboratory CultureSen C.K.,2021Antioxidants and Redox SignalingReview10.1089/ars.2020.8225
One critical determinant of success that is not part of standardized scientific training programs is the development of the right mindset for competitive team science. Mindset has been categorized as fixed and growth. People with fixed mindset who believe that virtues such as goodness and intelligence are naturally endowed and thus fixed are reportedly less likely to succeed than people with growth mindset who believe that such abilities are malleable and scalable. People with growth mindset handle conflicts more effectively. As it stands in academic culture, mostly dominated by the education mission, conflict is a taboo. Administrators generally view conflict as something that must be avoided or resolved. Yet the American Psychological Association, among many others, recognize that good science requires good conflict. Team science efforts must recognize the perils of artificial harmony. Artificial harmony is a state wherein members of the team act as if they are getting along in a setting where serious issues remain unattended. Artificial harmony stifles open communication. Open communication within the team is essential to uphold rigor in science. The threat of conflict triggers the flight or fight response in us. Flight, motivated by conflict avoidance, favors artificial harmony. Fight, in its optimal form, empowers teammates to express their opinion leading to healthy disagreement and debate. Teams must find their own optimal conflict point. Mastering that art of identifying and achieving the optimal conflict point for any given team will return lucrative dividends in the form of competitive edge. © Copyright 2021, Mary Ann Liebert, Inc., publishers 2021.
73
Trust and medical AI: the challenges we face and the expertise needed to overcome themQuinn T.P., Senadeera M., Jacobs S., Coghlan S., Le V.,2021Journal of the American Medical Informatics Association : JAMIAArticle10.1093/jamia/ocaa268
Artificial intelligence (AI) is increasingly of tremendous interest in the medical field. How-ever, failures of medical AI could have serious consequences for both clinical outcomes and the patient experience. These consequences could erode public trust in AI, which could in turn undermine trust in our healthcare institutions. This article makes 2 contributions. First, it describes the major conceptual, technical, and humanistic challenges in medical AI. Second, it proposes a solution that hinges on the education and accreditation of new expert groups who specialize in the development, verification, and operation of medical AI technologies. These groups will be required to maintain trust in our healthcare institutions. © The Author(s) 2020. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For permissions, please email: journals.permissions@oup.com.
74
Developing Middle School Students' AI LiteracyLee I., Ali S., Zhang H., Dipaola D., Breazeal C.,2021SIGCSE 2021 - Proceedings of the 52nd ACM Technical Symposium on Computer Science EducationConference Paper10.1145/3408877.3432513
In this experience report, we describe an AI summer workshop designed to prepare middle school students to become informed citizens and critical consumers of AI technology and to develop their foundational knowledge and skills to support future endeavors as AI-empowered workers. The workshop featured the 30-hour "Developing AI Literacy"or DAILy curriculum that is grounded in literature on child development, ethics education, and career development. The participants in the workshop were students between the ages of 10 and 14
75
you can't sit with us: Exclusionary pedagogy in AI ethics educationRaji I.D., Scheuerman M.K., Amironesei R.,2021FAccT 2021 - Proceedings of the 2021 ACM Conference on FairnessConference Paper
Given a growing concern about the lack of ethical consideration in the Artificial Intelligence (AI) field, many have begun to question how dominant approaches to the disciplinary education of computer science (CS) - -and its implications for AI - -has led to the current "ethics crisis". However, we claim that the current AI ethics education space relies on a form of "exclusionary pedagogy,"where ethics is distilled for computational approaches, but there is no deeper epistemological engagement with other ways of knowing that would benefit ethical thinking or an acknowledgement of the limitations of uni-vocal computational thinking. This results in indifference, devaluation, and a lack of mutual support between CS and humanistic social science (HSS), elevating the myth of technologists as "ethical unicorns"that can do it all, though their disciplinary tools are ultimately limited. Through an analysis of computer science education literature and a review of college-level course syllabi in AI ethics, we discuss the limitations of the epistemological assumptions and hierarchies of knowledge which dictate current attempts at including ethics education in CS training and explore evidence for the practical mechanisms through which this exclusion occurs. We then propose a shift towards a substantively collaborative, holistic, and ethically generative pedagogy in AI education. © 2021 ACM.
76
Computer science communities: Who is speakingCheong M., Leins K., Coghlan S.,2021FAccT 2021 - Proceedings of the 2021 ACM Conference on FairnessConference Paper
Those working on policy, digital ethics and governance often refer to issues in 'computer science', that includes, but is not limited to, common subfields such as Artificial Intelligence (AI), Computer Science (CS) Computer Security (InfoSec), Computer Vision (CV), Human Computer Interaction (HCI), Information Systems, (IS), Machine Learning (ML), Natural Language Processing (NLP) and Systems Architecture. Within this framework, this paper is a preliminary exploration of two hypotheses, namely 1) Each community has differing inclusion of minoritised groups (using women as our test case, by identifying female-sounding names)
77
Toward a More Equal World: The Human Rights Approach to Extending the Benefits of Artificial IntelligenceGibbons E.D.,2021IEEE Technology and Society MagazineArticle10.1109/MTS.2021.3056295
We are all aware of the huge potential for artificial intelligence (AI) to bring massive benefits to under-served populations, advancing equal access to public services such as health, education, social assistance, or public transportation, for example. We are equally aware that AI can drive inequality, concentrating wealth, resources, and decision-making power in the hands of a few countries, companies, or citizens. Artificial intelligence for equity (AI4Eq) [1] as presented in this magazine, calls upon academics, AI developers, civil society, and government policy-makers to work collaboratively toward a technological transformation that increases the benefits to society, reduces inequality, and aims to leave no one behind. A call for equity rests on the human rights principle of equality and nondiscrimination. AI design, development, and deployment (AI-DDD) can and should be harnessed to reduce inequality and increase the share of the world's population that is able to live in dignity and fully realize their human potential. This commentary argues, first, that far preferable to an ethics framework, adopting a human rights framework for AI-DDD offers the potential for a robust and enforceable set of guidelines for the pursuit of AI4Eq. Second, the commentary introduces the work of IEEE in proposing practical recommendations for AI4Eq, so that people living in high-income countries (HICs), low- and middle-income countries (LMICs), alike, share AI applications' widespread benefit to humanity. © 1982-2012 IEEE.
78
Big Data and Language Learning: Opportunities and ChallengesGodwin-Jones R.,2021Language Learning and TechnologyArticle
Data collection and analysis is nothing new in computer-assisted language learning, but with the phenomenon of massive sets of human language collected into corpora, and especially integrated into systems driven by artificial intelligence, new opportunities have arisen for language teaching and learning. We are now seeing powerful artificial neural networks with impressive language capabilities. In education, data provides means to track learner performance and improve learning, especially through the application of data mining to expose hidden patterns of learner behavior. Massive data collection also raises issues of transparency and fairness. Human monitoring is essential in applying data analysis equitably. Big data may have as powerful an impact in language learning as it is having in society generally
79
Lessons learned in medical education research: seeing opportunity amidst the challengesBuléon C., Minehart R.D.,2021International Journal of Obstetric AnesthesiaEditorial10.1016/j.ijoa.2020.11.008[No abstract available]
80
The Role of Analyst Engineer in Algorithm Life and Social CycleGosudarkin Y.S., Krinkin K.V., Takmakov M.V., Sharakhina L.V.,2021Proceedings of the 2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic EngineeringConference Paper10.1109/ElConRus51938.2021.9396293
The problems and perspectives of an ethics-based approach to the development of AI algorithms are presented in the article. The issue of ethical limitations and responsibilities of AI software developers during the life cycle of algorithms is examined. Analyses of corporate AI ethics guidelines and precedents in their application to analyst engineers' professional activities serve as empirical examples. The authors reveal the potential impact of algorithms on different spheres of human life, such as education, medicine, and enterprises, in connection with the rising control power of employers over employees. Potential algorithm failures and engineers' illegal labour practices are also considered. Such examples and possibilities of ethical violations need to be accessibly documented and introduced to AI development departments. The terms of potential engineer and employer interactions are to be set, with the option of mutual assessments and tools for algorithm maintenance checkups. © 2021 IEEE.
81
The role and challenges of education for responsible aiDignum V.,2021London Review of EducationArticle10.14324/LRE.19.1.01
Artificial intelligence (AI) is impacting education in many different ways. From virtual assistants for personalized education, to student or teacher tracking systems, the potential benefits of AI for education often come with a discussion of its impact on privacy and well-being. At the same time, the social transformation brought about by AI requires reform of traditional education systems. This article discusses what a responsible, trustworthy vision for AI is and how this relates to and affects education. © 2021 Dignum.
82
Artificial intelligence and reflections from educational landscape: A review of AI studies in half a centuryBozkurt A., Karadeniz A., Baneres D., Guerrero-Roldán A.E., Rodríguez M.E.,2021Sustainability (Switzerland)Article10.3390/su13020800
Artificial intelligence (AI) has penetrated every layer of our lives, and education is not immune to the effects of AI. In this regard, this study examines AI studies in education in half a century (1970-2020) through a systematic review approach and benefits from social network analysis and text-mining approaches. Accordingly, the research identifies three research clusters (1) artificial intelligence, (2) pedagogical, and (3) technological issues, and suggests five broad research themes which are (1) adaptive learning and personalization of education through AI-based practices, (2) deep learning and machine Learning algorithms for online learning processes, (3) Educational human-AI interaction, (4) educational use of AI-generated data, and (5) AI in higher education. The study also highlights that ethics in AI studies is an ignored research area. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
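The review's combination of social network analysis and text mining can be illustrated with a small keyword co-occurrence sketch; the records, keywords, and edge weights below are placeholders rather than data from the reviewed corpus.

```python
# Rough sketch of keyword co-occurrence network analysis over bibliographic records.
from itertools import combinations
import networkx as nx

records = [
    {"keywords": ["artificial intelligence", "adaptive learning", "higher education"]},
    {"keywords": ["machine learning", "online learning", "artificial intelligence"]},
    {"keywords": ["artificial intelligence", "ethics", "higher education"]},
]

G = nx.Graph()
for record in records:
    for a, b in combinations(sorted(set(record["keywords"])), 2):
        # Accumulate an edge weight for every pair of keywords co-occurring in a record
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

centrality = nx.degree_centrality(G)
for keyword, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{keyword}: {score:.2f}")
```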
83
Conceptualizing AI literacy: An exploratory reviewNg D.T.K., Leung J.K.L., Chu S.K.W., Qiao M.S.,2021Computers and Education: Artificial IntelligenceArticle10.1016/j.caeai.2021.100041
Artificial Intelligence (AI) has spread across industries (e.g., business, science, art, education) to enhance user experience, improve work efficiency, and create many future job opportunities. However, public understanding of AI technologies and how to define AI literacy is under-explored. This poses upcoming challenges for the next generation in learning about AI. On this note, an exploratory review was conducted to conceptualize the newly emerging concept “AI literacy”, in search of a sound theoretical foundation to define, teach and evaluate AI literacy. Grounded in the literature of 30 existing peer-reviewed articles, this review proposed four aspects (i.e., know and understand, use and apply, evaluate and create, and ethical issues) for fostering AI literacy based on the adaptation of classic literacies. This study sheds light on the consolidated definition, teaching, and ethical concerns of AI literacy, establishing the groundwork for future research such as competency development and assessment criteria on AI literacy. © 2021 The Authors
84
Teaching AI Ethics to Engineering Students: Reflections on Syllabus Design and Teaching MethodsTuovinen L., Rohunen A.,2021CEUR Workshop ProceedingsConference Paper
The importance of ethics in artificial intelligence is increasing, and this must be reflected in the contents of computer engineering curricula, since the researchers and engineers who develop artificial intelligence technologies and applications play a key part in anticipating and mitigating their harmful effects. However, there are still many open questions concerning what should be taught and how. In this paper we suggest an approach to building a syllabus for a course in ethics of artificial intelligence, make some observations concerning effective teaching methods and discuss some particular challenges that we have encountered. These are based on the pilot implementation of a new course that aimed to give engineering students a comprehensive overview of the ethical and legislative aspects of artificial intelligence, covering both knowledge of issues that the students should be aware of and skills that they will need in order to competently deal with those issues in their work. The course was well received by the students, but also criticized for its high workload. Substantial difficulties were experienced in trying to inspire the students to engage in discussions and debates among themselves, which may limit the effectiveness of the course in building the students' ethical argumentation skills unless a satisfactory solution is found. Several promising ideas for future development of our teaching practices can be found in the literature. © 2021 CEUR-WS. All rights reserved.
85
Working-life ethical issues faced by engineersVirta U.-T., Järvinen H.-M.,2021Proceedings - SEFI 49th Annual Conference: Blended Learning in Engineering Education: Challenging, Enlightening - and Lasting?Conference Paper
In recent years, there have been public discussions about novel ethical issues emerging from new engineering fields, such as the usage of artificial intelligence. While those are important issues to discuss, they do not necessarily reflect the ethical issues engineers face in their work. In this paper, we discuss problems that engineers of different disciplines face in their professional life, based on a survey sent to members of the Association of Academic Engineers and Architects in Finland. From the 433 respondents, we received over 130 descriptions of ethical issues encountered within their professional lives. We divided the issues reported in the survey into two main categories: ethical issues about general work life and those concerning more engineering-specific situations. The focus of this paper is on the engineering-specific ethical issues and the reactions they encounter. We discuss who noticed the problems and how the workplaces reacted to the issues. In addition, we address whether companies have policies in place to handle ethical issues. Furthermore, we discuss the types of support the engineers hoped to receive from different stakeholders. On a larger scale, the goal is also to gather knowledge on how to improve engineering education to meet the needs of future engineers on ethical issues. © 2021 Proceedings - SEFI 49th Annual Conference: Blended Learning in Engineering Education: Challenging, Enlightening - and Lasting?. All Rights Reserved.
86
Impact of Artificial Intelligence on Engineering: PastBlake R.W., Mathew R., George A., Papakostas N.,2021Procedia CIRPConference Paper10.1016/j.procir.2021.11.291
Recent advancements in cloud computing and software technology have resulted in the development of powerful Artificial Intelligence (AI) tools for engineering applications. However, the impact of AI on future engineering jobs remains ambiguous. This paper discusses recent AI developments, AI applications, the influence of AI on the engineering profession, and the productivity of engineers. In addition, the ethical and professional impacts to be considered with the introduction of AI are addressed. The results of a survey conducted among people from engineering colleges across Ireland are also presented. © 2021 The Author(s).
87
A decolonial approach to AI in higher education teaching and learning: strategies for undoing the ethics of digital neocolonialismZembylas M.,2021LearningArticle
The aim of this article is to use decolonial thinking, as applied in the field of AI, to explore the ethical and pedagogical implications for higher education teaching and learning. The questions driving this article are: What does a decolonial approach to AI imply for higher education teaching and learning? How can educators, researchers and students interrogate the coloniality of AI in higher education? Which strategies can be useful for undoing the ethics of digital neocolonialism in higher education? While there is work on decolonial theory in AI as well as literature on the decolonization of higher education, there is not much theorization that brings those literatures together to develop a decolonial conceptual framework for ethical AI in higher education teaching and learning. This article offers this conceptual framing and suggests decolonial strategies that challenge algorithmic coloniality and colonial AI ethics in the context of higher education teaching and learning. © 2021 Informa UK Limited, trading as Taylor & Francis Group.
88
Introducing a multi-stakeholder perspective on opacityLanger M., König C.J.,2021Human Resource Management ReviewArticle10.1016/j.hrmr.2021.100881
Artificial Intelligence and algorithmic technologies support or even automate a large variety of human resource management (HRM) activities. This affects a range of stakeholders with different, partially conflicting perspectives on the opacity and transparency of algorithm-based HRM. In this paper, we explain why opacity is a key characteristic of algorithm-based HRM, describe reasons for opaque algorithm-based HRM, and highlight the implications of opacity from the perspective of the main stakeholders involved (users, affected people, deployers, developers, and regulators). We also review strategies to reduce opacity and promote transparency of algorithm-based HRM (technical solutions, education and training, regulation and guidelines), and emphasize that opacity and transparency in algorithm-based HRM can simultaneously have beneficial and detrimental consequences that warrant taking a multi-stakeholder view when considering these consequences. We conclude with a research agenda highlighting stakeholders' interests regarding opacity, strategies to reduce opacity, and consequences of opacity and transparency in algorithm-based HRM. © 2021 Elsevier Inc.
89
It is only for your own goodBenner D., Schöbel S., Janson A.,202127th Annual Americas Conference on Information SystemsConference Paper
Persuasive designs, including gamification and digital nudging, have become widely recognized in recent years and have been implemented successfully across different sectors including education, e-health, e-governance, e-finance and general information systems. In this regard, persuasive design can support desirable changes in the attitudes and behavior of users in order to achieve their own goals. However, such persuasive influence on individuals raises ethical questions, as persuasive designs can impair the autonomy of users or persuade them towards the goals of a third party and hence lead to unethical decision-making processes. In human-computer interaction this is especially significant with the advent of advanced artificial intelligence that can emulate human behavior and thus bring new dynamics into play. Therefore, we conduct a systematic literature analysis with the goal of compiling an overview of ethical considerations for persuasive system design, deriving potential guidelines for ethical persuasive designs, and shedding light on potential research gaps. © AMCIS 2021.
90
Practical and artificial intelligence. Hannah Arendt's ethics in "vita activa und the human condition"Gatt M.,2021Proceedings of INTER-NOISE 2021 - 2021 International Congress and Exposition of Noise Control EngineeringConference Paper10.3397/IN-2021-1618
Hannah Arendt's work "Vita Activa and The Human Condition" is considered one of the most important ethical and moral writings of our time. The philosopher understands practical life as an inescapable human condition, as condicio humana. Practical philosophy, which includes ethics and morality, is situated between work and society. There, technology finds its application. Meanwhile, technology's influence has gained the ability to shape culture, as artificial intelligence (AI) and transhumanism show. This influence has come to be viewed increasingly critically by scientists, who see it as often violating human boundaries. In the ethical evaluation of a course of action oriented on technology, we follow a traditionally Aristotelian distinction between poesis (Greek) and praxis (Greek). With poesis we mean an instrumental, purposeful production process, which is realized through implementation and completed in the finished product. With praxis we define human activity, or communal work. Arendt interprets the technical processes of conception and work as creativity. As homo faber, the "tool-maker", we want to make the world more beautiful and useful. As animal laborans, we want to make our lives easier and longer. Today, many of us try to orient ourselves in our everyday lives through technology, such as voice-controlled software. However, in order to orient ourselves in the world, it is not technology that is necessary, but rather human intelligence and practical action. This paper illuminates Arendt's interpretation of the human condition as practical action, and emphasizes the lessons it provides for ethical education in acoustic engineering. © INTER-NOISE 2021. All rights reserved.
91
AI technologies for education: Recent research & future directionsZhang K., Aslan A.B.,2021Computers and Education: Artificial IntelligenceReview10.1016/j.caeai.2021.100025
From unique educational perspectives, this article reports a comprehensive review of selected empirical studies on artificial intelligence in education (AIEd) published in 1993–2020, as collected in the Web of Sciences database and selected AIEd-specialized journals. A total of 40 empirical studies met all selection criteria, and were fully reviewed using multiple methods, including selected bibliometrics, content analysis and categorical meta-trends analysis. This article reports the current state of AIEd research, highlights selected AIEd technologies and applications, reviews their proven and potential benefits for education, bridges the gaps between AI technological innovations and their educational applications, and generates practical examples and inspirations for both technological experts that create AIEd technologies and educators who spearhead AI innovations in education. It also provides rich discussions on practical implications and future research directions from multiple perspectives. The advancement of AIEd calls for critical initiatives to address AI ethics and privacy concerns, and requires interdisciplinary and transdisciplinary collaborations in large-scaled, longitudinal research and development efforts. © 2021 The Authors
92
Towards moral machines: A discussion with michael anderson and susan leigh andersonAnderson M., Anderson S.L., Gounaris G., Kosteletos G.,2021Conatus - Journal of PhilosophyArticle10.12681/cjp.26832
At the turn of the 21st century, Susan Leigh Anderson and Michael Anderson conceived and introduced the Machine Ethics research program, which aimed to highlight the requirements under which autonomous artificial intelligence (AI) systems could demonstrate ethical behavior guided by moral values, and at the same time to show that these values, as well as ethics in general, can be representable and computable. Today, the interaction between humans and AI entities is already part of our everyday lives, and in the near future it is expected to play a key role in scientific research, medical practice, public administration, education and other fields of civic life. In view of this, the debate over the ethical behavior of machines is more crucial than ever, and the search for answers, directions and regulations is imperative at an academic and institutional as well as at a technical level. Our discussion with the two inspirers and originators of Machine Ethics highlights the epistemological, metaphysical and ethical questions arising from this project, as well as the realistic and pragmatic demands that dominate artificial intelligence and robotics research programs. Most of all, however, it sheds light upon the contribution of Susan and Michael Anderson regarding the introduction and undertaking of a main objective related to the creation of ethical autonomous agents that will not be based on the “imperfect” patterns of human behavior, or on preloaded hierarchical laws and human-centric values. © 2021 George Kosteletos, Alkis Gounaris, Michael Anderson, Susan Leigh Anderson.
93
Moral Considerations of Artificial IntelligenceSun F., Ye R.,2021Science and EducationArticle10.1007/s11191-021-00282-3
One of the ultimate problems of moral philosophy is to determine who or what is worthy of moral consideration. “Morality” is a relative concept, which changes significantly with environment and time. This means that morality is incredibly inclusive. The emergence of AI technology has a significant impact on the understanding and distribution of the “subject,” which has produced a new situation in moral issues. When considering the morality of AI, moral problems must also involve moral agents and moral patients. A more inclusive definition of morality is necessary for extending the scope of moral consideration to other traditionally marginalized entities. The evolving ethics redefines the center of moral consideration, effectively reduces differences, becomes more inclusive, and includes more potential participants. But we may still need to step outside this binary framework and solve the problem by rewriting the rules. Realizing moral AI in education is a huge, complex, systematic project. To be a “trustworthy” and “responsible” companion of teachers and students, educational AI must have extensive consistency with teachers and students in terms of its moral theoretical basis and expected values. The deep integration of AI and education is likely to become the development trend of education in the future. © 2021, The Author(s), under exclusive licence to Springer Nature B.V.
94
Artificial intelligence for career guidance - current requirements and prospects for the futureWestman S., Kauttonen J., Klemetti A., Korhonen N., Manninen M., Mononen A., Niittymäki S., Paananen H.,2021IAFOR Journal of EducationArticle10.22492/ije.9.4.03
Career guidance in the era of life-long learning faces challenges related to building accessible services that bridge education and employment services. So far, only limited research has been conducted on using artificial intelligence to support guidance across higher education and working life. This paper reports on developments in using artificial intelligence to support and further career guidance in higher education institutions. Results from focus groups, scenario work and practical trials are presented, mapping requirements and possibilities for using artificial intelligence in career guidance from the viewpoints of students, guidance staff and institutions. The findings indicate potential value and functions as well as drivers and barriers for adopting artificial intelligence in career guidance to support higher education and life-long learning. Based on the results, the authors conceptualize different modes of agency and maturity levels for the involvement of artificial intelligence in guidance processes. Recommended future research topics in the area of artificially enhanced guidance services include agency in guidance interaction, the development of a guidance data ecosystem, and ethical issues. © 2021, The International Academic Forum (IAFOR). All rights reserved.
95
Artificial intelligence robot lawRébé N.,2021Nijhoff Law SpecialsBook Chapter
Education has for ages dealt with the notion of the master teaching their student about life and disciplinary matters. The type of course we decide to study is only a bonus to this relationship, external to our family life. New technologies can worry old professors, who may feel that no machine or algorithm could ever replace such a bond between humans. We have seen with the 2019–2020 coronavirus pandemic the incredible benefits man can reap from online learning. The tailor-made education that AI algorithms can provide to students is unavoidable, but it has barely evolved over the past decades relative to what we could really do if we wanted to set up a worldwide AI educational system. Man might have to accept that certain exceptional life situations could require the inclusion of new technologies in our educational systems to avoid children's isolation and help them enhance their school knowledge. Harmonization of school systems must come through international agreements on specific learning standards and methods. Until then, nations should promote the use of AI in education and research. © 2021 Brill Nijhoff. All rights reserved.
96
Stakeholders’ perspectives on the future of artificial intelligence in radiology: a scoping reviewYang L., Ene I.C., Arabi Belaghi R., Koff D., Stein N., Santaguida P.L.,2021European RadiologyReview10.1007/s00330-021-08214-z
Objectives: Artificial intelligence (AI) has the potential to impact clinical practice and healthcare delivery. AI is of particular significance in radiology due to its use in automatic analysis of image characteristics. This scoping review examines stakeholder perspectives on AI use in radiology, the benefits, risks, and challenges to its integration. Methods: A search was conducted from 1960 to November 2019 in EMBASE, PubMed/MEDLINE, Web of Science, Cochrane Library, CINAHL, and grey literature. Publications reflecting stakeholder attitudes toward AI were included with no restrictions. Results: Commentaries (n = 32), surveys (n = 13), presentation abstracts (n = 8), narrative reviews (n = 8), and a social media study (n = 1) were included from 62 eligible publications. These represent the views of radiologists, surgeons, medical students, patients, computer scientists, and the general public. Seven themes were identified (predicted impact, potential replacement, trust in AI, knowledge of AI, education, economic considerations, and medicolegal implications). Stakeholders anticipate a significant impact on radiology, though replacement of radiologists is unlikely in the near future. Knowledge of AI is limited for non-computer scientists and further education is desired. Many expressed the need for collaboration between radiologists and AI specialists to successfully improve patient care. Conclusions: Stakeholder views generally suggest that AI can improve the practice of radiology and consider the replacement of radiologists unlikely. Most stakeholders identified the need for education and training on AI, as well as collaborative efforts to improve AI implementation. Further research is needed to gain perspectives from non-Western countries and non-radiologist stakeholders, and on economic considerations and medicolegal implications. Key Points: Stakeholders generally expressed that AI alone cannot be used to replace radiologists. The scope of practice is expected to shift, with AI use affecting areas from image interpretation to patient care. Patients and the general public do not know how to address potential errors made by AI systems, while radiologists believe that they should be “in-the-loop” in terms of responsibility. Ethical accountability strategies must be developed across governance levels. Students, residents, and radiologists believe that there is a lack of AI education during medical school and residency. The radiology community should work with IT specialists to ensure that AI technology benefits their work and centres on patients. © 2021, European Society of Radiology.
97
Using the Design of Adversarial Chatbots as a Means to Expose Computer Science Students to the Importance of Ethics and Responsible Design of AI TechnologiesWeiss A., Vrecar R., Zamiechowska J., Purgathofer P.,2021Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)Conference Paper10.1007/978-3-030-85613-7_24
This paper presents a reflection on a master class on “Responsible Design of AI” aimed at raising critical thinking among students about the pros and cons of AI technology in everyday use, taking chatbots as the example. In contrast to typical approaches teaching existing policies and design guidelines, we aimed to challenge students by setting up a project on the “most unethical chatbot imaginable”. Our teaching concept therefore builds on students' self-identified issues and concerns and develops guidelines for ethical chatbot design according to students' interpretations of the capabilities and potential applications of these technologies. In our teaching we particularly focused on supporting mutual learning between teachers, students, and experts as a foundational aspect. We conclude with reflections from the students regarding how this teaching approach can contribute to establishing a critical and reflective mindset for future HCI researchers and developers. © 2021, IFIP International Federation for Information Processing.
98
Education for AISchiff D.,2021International Journal of Artificial Intelligence in EducationArticle10.1007/s40593-021-00270-2
As of 2021, more than 30 countries have released national artificial intelligence (AI) policy strategies. These documents articulate plans and expectations regarding how AI will impact policy sectors, including education, and typically discuss the social and ethical implications of AI. This article engages in thematic analysis of 24 such national AI policy strategies, reviewing the role of education in global AI policy discourse. It finds that the use of AI in education (AIED) is largely absent from policy conversations, while the instrumental value of education in supporting an AI-ready workforce and training more AI experts is overwhelmingly prioritized. Further, the ethical implications of AIED receive scant attention despite the prominence of AI ethics discussion generally in these documents. This suggests that AIED and its broader policy and ethical implications—good or bad—have failed to reach mainstream awareness and the agendas of key decision-makers, a concern given that effective policy and careful consideration of ethics are inextricably linked, as this article argues. In light of these findings, the article applies a framework of five AI ethics principles to consider ways in which policymakers can better incorporate AIED’s implications. Finally, the article offers recommendations for AIED scholars on strategies for engagement with the policymaking process, and for performing ethics and policy-oriented AIED research to that end, in order to shape policy deliberations on behalf of the public good. © 2021, International Artificial Intelligence in Education Society.
99
Enduring QuestionsPapa R., Jackson K.M.,2021Lecture Notes in Networks and SystemsConference Paper10.1007/978-3-030-80126-7_51
This paper aims to tie the literature on AI to enduring questions in education about teaching and learning, and to discern the ethical considerations that define those ties. The challenge was to answer the question: how do we merge our learning and leadership theories with technologies and the algorithmic biases that may carry today's social injustices into our future? The paper first reviews the literature to identify the dialogue on AI by computer scientists in relation to enduring questions in education, learning theories, and ethics. Then we summarize data in the form of vignettes written by experts from the humanities, computer science, and social sciences. Some of the vignettes focused on how educational and technological systems are products of the social system and the ethical implications of such connections. Other writings centered on data-driven approaches to incorporating AI technologies in classrooms, with concerns around uneven implementation and differential access. The paper concludes that, to dialogue with educators, AIED will need to move away from discussions of efficiency as measured by educational assessments and incorporate humanistic and social learning theories that embrace the complexities of human relationships. Developers should seek to work directly with educational leaders to establish optimal teaching strategies for the ethical ‘good’ of the learner, while attending to social justice parameters. Equally critical is the need to create ethical parameters between the AI and the student. © 2021, The Author(s), under exclusive license to Springer Nature Switzerland AG.
100
Creation and Evaluation of a Pretertiary Artificial Intelligence (AI) CurriculumChiu T.K.F., Meng H., Chai C., King I., Wong S., Yam Y.,2021IEEE Transactions on EducationArticle10.1109/TE.2021.3085878
Contributions: The Chinese University of Hong Kong (CUHK)-Jockey Club AI for the Future Project (AI4Future) co-created the first pretertiary AI curriculum at the secondary school level for Hong Kong and evaluated its efficacy. This study added to the AI education community by introducing a new AI curriculum framework. The preposttest multifactors evaluation about students' perceptions of AI learning confirmed that the curriculum is effective in promoting AI learning. The teachers also confirmed the co-creation process enhanced their capacity to implement AI education. Background: AI4Future is a cross-sector project that engages five major partners--CUHK's Faculty of Engineering and Faculty of Education, secondary schools, Hong Kong government, and AI industry. A team of 14 professors collaborated with 17 principals and teachers from six secondary schools to co-create the curriculum. Research Questions: Would the curriculum significantly improve the student perceived competence, attitude, and motivation toward AI learning? How does the co-creation process benefit the implementation of the curriculum? Methodology: The participants were 335 students and eight teachers from the secondary schools. This study adopted a mix-method with quantitative data measures at pre- and post-questionnaires and qualitative data emphasizes teachers' perspectives on the co-creation process. Paired t-tests and ANCOVAs, and thematic analysis were used to analyze the data. Findings: 1) students perceived greater competence and developed a more positive attitude to learn AI and 2) the co-creation process enhanced teachers' knowledge in AI, as well as fostered teachers' autonomy in bringing the subject matter into their classrooms. CCBY