Fields extracted for each record: record number; article title; author; year of publication; country where study was conducted; aim/purpose of the study; study sampling; study population (e.g. nursing students, sample size); study design; data collection; data analysis; models; frameworks; instruments; constructs; methods; theories; validation criteria; assessment approach.
Record 1
Article title: A model for communication skills assessment across the undergraduate curriculum
Author(s): Elizabeth A. Rider, Margaret M. Hinrichs & Beth A. Lown
Year of publication: 2006
Country where study was conducted: United States of America
Aim/purpose of the study: "Our goal was to implement a uniform communication skills assessment plan to reinforce basic skills introduced in years 1 and 2 and to elaborate on these skills during years 3 and 4, identifying appropriate skills for assessment at the different levels of training. We implemented a uniform communication skills framework for assessment across all four years of undergraduate medical education." (pg e128)
Study sampling: Not mentioned
Study population: "Our goal was to implement a uniform communication skills assessment plan to reinforce basic skills introduced in years 1 and 2 and to elaborate on these skills during years 3 and 4, identifying appropriate skills for assessment at the different levels of training. We implemented a uniform communication skills framework for assessment across all four years of undergraduate medical education." (pg e128)
Study design: Not mentioned
Data collection: "A core group adopted a set of seven communication competencies based on the Bayer–Fetzer Kalamazoo Consensus Statement. The Bayer–Fetzer Kalamazoo group identified seven broadly supported essential communication competencies, with sub-competencies for each, applicable to most medical encounters and adaptable across specialties, settings and health issues; these are described below." (pg e128) "We also adapted the American Board of Internal Medicine (ABIM) patient satisfaction assessment tool (American Board of Internal Medicine, n.d.). We chose six items (five items in the first year) for our adapted tool including patient ratings of the interviewer's greeting, respect, listening, showing interest, encouraging questions, and using simple language. Faculty examiners and/or standardized patients complete our adapted Kalamazoo assessment tool (i.e., the Harvard Medical School [HMS] Communication Skills Tool) and standardized patients complete our adapted ABIM patient satisfaction tool (Table 2) in assessment exercises at different stages over the four years of medical school." (pg e129) "Table 3 shows our framework for uniform assessment of communication skills across the curriculum. We use the HMS Communication Skills Tool, adapted from the Kalamazoo assessment tool, and the adapted ABIM Patient Satisfaction Tool in school-wide assessment exercises across four years and in the core medicine clerkship required of all third-year students." (pg e131)
Data analysis: Not mentioned
Models: Not mentioned
Frameworks: "Consensus was built regarding communication skill competencies by working with course leaders and examination directors, a uniform framework of competencies was selected to both teach and assess communication skills, and the framework was implemented across the Harvard Medical School undergraduate curriculum. The authors adapted an assessment framework based on the Bayer–Fetzer Kalamazoo Consensus Statement and added an adapted patient satisfaction tool to bring patients' perspectives into the assessment of the learners." (pg e127)
Instruments: "The original Kalamazoo assessment tool included 23 communication sub-competencies with possible ratings: done well, needs improvement, not done, not applicable. Global ratings on the seven core communication competencies were not included. We adapted the Kalamazoo assessment tool, choosing to use global ratings of the seven core competencies using a Likert scale: 1 = poor, 2 = fair, 3 = good, 4 = very good, and 5 = excellent (Table 1). In years 1 and 2, we use global ratings on six core competencies, excluding reaching agreement. In year 3, we rate all seven core competencies as well as each sub-competency (a total of 30 ratings). In year 4, we again use global ratings on the seven core competencies." (pg e128) "We also adapted the American Board of Internal Medicine (ABIM) patient satisfaction assessment tool (American Board of Internal Medicine, n.d.). We chose six items (five items in the first year) for our adapted tool including patient ratings of the interviewer's greeting, respect, listening, showing interest, encouraging questions, and using simple language. Faculty examiners and/or standardized patients complete our adapted Kalamazoo assessment tool (i.e., the Harvard Medical School [HMS] Communication Skills Tool) and standardized patients complete our adapted ABIM patient satisfaction tool (Table 2) in assessment exercises at different stages over the four years of medical school." (pg e129)
Methods: For the first and second years: "In the clinical assessment exercises, standardized patients portray cases that contain common biomedical and psychosocial problems. Students are assessed on their ability to elicit a complete history, including inquiring about the patient's explanatory model and sensitive areas such as screening for smoking, substance abuse and domestic violence, and taking a sexual history. The HMS Communication Skills Tool is used for assessment and feedback. The year-long small-group format allows students to develop supportive, mentoring relationships with faculty. The multiple opportunities for one-on-one observation and assessment with immediate feedback help students set personal goals, receive and use feedback and practice self-reflection, all of which are central to professional development and improved communication skills. Core faculty and Patient–Doctor I and II course leaders selected six of the seven Kalamazoo competencies (excluding reaching agreement) to be used for assessments in the first two years. Course leaders grouped a detailed list of skills already used in the curriculum into the competency headings in the Kalamazoo format. Toward the end of the second year, students participate in an Objective Structured Clinical Examination (OSCE) consisting of seven stations, each with a 15-minute encounter with a standardized patient (SP) and five minutes of SP and faculty feedback. Standardized patients assess students' communication skills in the seven stations using the HMS Communication Skills Tool (see Table 1) and complete the adapted ABIM Patient Satisfaction Tool (see Table 2)." (pg e131) For the third year: "Faculty examiners assess and provide immediate feedback to each student, using an expanded HMS Communication Skills Tool, adapted from the Kalamazoo assessment tool. Using this expanded assessment tool, faculty assess students on the seven core communication competencies and 23 subcompetencies using a five-point Likert scale. Faculty also rate additional items related to this particular patient's situation and case history. The standardized patients complete the HMS Communication Skills Tool and our adapted ABIM patient satisfaction assessment tool." (page e131) For the fourth year: "Students must pass a school-wide Comprehensive Clinical Practice Examination at the beginning of their fourth year. Students are assessed at nine clinical skill stations. Many of the stations are integrated across disciplines. For example, one station may integrate skills in medicine and neurology; another content and skills from surgery, obstetrics/gynecology and medicine. Standardized patients assess students' communication skills in seven of nine clinical skill stations using the same HMS Communication Skills and ABIM assessment tools. Faculty assess students' skills in interview content, physical diagnosis, differential diagnosis and management, and provide feedback on communication skills." (pg e132)
Theories: Not mentioned
Validation criteria: Not mentioned
Assessment approach: Not mentioned
Record 2
Article title: A Model for Selecting Assessment Methods for Evaluating Medical Students in African Medical Schools
Author(s): Andrew Walubo, Vanessa Burch, Paresh Parmar, Deshandra Raidoo, Mariam Cassimjee, Rudy Onia, Francis Ofei
Year of publication: 2003
Country where study was conducted: South Africa
Aim/purpose of the study: "In this article, using our experience as both trainees and trainers in Africa, we propose a standard approach for selecting an assessment method for testing students' performance in an African institution." (page 900)
Study sampling: Not mentioned
Study population: "The OSCE was the most costly examination. With several stations and three examiners each, 150 students would take 900 MPH. More than one extra examination resource would …" (page 902)
Study design: Not mentioned
Data collection: Not mentioned
Data analysis: Not mentioned
Models: "The ideal model applies to any institution where finances are not so critically lacking as to impair the development of effective learning and assessment programs." (pg 903)
Frameworks: None
Instruments: None mentioned
Methods: "We compared six assessment methods: essay examination, short-answer questions (SAQ), multiple-choice questions (MCQ), patient clinical examination (PCE), problem-based oral examination (POE), and objective structured clinical examination (OSCE) for their abilities to test for students' performance and their ease of adoption with regard to cost, suitability, and safety. Each of these factors is described below, as are the rating scales we used to evaluate them." (page 900)
Theories: None
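The 900 MPH figure quoted above lends itself to a quick workload calculation. The sketch below is illustrative only: the paper (as extracted here) does not state the station count or station length, so the 12 stations of 10 minutes, and the reading of MPH as examiner man-power hours, are assumptions chosen to reproduce the quoted total for 150 students with three examiners per station.

```python
# Illustrative sketch: OSCE examiner workload in man-power hours (MPH).
# Station count, station length, and the reading of MPH as examiner-hours
# are assumptions, not figures taken from the article.
def osce_mph(students: int, stations: int, minutes_per_station: int,
             examiners_per_station: int) -> float:
    """Total examiner time consumed, in hours, if every student rotates through every station."""
    return students * stations * minutes_per_station * examiners_per_station / 60

# 150 students, 12 stations of 10 minutes, 3 examiners per station -> 900.0 MPH
print(osce_mph(students=150, stations=12, minutes_per_station=10, examiners_per_station=3))
```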
Record 3
Article title: Application of "Earl's Assessment as, Assessment for, and Assessment of Learning Model" with Orthopaedic Assessment Clinical Competence
Author(s): Mark R. Lafave, Larry Katz, Norman Vaughn
Year of publication: 2013
Country where study was conducted: Canada
Aim/purpose of the study: "The purpose of our paper is to introduce a theoretical framework, the predictive learning assessment model (PLAM) (pg 110). Our study is another in a series of studies to establish the validity and reliability of the SOAT, because it takes a number of studies to establish the overall validity of an assessment tool." (page 110)
Study sampling: "We selected a convenience sample of three educational institutions due to the similarity in undergraduate athletic therapy program curricular design: Concordia University, Mount Royal University, and the University of Winnipeg. Three instructors of an introductory orthopaedic assessment class in athletic therapy curricula were solicited to participate in the study. The instructors for the orthopaedic assessment classes solicited students after the final grades for the introductory class had been finalized (page 110). All three instructors had been teaching the course for at least 3 years at their home institution. All instructors were Certified Athletic Therapists in Canada. Two instructors had a PhD and one had a Master's of Science academic credential. There were 57 third-year students who volunteered for our study across the three athletic therapy programs: Concordia University (n = 24); Mount Royal University (n = 24); University of Winnipeg (n = 9)."
Study population: "Four types of participants were needed for our study: educational institutions, instructors, students, and examiners. There were two types of examiners: raters and SPs. Selection rationale for these participants is outlined separately. We selected a convenience sample of three educational institutions due to the similarity in undergraduate athletic therapy program curricular design: Concordia University, Mount Royal University, and the University of Winnipeg. Three instructors of an introductory orthopaedic assessment class in athletic therapy curricula were solicited to participate in the study. The instructors for the orthopaedic assessment classes solicited students after the final grades for the introductory class had been finalized (page 110). All three instructors had been teaching the course for at least 3 years at their home institution. All instructors were Certified Athletic Therapists in Canada. Two instructors had a PhD and one had a Master's of Science academic credential. There were 57 third-year students who volunteered for our study across the three athletic therapy programs: Concordia University (n = 24); Mount Royal University (n = 24); University of Winnipeg (n = 9). The Human Research Ethics Board at all three institutions approved our study. There were two types of examiner participants in this study: SPs and raters. The primary investigator (M.L. from Mount Royal University) acted as the SP for all examinations (n = 57) to ensure there was consistency in the acting for each scenario across multiple institutions. Raters were solicited through an e-mail distribution and call for volunteers 3 weeks before testing. Raters were required to have practiced athletic therapy for at least 5 years and have had past experience testing students at the university undergraduate level and at the national examination level. The final raters who participated in our study were chosen based on availability for the testing dates once all the baseline requirements were met. There were five raters from Mount Royal University, five raters from University of Winnipeg, and two raters from Concordia University." (page 111)
Study design: "If the SOAT is able to discriminate among various quasi-experimental groups, it is thought to possess construct validity." (page 110)
Data collection: "All instructors agreed to participate knowing they may have to change their curriculum delivery based on being assigned into one of three groups outlined herein. Those instructors who were part of groups 2 and 3 (Table 1) were oriented to the SOAT so they were familiar with the content, its functionality, and its use in the final examination of students in the Spring 2007. The instructor for group 3 was permitted to copy and distribute the SOAT to students at will throughout the course of the Fall 2006 semester. The group 3 instructor was permitted to use the SOAT in a final, summative examination at the end of the class in December 2006. The instructor for group 2 was not permitted to copy or distribute the SOAT in any way but was permitted to apply the principles embedded in it throughout the course of the semester. The group 2 instructor was not permitted to use the SOAT in the final, summative examination. The instructors were asked not to solicit students into the study until the testing phase of the study in the Spring 2007 semester to ensure there was no bias or coercion of the students participating in the study." (page 112)
Data analysis: "All SOAT test scores for both knee and shoulder scenarios from all three groups were combined to calculate a Cronbach α reliability coefficient as a measure of internal consistency. We completed a one-way analysis of variance (ANOVA) to determine whether a difference existed between the comparison group and the two quasi-experimental groups. A post hoc analysis was conducted once the ANOVA was complete. The statistical analysis was calculated using SPSS 17.0 (SPSS Inc, Chicago, IL)." (page 112)
Models: The predictive learning assessment model (PLAM) (page 110)
Frameworks: "The theoretical framework that underpins the PLAM is Earl's model of learning." (page 110)
Instruments: "The Standardized Orthopedic Assessment Tool (SOAT) is an evaluation tool that has been developed for both summative and formative assessment of orthopaedic assessment clinical competence and is used in OSCE-type examinations. There are slight variations of the testing procedure using the SOAT compared with OSCEs, but both are practical, performance-based examinations." (page 110)
Constructs: None
Methods: None
Theories: The theoretical framework that underpins the PLAM is Earl's model of learning: assessment as, for, and of learning
Validation criteria: None
Assessment approach: None
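The data analysis described above (internal consistency of the combined SOAT scores plus a between-group comparison) can be reproduced in outline with standard statistical libraries. The sketch below is illustrative only, not the authors' SPSS syntax; the file name, the item-level columns, the total-score column, and Tukey as the post hoc test are assumptions.

```python
# Illustrative sketch: Cronbach's alpha for combined SOAT item scores, then a
# one-way ANOVA with a Tukey post hoc test across the three instructional groups.
# File name and column names are hypothetical.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

df = pd.read_csv("soat_scores.csv")                  # one row per examination (assumed layout)
item_cols = [c for c in df.columns if c.startswith("item_")]
print("Cronbach's alpha:", round(cronbach_alpha(df[item_cols]), 3))

groups = [g["total"].to_numpy() for _, g in df.groupby("group")]
f_stat, p_val = f_oneway(*groups)                    # comparison vs. the two quasi-experimental groups
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

print(pairwise_tukeyhsd(df["total"], df["group"]))   # post hoc pairwise comparisons
```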
Record 4
Article title: Assessing competence in undergraduate nursing students: The Amalgamated Students Assessment in Practice model
Author(s): Mark F. Zasadny, Rosalind M. Bull
Year of publication: 2015
Country where study was conducted: Australia
Aim/purpose of the study: "There was a clear need to develop a model that was consistent with national accreditation guidelines, supported students' learning, was relevant to practice and was able to be applied reliably by multiple clinical facilitators across a wide variety of practice contexts (pg 128). Higher education budget cuts, stringent performance measures and 'massification' have seen increased diversity in applicants and enrolments into nursing courses in Australia accompanied by equally varied student capabilities. These factors in the context of the regulatory framework, the burgeoning quality and safety agenda and pressure on clinical placements have highlighted the need for rigorous, consistent and accessible mechanisms for in-practice assessment of competence from the very beginning of pre-registration nurse education. This paper is presented in three parts. The first provides a critique of competence and its assessment; the second introduces the Amalgamated Student Assessment in Practice (ASAP) model and tool, describing its development and purpose; and the third reports on the preliminary trial of the ASAP model and tool in the final year of a Bachelor of Nursing program. The complex nature of competence and its assessment are discussed as a backdrop to the development of the model and tool. Within this paper the term competence is used as it relates to the ability to practice in a manner reflecting the Competency Standards for Registered Nurses." (page 126)
Study sampling: None
Study population: The model was implemented for 225 final year nursing students
undertaking professional experience placement in the 7
participating hospitals. All students were supported by a clinical
facilitator at a ratio of one facilitator to eight students and
twenty three (23) Clinical Facilitators participated in the trial.
The Clinical Facilitators are either employees of the University or
partner organisations covered by a formal work integrated
learning (WIL) agreement. (page 131)
Study design: Not mentioned
Data collection: "Student feedback was gathered during practice and with the formal university student evaluation tool eVALUate which enables collection of both quantitative and qualitative data using a 5-point rating scale 'strongly agree to strongly disagree' and open comment format respectively (pg 132). … strengths and address critical weaknesses of existing clinical assessment practices. ASAP is an acronym for Amalgamated Student Assessment in Practice but also reflects the more widely recognised meaning 'as soon as possible'. This is appropriate and fitting as the ASAP model functions as an early intervention strategy that is tailor-made to each particular individual who requires a redirection within clinical practice. The model comprises an assessment tool, a clinical reasoning framework and a negotiated learning contract. The tool which amalgamates the ASK (attitude, skills and knowledge) and SEP (safe, effective and proficient) criteria is used in combination with a clinical reasoning framework which highlights such areas as information collection and processing, problem identification, patient-centred action and reflection." (page 128)
Data analysis: Student feedback was gathered during practice and with the
formal university student evaluation tool eVALUate which enables
collection of both quantitative and qualitative data using a 5-point
rating scale ‘strongly agree to strongly disagree’ and open
comment format respectively. Additional descriptive data
measuring ASAP's impact on student outcomes was collected
through a review of student pass rates and analysed descriptively
as percentages. Qualitative data was organised into four emerging
themes.
Findings
The findings are presented under the four related uses of the
model and tool in practice (i) assessment; (ii) focussed diagnosis;
(iii) removal from PEP; and (iv) a structure for the documentation of
evidence in conjunction with its ease of use and transferability
within any practice setting. (pg 132)
Models: The ASAP model is designed for use at all stages of students' practice experience and is built on the assumption that competence
for undergraduate nursing students is a formative, transitional process that ultimately culminates in a summative assessment of
competence. (page 128). ASAP is an acronym for Amalgamated Student Assessment in
Practice but also reflects the more widely recognised meaning 'as
soon as possible'. This is appropriate and fitting as the ASAP model
functions as an early intervention strategy that is tailor-made to each
particular individual who requires a redirection within clinical
practice. The model comprises an assessment tool, a clinical
reasoning framework and a negotiated learning contract. The tool
which amalgamates the ASK (attitude, skills and knowledge) and SEP (safe, effective and proficient) criteria is used in combination
with a clinical reasoning framework which highlights such areas as
information collection and processing, problem identification, patient
centred action and reflection (page 128)
Frameworks: "The ASAP model functioned effectively as an assessment tool, focussed diagnostic tool, removal from Professional Experience Placement (PEP) support tool and a framework for documenting evidence (page 126). The clinical reasoning framework is embedded across the model and the Clinical Setting provides the context for the model to function within." (page 128)
Instruments: "The model comprises an assessment tool, a clinical reasoning framework and a negotiated learning contract. The tool which amalgamates the ASK (attitude, skills and knowledge) and SEP (safe, effective and proficient) criteria is used in combination with a clinical reasoning framework which highlights such areas as information collection and processing, problem identification, patient-centred action and reflection." (page 128)
Constructs: None
Methods: "Facilitators were provided with an explanation of how the tool and clinical reasoning were used; a simulated role play of particular scenarios that demonstrated the tool's functionality followed, with a subsequent debrief session focussing on the application of the model's components relevant to the scenarios that had been role played." (page 131)
Theories: None
Validation criteria: None
Assessment approach: None
Record 5
Article title: Assessing Nurse Graduate Leadership Outcomes: The "Typical Day" Format
Author(s): Jeanne Wissmann, Barb Hauck, Julie Clawson
Year of publication: 2002
Country where study was conducted: United States of America
Aim/purpose of the study: "The 1990s brought to Central Missouri State University (CMSU) (and other colleges and universities) a concentrated focus on identification, development, and assessment of student and program outcomes. Demonstration of nursing graduates' readiness for professional nursing practice was and continues to be the driving force behind outcome development and assessment efforts. Two underlying values are integral to the leadership outcome assessment model developed at CMSU's Department of Nursing with the generous support of a Helene Fuld Health Trust grant. The first value is one of collaboration among nurse educators and nurse practice leaders. The second value is one of engaging in external assessment of student performance (page 32). At CMSU, our development and implementation of a model of leadership outcome assessment consisted of five phases."
After a full year of grant activity, we held our first nurse leadership assessment day. Nurse practice leaders and senior nursing students were welcomed and an overview of the day’s process was given. Two nurse practice leaders were assigned to each senior
student. Room assignments were distributed.
Rooms were arranged for comfortable communication between assessors and student. Each room was equipped with client charts and materials needed for the assessment process. (pg 35)
"… determinants in our decision to use a simulation format. Our typical day format was developed to include nine 'typical day' activities (Figure 3). Each of these 'typical day' activities provides opportunities for demonstration and assessment of leadership outcomes (Figure 2). An assessment tool was designed with specific criteria outlined for each activity. Concurrently, we developed simulated 'typical' client caseloads and client data sets that were authentic to practice yet controlled enough to allow for assessment of leadership outcomes within time and other limitations. In designing the caseloads of clients, our goal was to create typical caseloads, comparable to nursing practice caseloads. We agreed that each caseload would include six clients, diverse in complexity of care, nursing intervention needs, and in health alterations. A chart that contained basic data for planning priorities and nursing care was developed." (pg 34)
An analysis of the assessment tools completed by the external assessors and the self-assessment tools completed by the participating students provided assurance that our soon-to-be graduate nurses could demonstrate the leadership capabilities needed for practice in the current
healthcare environment. Analysis also provided the students with feedback for development of their own plan for continued learning and faculty with direction for continuous curricular improvement. (page 36)
Models: Leadership outcome assessment model (pg 36)
Students were given caseloads that had a maximum of six clients with diversity of care needs.
Record 6
Article title: Assessment in First Year University: A Model to Manage Transition
Author(s): J. A. Taylor
Year of publication: 2008
Country where study was conducted: Australia
Aim/purpose of the study: "Within this climate this paper proposes a model for effective assessment in first year university, positioned within research findings on assessment in higher education and transition to university. The model has been synthesised from a range of effective practices offered in diverse first year courses offered at the University of Southern Queensland, a regional multi-modal Australian university, in which eighty two percent of its 26 000 students study by distance education." (page 21)
Study sampling: "The model (Figure 1) has been synthesised from the practice of the author in a large first year mathematics course and from colleagues within engineering, surveying, nursing, communication and computing. All courses are core courses within their relevant programs of study, are offered in the first semester of first year and enrol large numbers of students, usually by both distance and on-campus education." (page 22)
Study population: "The model (Figure 1) has been synthesised from the practice of the author in a large first year mathematics course and from colleagues within engineering, surveying, nursing, communication and computing. All courses are core courses within their relevant programs of study, are offered in the first semester of first year and enrol large numbers of students, usually by both distance and on-campus education." (page 22) (No name)
"To successfully negotiate their first semester they need to encompass a wide range of literacies and competencies, but the first task is for them to engage with the course and thence manage themselves throughout its progress. Traditionally, on-campus students can be engaged easily through classes, but for the growing numbers of students who no longer attend lectures (Dolnicar, 2005) and for distance students, engagement can be slow. These early assessments can take a number of forms, however to be valued by the student they should contribute a small percentage to the final grade. In this sense they are both summative and formative. Assessments for transition invariably encourage students to look both backwards and forwards, by reflecting on past performance or behaviours, or by preparing a study plan for the semester (pages 23–24). Often they will involve a pre-test or self-audit to refresh prerequisite skills, or a survey to assist students in understanding their learning skills. In some instances, they could be contracts to awaken students to specific needs of a course e.g. regular internet access, compulsory online discussions, or to question students' understanding of what is required to complete a course." (page 24)
"Once engagement is established by the early assessments, the task of the middle assessments is to maintain the engagement and develop and confirm students' skills and knowledge. These assignments aim to develop skills necessary for later success and have strong links with assignments designated 'assessment for achievement', feeding forward into these assignments. To achieve this, these assignments should have significant marking time dedicated to provision of timely feedback. The closer to the commencement of the semester these assignments occur then the lower their contribution to the final grade. But in all cases the resources allocated to marking should be relatively high. The assignment(s) could take a variety of forms: a draft for a later assignment, a reflective reading log, components of a portfolio, laboratory reports or online discussion group submissions." (page 27)
"Assessments for achievement: this type of assessment is best known in higher education. They include examinations, as well as major essays, final portfolios, reports or projects. In most instances, this type of assessment occurs towards the end of the course, usually with a relatively high weighting." (page 28)
Record 7
Article title: Competency-Based Assessment in Pediatrics for the New Undergraduate Curriculum
Author(s): Piyush Gupta, Dheeraj Shah & Tejinder Singh
Year of publication: 2021
Country where study was conducted: India
Aim/purpose of the study: "We present an overview of assessment guidelines for the subject of pediatrics by the erstwhile Medical Council of India and propose a model for competency-based assessment. We have refrained from any critique of these guidelines. Both internal and summative assessments …" (page 775)
Study population: Final year MBBS students (pg 778)
"A major issue with internal assessment has been 'subjectivity' and 'bias', which prevents us from making its full use. An earlier proposed quarter model [4] provides useful guidelines. Similarly, many components of programmatic assessment (PA) [10] can also be incorporated, like utility of assessment [11] rather than attributes of individual tool or assessment, and using every assessment to provide liberal feedback to the students." (page 776)
Instruments: MCQs, structured short and essay questions, case studies, OSCEs and viva-voce (pg 778)
Models: "An earlier proposed quarter model provides useful guidelines. Similarly, many components of programmatic assessment (PA) can also be incorporated, like utility of assessment rather than attributes of individual tool or assessment, and using every assessment to provide liberal feedback to the students." (page 777)
Record 8
Article title: Evaluating the impact of moving from discipline-based to integrated assessment
Author(s): Hudson, J.N. & Tonkin, A.L.
Year of publication: 2004
Country where study was conducted: Australia
Aim/purpose of the study: "The working party defined the aim of this assessment as follows: to objectively test the ability of students to apply their knowledge of the basic science and skills that underpin clinical medicine to the practice of medicine in an integrated, non-discipline based examination format, at a level appropriate for third year medical students. This aim was subserved by a series of objectives (Table 1) that formed a blueprint of the skills that were required for a third year student to progress into the more clinical years of training." (page 833)
"This aim was subserved by a series of objectives (Table 1) that formed a blueprint of the skills that were required for a third year student to progress into the more clinical years of training." (page 834)
Data collection: "In 2001, a formal evaluation of the IPE was conducted, with quantitative and qualitative data sought from both cohorts of students. This occurred 3 and 15 months post-examination, for the now fourth and fifth years, respectively. As the students were now distributed in clinical placements at various sites, evaluation forms and the information needed for informed consent were mailed to them at their home addresses. Responses were returned to an independent administrator in the stamped, addressed envelope included in the mail-out. Although forms were numbered so that reminders could be sent to initial non-responders, students were reassured that the independent administrator would preserve the anonymity of their responses. Students were invited to make value judgements (on a Likert scale of 1–6, where 1 = strongly disagree and 6 = strongly agree) in relation to 4 positive statements about the integrated practical examination. Feedback data was gathered in relation to the following 4 statements: (1) the IPE encouraged me to integrate rather than compartmentalise my learning; (2) the IPE tested my ability to apply my knowledge and skills in an integrated manner, using aspects of knowledge from more than 1 discipline; (3) the IPE discouraged me from rote learning and then dumping details after each examination, as can happen with separate discipline examinations; and (4) the IPE was a useful assessment at the end of third year for feedback on the style of learning I needed to adopt for successful achievement in later integrated assessment and ongoing medical practice. Qualitative data came from students' comments in response to the following 5 open questions about the practical examination: (1) To what degree was the aim of the IPE met? (2) What particular aspect(s) of the IPE was useful to you as a learning experience? (3) What particular aspect(s) of the IPE was useful to you as an assessment experience? (4) How may the IPE be improved as a learning tool? (5) How may the IPE be improved as an assessment tool?" (page 838)
Data analysis: Three raters, working independently, were used to
improve the reliability of the final content analysis of
student comments. Any non-relevant feedback, for
example, answers applying to the integrated written
papers, was excluded from the data. (page 838)
Instruments: "The IPE was a multistation, objective, structured examination conducted in the OSCE format familiar to many readers. Its novel feature was that stations integrated clinical, basic and human sciences with clinical practice. Stations had different starting cues, reflecting the varied experiences of clinical practice. For example, cues ranged from presenting symptoms, from which hypothesis generation was required, to visual cues such as videos where clinical signs were to be recognised, and real laboratory results which required interpretation." (page 835)
Record 9
Article title: Frameworks for learner assessment in medicine: AMEE Guide No. 78
Author(s): Louis Pangaro & Olle ten Cate
Year of publication: 2013
Country where study was conducted: United States of America and the Netherlands
Aim/purpose of the study: "In this AMEE Guide, we make a distinction between analytic, synthetic, and developmental frameworks. Analytic frameworks deconstruct competence into individual pieces, to evaluate each separately. Synthetic frameworks attempt to view competence holistically, focusing evaluation on the performance in real-world activities. Developmental frameworks focus on stages of, or milestones in, the progression toward competence. Most frameworks have one predominant perspective; some have a hybrid nature." (page e1197)
Study sampling: None
Study population: None
Study design: None
Data collection: None
Data analysis: None
Models: "The RIME model (Pangaro 1999) is an example of a synthetic framework. It was designed to describe minimum expectation levels of medical students in the setting of their clerkships (or attachments) in the clinical workplace. The model describes levels of function in the clinical setting: (1) Reporter, (2) Interpreter, (3) Manager, and (4) Educator (Table 1)." (page e1205)
Frameworks: Analytic frameworks, e.g. Bloom's taxonomy, the Canadian Medical Education Directions for Specialists (CanMEDS) Framework, and the Accreditation Council for Graduate Medical Education (ACGME) framework (page e1202); developmental frameworks, e.g. the Dreyfus Developmental Framework (page e1206); synthetic frameworks, e.g. the Reporter-Interpreter-Manager-Educator (RIME) framework (page e1204); Miller's Pyramid (page e1199)
Record 10
Article title: Implementing Systematic Faculty Development to Support an EPA-Based Program of Assessment: Strategies, Outcomes and Lessons Learned
Author(s): Megan J. Bray, Elizabeth B. Bradley, James R. Martindale & Maryellen E. Gusic
Year of publication: 2020
Country where study was conducted: United States of America
Aim/purpose of the study: "We illustrate our faculty development for assessors to prepare them to engage in the University of Virginia (UVA) School of Medicine's Entrustable Professional Activities Program. We describe the activities used to address the four dimensions for faculty development defined above [17]. Further, we elucidate the Organizational, Competency and Leadership drivers [20], processes that enabled an institution-wide approach to faculty development to prepare assessors to use a new method to assess learner performance in the UVA EPA program. Importantly, faculty development was implemented in advance of implementation of the EPA program. We present data from EPA assessments done by assessors during the first year of the EPA program as outcomes to demonstrate the effectiveness of our faculty development efforts." (page 2)
Study population: "Assessors in the program include residents/fellows who work closely with students, faculty with discipline-specific expertise and a group of experienced clinicians who were selected to serve as experts in competency-based EPA assessments, the Master Assessors." (page 1)
Instruments: "The iCAN assessment tool (page 3). During the session, educational technologists send participants a mock assessment using the web-enabled tool, iCAN, created by application developers at our institution. Students use this institutional tool to request assessments and assessors access the tool within our learning management system to complete the requested assessments. As participants watch and discuss the recordings of standardized students engaging in encounters with standardized patients, they practice observation skills [23,35] and articulate their decision-making process as they apply the performance expectations to provide feedback to the standardized student and to assign supervision ratings using the iCAN assessment tool in real time." (page 3)
Not clearly stated (page 4)
Models: "Favreau et al. outlined a model for training to support faculty who engage in entrustment decision making within an EPA-based program of assessment. Their model outlines four dimensions to guide the creation of undergraduate faculty development initiatives. More specifically, faculty development must address skills in: observation and workplace assessment (Dimension 1); feedback and coaching (Dimension 2); self-assessment and reflective practice (Dimension 3); and attention to building a community of practice among participants (Dimension 4)." (page 2)
Frameworks: "The introduction of Entrustable Professional Activities (EPAs) as a framework for assessment has underscored the need to create a structured plan to prepare assessors to engage in a new paradigm of assessment." (page 1)
"We sought to empower participants to apply new assessment behaviors as a part of a new institutional program of competency-based assessment." (page 5)
Record 11
Article title: Observer-Reporter-Interpreter-Manager-Educator (ORIME) Framework to Guide Formative Assessment of Medical Students
Author(s): Kum Ying Tham
Year of publication: 2013
Country where study was conducted: Singapore
Aim/purpose of the study: Report on an interim outcome-based assessment framework (page 605).
"The ORIME framework is proposed to assess a student's ability to synthesise knowledge, skills and attitude during a clinical encounter with a patient. It is intuitive and self-explanatory, and it is hoped that 'simplicity leads to acceptance; acceptance leads to use; use leads to consistency, and consistency is an important element of fairness' [4]. To provide a common vocabulary and align expectations, the Minnesota Complexity scale is proposed as a guide to ensure case complexity is commensurate with student's seniority and competence." (page 606)
"The ORIME framework is in its final stages of approval before implementation in LKCMedicine. What are the implications for clinical teachers and students? Figure 1 provides an illustration for a low complexity case of Mr Liew, who is 73 years old and complains of intermittent chest pain and breathlessness. He has diabetes mellitus and hypertension that are well controlled. He is retired and has no family, social and financial concerns. A Year 2 is able to greet Mr Liew, start a conversation and listen actively. A Year 3 gives a reliable report on Mr Liew's history and physical examination findings. A Year 4 is able to interpret these findings and results from laboratory and radiological investigations to formulate differential diagnoses. A Year 5 manages Mr Liew according to local practice and prioritises Mr Liew appropriately among several patients."
Frameworks: "In the search for an interim outcomes framework, we were attracted by the Reporter-Interpreter-Manager-Educator (RIME) framework (Table 1) that has been used by the National Healthcare Group (NHG) Residency for formative assessment of residents since 2010. With NHG as LKCMedicine's principal clinical partner, adoption and expansion of the RIME framework for undergraduate interim outcomes followed naturally. The use of the RIME framework in medical schools is not new because in a US survey, 37 out of 109 Internal Medicine clerkship directors use this framework as an assessment method for medical students." (page 603)
Instruments: "Among several tools to assess complexity, the Minnesota Complexity Assessment Tool is most intuitive with face validity that provides an accessible definition of complexity beyond the clinical domain (Table 2). Domain E 'Resources for Care' with an emphasis on language is pertinent in multi-lingual Singapore. Clinical teachers are not required to complete a formal complexity assessment before assigning a patient to the students but, with a quick glance/recall of the high complexity items, can set appropriate expectations for Year 3 versus Year 4 and 5 students." (page 605)
Record 12
Article title: Student perspectives of assessment by TEMM model in physiology
Author(s): Reem Rachel Abraham, Subramanya Upadhya, Sharmila Torke, and K. Ramnarayan
Year of publication: 2005
Country where study was conducted: India
Aim/purpose of the study: "In this paper, we describe each of the Triple Jump Test, Essay incorporating critical thinking questions, Multistation Integrated Practical Examination and Multiple choice questions (TEMM) model according to the rankings that they received with respect to seven items." (page 94)
Study sampling: "The TEMM model was incorporated as the assessment tool for 30 refresher students in the entire fourth block of the 2002 academic year. Refresher students are those who were unsuccessful in their final examination in the first chance and therefore were made to repeat the training again for a period of 6 mo. Their class was small (n = 30). The first three blocks were taught to them in the traditional method, consisting of didactic lectures and practical sessions." (page 94)
Data collection: "At the end of the block, students were given a questionnaire containing three parts. The first part was an open-ended question regarding whether they liked to be assessed by one type of assessment or more than one type. The second part was to state the reason for their response. The third part was to rank the different assessment methods with respect to seven items from highest to lowest (1, highest; 2, high; 3, low; 4, lowest)." (page 95)
"The TEMM model was incorporated as the assessment tool for 30 refresher students in the entire fourth block of the 2002 academic year." (page 94)
"At the end of the block, students were given a questionnaire containing three parts. The first part was an open-ended question regarding whether they liked to be assessed by one type of assessment or more than one type. The second part was to state the reason for their response. The third part was to rank the different assessment methods with respect to seven items from highest to lowest (1, highest; 2, high; 3, low; 4, lowest)." (page 95)
Models: "In Melaka Manipal Medical College (Manipal Campus), Manipal, India, the TEMM model (consisting of 4 assessment methods: Triple Jump Test, essay incorporating critical thinking questions, Multistation Integrated Practical Examination, and multiple choice questions) was introduced to 30 refresher students in the fourth block of the academic year." (page 94)
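The ranking data described above (each student ranks the four TEMM components from 1 = highest to 4 = lowest on seven items) can be summarised as a mean rank per method and item. The sketch below is illustrative only; the file name and column layout are assumptions, not materials from the paper.

```python
# Illustrative sketch: mean rank of each TEMM assessment method on each of the
# seven questionnaire items (1 = highest preference ... 4 = lowest).
# File name and column names are hypothetical.
import pandas as pd

ranks = pd.read_csv("temm_rankings.csv")    # columns: student, item, method, rank
summary = (ranks
           .groupby(["item", "method"])["rank"]
           .mean()
           .unstack("method")
           .round(2))
print(summary)                               # lower mean rank = more favoured method for that item
```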
Record 13
Article title: The Quarter Model: A Proposed Approach for In-training Assessment of Undergraduate Students in Indian Medical Schools
Author(s): Tejinder Singh, Anshu and Jyoti N. Modi
Year of publication: 2012
Country where study was conducted: India
Aim/purpose of the study: "In this paper, we propose a model for internal assessment, which tries to overcome some of the issues that teachers and students face. We call it the 'in-training assessment (ITA) program' as it reflects the philosophy and intent of this assessment better." (pg 872)
Study sampling: "The given sample formats have been drafted using the prescribed number of teaching staff for an institution admitting a batch of 100 students in a year. Utilization of end-of-posting assessment for the practical component of ITA in clinical subjects may contribute towards time efficiency of the ITA program by using the same assessments for formative as well as summative purposes." (page 873)
Models: "The Quarter model of in-training assessment (pg 873). In this paper, we propose a model for internal assessment, which tries to overcome some of the issues that teachers and students face. We call it the 'in-training assessment (ITA) program' as it reflects the philosophy and intent of this assessment better (page 872). The planning and assessment for ITA should involve all teachers of each department to ensure that no single teacher contributes more than 25% of the marks to the total marks and no single assessment tool contributes more than 25% of the marks to the total ITA." (page 873)
Instruments: "To allow greater spread of marks, each subject may be assessed out of a maximum of 100 marks (50% for theory and 50% for practical/clinical component) in the ITA. ITA should make use of a number of assessment tools. For theory: questions, short answer questions (SAQ), multiple choice questions (MCQ), extended matching questions and oral examinations should be used. For practical/clinical assessment: experiments, long cases, short cases, spots, objective structured practical/clinical examinations (OSPE/OSCE), mini-clinical evaluation exercise (mini-CEX) and objective structured long examination record (OSLER)." (pg 873)
"For example, an assessment which is apparently low on reliability can still be useful by virtue of its positive educational impact. Where combinations of different assessments alleviate drawbacks of individual methods, use of the programmatic approach to assessment is advocated, thereby rendering the total more than the sum of its parts." (page 872)
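The "quarter" rule quoted above (no single teacher and no single tool contributing more than 25% of the total ITA marks) is straightforward to check mechanically. The sketch below is illustrative only; the mark allocations are invented for the example, not taken from the paper.

```python
# Illustrative sketch of the quarter rule: flag any teacher or assessment tool
# whose share of the total ITA marks exceeds 25%. Allocations are hypothetical.
def over_quarter(contributions: dict[str, float], total: float = 100.0) -> list[str]:
    """Return contributors whose share of `total` exceeds 25%."""
    return [name for name, marks in contributions.items() if marks / total > 0.25]

marks_by_teacher = {"teacher_A": 25, "teacher_B": 25, "teacher_C": 30, "teacher_D": 20}
marks_by_tool = {"SAQ": 20, "MCQ": 25, "OSCE": 25, "mini-CEX": 15, "oral": 15}

print(over_quarter(marks_by_teacher))   # ['teacher_C'] -> this allocation breaks the rule
print(over_quarter(marks_by_tool))      # [] -> no tool exceeds 25% of the total
```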
Record 14
Article title: Three key issues for determining competence in a system of assessment
Author(s): Jorie M. Colbert-Getz & Judy A. Shea
Year of publication: 2020
Country where study was conducted: United States of America
Aim/purpose of the study: "We outline three key issues to debate as one begins the journey to competency-based assessment and conclude with issues still to be tackled." (pg 1)
Models: "One possibility that fits comfortably with a system of assessment framework is to organize assessments around a competency based medical education (CBME) model." (page 1)
"A system suggests there is not a singular way to do assessment but rather ways to design and deliver assessments that have features such as being multi-source, multi-method, multi-purpose, and, at least in the Graduate Medical Education and Continuing Professional Development arenas, heavily practice based. The specifics of how assessments are conceptualized and designed are purposefully not defined in a system. One possibility that fits comfortably with a system of assessment framework is to organize assessments around a competency based medical education (CBME) model. However, most other competencies are measured with multiple assessment types. For example, patient care may be measured with objective structured clinical examinations (OSCEs) scored on a 0–100% scale and preceptor rating forms scored on a 5-point scale from 0 to 4, so scores/ratings from both assessment types will need to be combined in some way to determine if learners achieve the competency." (page 2)
"We think the key issues are useful for situating a standard setting discussion, especially when retroactively changing existing curricula and assessments to a competency-based approach, but likely also when one has the opportunity to start from scratch and build an assessment system." (page 1)
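The passage above notes that OSCE percentages and 0–4 preceptor ratings "will need to be combined in some way" without prescribing how. The sketch below shows one simple, purely illustrative option (rescale both sources to 0–1 and take a weighted mean against a locally chosen cut score); the weights, the cut score, and the numbers are assumptions, not the authors' method.

```python
# Illustrative sketch only: one way to combine an OSCE score (0-100%) with a
# preceptor rating (0-4) into a single value for a competency decision.
# Weights and the 0.70 cut score are assumptions, not taken from the paper.
def combined_score(osce_pct: float, preceptor: float, osce_weight: float = 0.5) -> float:
    """Rescale both sources to 0-1 and return their weighted mean."""
    return osce_weight * (osce_pct / 100.0) + (1 - osce_weight) * (preceptor / 4.0)

score = combined_score(osce_pct=78, preceptor=3.2)   # hypothetical learner
print(round(score, 2), score >= 0.70)                # 0.79 True
```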
Record 15
Article title: Validity Evidence for Assessing Entrustable Professional Activities During Undergraduate Medical Education
Author(s): Claudio Violato, Michael J. Cullen, Robert Englander, Katherine E. Murray, Patricia M. Hobday, Emily Borman-Shoap and Ozge Ersan
Year of publication: 2021
Country where study was conducted: United States of America
Aim/purpose of the study: "The main purpose of the present study was to report preliminary validity evidence in using the Core EPAs as a framework for assessment." (pg S71)
Study sampling: Not clear
Study population: "This was a multiyear study. The data come from assessments generated by 14 students in 4 cohorts, who participated in the EPAC program during academic year (AY) 2014–2015 through AY 2018–2019 (the numbers of students from the 4 cohorts were 4, 3, 3, and 4, respectively) (page S71). The EPAC longitudinal integrated clerkship program takes place during the third year of medical school. Students begin in June and continue until they have reached entrustment on the 13 Core EPAs, where entrustment is defined as the ability to perform the Core EPA with indirect supervision, with the supervisor checking all findings. At that time, they are able to progress to a transition phase of their education before the transition to residency. For monitoring the students' growth on the EPAs, performance assessments were conducted on the 13 Core EPAs described in Table 1. Assessors (faculty and residents) rated the students on a scale from 1 to 9 (see Table 2) adapted from a supervision scale published by Chen et al." (page S71)
Data analysis: "Accordingly, we employed parametric analyses with the entrustment rating scale. Data were plotted over time, and curves were fitted theoretically based on regression coefficients derived from hierarchical regression analyses." (page S71)
Frameworks: "EPAs provide a framework for assessment in competency-based medical education by requiring the integration of multiple clinical competencies in the authentic clinical environment." (page S70)
"Students begin in June and continue until they have reached entrustment on the 13 Core EPAs, where entrustment is defined as the ability to perform the Core EPA with indirect supervision, with the supervisor checking all findings." (page S71)
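The growth analysis described under data analysis (entrustment ratings plotted over time, with curves fitted from hierarchical regression coefficients) can be sketched with a standard mixed-effects model. The example below is illustrative only, not the authors' analysis; the file name, column names, and the linear-in-time specification are assumptions.

```python
# Illustrative sketch: hierarchical (mixed-effects) growth model of 1-9
# entrustment ratings over time, with a random intercept per student.
# File name and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("epa_ratings.csv")   # columns: student, months_in_program, epa, rating
model = smf.mixedlm("rating ~ months_in_program", data=ratings, groups=ratings["student"])
result = model.fit()
print(result.summary())                     # fixed-effect slope ~ average change in rating per month
```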