In this document you will find extracts from the OLT-funded Feedback for Learning project (feedbackforlearning.org)


A. Problem statement

Feedback during and after assessment tasks is critical for effectively promoting student learning. Without feedback, students are limited in how they can judge their progress and how they can change their future performance. Feedback is the lynchpin of students’ effective decision making, and the basis of improved learning outcomes. However, feedback is under-utilised and often misunderstood by both students and educators. This project is about improving student learning (and experience) through improving institutional, educator, and student capacity to stimulate and leverage assessment feedback.

The aim of this project is to improve student learning and experience by improving the way in which the Australian Higher Education sector enacts feedback. Our approach will deliver a pragmatic, empirically based framework of feedback designs to guide educators, academic developers and instructional designers, as well as institutional policy. This will be supported by large-scale data highlighting patterns of success, and seven rich cases of feedback designs demonstrating how that success can be achieved. In addition, this project will increase the likelihood of adoption through a series of dissemination activities, including national workshops built on a participatory design approach.


B. Project outputs (or deliverables)

This project has been designed so that each of the project’s four challenges is aligned with one of four phases of project activity, each with specific deliverables. From the outset, issues of sustainability, impact and dissemination have been considered in the project design. This project is ambitious but achievable. The project team has (a) substantial expertise in the field of feedback, (b) considerable experience in conducting high-impact large-scale projects, and (c) a track record of working together. The deliverables are:

Phase 1 [CHALLENGE #1]: What are the current assessment feedback practices and which of these lead to improved student performance?

  1. Statistical data shared via the project website revealing patterns of feedback strategies and their reported strengths, weaknesses, and impact across diverse contexts (e.g., programs and disciplines).
  2. At least one infographic based on the data from Phase 1 to raise awareness of issues and patterns with higher education leaders, policy makers, and the media. The infographic is particularly geared to generating attention across the sector, raising awareness both of the issues and of the project itself, and foregrounding outcomes from the following phases. It will be disseminated via social media and by direct email to all Learning and Teaching departments across the Higher Education sector.
  3. Media release (in negotiation with OLT)
  4. One conference presentation aimed at raising awareness of the project and its emerging issues (the topic will depend on the data, but an example might be the role of technology-supported feedback approaches presented at ASCILITE)*

[* Note: due to the time frame of writing and peer review these publications are unlikely to fall within the phase period. However, journals/publishers with early release or open access options will be preferred. The other academic outputs in this section should be read in a similar way.]

  5. One journal article reporting on the quantitative and qualitative data – models of significance in feedback influences and challenges (e.g., Journal of Higher Education)*

Phase 2 [CHALLENGE #2]: Why are some forms of these practices successful?

  1. Website created to host project details, findings and resources as they are developed.
  2. Seven rich cases of feedback designs, each with carefully designed supporting evidence in the form of professional videos and documentary materials such as examples of student impact and feedback. This is a significant deliverable. Each case will be actively disseminated via social media, email, etc.
  3. One infographic/diagram publicising the cases of feedback designs and their resources, drawing attention to key characteristics and principles. This will include direct hyperlinks to the cases.
  4. One journal article or book chapter describing the systemic challenges arising from analysis of the seven cases. This will support project dissemination and impact by demonstrating that its findings are judged significant by peers (e.g., via double-blind peer review in HERD)*

Phase 3 [CHALLENGE #3]: How can we best design for effective feedback to promote learning?

  1. A (meta) framework of feedback designs. The final shape of this framework will be guided by the data collection and through negotiation with stakeholders (e.g., participants, leaders, reference group, and project evaluator). It is conceived as a guide for institutions and educators in the choice and planning of feedback within their ecologies. It will be made up of a series of questions that stakeholders need to ask, accompanied by potential ‘best practice’ options with actual examples. This is a significant deliverable and will require considerable development time.
  2. An article in popular media of direct relevance to the target audience, such as The Conversation
  3. One conference paper proposing the feedback framework (e.g., HERDSA)*
  4. One journal article built on the above conference paper (Assessment & Evaluation in Higher Education)*

Phase 4 [CHALLENGE #4]: How can the circumstances of successful feedback for learning be replicated and sustained across and within Australian universities?

  1. Professional learning resources including PowerPoints, worksheets, synopses and links to specific examples within the rich cases of feedback designs. These will be prepared specifically for academic developers and workshop facilitators and will also include instructions for self-guided professional learning.
  2. Six workshops with up to 180 participants across six capital cities, in which participants actively design how they can implement the project outcomes. This is a significant deliverable relating to impact and dissemination.
  3. Leadership webinar delivering project findings and policy implications to leaders from Australian universities. This webinar will also collect participants’ insights regarding perceived opportunities, barriers and needed “next steps”.
  4. At least one article building on the workshops and leadership webinar, focusing on the systemic transferability/sustainability of the framework of feedback designs (e.g., Journal of Higher Education Policy and Management) *

In addition:

  1. The final report, with executive summary and one-page summary infographic

C. Project impact and dissemination

The following IMPEL key questions and matrix also incorporate responses to the D-Cubed Model for dissemination. This section should be read in conjunction with Section B in relation to specific outputs that also feed into the dissemination strategy.

1.   What indicators exist that there is a climate of readiness for change in relation to your intended project?

This response builds on the arguments around sector need and readiness in Section A. In brief, feedback is generally recognised as potentially valuable but highly variable in quality and impact, with resulting student dissatisfaction. As a consequence, to our knowledge all Australian universities seek to improve their approach to feedback. Universities need a way to systematically plan for effective feedback at the institutional, educator and student level. Examples of the sector’s appetite for empirically sound, pragmatic approaches are included in Section A, including the fact that a single research paper by Henderson and Phillips resulted in invited presentations to 7 faculties/divisions and 5 other universities. Consequently we expect (and are experienced in leveraging) considerable interest from across the sector in the outcomes of this project.

2.   In brief and indicatively, what impacts (changes and benefits) do you expect your project to bring about, at the following levels and stages of the Impact Management Planning and Evaluation Ladder (IMPEL)?

A project of this size and national importance is driven by a desire to have an impact on the sector as a whole. As described above, there is a climate of readiness. However, the framework and other outcomes of this project are unlikely to provide simple, single-step ‘fixes’ to the critical problem of feedback in Higher Education. Consequently, this project is designed to invest considerable time in working with stakeholders, particularly in the Phase 4 participatory design workshops (please see Question 3 of this section for more details about strategies to engage stakeholders). These are seen as particularly powerful in supporting participants (academic developers, leaders in Learning and Teaching divisions and other key stakeholders) to plan how the framework can be implemented within their ecologies.

The IMPEL matrix below offers an indicative plan for impact. For each level, anticipated changes (projected impact) are identified at four time points: project completion, six months post-completion, twelve months post-completion, and twenty-four months post-completion.

1. Team members
  • Increased inter/national profiles through project activities, with increased attention on the project and opportunities for invited presentations, papers, awards, policy advice, etc.
  • Improvements to own feedback practice.

2. Immediate students
  • Improved learning and satisfaction through enhanced feedback designs.
  • Improved student capacity for seeking and acting on feedback.
  • Improved learning and satisfaction via refined feedback designs.
  [Also provides ongoing comparative data for the project, which supports ongoing impact through future dissemination.]

3. Spreading the word (contributions to knowledge in the field)
  • Dissemination of outputs, including infographics, videos, and professional learning resources, via social media, direct email, media coverage, etc.
  • Two articles published: peer recognition of the validity of findings (see Outputs).
  • Two conference presentations to expand influence (see Outputs).
  • Two further articles published, demonstrating peer recognition of the validity of findings (see Outputs).
  • Additional journal article and conference paper arising from ongoing data analysis.

4. Narrow opportunistic adoption (at participating institutions)
  • Adoption of the framework and use of resources by educators, academic developers and leaders who participated in the study.
  • Presentations at university and faculty level, leading to adoption of practices and policy changes.
  • Improved student learning and experience; improved student capacity for seeking feedback and self-regulated learning.

5. Narrow systemic adoption (at participating institutions)
  • Policy change in the Faculty of Education, Monash [establishing feasibility].
  • Policy change within faculties at Monash and Deakin.
  • Policy changes and funded initiatives at institution level at Monash and Deakin.

6. Broad opportunistic adoption (at other institutions)
  • Changed practices of up to 180 academic developers, instructional designers and lecturers from all Australian universities who participated in the Phase 4 workshops across six capital cities.
  • Snowballing impact from the workshops, especially driven by participants in academic development and leadership roles, leading to improved student learning and experience and improved student capacity for seeking feedback and self-regulated learning. This in turn feeds into broad systemic adoption.
  • Take-up of practices through dissemination of findings, e.g., via conference presentations.

7. Broad systemic adoption (at other institutions)
  • Raised awareness of the framework and policy implications via the Phase 4 leadership webinar conducted with up to 80 leaders and managers from all 39 Australian universities.
  • Raised awareness through the reference group and media releases to peak bodies such as CADAD, ACODE, etc.
  • Policy changes and funded initiatives.

3.  What are your strategies for engaging with stakeholders throughout the project?

This project has been carefully designed so that dissemination strategies are an integral component of the activities and outputs of each Phase (please read the below in conjunction with Sections B and D).

Each phase of our project requires substantial engagement with students, educators, professional staff, and senior leaders. These groups are key informants, but we also see them as key audiences for our work. A portion of the budget has been assigned to support this engagement, for example incentives to participate in the survey, focus groups and case studies. The Phase 4 participatory design workshops, and to a lesser extent the leadership webinar, are designed to be a particularly effective mechanism for impacting on the ‘change enablers’ across the nation.

In each phase of the project we also intend to engage with the broader HE community (i.e., beyond our project participants) through dissemination of our findings, particularly in ‘easy to digest’ forms such as infographics, media releases, and “first glimpses of the data”:

  • direct email (as an outcome of Henderson’s previous OLT project, we have a database of 360 academic developers, Deans, Heads, PVCs and Provosts of teaching and learning, and directors/managers of T&L and related divisions from all 39 Australian universities).
  • social media updates (our collective social media streams have a current following of 2,100 academics and developers from Australia and internationally).
  • engagement with popular media outlets (in consultation with the OLT). Our previous OLT projects demonstrate our success in this strategy with considerable media interest arising from the release of findings at each phase.

A further key strategy involves the reference group and the external evaluator, who will further disseminate and draw attention to the project through their extensive networks.

Finally, the project team is highly accomplished at, and deeply invested in, the dissemination of findings through their networks, in particular through their regular attendance at national and international conferences such as HERDSA, AARE, and ASCILITE. The team will produce at least two conference papers during the project and four journal articles/book chapters, one in relation to each phase.

4.  How will you enable transfer ensuring that your project remains impactful after the funding period?

Our outputs will be designed with reuse and adaptation in mind. We will ensure that the source documents for our work are readily available so they can be embedded in institutional and departmental programmes. University teachers, academic developers, committees and leaders each have different needs and ecologies within which they work. The participatory design workshops are a particularly powerful strategy to disseminate our findings while also providing a way in which participants can be supported to adapt them to their own needs (with the further benefit of these outcomes ‘feeding back’ into the project outcomes to provide further examples of adaptations that other institutions could find helpful).

We will also provide ready-to-use continuing professional development resources, tested through the six participatory design workshops, that can be used by individuals, teams and academic developers to build capacity to improve feedback.

5.  What barriers may exist to achieving change in your project?

The key risk to this project is a continuing investment in old conceptions of feedback as only “giving comments to students”, with resistance to what may be perceived as time-consuming and complex alternatives. However, this project is designed from the outset to address the issue of why feedback designs in one context may not be successfully adopted in another. Phases 2-4 specifically seek information from students, educators, academic developers, managers and senior leaders about the conditions for success and the barriers involved at all levels. This project is designed to address the pragmatic need for effective feedback designs by raising awareness of the necessary conditions that need to be in place. We will have a particular focus on feedback designs that operate within standard learning and teaching workload models. It is important to recognise how we can work within (rather than ignore) constraints such as workload, budget, and time.

6.  How will you keep track of the project’s impact? What analytics may be useful?

We will record and track each engagement with project participants and the wider HE community, enabling us to confidently measure our ‘immediate reach’. We will also track the volume of website visitors, as well as views and downloads of specific resources (e.g., videos, infographics, and professional development PowerPoints). These are in addition to familiar analytics such as academic article downloads, citations, conference attendees, workshop attendees, invited presentations, media interviews, etc. We will also email all of the Phase 4 workshop participants two months after their participation seeking information about the traction of any outcomes (changes to policy, training, practices, plans, etc.). Tracking these metrics will be part of the Project Manager’s role.

7.    How will you maintain relevant project materials for others to access after the project is completed?

Our website and its resources will be hosted for at least five years. It will be maintained and updated during that five-year period by the Learning with New Media Research Group, under the supervision of A/Prof Henderson. The website will be updated with relevant academic articles and sector initiatives as they become known to the project team. In addition, the website will feature social media streams (e.g., Twitter), which will help establish the site as a ‘go to’ location for feedback-related designs.