Student opinion surveys typically place assessment and feedback as the least successful aspects of our courses. This is still the case after more than 20 years of institutional efforts to address the problem. Doing more of the same does not work. How then can we rethink feedback so that it can become much more effective without adding to the time taken to do it? Feedback is normally thought of as helpful comments provided to students. However, it can be more useful to think of feedback as a process in which students have an active role that leads to improved learning. The workshop will explore new ways to think about feedback and how we can design it into courses rather than see it as merely an adjunct to marking or grading.
Academic leaders play a key role in developing and sustaining cultures of teaching excellence through specific leadership activities¹, including recognizing and rewarding excellent teaching and teaching development, and thoughtfully identifying teaching problems and turning them into opportunities. These leadership activities are strongly supported by both coaching and mentoring conversations with academic staff. In this workshop we will focus on how feedback can provide entry points for coaching and mentoring conversations with academic staff about teaching practice and development. We will also explore how growth-oriented feedback can be responsibly used in performance assessment. We will use case studies to illustrate the real context and complexity of coaching, mentoring, and performance assessment of teaching practice in higher education.
1. Gibbs, G., Knapper, C., & Piccinin, S. (2008). Disciplinary and contextually appropriate approaches to leadership of teaching in research-intensive academic departments in higher education. Higher Education Quarterly, 62, 416–436.
As we develop our courses to meet the new demands facing graduates in the 21st century, we often meet a constraint imposed by assumptions about assessment—what it should do and how it should be conducted—left over from an earlier era. Being assessed has a powerful influence on students. While it can be a positive experience, too often it is negative; students study in unproductive ways and do not focus on the outcomes that may be most important. This presentation starts with this challenge and explores ways in which assessment interacts with learning. It will discuss how we can think differently and more productively about the role of assessment in our courses. It locates consideration of assessment practices within the global agenda emphasizing standards and criteria in higher education and their representation in learning outcomes. A particular focus will be on avoiding de-skilling students through an excessive focus on making unilateral judgements of them, and on ensuring we invest our own assessment time productively.
In this workshop, participants will learn about the assessment strategies we have applied to evaluate our student academic success program over the past twelve years, and will be invited to reflect on the considerations and challenges of assessing such activities. SFU’s Academic Enhancement Program (AEP) is a collaborative program developed and delivered by the School of Computing Science (CS) and the Student Learning Commons (SLC). Its goal is to help students succeed in their studies and enhance their overall academic experience by incorporating workshops on ‘learning about learning’ within their core first-year CS courses. Workshops and self-reflection activities are tailored to the courses in which they take place, and students receive points for their participation, making the program an integral part of the course. We will present key characteristics of the AEP, lead participants through selected activities from our workshops, and share some of the assessments that we have undertaken and their implications. This session will be of interest to anyone in an instructional role who is exploring the incorporation of student success initiatives into their teaching, as well as considering ways of assessing such activities.
The two-stage exam process involves students completing an exam individually, and then completing the multiple-choice portion of the exam collaboratively in groups of 4-5. The collaborative process of converging on a single group answer for each question promotes learning of concepts (e.g., Gilley & Clarkston, 2014), and students’ responses to the process are favourable (e.g., Wieman, Rieger, & Heiner, 2014). In a 2017/2018 study of two Psychology courses and one Geography course, we examined students’ (N = 106) perceptions of and experience with the two-stage exam process, and how students’ perceptions of the exam were related to their exam performance. In this workshop, we will highlight results throughout an interactive session in which participants engage in a mock two-stage exam and discuss their reactions to the process. This workshop is geared toward any faculty or staff who are teaching and interested in learning about the two-stage exam process. By the end of this session, participants will have: reflected on the shared experience of participating in a mock two-stage exam; discussed the perceived benefits and challenges of using two-stage exams in their courses; and gained concrete tools and resources for implementing a two-stage exam.
Collecting meaningful, relevant, and timely student feedback on teaching can be done in multiple ways. Student feedback on teaching may include traditional course evaluations, but also instructor-designed surveys used during the term (early enough to make adjustments), focus groups (conducted by an educational consultant, TAs, or peers), or class observations by trained undergraduate students. Participants will learn how to design survey questions (or SETC questions) that address their own questions about their course and their teaching practice. They will explore how to distill and analyze survey (or SETC) results, how to use these data to improve teaching and learning, and how to present relevant findings in their teaching dossier.
In 2018, an arbitration award involving Ryerson University ruled that the use of student evaluations of teaching (SET) as “a key tool in assessing teaching effectiveness is flawed, while the use of averages is fundamentally and irreparably flawed.” This dialogue session is aimed at faculty and administrators who are in the position of evaluating faculty, as well as faculty who are interested in the ways in which teaching can be assessed beyond SET. It will provide examples of how some departments at SFU are decreasing their reliance on SET during promotion and salary reviews, before opening the session up to the audience to explore the benefits and challenges of some alternative methods of assessment.
The competition for entrance to almost every type of academic program is intense. Coveted professional programs – including medicine, law, business, and physiotherapy – are receiving more applications than ever before. In this competitive environment, we propose that SFU’s letter-grade-only system puts our students at a disadvantage and adversely affects student mental health. This panel discussion targets almost everyone involved in assessment at SFU, from students and faculty to administrators (who ultimately hold the key to change). The session outcomes are: i) identifying and challenging perspectives on letter grades across roles and disciplines; ii) reviewing and challenging the available data – anecdotal and otherwise – on the effects of the letter grade system; and iii) considering solutions and next steps (and whether any steps need to be taken at all). The session will include a panel that spans roles (i.e., faculty and students) and disciplines, presenting their perspectives and fielding questions; the remainder of the session will be shaped by the audience, using three different approaches: solo written reactions, small group discussion, and large group discussion. The letter grade system, and its variability across campus, affects faculty and students and dramatically shapes the learning environment at SFU. We should carefully consider these impacts.
Developing and evidencing teaching expertise has garnered much attention across the Canadian post-secondary landscape. Teaching expertise is multi-faceted and is developed through a learning process that continues over one’s career.1 This learning process is complex. The Developmental Framework for Teaching Expertise introduces three foundational values—inclusive, learning-centred, and collaborative—that ground five interwoven facets of teaching expertise (teaching and supporting learning, mentorship, professional learning and development, educational leadership, and research, scholarship and inquiry). This keynote presentation will demystify the process of developing teaching expertise and provide an opportunity for participants to actively explore how to support their professional learning to grow their teaching practice using the Developmental Framework for Teaching Expertise.
This presentation shares findings of a four-year post-entry language assessment (PELA) project at the Beedie School of Business at SFU, in which business faculty and staff collaborated with language and literacy education faculty in an iterative process of design, delivery, assessment, and curriculum design and implementation. Informed by literature on post-entry language assessment (Fox et al., 2016), embedded and content and language integrated learning (Cammarata et al., 2016; Murray & Nallaya, 2016), and theory and research on assessment in business education (Colby et al., 2011), we highlight the challenges and opportunities of identifying and supporting the needs of multilingual undergraduate students once they have been mainstreamed alongside native speakers of English in the business program. First, we cover, in broad strokes, four years of PELA results and trends that shed light on the overall outcomes of students’ writing in a first-year business program. Second, we discuss the process of faculty collaboration and content and language integration to ensure greater validity and reliability of assessment results, as well as to design curriculum and instructional responses to the data. We hope this presentation will help develop a deeper understanding of university students’ linguistic performance and development within content courses, as well as inform future collaborative efforts between faculty and staff from different disciplines in curriculum and assessment design and delivery.
The Developmental Framework for Teaching Expertise introduces three foundational values—inclusive, learning-centred, and collaborative—that ground five interwoven facets of teaching expertise (teaching and supporting learning, mentorship, professional learning and development, educational leadership, and research, scholarship and inquiry). In this workshop participants will actively explore how to use the framework. Specifically, they will: map their everyday teaching and learning activities to facets of the framework along a development continuum; identify how to communicate strengths; and practice using the framework for dialogue with their peers and academic leaders.
One key challenge in academic programs is determining how the learning in individual classes and courses is coordinated and aligns with program goals, which are often missing or vaguely described. In this session we tell the story of our effort to develop clear program goals for undergraduate (BA, BSc) and graduate (MPH) degree programs that are coherent yet flexible, and that can inform the design and deployment of assessments across courses and levels of learning. An overarching goal of our effort is to offer everyone considering or entering our programs a clear rationale for the teaching and learning activities they will experience, deliver, and/or support. The evidence gathered and the lessons learned through our endeavour will be shared with the audience, who may then make more informed decisions in their own efforts. Because we included students, staff, faculty, and administrators in our work, the target audience for this session is necessarily broad. The session will provide participants with a chance to share their own stories of alignment/misalignment between program goals and learning assessment. By sharing different journeys and stages of curriculum reform, we hope to reveal more effective and meaningful pathways for articulating and assessing program-level outcomes.
Applied linguistics is focusing greater attention on the importance of developing multilingual EAL (ME) students’ academic literacy skills within disciplinary contexts in post-secondary education (Jacobs, 2007). Despite this shift from a generic to a discipline-specific approach, few studies have examined course-aligned models of support for ME learners in university fine arts courses. This presentation attempts to address this gap by focusing on an interdisciplinary collaboration between language education and fine arts faculty to design a course-aligned model of support, in light of Wenger’s (1999) community of practice, for ME students at a Canadian university. Data in this qualitative case study were gathered from classroom observations and language support sessions, pedagogical documentation, and student questionnaires. Data analysis has revealed that the theoretical foundations of the arts program can both limit and enable the model of academic literacy development: literacy development may require scaffolding, structure, and rule-governed practices, while the arts program encourages creativity and thinking “outside the box”. The findings can help faculty and curriculum developers better understand issues in developing discipline-specific models of support for ME students. Recommendations are offered for content-area faculty teaching linguistically diverse classes and for curriculum developers designing models of support for ME students.
Language and communication are central to Health Sciences, and the student population in this discipline is increasingly linguistically and culturally diverse. It is therefore important to introduce and implement initiatives and measures to better assess and address the language and communication needs of (especially, though not exclusively) English as an Additional Language (EAL) students, who make up an important segment of the student population at SFU. This oral presentation reports on the planning and subsequent implementation of a pilot project to support (primarily, though not solely) EAL students in a foundational health science course at SFU. The pilot project was conducted collaboratively by the Centre for English Language Learning, Teaching and Research (CELLTR) and the Faculty of Health Sciences.
In an era of misinformation, educators face the daunting and important task of providing students with a strong foundation in information literacy. Academic librarians are committed to this challenge, and rely on a document called the Framework for Information Literacy for Higher Education to inform and shape instruction practices. The Framework posits that information literacy comprises six distinct but overlapping ideas, known as Frames, such as “Authority is Constructed and Contextual” and “Scholarship as Conversation.” Concepts from these Frames are often integrated into library instruction sessions. While the Framework provides guidance on which concepts to teach, librarians are required to determine their own context-specific measures of assessment. The Framework adopts Meyer and Land’s theory of threshold concepts to describe the process of knowledge acquisition, but establishing benchmarks for this cognitive transformation is notoriously difficult. This presentation introduces the audience to the Framework, and outlines how the research team (two librarians and a Computing Science instructor) exposed students to concepts from two of the Frames through a course assignment for a third-year Computing Science course. After discussing how we coded and assessed student responses, we invite feedback on our methodology and further collaboration to develop best practices.
Imagination doesn't get much airtime in post-secondary education. When it does, it is often described as a "hook" for learning rather than the heart of engaging and effective teaching practice. Discussion of imagination in the context of assessment and evaluation of student learning is even more rare. This interactive workshop introduces an approach to teaching called Imaginative Education in which “cognitive tools” are employed to bring students’ emotional and imaginative engagement with subject matter to the heart of pedagogy. The aim of imaginative educators in all contexts is to provide opportunities for their students’ imaginative engagement all the way through the learning process. To do this, imaginative educators employ cognitive tools to create imaginative contexts for learning that include processes of assessment and evaluation. Following a brief introduction to Imaginative Education, cognitive tools and the distinctive features of students’ imaginative lives in post-secondary, participants will be invited to play with ideas. They will be encouraged to use cognitive tools to reimagine how their students can demonstrate their learning. Collaborate with colleagues in re-imagining how you assess and evaluate student learning and learn practical ways to make your assessment and evaluation practices more imaginative. See how different cognitive tools can be employed in the context of assessment and evaluation in post-secondary.
Why bother with a rubric? Do they matter to students? To Instructors? This session considers the importance, creation, and implementation of a rubric through active learning strategies. We will examine the creation of clear and concise rubrics that focus the student’s efforts, while supporting the instructor’s assessment.
The goal of this workshop is to familiarize faculty members with these options, how they can be implemented, their pros and cons, and the type, breadth, and depth of information they can provide. While there will be an initial increase in workload when implementing these methods, a combination of multiple methods (e.g., surveys, observations, reflection) and data sources (students, peers, self) is not only suggested in SFU policies but also widely recommended as a more equitable, balanced, and realistic approach that allows for alignment with an individual instructor’s career path. This session will also address the more technical requirements of both the tenure and promotion process (e.g., documentation of teaching and related activities) and the biennial review, as well as what TPCs are looking for in their assessment (e.g., reflection and progress).
Although evaluating students may seem simple, evaluating good teaching has proven far more difficult. The Teaching Assessment Working Group, created in 2017, conducted a survey of instructional staff at SFU in order to understand how teaching and learning are evaluated beyond the current Student Evaluation of Teaching and Courses (SETC) system. Participants report relying on feedback they receive from students, both formally and informally, when making changes to their teaching, but they are concerned that SETCs appear to be the dominant factor considered by administrators in salary reviews. This session will discuss these findings, as well as participants’ perceptions about how teaching is valued at SFU and how teaching evaluation can be improved. Following our presentation, we will encourage audience members to discuss current methods of teaching evaluation, as well as whether and how they can be improved. We hope that instructors and administrators will walk away with a better understanding of instructors’ perceptions of teaching evaluation systems at SFU.
Supporting students’ needs and respecting students’ diversity is more achievable than we might think. Through experiential learning, you will gain a deeper understanding of the barriers many students encounter. You will develop practical actions to take to make your classroom a better place to learn for all students. This session takes an experiential, collaborative approach, and discusses disabilities, language fluency, and other challenges students may face. This session is targeted towards individuals who deliver content, as well as those who design content and those who may be involved in the creation of academic spaces and policies.
Teaching assessment by peers provides a complementary angle to student assessment, and has become more popular recently at SFU and elsewhere. This session will examine the goals and scope of peer observation as well as models, options and guidelines for performing and for receiving peer feedback. Participants will evaluate the benefits and limitations of these models and discuss how to document and extract information, as well as how to present findings. Faculty members who have used peer assessment will share their own experiences of the practice.
Geared towards faculty, instructors, and administrators, this 60-minute workshop will help you learn how to design teaching assessment frameworks for career progress. After a brief introduction to the content of the SETC working group report, we will introduce a selection of the 75 different assessment methods it contains, and provide guidance on how to combine them to create an efficient but also balanced tool for assessing teaching achievement. We will then invite participants to create their own assessment frameworks in groups and discuss their advantages and disadvantages.
1) Personalizing Learning to Optimize Student Engagement – Michael Maser (Education)
2) Emergence of a New Approach to the Dichotomy between Curriculum and Assessment – Petra Menz (Mathematics)
3) Writing Social Concepts: Using a Focus Group to Assess Student Learning – Sonja Luehrmann (Sociology/Anthropology), Marina Khonina (Sociology/Anthropology), and Marina Kadriu (Sociology/Anthropology)
4) Capitalizing on Digital Distraction in the Classroom: The Use of a Free Online Student Response System – Atousa Hajshirmohammadi (Engineering Science)
5) Using Learning Analytics for Instructors’ Insights and Data-Driven Teaching Intervention – Marek Hatala (Interactive Arts and Technology)
6) Surrey CityLab: Using Academic-Community Partnerships to Advance Experiential Learning – Paola Ardiles (Health Sciences) and Henrietta Ezegbe (Health Sciences)
As instructors, we are told we need to read and respond to our student evaluations of teaching (SEoT). We work hard to provide good learning experiences for our students, but the comments can be harsh and confusing (Hodges & Stanton, 2017) and send us into a spiral of negativity and self-doubt. So, how should we respond? Should we, or do we: Avoid them? Read but ignore them? Read and obsess? Read and respond? At this roundtable session, we will acknowledge and honour the difficult emotional aspects of SEoTs and provide an opportunity to discuss this in a collegial and empathetic environment. We will not be discussing the reliability and validity of SEoT, but rather will focus on what we need to do to take care of ourselves as instructors who receive SEoT reports and/or other anonymous comments about our teaching from students. Topics of our conversation will include: sharing approaches to reading and/or responding to anonymous student comments; managing emotional reactions to SEoTs; reaching out to colleagues for support; and instructor well-being.
This session may be of interest to faculty members/instructors, graduate students, and administrators who are considering different strategies to provide feedback on teaching. In the spring of 2019, the School of Criminology Undergraduate Curriculum Committee designed and implemented a summative assessment process to evaluate our sessional instructors. The purpose of the assessment is to support and provide constructive feedback to our sessional instructors and to help them build their teaching portfolios. The pilot evaluation involved both classroom observations and a review of course materials. After implementation, we identified several challenges related to the processes and substance of the evaluation procedures.
After describing the summative assessment procedure, the facilitators will conduct a short mock lecture as part of an activity that will require audience members to create measures that capture the strengths and weaknesses of the lesson. The session outcomes are to facilitate rich dialogue on the topic of best practices of instructor evaluation that go beyond SETC and to engage in discussion about the challenges we faced in piloting our evaluation process. We hope that this session will facilitate the ongoing reform of the teaching evaluation process both within the School of Criminology and across the institution.
SFU’s online SETC system enables faculty members to add personalized, instructor-level items to the evaluation. These items can either be selected from the main item bank or created by the instructor, and they can be either multiple-choice or open-ended. Given widespread debate about SETC, the purpose of this workshop is to enable faculty and students to discuss and share their experience using this aspect of the system. The workshop will begin with a presentation of how one faculty member utilized this tool across two courses taught multiple times over a two-year academic period (n=822), and the insights obtained from an analysis of the data about learning and teaching practice. The data generated from this component of SETC can be used as evidence of student learning and to inform curriculum development and reflective teaching practice. Participants who have used SETC are asked to bring examples from past individualized reports and will be given the opportunity to share their experiences utilizing this tool. The session will be of particular interest to faculty members, administrators, and curriculum development specialists interested in student evaluation surveys.
Picture this: You provide detailed feedback on your students’ assignments, then watch as they flip to their grades and recycle the work on their way out the door. This is the “disposable assignment”: written for a single purpose (i.e., to obtain a grade) and then discarded. In this session, we invite you to learn about how Open Scholarship and Open Pedagogy offer alternative approaches for developing “renewable assignments” that can meaningfully engage students through: ● Creating content for open textbooks ● Publishing their work in open journals ● Developing content for Wikipedia ● And more! These approaches to teaching and learning can incentivize students to do their best work by broadening their assignment’s audience. At the same time, students can be involved in engaging and meaningful course projects which add their voices to the scholarly conversation and contribute to public knowledge.
In this discussion-based roundtable, we will look at examples of open assignments piloted by SFU and other institutions. We will also discuss opportunities for instructors to partner with SFU Library to support Open Pedagogy and Open Scholarship in the classroom and embrace new forms of learning and assessment beyond the disposable assignment.
At the University of the Fraser Valley (UFV), we learn and teach by design in active learning environments spanning face-to-face pedagogy, hybrid learning, and fully online education (“Hybrid Learning at UFV: Maximizing Space and Pedagogical Resources,” Igobwa & Wideman, 2019). These three modes of student-centred learning are facilitated in collaboration with learners and educators while drawing dialectically from our “Five Goals for the Education Plan 2016-2020” (https://ufv.ca/media/assets/provost/education-plans/UFV-Education-Plan-Goals.pdf). Our proposal falls under the conference theme “Celebrating Teaching” and the sub-section “Exploring the institutional commitment to teaching excellence.” Course review is a collaborative effort between those involved in learning and teaching and our Teaching and Learning Centre. “Check Under the Hood” is an initiative that supports faculty in reviewing courses in a collegial manner using an “appreciative inquiry” perspective (Appreciative Inquiry in Higher Education: A Transformative Force, Jeanie Cockell and Joan McArthur-Blair, 2012). This initiative reflects our commitment to nurturing teaching excellence at UFV. During our presentation, we will discuss how faculty receive and review our recommendations/feedback and which recommendations they elect to adapt for their course(s). Additionally, we will highlight selected “before” and “after” courses to demonstrate this process. We will collegially solicit your feedback as we endeavour to further improve our course review strategy.
Developing critical reading skills for primary scientific literature is a foundational practice for scientific literacy. In this workshop, we will use a combined format (data presentation, application, and discussion) to present the data that informed the development of a 5-step reading approach and its impact on student learning in a large second-year classroom. Participants will have the opportunity to apply the reading approach to a paper of their choosing. This will be of interest to instructors who are seeking teaching approaches that facilitate reading comprehension and to students who wish to enhance their reading skills.
In this workshop participants will: ● Consider interview and survey data of student reading approaches that informed the 5-step reading activity ● Evaluate the impact of the 5-step reading approach on a large 200-level classroom ● Review and apply the 5-step reading approach to a primary scientific literature article* ● Discuss general strategies for facilitating student critical reading practices *Participants may bring in a primary scientific article of their choosing or use a supplied article
A journal is a window into how learners are responding to course activities, content and experiences. Empowering the learner (and ourselves) to express what we know, how we have come to know it and the significance of the learning in diverse and multi-modal ways, opens up windows into our thinking. The power in journaling provokes dialogue about authentic assessment and offers a platform to celebrate/curate diverse forms of expression. Participants will discover a variety of resources helpful to prompt journaling, engage in conversations of assessment and be inspired to use journaling in their courses. Join us in a carousel of stations, pair and share debriefs, photographs/samples of reflective thinking and take away the link to a shared folder of resources. David uses journals in many aspects of mentoring/teaching in the Graduate Diploma of Education (Nature-Based Education + Experiential Learning), in his varied professional work and in summer adventures.
Teaching spatial thinking as a separate course in its own right helps students develop spatial abilities. We will discuss our findings based on an assessment we conducted to explore how first-year students demonstrate their representational skills in solving a spatial (design) problem requiring both analysis and synthesis. We collected and analyzed the video data using visual analytics tools and conducted an independent expert panel review of the student artifacts with an assessment rubric. Our findings show high variability in students’ solutions and their representations. We discovered, somewhat unexpectedly, that students used embodied actions in their process. Session participants will be introduced to a post-course assessment method for determining the fidelity between teaching and learning activities and the results of these activities. Our approach and the results of the assessment may be useful and inspiring to others involved in course and program assessment activities. Our session will begin with a short presentation of the assessment method and findings, followed by an interactive discussion based on questions posed by the presenters.
Self-assessment and reflection constitute the third ingredient of teaching assessment, in addition to documentation of teaching activities and data collection on effectiveness and perception. The commonly used format for this is the teaching dossier (or portfolio), including the teaching philosophy statement. In this workshop, participants will get started on preparing their own dossier by reviewing the rationale for and scope of teaching dossiers, and comparing several models for teaching dossiers in various disciplines. They will explore how to create a Philosophy of Teaching statement, discuss approaches for documenting teaching activities as well as course reflections, and engage in peer review of the results.
This session is intended for instructors interested in the complexity of understanding what students know and can do. Participants will consider the challenge of assessing students accurately and embracing the ambiguity inherent in making judgments about the knowledge and skills of other people. Classical test theory tells us that the scores we observe in the test results of our students are the sum of their true ability and some amount of undefined error, which has implications for the judgments we make. Expanding this principle to assessments broadly, participants will consider four key questions: 1. Why do we assess students? 2. What are potential sources of error in understanding what students know and can do? 3. What is the role of fairness, equity, and equality in assessing individual students? 4. How can we acknowledge these sources of error and support our students? Student self-evaluation and instructor/student conversations will be presented as techniques that may help us better understand the learning of our students. This session will engage participants in assessment theory, the role of error, and concepts of reliability and validity in assessment, and ultimately lead to a broader dialogue about how we engage in understanding student learning in our classes.
SFU has a strong record of teaching excellence, with a history of university-level awards stretching back into the 1980s and considerable representation in national-level recognition (e.g., 3M teaching awards). But what does it take to reach this level of accomplishment and establish a culture of teaching excellence? As members of SFU’s Teaching Assessment Working Group (TAWG), we invite a variety of previous award winners to share significant moments, turning points, and practices in their teaching that contributed to their success, in order to highlight common threads in what award-winning teachers do. With an eye towards inspiring future award winners and celebrating the success of faculty, this session will be an opportunity to reflect on what we are doing well as a teaching community at SFU.
The Teaching Assessment Working Group will release a draft version of our final report in April 2019. This session will allow faculty and administrators to examine and comment on the recommendations. The draft recommendations will be presented, and participants will be asked to consider the recommendations and make suggestions. Feedback from the session will be used to inform final revisions of the report.