Building Critical AI Literacy
A self-paced module for university educators — designing a professional development resource for AI-integrated assessment design in Australian higher education.
Context
This resource targets the higher education sector, specifically university teaching and learning at the University of the Sunshine Coast (UniSC), where it supports the development of critical AI literacy among academic and sessional staff through the Centre for Support and Advancement of Learning and Teaching (CSALT). The cohort has varying levels of confidence with AI and digital learning, and limited professional learning time in the face of an ever-growing need to respond critically to generative AI in their teaching. The learning focus is critical AI literacy with purposeful integration of AI into assessment design.
The central problem is highlighted by Australia’s Tertiary Education Quality and Standards Agency, which recommends structural reform of assessment in response to AI (Lodge et al., 2025). Lodge et al. state that detecting AI use with certainty in assessment is, at this point, all but impossible, and that alternative approaches are needed to complement academic integrity processes (2025, p. 2). Training that is structured and sustained enough to move beyond current AI-detection approaches towards a purposeful, pedagogically grounded rethinking of assessment design with AI is lacking. Addressing this gap falls within UniSC’s three key service areas for advancing learning and teaching performance: education technologies, curriculum support, and professional learning.
“Detecting AI use with certainty in assessment is, at this point, all but impossible. We need alternative approaches to complement academic integrity processes.”
— Lodge et al. (2025, p. 2) · TEQSA: Enacting Assessment Reform in a Time of Artificial Intelligence
- Program-wide redesign: assessment as a connected system spanning an entire degree, with developmental progression and multiple secure points across the program.
- Unit-level assurance: at least one secure assessment task in every unit, with controlled conditions and identity verification; preserves unit coordinator autonomy.
- Hybrid approaches: a strategic blend of program-wide principles in some areas with unit-level assurance in others; suits complex degree structures.
The AI Assessment Challenge — illustrating the gap between TEQSA’s reform expectations (Lodge et al., 2025) and the current state of academic AI literacy.
Solution
To meet this challenge, a four-week, self-paced, asynchronous module — AI-Integrated Assessment Design for Educators — is proposed, delivered via UniSC’s Canvas Learning Management System. Recognising the reality of time-constrained academic and sessional staff, the asynchronous, modular structure allows engagement with content around busy teaching commitments (Knowles, 1984). Week 1 introduces foundational AI concepts and critical AI literacy frameworks; Week 2 examines ethical dimensions, institutional policy and bias; Week 3 focuses on designing AI-integrated assessments; and Week 4 concludes with a practical capstone in which participants redesign an assessment and undertake structured peer review.
This scaffolded progression aligns with the University of South Carolina’s Teaching with GenAI webinar series (USC Center for Teaching Excellence, 2026), which transitions educators from foundations through pedagogical integration to reflective synthesis. The approach also complements UniSC’s existing ‘Foundations of University Teaching’ professional learning course, extending its curriculum support into emerging AI pedagogies. The four-week structure responds to Lodge et al.’s (2025) three assessment reform pathways, equipping staff to contribute meaningfully to whichever pathway their institution adopts.
- Week 1: Foundational AI concepts and critical AI literacy frameworks.
- Week 2: Ethical dimensions, institutional policy, and bias in AI systems.
- Week 3: Designing AI-integrated assessments with planning tools.
- Week 4: Practical assessment redesign capstone with structured peer review.
The three reform pathways (Lodge et al., 2025) are:
- Program-wide redesign
- Unit-level assurance
- Hybrid approaches
Chunking content into weekly modules aims to reduce cognitive load on busy teaching staff (Sweller, 1988). The course is estimated to require approximately 5–7 hours to complete.
- Week 1: What is generative AI? Foundational concepts and critical AI literacy frameworks.
- Week 2: Ethical implications, bias, academic integrity, institutional policy and TEQSA’s reform pathways.
- Week 3: Strategies for purposefully incorporating AI into assessment tasks using the Planning Worksheet.
- Week 4: Submit a redesigned assessment from your own teaching context; structured peer review builds evaluative judgement.

Theories: Knowles (1984) Adult Learning · Sweller (1988) Cognitive Load · Kolb (1984) Experiential Learning
Four-week module structure showing scaffolded progression from foundational AI literacy to practical assessment redesign, aligned with the OECD/EC AI Literacy Framework domains and Lodge et al.’s (2025) reform pathways.
Learning Content
The learning content will be delivered through Canvas, UniSC’s primary platform for learning and teaching (WCAG 2.1 compliant). It will use short captioned videos (2–5 minutes with transcripts), curated readings, interactive H5P activities including scenario-based decision tasks, and downloadable templates such as an AI Assessment Planning Worksheet. These means of engagement and representation align with Universal Design for Learning (CAST, 2018) and Mayer’s (2009) multimedia learning principles. To ensure accessibility throughout the modules, all images will be accompanied by descriptive alt text, videos will include captions and downloadable transcripts, and templates will use accessible, high-contrast formatting. This mirrors the resource and platform approach implemented and validated by the Auburn University and USC collaborative Teaching with AI Canvas course, which delivers targeted faculty professional learning through self-paced Canvas modules at institutional scale (Auburn University Biggio Center & USC, 2025).
Universal Design for Learning
UDL is the organising framework for how content is designed and delivered across the module. Rather than retrofitting accessibility as a compliance measure, UDL principles are embedded from the outset — anticipating the genuine diversity of a teaching workforce that includes early-career sessionals, experienced academics, staff across multiple campuses, and those with varying levels of digital confidence. The module addresses all three UDL principles explicitly:
- Multiple means of representation: the same content is offered in multiple formats (short video with captions and transcript, written readings, and visual infographics), so learners can engage with whichever mode suits their context, whether commuting, at a desk, or in a noisy staffroom.
- Multiple means of action and expression: participants demonstrate understanding through H5P branching scenarios, discussion board contributions, a planning worksheet, and a written capstone rationale, offering varied pathways to express professional learning rather than a single response format.
- Multiple means of engagement: self-paced delivery respects individual time constraints, while optional synchronous Zoom sessions, structured peer review, and disciplinary discussion boards provide multiple points of connection without mandating a single mode of participation.
Accessibility
All images carry descriptive alt text. Videos include closed captions enabled by default, with downloadable transcripts in accessible PDF format. Downloadable templates — including the AI Assessment Planning Worksheet — use tagged headings, high-contrast formatting, and are compatible with screen readers. The Canvas platform meets WCAG 2.1 AA compliance standards, and all H5P activities are tested against Canvas’s built-in accessibility checker before publication.
Accessibility is not treated as a compliance checklist but as a modelling decision: a module about inclusive assessment design should itself demonstrate inclusive design. This gives participants a direct experience of what UDL looks and feels like as a learner, before they apply it in their own teaching contexts.
Validated External Model
The self-paced Canvas delivery approach mirrors the Auburn University and USC collaborative Teaching with AI course, which delivers targeted faculty professional learning through asynchronous modules at institutional scale (Auburn University Biggio Center & USC, 2025).
Banner from the Auburn University and USC Teaching with AI self-paced Canvas course — the validated model informing this module’s design and delivery approach.
Mock-up of the Week 3 Canvas module page showing captioned video, interactive H5P scenario activity, downloadable planning worksheet, and accessibility annotations (CC captions, alt text, WCAG 2.1 compliance).
Community, Practice & Assessment
Community of Inquiry
Peer interaction is facilitated through weekly Canvas discussion boards where participants share disciplinary examples of AI-vulnerable assessments and draft redesigns, building a community of inquiry across teaching, social, and cognitive presences (Garrison et al., 2000). Optional fortnightly Zoom Q&A sessions with a UniSC CSALT educational designer offer synchronous support for those who want it. These interactions support CSALT’s strategic initiative to advance blended learning approaches across the University.
Practical Engagement
Practical elements are scaffolded across all four weeks: experimenting with AI tools (Week 1), auditing an existing assessment for AI vulnerability (Week 2), drafting a redesigned assessment using the planning worksheet (Week 3), and conducting structured peer review (Week 4). This staged progression models Kolb’s (1984) experiential learning cycle, ensuring that each practical activity builds towards a concrete and transferable output. This curriculum support approach aligns with CSALT’s portfolio of services to advance UniSC’s learning and teaching performance, culture and profile.
Capstone Assessment
Learning will be assessed through the Week 4 capstone task: participants submit a redesigned assessment from their individual teaching contexts, including a written rationale explaining design decisions, the AI literacy framework informing their choices, and how the task addresses both integrity and purposeful AI integration. Peer review helps develop evaluative judgement — the capacity to assess quality in one’s own and others’ work (Tai et al., 2018) — which is itself a critical dimension of AI literacy. This authentic assessment model follows Wiggins’ (1990) principle that assessment tasks should mirror real professional work. The capstone output contributes to UniSC’s graduate attributes by developing staff capacity to design assessments that equip students to participate ethically, critically, and actively in an AI-ubiquitous society.
- Part B: deliver a 10-minute live pitch to peers, responding to unscripted Q&A; submit an annotated AI interaction log showing your prompts and critical evaluation process.
- Part B: maintain a weekly design diary (4 entries) documenting iterative decisions and how AI tools were used, and why suggestions were accepted, modified, or rejected.
- Part C: record a 5-minute walkthrough referencing specific students (de-identified) and placement observations.
Before and after comparison: traditional AI-vulnerable assessment prompts redesigned to purposefully integrate AI, with design rationale annotations — the type of output participants produce in the Week 4 capstone.
The Case for Structured, Self-Paced Training
This proposed course design is grounded in the principle that sustained, scaffolded engagement builds deeper professional capability than one-off professional learning. The four-week structure allows academics to move through each of the OECD/European Commission’s AI literacy domains — engaging with AI, creating with AI, managing AI, and designing AI — at a pace that accommodates teaching loads (OECD & European Commission, 2025). The knowledge–skills–attitudes triad embedded in that framework justifies using multiple content types: readings build conceptual understanding, H5P activities develop practical skills, and reflective prompts cultivate responsible dispositions.
Critically, self-paced Canvas delivery has proven effective for faculty professional learning at scale, and this model aligns with CSALT’s approach to delivering scalable, high-quality educational experiences that support staff across all campuses and modalities. The Auburn University and USC Teaching with AI course reaches faculty across multiple institutions using this model (Auburn University Biggio Center & USC, 2025), while USC’s GenAI webinar series demonstrates that a three-phase pathway of foundations, integration, and synthesis sustains educator engagement across a semester (USC Center for Teaching Excellence, 2026).
Assessment should equip students to participate ethically, critically, and actively in a society where generative AI is ubiquitous — a goal that first requires educators themselves to develop these skills and capabilities.
By embedding assessment redesign as the culminating task, the design ensures transfer to practice: participants finish with a concrete, usable artefact rather than abstract knowledge. This also aligns with Lodge et al.’s (2025) recommendation for structural assessment reform, which emphasises that assessment should equip students to participate ethically, critically, and actively in a society where generative AI is ubiquitous. While this is optional professional learning, it is also institutionally strategic: positioned within CSALT’s professional learning service area and reporting to the Deputy Vice-Chancellor (Academic), the module contributes to UniSC’s strategic priorities for curriculum innovation and educational technology enhancement.
Anticipated Insights & Open Questions
The implementation of this module at UniSC would likely surface important questions about academic readiness: how confident are staff in their own AI literacy, and does this vary by discipline or career stage? Do sessional staff — who represent a significant proportion of UniSC’s workforce — engage differently with voluntary, asynchronous professional learning than continuing academics?
Completion rates in self-paced programs are historically low; understanding why participants disengage — time pressure, perceived irrelevance, lack of institutional recognition — would be as valuable as measuring who completes the module. The peer review component may further reveal whether structured feedback genuinely builds evaluative judgement in this context, or whether staff need more facilitated guidance to provide meaningful critique.
Finally, the module may reveal institutional barriers beyond individual capability — unclear AI policies, inconsistent LMS support, or disciplinary norms that resist assessment innovation — which would inform broader institutional strategy. These insights would directly support CSALT’s role in providing curriculum and pedagogical advice for academic success and informing University-wide learning and teaching strategy.