Building Critical AI Literacy — Portfolio

Context

This resource targets the higher education sector, specifically the development of critical AI literacy for academic and sessional staff at the University of the Sunshine Coast (UniSC), delivered through the Centre for Support and Advancement of Learning and Teaching (CSALT). The cohort has varying levels of confidence with AI and digital learning, and limited professional learning time in the face of an ever-growing need to respond critically to generative AI in their teaching. The learning focus is critical AI literacy with purposeful integration of AI into assessment design.

The central problem is highlighted by Australia’s Tertiary Education Quality and Standards Agency (TEQSA), which recommends structural reform of assessment in response to AI (Lodge et al., 2025). Lodge et al. (2025, p. 2) state that detecting AI use with certainty in assessment is, at this point, all but impossible, and that alternative approaches are needed to complement academic integrity processes. Training that is structured and sustained enough to move beyond current AI-detection approaches towards a purposeful, pedagogically grounded rethinking of assessment design with AI is lacking. This gap sits within UniSC’s three key service areas for advancing learning and teaching performance: Education technologies, Curriculum support, and Professional learning.

The AI Assessment Challenge
Why Australian universities must move from AI detection to purposeful assessment redesign

“Detecting AI use with certainty in assessment is, at this point, all but impossible. We need alternative approaches to complement academic integrity processes.”

— Lodge et al. (2025, p. 2) · TEQSA: Enacting Assessment Reform in a Time of Artificial Intelligence

0%
Detection Reliability
No AI detection tool can reliably identify AI-generated content. False positives penalise students unfairly.
3
Reform Pathways Identified
Program-wide, unit-level, and hybrid approaches to structural assessment reform (Lodge et al., 2025).
5
Guiding Propositions
Authentic AI engagement, systemic program alignment, learning process focus, collaborative AI use, and meaningful security points.
↓   TEQSA’s Three Assessment Reform Pathways   ↓
Pathway 1
Program-Wide Redesign

Assessment as a connected system spanning an entire degree, with developmental progression and multiple secure points across the program.

Pathway 2
Unit-Level Assurance

At least one secure assessment task in every unit, with controlled conditions and identity verification. Preserves unit coordinator autonomy.

Pathway 3
Hybrid Approach

Strategic blend of program-wide principles in some areas with unit-level assurance in others. Suits complex degree structures.

Source: Lodge, J. M., Bearman, M., Dawson, P., Gniel, H., Harper, R., Liu, D., McLean, J., Ucnik, L., & Associates. (2025). TEQSA.
Figure 1

The AI Assessment Challenge — illustrating the gap between TEQSA’s reform expectations (Lodge et al., 2025) and the current state of academic AI literacy.

Solution

To meet this challenge, a four-week, self-paced, asynchronous module delivered via UniSC’s Canvas Learning Management System — AI-Integrated Assessment Design for Educators — is proposed. Recognising the time constraints of academics and sessional staff, the asynchronous, modular structure allows engagement with content around busy teaching commitments (Knowles, 1984). Week 1 introduces foundational AI concepts and critical AI literacy frameworks; Week 2 examines ethical dimensions, institutional policy and bias; Week 3 focuses on designing AI-integrated assessments; and Week 4 concludes with a practical assessment redesign capstone with structured peer review.

This scaffolded progression aligns with the University of South Carolina’s Teaching with GenAI webinar series (USC Center for Teaching Excellence, 2026), which moves educators from foundations through pedagogical integration to reflective synthesis. It also complements UniSC’s existing ‘Foundations of University Teaching’ professional learning course, extending its curriculum support into emerging AI pedagogies. The four-week structure responds to Lodge et al.’s (2025) three assessment reform pathways, equipping staff to contribute meaningfully to whichever pathway their institution adopts:

Week 01
Foundations

Foundational AI concepts and critical AI literacy frameworks.

Week 02
Ethics & Policy

Ethical dimensions, institutional policy, and bias in AI systems.

Week 03
Assessment Design

Designing AI-integrated assessments with planning tools.

Week 04
Redesign & Review

Practical assessment redesign capstone with structured peer review.

The three assessment reform pathways identified by Lodge et al. (2025) are:

  • Program-wide redesign
  • Unit-level assurance
  • Hybrid approaches

Chunking content into weekly modules aims to reduce cognitive load on busy teaching staff (Sweller, 1988). The course is estimated to take 5–7 hours to complete.

Self-Paced Canvas Module
AI-Integrated Assessment Design for Educators
Four-week scaffolded pathway from foundational AI literacy to practical assessment redesign
Foundational Literacy
Practical Application ▶
Week 1
Understanding AI

What is generative AI? Foundational concepts and critical AI literacy frameworks.

AI Tool Experimentation
Engaging with AI
Week 2
Evaluating AI in Education

Ethical implications, bias, academic integrity, institutional policy and TEQSA’s reform pathways.

Assessment Vulnerability Audit
Ethics & Evaluation
Week 3
Designing AI-Integrated Assessments

Strategies for purposefully incorporating AI into assessment tasks using the Planning Worksheet.

Assessment Redesign Draft
Creating & Managing
Week 4
Apply & Share

Submit a redesigned assessment from your own teaching context. Structured peer review builds evaluative judgement.

Peer Review & Capstone
Synthesis & Reflection
Aligned to: OECD/EC AI Literacy Framework (2025) · Lodge et al. (2025) TEQSA Pathways · CAST UDL (2018)
Theories: Knowles (1984) Adult Learning · Sweller (1988) Cognitive Load · Kolb (1984) Experiential Learning
Self-Paced · Canvas · Accessible
Figure 2

Four-week module structure showing scaffolded progression from foundational AI literacy to practical assessment redesign, aligned with the OECD/EC AI Literacy Framework domains and Lodge et al.’s (2025) reform pathways.

Learning Content

The learning content will be delivered through Canvas, UniSC’s primary platform for learning and teaching (WCAG 2.1 compliant), using short captioned videos (2–5 minutes, with transcripts), curated readings, interactive H5P activities including scenario-based decision tasks, and downloadable templates such as an AI Assessment Planning Worksheet. These means of engagement and representation align with Universal Design for Learning (CAST, 2018) and Mayer’s (2009) multimedia learning principles. To ensure accessibility throughout the modules, all images are accompanied by descriptive alt text, videos include captions and downloadable transcripts, and templates use accessible formatting with high-contrast design. This mirrors the resource and platform approach implemented and validated by the collaborative Auburn University and USC Teaching with AI Canvas course, which delivers targeted faculty professional learning through self-paced Canvas modules at institutional scale (Auburn University Biggio Center & USC, 2025).

Universal Design for Learning

UDL is the organising framework for how content is designed and delivered across the module. Rather than retrofitting accessibility as a compliance measure, UDL principles are embedded from the outset — anticipating the genuine diversity of a teaching workforce that includes early-career sessionals, experienced academics, staff across multiple campuses, and those with varying levels of digital confidence. The module addresses all three UDL principles explicitly:

I
Multiple Means of Representation

The same content is offered in multiple formats: short video with captions and transcript, written readings, and visual infographics. Learners can engage with whichever mode suits their context — whether commuting, at a desk, or in a noisy staffroom.

II
Multiple Means of Action & Expression

Participants demonstrate understanding through H5P branching scenarios, discussion board contributions, a planning worksheet, and a written capstone rationale — offering varied pathways to express professional learning rather than a single response format.

III
Multiple Means of Engagement

Self-paced delivery respects individual time constraints. Optional synchronous Zoom sessions, structured peer review, and disciplinary discussion boards provide multiple points of connection without mandating a single mode of participation.

Accessibility

All images carry descriptive alt text. Videos include closed captions enabled by default, with downloadable transcripts in accessible PDF format. Downloadable templates — including the AI Assessment Planning Worksheet — use tagged headings, high-contrast formatting, and are compatible with screen readers. The Canvas platform meets WCAG 2.1 AA compliance standards, and all H5P activities are tested against Canvas’s built-in accessibility checker before publication.

Accessibility is not treated as a compliance checklist but as a modelling decision: a module about inclusive assessment design should itself demonstrate inclusive design. This gives participants a direct experience of what UDL looks and feels like as a learner, before they apply it in their own teaching contexts.

Validated External Model

The self-paced Canvas delivery approach mirrors the Auburn University and USC collaborative Teaching with AI course, which delivers targeted faculty professional learning through asynchronous modules at institutional scale (Auburn University Biggio Center & USC, 2025).

AI
Collaborative Faculty Development · Auburn University & University of South Carolina
Teaching with AI
A self-paced course for faculty — delivered via Canvas LMS
Format
Self-paced · Asynchronous · Canvas
Audience
University faculty across multiple institutions
Relevance
Validated model for institutional-scale faculty AI literacy training
Auburn University Biggio Center & USC Center for Teaching Excellence. (2025). biggio.auburn.edu
Figure 3

Banner from the Auburn University and USC Teaching with AI self-paced Canvas course — the validated model informing this module’s design and delivery approach.

AI-Integrated Assessment Design Modules Week 3
Module Progress
Week 3 of 4 — 50% complete
⌂ Home
☰ Modules
✎ Week 3: Design
💬 Discussions
👥 People
✓ Grades
Week 3
Designing AI-Integrated Assessments
Learning Outcomes: By the end of this module you will be able to (1) evaluate an existing assessment for AI vulnerability, (2) redesign an assessment task that purposefully integrates AI, and (3) articulate the pedagogical rationale for your design choices.
✓ ACCESSIBILITY: Closed captions enabled by default + downloadable transcript
▶ Video
From Detection to Design: Rethinking Assessment in the Age of AI · 4:12
CC Captions ON · ⬇ Download transcript (PDF)
✓ UDL: Multiple means of engagement — interactive scenario with immediate feedback
⚡ Interactive Activity
H5P Interactive Content · Branching Scenario
A second-year Business unit requires students to “Write a 2000-word report analysing a company’s competitive strategy.” Is this assessment AI-vulnerable?
Yes — AI could complete this entire task to a passing standard
Partially — AI could assist but the task has some secure elements
No — this task is adequately resistant to AI misuse
⬇ Downloadable Resource
AI Assessment Planning Worksheet
Word Document · 45 KB · Accessible template (tagged headings, alt text, high contrast)
⬇ Download
💬 Discussion
Week 3 Discussion Prompt
Choose one assessment from your own teaching context. Identify its main AI vulnerability and one change you would make to address it. Post your response (150–200 words) and reply to at least one colleague’s post with constructive feedback.
Figure 4

Mock-up of the Week 3 Canvas module page showing captioned video, interactive H5P scenario activity, downloadable planning worksheet, and accessibility annotations (CC captions, alt text, WCAG 2.1 compliance).

Community, Practice & Assessment

Community of Inquiry

Peer interaction is facilitated through weekly Canvas discussion boards where participants share disciplinary examples of AI-vulnerable assessments and draft redesigns, building a community of inquiry across the teaching, social, and cognitive presences (Garrison et al., 2000). Optional fortnightly Zoom Q&A sessions with a UniSC CSALT educational designer offer synchronous support for those who want it. These interactions support CSALT’s strategic initiative to advance blended learning approaches across the University.

Practical Engagement

Practical elements are scaffolded across all four weeks: experimenting with AI tools (Week 1), auditing an existing assessment for AI vulnerability (Week 2), drafting a redesigned assessment using the planning worksheet (Week 3), and conducting structured peer review (Week 4). This staged progression models Kolb’s (1984) experiential learning cycle, ensuring that each practical activity builds towards a concrete and transferable output. This curriculum support approach aligns with CSALT’s portfolio of services to advance UniSC’s learning and teaching performance, culture and profile.

Capstone Assessment

Learning will be assessed through the Week 4 capstone task: participants submit a redesigned assessment from their own teaching contexts, including a written rationale explaining their design decisions, the AI literacy framework informing their choices, and how the task addresses both integrity and purposeful AI integration. Peer review helps develop evaluative judgement — the capacity to assess quality in one’s own and others’ work (Tai et al., 2018) — which is itself a critical dimension of AI literacy. This authentic assessment model follows Wiggins’ (1990) principle that assessment tasks should mirror real professional work. The capstone output contributes to UniSC’s graduate attributes by developing staff capacity to design assessments that equip students to participate ethically, critically, and actively in an AI-ubiquitous society.

Assessment Redesign Examples
Before & After: From AI-Vulnerable to AI-Integrated
AI-Vulnerable (Before) · AI-Integrated (After) · Design Rationale
📊 Business — Strategic Management
BUS202 · Year 2 · UniSC
✗ Before: Traditional Prompt
Assessment Task
“Write a 2,000-word report analysing a company’s competitive strategy.”
Students choose any ASX-listed company. Submitted online via Turnitin. No oral component, presentation, or in-class activity. Rubric assesses structure, analysis quality, and referencing.
Fully AI-generatable · No identity verification · Generic prompt · Product-only assessment
✓ After: Redesigned Assessment
Redesigned Task
Live Strategic Pitch + AI-Assisted Competitive Analysis
Part A: Use AI to generate a preliminary SWOT analysis of an assigned company. Critically evaluate the AI output: identify 3 errors or gaps and correct them using primary sources (annual reports, ASX filings).

Part B: Deliver a 10-minute live pitch to peers, responding to unscripted Q&A. Submit an annotated AI interaction log showing your prompts and critical evaluation process.
Live oral component · Process visible · AI used purposefully · Critical evaluation required
Design Rationale
The redesign shifts from a product-only task (where AI can produce the entire output) to a process-and-performance task that makes student thinking visible. AI is integrated purposefully — students use it, then critically evaluate its output against primary sources. The live pitch and unscripted Q&A verify understanding and identity. Aligned with Lodge et al. (2025) Pathway 2; Wiggins (1990) authentic assessment.
🎓 Education — Curriculum Design
EDU301 · Year 3 · UniSC
✗ Before: Traditional Prompt
Assessment Task
“Design a unit of work for a Year 8 English class aligned with the Australian Curriculum.”
Students submit a 3,000-word written unit plan. No practicum component. Submitted online. Assessed on alignment, creativity, and theoretical justification.
AI can generate curriculum plans · No contextual grounding · Unsupervised submission · Generic context
✓ After: Redesigned Assessment
Redesigned Task
Contextualised Unit Plan + Reflective Design Diary
Part A: Design a unit of work for a specific class observed during practicum placement, responding to that cohort’s identified learning needs and cultural context.

Part B: Maintain a weekly design diary (4 entries) documenting iterative decisions and how AI tools were used — and why suggestions were accepted, modified, or rejected.

Part C: 5-minute recorded walkthrough referencing specific students (de-identified) and placement observations.
Grounded in lived experience · Iterative process visible · AI as design partner · Oral explanation of reasoning
Design Rationale
Grounding the task in a specific practicum context makes it impossible for AI to generate the response — AI cannot know the student’s placement class, observed learning needs, or local context. The design diary captures process over product, documenting AI as a thinking partner rather than a ghostwriter. Aligned with Kolb (1984) experiential learning; Tai et al. (2018) evaluative judgement; OECD & EC (2025) “Creating with AI” domain.
Key principle: Rather than investing in detection, redesign assessment to capture authentic demonstrations of student capability (Lodge et al., 2025, p. 3).
Figure 5

Before and after comparison: traditional AI-vulnerable assessment prompts redesigned to purposefully integrate AI, with design rationale annotations — the type of output participants produce in the Week 4 capstone.

The Case for Structured, Self-Paced Training

This proposed course design is grounded in the principle that sustained, scaffolded engagement builds deeper professional capability than one-off professional learning. The four-week structure allows academics to move through each of the OECD/European Commission’s AI literacy domains — engaging with AI, creating with AI, managing AI, and designing AI — at a pace that accommodates teaching loads (OECD & European Commission, 2025). The knowledge–skills–attitudes triad embedded in that framework justifies using multiple content types: readings build conceptual understanding, H5P activities develop practical skills, and reflective prompts cultivate responsible dispositions.

Critically, self-paced Canvas delivery has been demonstrated as an effective model for faculty professional learning at scale, and it aligns with CSALT’s approach to delivering scalable, high-quality educational experiences for staff across all campuses and modalities. The USC/Auburn Teaching with AI course reaches faculty across multiple institutions using this model (Auburn University Biggio Center & USC, 2025), while USC’s GenAI webinar series demonstrates that a three-phase pathway of foundations, integration, and synthesis sustains educator engagement across a semester (USC Center for Teaching Excellence, 2026).

Assessment should equip students to participate ethically, critically, and actively in a society where generative AI is ubiquitous — a goal that first requires educators themselves to develop these skills and capabilities.

By embedding assessment redesign as the culminating task, the design ensures transfer to practice: participants finish with a concrete, usable artefact rather than abstract knowledge. This aligns with Lodge et al.’s (2025) recommendation for structural assessment reform, which emphasises equipping students to participate ethically, critically, and actively in a society where generative AI is ubiquitous. While this is optional professional learning, it is also institutionally strategic: positioned within CSALT’s professional learning service area and reporting to the Deputy Vice-Chancellor (Academic), the module contributes to UniSC’s strategic priorities for curriculum innovation and educational technology enhancement.

Anticipated Insights & Open Questions

The implementation of this module at UniSC would likely surface important questions about academic readiness: how confident are staff in their own AI literacy, and does this vary by discipline or career stage? Do sessional staff — who represent a significant proportion of UniSC’s workforce — engage differently with voluntary, asynchronous professional learning than continuing academics?

Completion rates in self-paced programs are historically low; understanding why participants disengage — time pressure, perceived irrelevance, lack of institutional recognition — would be as valuable as measuring who completes the module. The peer review component may further reveal whether structured feedback genuinely builds evaluative judgement in this context, or whether staff need more facilitated guidance to provide meaningful critique.

Finally, the module may reveal institutional barriers beyond individual capability — unclear AI policies, inconsistent LMS support, or disciplinary norms that resist assessment innovation — which would inform broader institutional strategy. These insights would directly support CSALT’s role in providing curriculum and pedagogical advice for academic success and informing University-wide learning and teaching strategy.

References

Auburn University Biggio Center & University of South Carolina Center for Teaching Excellence. (2025). Teaching with AI: A self-paced course for faculty. Auburn University. biggio.auburn.edu
CAST. (2018). Universal Design for Learning guidelines version 2.2. udlguidelines.cast.org
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2–3), 87–105.
Knowles, M. S. (1984). Andragogy in action: Applying modern principles of adult learning. Jossey-Bass.
Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Prentice-Hall.
Lodge, J. M., Bearman, M., Dawson, P., Gniel, H., Harper, R., Liu, D., McLean, J., Ucnik, L., & Associates. (2025). Enacting assessment reform in a time of artificial intelligence. TEQSA. teqsa.gov.au
Mayer, R. E. (2009). Multimedia learning (2nd ed.). Cambridge University Press.
OECD & European Commission. (2025). Empowering learners for the age of AI: An AI literacy framework for primary and secondary education [Review draft]. ailiteracyframework.org
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285.
Tai, J., Ajjawi, R., & Boud, D. (2018). Developing evaluative judgement. Higher Education, 76, 467–484.
University of South Carolina Center for Teaching Excellence. (2026). Teaching and learning with generative artificial intelligence: Webinar series. sc.edu
Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research, and Evaluation, 2(2).

ADL6001 · Victoria University · 2026

Professional Portfolio