Responsible AI

AI That Puts Students First.

We believe AI in education must be safe, transparent, and under human control. Every AI feature in CampusBridge is built on these non-negotiable principles.

Our Principles

Six Non-Negotiable AI Principles

These principles are enforced architecturally — not just as policy documents.

Transparency

Every piece of AI-generated content is clearly labelled. Parents and staff always know when they're reading machine-generated text.

Human Oversight

AI drafts require human review before publishing to students or parents. No automated delivery without staff approval.

Student Privacy

PII is pseudonymised before any external AI call. "Alex Smith" becomes "Student #A7B2", and the mapping is reversed only server-side.

Fairness

Every AI output passes content moderation before delivery. Biased, harmful, or inappropriate content is blocked and logged.

Cost Transparency

Per-school real-time AI usage dashboard. Schools see exactly what they spend, set budgets, and control which features use AI.

Data Minimisation

Medical records, NDIS plans, allied health notes, and counsellor data are never sent to external AI — even pseudonymised.

Privacy Architecture

How Pseudonymisation Works

Student identities are never exposed to external AI providers. The mapping lives server-side only.

Original data (server-side)          Pseudonymised (sent in the API call)
"Alex Smith"               →         "Student #A7B2"
"Year 5, Room 12"          →         "Group 5, Room R12"
"Mrs. Johnson's class"     →         "Teacher #T4's class"

The external AI only sees pseudonyms. The mapping is reversed server-side on return; the external AI never sees real student names.
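The round trip above can be sketched in a few lines. This is an illustrative mock-up, not CampusBridge's actual implementation; the class name, token format, and helper methods are all assumptions.

```python
import secrets

class Pseudonymiser:
    """Sketch: swap names for opaque tokens before an external AI call,
    keep the mapping server-side, and restore names on the way back."""

    def __init__(self):
        self._forward = {}   # real name -> token (never leaves the server)
        self._reverse = {}   # token -> real name

    def tokenise(self, name: str, prefix: str = "Student") -> str:
        # Reuse the same token for repeat mentions of the same name.
        if name not in self._forward:
            token = f"{prefix} #{secrets.token_hex(2).upper()}"
            self._forward[name] = token
            self._reverse[token] = name
        return self._forward[name]

    def redact(self, text: str, names: list[str]) -> str:
        # Applied before the external API call.
        for name in names:
            text = text.replace(name, self.tokenise(name))
        return text

    def restore(self, text: str) -> str:
        # Applied server-side to the AI's response.
        for token, name in self._reverse.items():
            text = text.replace(token, name)
        return text
```

In use, only the output of `redact` would ever be sent off-server; `restore` runs on the response before anything reaches staff.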

Access Rules

What AI Can and Cannot Access

Clear boundaries enforced at the architecture level — not just policy.

Data Type                 | Classification | External AI         | On-VPC AI | Notes
--------------------------|----------------|---------------------|-----------|----------------------------------
School name, term dates   | Public         | Yes                 | Yes       | No restrictions
Student names, emails     | Personal       | Yes (pseudonymised) | Yes       | Pseudonymised before external API
Attendance patterns       | Personal       | Yes (aggregated)    | Yes       | Aggregated / pseudonymised
DOB, medical conditions   | Sensitive      | No                  | No        | Never sent to any AI
NDIS plans, allied health | Sensitive      | No                  | No        | Never sent to any AI
Counsellor case notes     | Restricted     | No                  | Yes       | On-VPC Llama 3 only, AES-256
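The access rules above amount to a small routing policy. A minimal sketch follows, assuming a simple classification-to-destination lookup; the names and policy values are illustrative, derived from the table rather than from any real CampusBridge config.

```python
# Illustrative policy table: each data classification maps to what is
# permitted per AI destination. Values mirror the table above.
POLICY = {
    "public":     {"external": "allowed",       "on_vpc": "allowed"},
    "personal":   {"external": "pseudonymised", "on_vpc": "allowed"},
    "sensitive":  {"external": "blocked",       "on_vpc": "blocked"},
    "restricted": {"external": "blocked",       "on_vpc": "allowed"},  # on-VPC model only
}

def route(classification: str, destination: str) -> str:
    """Return the handling rule, or raise if the data may not go there."""
    rule = POLICY[classification][destination]
    if rule == "blocked":
        raise PermissionError(
            f"{classification} data may not be sent to {destination} AI"
        )
    return rule
```

Enforcing the rule as a lookup that raises on "blocked" is one way to make the boundary architectural rather than a policy document: a caller cannot forget to check it.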
Safety

Content Moderation Pipeline

Every AI output is checked for harmful content before it reaches students or parents.

1. AI generates content: a draft report comment, assessment marking suggestion, newsletter paragraph, or wellbeing suggestion.

2. OpenAI Moderation API check: content is screened for violence, self-harm, hate speech, sexual content, and harassment.

3. Pass / fail gate: violations are logged and never delivered; clean content proceeds to human review.

4. Human review: a staff member reviews, edits, and approves before publishing to families.

Moderation violations are logged for audit. Repeat violations trigger alerts to the school's IT admin. Content is never silently delivered.
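The four-step pipeline might look like the sketch below. The moderation check here is a trivial keyword stub standing in for the real OpenAI Moderation API call, and the function names and audit-log shape are assumptions, not CampusBridge internals.

```python
def moderate(text: str) -> bool:
    """Stub for the moderation check; returns True when content is flagged.
    In production this would be an OpenAI Moderation API call."""
    banned = {"violence", "hate"}
    return any(word in text.lower() for word in banned)

def deliver_draft(text: str, audit_log: list) -> "str | None":
    """Gate an AI draft: flagged content is logged and dropped,
    clean content is queued for human review, never auto-published."""
    if moderate(text):
        audit_log.append(("blocked", text))   # logged for audit, never delivered
        return None
    return f"PENDING HUMAN REVIEW: {text}"    # clean drafts still need staff approval
```

Note that even a clean draft is only queued for staff review; nothing in this flow delivers content to families automatically.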

School Control

Schools Stay in Control

Every school has full visibility and control over AI usage, costs, and feature enablement.

Usage Dashboard

Real-time view of AI calls, token usage, and costs broken down by feature and user.

Budget Controls

Set monthly AI spend limits. Automatic alerts at 80% and hard-stop at 100% of budget.
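The thresholds described above (alert at 80%, hard-stop at 100% of budget) reduce to a simple check. A sketch, with the function name and return values chosen for illustration:

```python
def check_budget(spent: float, limit: float) -> str:
    """Classify a school's AI spend against its monthly limit.
    Thresholds from the text: alert at 80%, hard-stop at 100%."""
    if spent >= limit:
        return "hard-stop"   # block further AI calls this month
    if spent >= 0.8 * limit:
        return "alert"       # notify the school's admins
    return "ok"
```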

Feature Toggle

Enable or disable AI features individually. Start conservative, expand as you build confidence.