Responsible AI
AI at Novana is designed to augment educator judgment, not replace it. We're transparent about what AI does, how it handles your data, and where humans stay in control.
How We Use AI
AI as an Augmentation Tool
Accreditation is a human process. AI handles the repetitive parts so educators can focus on narrative and judgment.
- Evidence matching – AI suggests which requirements a piece of evidence supports
- Portfolio digests – AI drafts summaries of evidence coverage per requirement
- Improvement suggestions – AI surfaces gaps and actionable next steps
- Every output is a starting point for review, not a final answer
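The review-first workflow above can be sketched as a small data structure. This is a minimal illustration, not Novana's actual implementation: the class name, fields, and ID formats are hypothetical, but it shows the key property that an AI match stays a suggestion until a human confirms, reassigns, or dismisses it.

```python
from dataclasses import dataclass


@dataclass
class MatchSuggestion:
    """A hypothetical AI-drafted link between evidence and a requirement.

    The link remains a suggestion until an educator acts on it.
    """
    evidence_id: str
    requirement_id: str
    confidence: float        # model confidence, surfaced to reviewers
    status: str = "pending"  # pending / confirmed / reassigned / dismissed

    def confirm(self) -> None:
        # A human accepts the AI's suggested match.
        self.status = "confirmed"

    def reassign(self, new_requirement_id: str) -> None:
        # A human overrides the AI and links the evidence elsewhere.
        self.requirement_id = new_requirement_id
        self.status = "reassigned"

    def dismiss(self) -> None:
        # A human rejects the suggestion outright.
        self.status = "dismissed"


suggestion = MatchSuggestion("ev-101", "req-4.2", confidence=0.87)
suggestion.reassign("req-4.3")  # educator judgment wins
```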
Human Oversight & Editability
The accreditation narrative is yours. AI drafts the raw material, your team shapes it.
- Edit, override, or delete any AI-generated content
- Reassign evidence matches to different requirements
- Rewrite portfolio digests in your own voice
- Dismiss or modify improvement suggestions
Transparency & Labeling
Nothing is presented as human-authored when it isn't.
- AI-generated descriptions show an "AI" indicator
- Requirement matches include confidence context
- Improvement suggestions are clearly labeled as AI-generated
- Users always know when they're looking at AI output
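One simple way to guarantee that labeling: carry provenance on every content block and derive the indicator from it, rather than trusting each screen to remember. A minimal sketch with hypothetical names, not Novana's actual rendering code:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class ContentBlock:
    text: str
    ai_generated: bool
    model: Optional[str] = None  # which model drafted it, if any


def render_label(block: ContentBlock) -> str:
    # Anything a model drafted gets a visible "AI" indicator;
    # human-authored text passes through unchanged.
    return f"[AI] {block.text}" if block.ai_generated else block.text


draft = ContentBlock("Digest of evidence coverage for req-4.2", ai_generated=True)
print(render_label(draft))  # -> "[AI] Digest of evidence coverage for req-4.2"
```

Because provenance travels with the content itself, an edited-then-saved block can also flip `ai_generated` off once a human has fully rewritten it.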
Data & Privacy Safeguards
Your Data Is Never Used for Training
Every AI provider we work with has signed zero-retention API agreements.
- Content is never used to train, fine-tune, or improve any model
- Data is processed in volatile memory only for the duration of a single request
- No provider stores your content after the response is returned
PII Redaction Before Processing
By the time content reaches an AI provider, personal identifiers have been stripped.
- Multi-layered detection identifies and redacts names, emails, and identifiers before processing
- AI models only see the educational substance, never personal data
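The layering above can be illustrated with a toy redaction pass: pattern detectors for structured identifiers, then a name lookup (in a real pipeline, an NER model would typically fill that role). The patterns, the student-ID format, and the names here are all hypothetical examples, not Novana's actual detectors:

```python
import re

# Layer 1: pattern-based detectors for structured identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "STUDENT_ID": re.compile(r"\b[A-Z]{2}\d{6}\b"),  # hypothetical ID format
}

# Layer 2: known-name lookup (an NER model would sit here in practice).
KNOWN_NAMES = {"Jane Doe", "John Smith"}


def redact(text: str) -> str:
    """Replace personal identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    for name in KNOWN_NAMES:
        text = text.replace(name, "[NAME]")
    return text


sample = "Jane Doe (jane.doe@school.edu, AB123456) submitted her portfolio."
print(redact(sample))
# -> "[NAME] ([EMAIL], [STUDENT_ID]) submitted her portfolio."
```

Only the redacted text ever leaves this stage, so downstream models see the educational substance without the identifiers.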
Multiple Model Providers
Your school shouldn't depend on a single AI company's uptime.
- We work with OpenAI, AWS Bedrock, and Google AI
- All providers meet the same privacy and zero-retention requirements
- We can route around outages and pick the best model for each task
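Routing around an outage reduces to ordered fallback: try the preferred provider, and on failure move to the next one that meets the same requirements. A minimal sketch under assumed names (the provider list and `call` hook are illustrative, not Novana's actual routing layer):

```python
# Hypothetical provider registry; a real client call would go where `call` is.
PROVIDERS = ["openai", "bedrock", "google"]


def route(prompt, call, preferred="openai"):
    """Try the preferred provider first; fall back to the others on failure."""
    order = [preferred] + [p for p in PROVIDERS if p != preferred]
    last_error = None
    for provider in order:
        try:
            return provider, call(provider, prompt)
        except RuntimeError as err:  # e.g. outage or rate limit
            last_error = err
    raise RuntimeError("all providers unavailable") from last_error


# Simulated outage: the first provider fails, the router falls back.
def fake_call(provider, prompt):
    if provider == "openai":
        raise RuntimeError("outage")
    return f"{provider} handled: {prompt}"


used, result = route("summarize evidence", fake_call)
print(used)  # -> "bedrock"
```

The same hook is where per-task model selection would live: pick the order by task, keep the fallback behavior identical.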
Continuous Evaluation
We continuously evaluate AI accuracy across institutional contexts and address issues as we identify them. Improvements ship automatically to all customers; no updates are required on your end.
Questions? Concerns? Ask us anything.
Whether it's a security question, a compliance requirement, or something that doesn't fit neatly into a category – we're here to help. No question is too small.
security@novana.io