Quality and responsible AI at MagicSchool

MagicSchool’s AI features are designed to support educators with high-quality, standards-aligned instructional assistance across 80+ teacher tools and 50+ student tools. Quality at MagicSchool means more than functionality: it includes responsible AI governance, human oversight, continuous evaluation, and proactive safeguards.
AI outputs are designed to be a starting point for educator judgment, never a final decision.
Responsible AI overview
Purpose and use
MagicSchool provides AI-powered tools that support instructional planning, content creation, assessment design, differentiation, and communication. A multi-model architecture (including OpenAI GPT-4o, Anthropic Claude, Google Gemini, and others) is used, pairing each tool with the model best suited for its task. Internal systems supplement large language models with up-to-date standards, curriculum guidance, and trusted documentation.
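A multi-model architecture like the one described can be thought of as a routing layer that maps each tool to its configured model. The sketch below is illustrative only; the tool names, model names, and fallback behavior are assumptions, not MagicSchool's actual configuration:

```python
# Minimal sketch of a tool-to-model routing layer. Each tool is mapped
# to the model judged best for its task; unknown tools fall back to a
# default. All names here are placeholders.
TOOL_MODEL_MAP = {
    "lesson_plan": "gpt-4o",
    "rubric_generator": "claude-3-5-sonnet",
    "text_leveler": "gemini-1.5-pro",
}

DEFAULT_MODEL = "gpt-4o"

def route(tool_name: str) -> str:
    """Return the model configured for a tool, falling back to a default."""
    return TOOL_MODEL_MAP.get(tool_name, DEFAULT_MODEL)
```

Keeping the mapping in one table makes it easy to swap a tool's model as evaluation results change, without touching the tool's own logic.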
Primary users
- Educators and school staff (teachers, administrators, instructional leaders) are the primary users.
- Students access AI through MagicStudent, a supervised and educator-managed environment. Student access occurs only through educator-assigned and monitored accounts.
Human oversight
- Educators and schools remain responsible for reviewing AI-generated outputs and exercising professional judgment before classroom use.
- AI outputs are drafts, not final instructional decisions.
AI use boundaries (What we never do)
MagicSchool does not use AI to:
- Grade students: AI outputs are drafts for educator review, not final evaluations.
- Make placement, discipline, or eligibility decisions: No automated decision-making that produces legal or similarly significant effects.
- Create student risk profiles: No profiling of students except where strictly necessary for an educational service requested by a school, and never for advertising or commercial decision-making.
- Perform high-stakes automated decision-making: We do not engage in automated decision-making or profiling that produces legal or similarly significant effects.
- Employ manipulative, deceptive, or "dark-pattern" design practices that could pressure users into sharing more data than necessary or making unintended choices. The platform is designed to support informed, educator-directed use, with clear disclosures, straightforward settings, and district-controlled governance. We do not use design tactics that obscure privacy options, encourage unnecessary data submission, or promote engagement at the expense of student well-being. Our approach prioritizes transparency, user control, and responsible technology use in educational environments.
Model training and data use commitments
- Customer/student data is NOT used to train third-party models: MagicSchool does not use personal information to train artificial intelligence or machine learning models.
- Third-party AI providers: MagicSchool does not allow any large language model provider, including OpenAI, to store or train on educator or student data. Providers are contractually required to delete data immediately after processing.
- Data retention for AI interactions: Data is retained only as long as necessary to provide the requested services or as required by law.
- Schools and districts can request data deletion or export at any time via [email protected].
- Student interactions are stored securely to support teacher visibility and classroom oversight.
- Only authorized educators and administrators can view interactions.
Safety controls and content safeguards
Input/output filtering
- Multiple layers of content moderation to keep interactions safe and age-appropriate
- Each AI model includes its own built-in safeguards
- Additional MagicSchool-specific filters tailored to school settings to block or decline inappropriate content and requests
Abuse prevention
- Multi-layered moderation system
- Moderation is always active in MagicStudent
- Multiple classifiers, word/phrase detection, and AI analysis detect concerning messages in real time
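The layered approach above can be sketched as a pipeline where a fast word/phrase filter runs first and a classifier score is checked second. This is a hedged illustration under assumed names; the blocked phrases, the stubbed classifier, and the threshold are all placeholders, not MagicSchool's actual moderation system:

```python
# Illustrative multi-layer moderation check: a cheap phrase blocklist
# runs first, then a (stubbed) ML classifier score is compared against
# a threshold. A message is flagged if any layer trips.
BLOCKED_PHRASES = ["example banned phrase"]

def phrase_filter(text: str) -> bool:
    """Layer 1: exact word/phrase detection (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

def classifier_score(text: str) -> float:
    """Layer 2: placeholder for an ML classifier risk score in [0, 1]."""
    return 0.0  # a real system would call a trained model here

def is_flagged(text: str, threshold: float = 0.8) -> bool:
    """Flag a message if any moderation layer trips."""
    return phrase_filter(text) or classifier_score(text) >= threshold
```

Ordering the cheap filter before the classifier keeps latency low for the common safe case while still catching known-bad phrasing immediately.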
Age-appropriate safeguards
- Moderates by age and context
- Designed for safe use across K-12
- Students access only through educator-assigned and monitored accounts
Restricted content categories
- Content filtering to block inappropriate material
- PII protection with encryption and secure data handling
Teacher-in-the-loop review
- Educators and district admins can view all student interactions in real time
- Student Room Insights for monitoring active sessions
- In-app best practice prompts shown before use (check for bias/accuracy; protect privacy; don't upload PII; AI is a tool, not a replacement)
Escalation workflow for harmful outputs
- Teachers and admins are alerted via in-app notification and email so they can take appropriate action
- Enterprise admins can tailor moderation scope and rules to local district policies via their Customer Success Manager
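The escalation steps above can be sketched as a small dispatcher that fans an alert out to each notification channel. The `Alert` shape, channel names, and `notify` callback are invented for illustration and are not MagicSchool's actual API:

```python
from dataclasses import dataclass

# Sketch of an escalation dispatcher: when a harmful output is detected,
# the same alert is sent over every configured channel (here, the in-app
# notification and email channels described in the text). Illustrative only.
@dataclass
class Alert:
    student_id: str
    reason: str

def escalate(alert: Alert, notify) -> list[str]:
    """Send the alert over each configured channel; return channels used."""
    channels = ["in_app", "email"]
    for channel in channels:
        notify(channel, alert)  # notify(channel, alert) is a caller-supplied hook
    return channels
```

Accepting `notify` as a callback keeps the dispatch order testable and lets districts plug in additional channels without changing the escalation logic.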
Quality assurance and model evaluation
Accuracy and usefulness testing
- Multi-model approach: Each tool is paired with the best-performing model for its task.
- Models are continually tested for quality, safety, and reliability.
- Internal systems supplement LLMs with up-to-date standards, curriculum guidance, and trusted documentation.
Bias and fairness considerations
- MagicSchool recognizes that AI-enabled features may perform differently across individuals and groups and may produce unintended or disparate impacts.
- Reasonable steps to evaluate and reduce risk of unfair outcomes: Testing, monitoring, and selecting models/configurations for educational suitability, safety, and reliability.
- Multi-layered moderation system in partnership with leading third-party AI providers to reduce bias and misinformation.
Classroom appropriateness review
MagicSchool applies a continuous improvement cycle: Framing → Auditing → Refining
- Framing: Structuring prompts before they reach AI models.
- Auditing: Checking outputs for safety and accuracy.
- Refining: Updating safeguards as usage patterns evolve.
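The three stages of the cycle can be sketched as three small functions. This is a minimal illustration under assumed names; the guardrail strings, the banned-word audit, and the prompt layout are placeholders, not MagicSchool's implementation:

```python
# Hedged sketch of the Framing -> Auditing -> Refining cycle.
# frame() structures the prompt before it reaches a model, audit()
# checks an output, and refine() adds a safeguard for the next pass.

def frame(prompt: str, guardrails: list[str]) -> str:
    """Framing: prepend guardrail instructions to the raw prompt."""
    preamble = " ".join(guardrails)
    return f"{preamble}\n\nTask: {prompt}"

def audit(output: str, banned: set[str]) -> bool:
    """Auditing: return True if the output passes a simple safety check."""
    lowered = output.lower()
    return not any(word in lowered for word in banned)

def refine(guardrails: list[str], new_rule: str) -> list[str]:
    """Refining: add a safeguard as usage patterns evolve."""
    return guardrails + [new_rule]
```

Each audit failure can feed a new rule back through `refine`, which is what makes the cycle continuous rather than a one-time review.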
Monitoring for drift or degradation
- Continuous improvement of moderation systems
- Feedback from 6M+ user community helps identify and address issues
Internal evaluation benchmarks
A robust library of internal and external AI evaluations covers a range of safety and quality dimensions:
- Hallucination detection/Factual accuracy
- Bias and diversity
- School appropriateness
Frequency of testing
- We run LLM evaluations and quality control daily
- Ongoing research into critical AI and LLM safety and quality topics
Human oversight and educator control
AI outputs are drafts, not final instructional decisions.
MagicSchool displays best practices upon login for both teachers and students:
- Check for bias and accuracy: AI isn't perfect. It might produce biased or incorrect information. Always review before sharing with students.
- Use the 80/20 Rule: Let AI handle the initial 80% as your draft, then add your final touch as the last 20%.
- Trust your judgment: Use AI as a starting point, and not the final solution. Always adhere to your school's guidelines.
- Protect student privacy: Never include student names or personal information in your prompts. We strive to promptly remove any personally identifiable information that is accidentally shared.
- AI is a tool, not a replacement for your thinking (student-facing language).
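The privacy guidance above ("never include student names or personal information in your prompts") can be paired with automated redaction. Below is a minimal regex-based sketch; the patterns and tokens are illustrative assumptions, not MagicSchool's actual PII pipeline, and real PII detection covers far more categories:

```python
import re

# Illustrative PII scrub: masks email addresses and US-style phone
# numbers before text is stored or forwarded. Patterns are examples
# only; a production system would detect many more PII categories.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w-]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Replace detected PII with fixed placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text
```

Redaction of this kind complements, rather than replaces, the in-app reminder: the safest input is one where PII was never typed in the first place.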
Educators review and edit before use
- Educators and schools remain responsible for reviewing outputs and exercising professional judgment.
- In-app prompts remind users to check for bias and accuracy before sharing with students.
- One-click exports to Google and Microsoft for editing before classroom use.
UI controls that reinforce educator agency
- In-app best-practice prompts displayed during login (for educators and students).
- Teacher visibility into all student AI interactions in real time.
- Student Room Insights for monitoring engagement.
- District admin controls over which tools/features/integrations are enabled.
- Customizable moderation controls for district admins.
Training or guidance provided to teachers
- Free AI certification courses
- Professional development programs
- MagicSchool Pioneers program (educator community)
- Webinars and blog content
- In-app best practice prompts
- Raina (AI instructional coach) for 24/7 educator support
Transparency to users and districts
When users interact with MagicSchool AI:
- All tools are clearly labeled as AI-powered.
- In-app prompts notify users of AI best practices before use.
- Students see: "Your teacher can see your activity in MagicSchool."
Disclosures that outputs may be imperfect
- "AI might occasionally produce biased or incorrect content. Always double-check before sharing with students."
- "Always review content for accuracy and bias; use professional judgment and comply with school policies."
- "Know the limits" messaging about AI knowledge boundaries.
Documentation available for districts
- Privacy Policy
- Student Data Policy
- Data Privacy Infographic
- AI Readiness Checklist
- White paper: The AI Safety Loop for Students
- White paper: Student companionship and responsible AI in schools
- District-specific documentation available on request
Continuous improvement and feedback loops
Monitoring educator feedback:
- A community of 6M+ educators and students provides ongoing feedback
- In-app chat for suggestions
- Direct email: [email protected]
Incident reporting mechanisms
- Email [email protected] for issue reporting
- Enterprise customers work with a dedicated Customer Success Manager
Regular model/policy updates
- New tools and features are launched regularly, inspired by user suggestions
- A multi-model approach means models are tested continually and updated as needed
Ongoing safety improvements
- AI safety loop framework: Framing → Auditing → Refining (continuous cycle)
- Progressed from manual reviews to automated evaluator-driven checks
- Regular security audits
Responsible AI FAQ
Is student data required?
Not for basic educator tools. MagicSchool doesn't require or encourage users to submit PII. For MagicStudent, student accounts are assigned and monitored by teachers, and interaction data is stored to support teacher visibility.
Is district data used for training?
No. MagicSchool does not use personal information to train AI models. Third-party providers are contractually prohibited from storing or training on district data. Both OpenAI and Anthropic certify Zero Data Retention.
Does AI grade or evaluate students?
No. MagicSchool does not engage in automated decision-making or profiling that produces legal or similarly significant effects. AI outputs are drafts for educator review.
How do you address bias?
MagicSchool takes steps including: testing and monitoring models, selecting configurations that promote educational suitability and safety, using a multi-layered moderation system, partnering with leading AI providers to reduce bias and misinformation, and applying rigorous testing and transparency practices. In-app prompts remind users to check outputs for bias and accuracy.
How does MagicSchool address hallucinations or incorrect AI outputs?
AI-generated content may occasionally contain inaccuracies. To mitigate this risk, MagicSchool applies layered moderation, factual evaluation testing, and user-facing reminders encouraging review and verification. Educator oversight remains a required component of classroom use.
What model providers do you use?
MagicSchool uses a multi-model approach including OpenAI GPT-4o, Anthropic Claude, and Google Gemini, among others. Each tool is paired with the best-performing model for its task. Models are tested continually for quality, safety, and reliability.


