AI Transparency & Bias Mitigation – Shiminly

Effective date: 22 September 2025 | Last updated: 22 September 2025 
This page explains how and why Shiminly Inc. ("Shiminly," "we," "our") uses Artificial Intelligence (AI) and Machine Learning (ML) features in its self-paced life-skills platform, the guardrails we place around those systems, and the steps we take to detect and reduce bias, especially for learners in the United States, United Arab Emirates, and India.
Bottom line: AI helps personalise learning, but humans stay in control. Our models process minimal data, do not make high-stakes decisions, and are audited for fairness.

1. Why We Use AI

| Feature | What AI Does | Human Role |
| --- | --- | --- |
| Reflective Prompting | Generates follow-up questions that guide students to think more deeply about a topic | Teachers and the curriculum team review and approve prompt libraries |
| Adaptive Recommendations | Suggests the next lesson or activity based on progress | Learners or parents can override suggestions at any time |
| Writing Feedback (Beta) | Highlights tone and clarity issues in student journal entries | Advisory only; students decide what to change |
| Analytics Dashboards | Clusters anonymised engagement data to spot trends | Product team validates insights before course changes |
We do not use AI to grade exams, assign final scores, or determine pass/fail status.

2. AI Systems in Use

| System | Provider | Model Type | Hosting |
| --- | --- | --- | --- |
| Prompt Engine | OpenAI GPT-4o API | Large Language Model (LLM) | EU (Ireland) region via Azure OpenAI |
| Recommendation Engine | In-house (TensorFlow) | Gradient-boosted decision trees | EU (LearnWorlds, Cyprus) |
| Analytics Clustering | Google Cloud AI Platform | K-Means & PCA | us-east1 (de-identified data only) |
All providers are under Data Processing Agreements (DPAs) and Standard Contractual Clauses (SCCs) for cross border data transfers.

3. Data Minimisation & Privacy Controls

  • Text only: AI features ingest text responses & progress metadata—never photos, videos, or precise location.
  • Age segmentation: Learners’ ages are passed as ranges (7–9, 10–12, etc.), never exact birth dates.
  • No persistent IDs: Random, ephemeral identifiers replace direct student IDs before data reaches third party models.
  • Phase 1 jurisdictions: Data of UAE and India learners resides in EU data centres; sensitive fields (email, payment) are excluded from AI pipelines.
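As an illustration, the age-range bucketing and ephemeral pseudonymisation described above might be sketched as follows. All names, fields, and the specific ranges beyond 7–9 and 10–12 are hypothetical; this is a minimal sketch, not our production pipeline.

```python
import secrets

# Hypothetical additional ranges; the policy names only 7-9 and 10-12.
AGE_RANGES = [(7, 9), (10, 12), (13, 15), (16, 18)]


def bucket_age(age: int) -> str:
    """Map an exact age to a coarse range such as '10-12'."""
    for low, high in AGE_RANGES:
        if low <= age <= high:
            return f"{low}-{high}"
    return "other"


def pseudonymise(record: dict) -> dict:
    """Return a copy safe to send to a third-party model:
    no student ID, no exact age, no sensitive fields such as email."""
    return {
        # Random and ephemeral: not derived from the student ID,
        # so it cannot be reversed by the third party.
        "ephemeral_id": secrets.token_hex(8),
        "age_range": bucket_age(record["age"]),
        "text": record["text"],  # text responses only
    }
```

A fresh identifier is generated per record, so even repeated submissions from the same learner cannot be linked downstream.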

4. Bias Assessment & Mitigation

  1. Pre-deployment audits: Every new model goes through a four-step fairness checklist based on the NIST AI RMF and the OECD AI Principles.
  2. Test datasets: Balanced samples across gender, region (USA, UAE, India), and age group ensure prompts do not favour one cohort.
  3. Adversarial testing: We probe models for culturally insensitive or biased outputs; flagged content is added to retraining sets.
  4. Continuous monitoring: Automated detectors scan 5% of live outputs daily for toxicity, hate speech, or gendered language.
  5. Human review SLA: Any flagged incident is triaged by an educator within 24 hours.
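Steps 4 and 5 above could be sketched roughly as follows. The detector here is a toy word list standing in for our model-based classifiers, and all function names are hypothetical.

```python
import random

SAMPLE_RATE = 0.05  # scan roughly 5% of live outputs each day
BLOCKLIST = {"slur_example"}  # placeholder; real detectors are model-based


def flag_output(text: str) -> bool:
    """Toy detector: real systems score toxicity, hate speech,
    and gendered language rather than matching a word list."""
    return any(term in text.lower() for term in BLOCKLIST)


def daily_sample(outputs: list, rng) -> list:
    """Select roughly 5% of the day's outputs for automated scanning."""
    return [o for o in outputs if rng.random() < SAMPLE_RATE]


def triage_queue(outputs: list, rng) -> list:
    """Return flagged outputs to be handed to an educator,
    who reviews each one within the 24-hour SLA."""
    return [o for o in daily_sample(outputs, rng) if flag_output(o)]
```

Sampling keeps scanning costs bounded while still surfacing systematic problems; anything flagged also feeds back into the retraining sets described in step 3.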

5. Human Oversight & Decision Boundaries

  • AI suggestions are labelled “AI-generated” with a “Why you’re seeing this” tooltip.
  • Learners, parents, or teachers can dismiss or regenerate any AI prompt.
  • No automated disciplinary actions or grade penalties are issued by AI.
  • Final certificates are generated only after manual verification of completion metrics.

6. Explainability & Rights

Learners (or their guardians) may request a plain language explanation of:

  • What data was used by the AI feature
  • How the model influenced a recommendation or prompt
  • What safeguards were in place

Email ai-explain@shiminly.com with your request; we reply within 7 business days.

7. Security Measures

  • All API calls use TLS 1.2+ and are authenticated via short-lived tokens.
  • Output logs are encrypted at rest (AES-256) and retained for 30 days for audit purposes.
  • Access to AI pipelines is restricted to the AI engineering team via role-based IAM.
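A minimal sketch of the 30-day retention rule above, assuming each log entry carries a timezone-aware `written_at` timestamp (a hypothetical field name). Encryption at rest and IAM restrictions are enforced by the hosting platform and are not shown here.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # audit-log retention window


def purge_expired(logs: list, now: datetime) -> list:
    """Keep only log entries written within the last 30 days."""
    cutoff = now - RETENTION
    return [entry for entry in logs if entry["written_at"] >= cutoff]
```

In practice a job like this would run on a schedule so that no output log outlives the audit window.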

8. Audit & Accountability

| Activity | Frequency | Owner |
| --- | --- | --- |
| Model performance & bias review | Quarterly | Head of AI & Learning Science |
| External penetration test | Annually | Independent security firm |
| Policy review & update | Annually, or when laws change | Data Protection Officer |

9. Roadmap (2025–2026)

| Quarter | Planned Improvement |
| --- | --- |
| Q4 2025 | Launch multilingual reflective prompts (Hindi, Arabic) with the same bias filters |
| Q2 2026 | Add explainable-AI visual overlays for the recommendation engine |
| Q3 2026 | Open a parent/teacher feedback loop to rate AI prompt usefulness |

10. Questions & Contact

  • AI ethics & transparency: ai-explain@shiminly.com
  • India (DPDPA): DPDPA@shiminly.com
  • UAE (PDPL): PDPL@shiminly.com
  • USA: support@shiminly.com

    © 2025 Shiminly Inc. All rights reserved.