Logo of IncLearn: Where Inclusion Meets Learning

IncLearn

IncLearn (a combination of "Inclusive" and "Learn") is an intelligent and inclusive learning platform tailored for the Indian audience. It aims to facilitate accessible digital learning experiences across educational institutions, NGOs and corporates, especially for people with disabilities and non-native English learners.

The platform is informed by lived experience with disabilities and insights from our work and research with people with disabilities in South India, which guides our commitment to an inclusive experience for users of all abilities from research to engineering.

Key Highlights

  • Accessibility-first LMS: Designed with and for learners with visual, hearing, cognitive, motor and neurological disabilities.
  • AI-powered accessibility automation: Uses Azure AI Foundry services (Cognitive Services and Azure AI Agent) to evaluate and enrich course content for accessibility.
  • Multimodal learning: Supports text, audio, video, documents, images and sign language content.
  • Vernacular learning support: Localized interface with content translation and summarization in Indian languages such as Tamil, Hindi and Malayalam.
  • Personalized accessibility preferences: High contrast mode, OpenDyslexic font, reduced animation, safe viewing mode and adaptive learning formats with keyboard shortcuts.
  • Built with lived experience: Designed using research and collaboration with disability communities in South India.


Background and Motivation

Research

IncLearn was developed drawing on lived experience and interactions with people with various disabilities and learning difficulties in learning cohorts, educational institutions and the workforce across Tamil Nadu and Kerala.

Over three months, we researched the barriers faced by people with visual, hearing, neurological (photosensitive epilepsy), motor and cognitive (autism, ADHD, dyslexia, etc.) disabilities by working with individuals and professionals with disabilities, learning institutions and NGOs that support people with disabilities.

Inspired by platforms such as LearnInclusive, which addresses accessibility in education in Finland, we recognized the lack of an inclusive technological ecosystem for India, especially in the educational sector, considering:

  • 27% of disabled children have never attended school
  • Only 36% of the disabled population is employed
  • An estimated 6-8% of the country's population lives with a disability
  • Demand for accessible learning materials is rising in organizations and institutions for compliance and inclusion reasons.

Philosophy

IncLearn is designed and developed according to the following philosophies:

Nothing for us, without us

IncLearn is developed with the understanding that an inclusive platform is built better with first-hand lived experience and informed decisions from users with disabilities. Thus, all our design and development aligns with accessibility requirements while remaining versatile enough to serve other users as well.

Accessibility is the default

Accessibility is not a "nice-to-have" or a "feature": it is the guiding principle for the platform, from research to development. Accessibility is versatile enough to serve a larger audience while also addressing a largely underserved Indian population, a gap driven primarily by the lack of emphasis on vernacular languages and accessibility in the Indian context (think Indian Sign Language, Indic NLP and so on).

Design

Inclusion and Accessibility

IncLearn was designed to cater to diverse learners and instructors to ensure inclusive participation in learning and education. Thus, multiple learning formats are supported by the platform for course instructors to provide and learners to utilize.

Accessibility is made a seamless, integrated and automated part of the authoring workflow. This is achieved by generating alternate content (alt text, captions, transcriptions, audio, text, summaries) using AI whenever instructors do not provide it.

The accessibility of authored materials is checked against various levels of WCAG success criteria using AI and multimedia processors, and the results are persisted in storage for compliance reporting and for identifying areas of improvement in course content accessibility.
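As a concrete sketch, evaluation results like these could be persisted as structured records and aggregated for compliance reporting. The field names and report shape below are illustrative assumptions, not IncLearn's actual schema:

```python
from dataclasses import dataclass, asdict

# Hypothetical shape of a persisted accessibility check result;
# field names are illustrative, not IncLearn's actual schema.
@dataclass
class AccessibilityCheck:
    chapter_id: str
    wcag_criterion: str      # e.g. "1.1.1 Non-text Content"
    level: str               # "A", "AA", or "AAA"
    passed: bool
    details: str = ""

def to_report(checks):
    """Aggregate per-chapter results for compliance reporting."""
    failed = [c for c in checks if not c.passed]
    return {
        "total": len(checks),
        "passed": len(checks) - len(failed),
        "violations": [asdict(c) for c in failed],
    }

checks = [
    AccessibilityCheck("ch-1", "1.1.1 Non-text Content", "A", True),
    AccessibilityCheck("ch-1", "1.4.3 Contrast (Minimum)", "AA", False, "2.9:1 ratio"),
]
report = to_report(checks)
```

A record like this can be serialized to JSON and stored alongside the chapter, so failing criteria remain queryable for later reporting.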

User Experience (UX)

The interface of IncLearn is built with accessible component libraries, tested with assistive technologies during development, and supports keyboard shortcuts that make it versatile for end users. The platform strives to meet WCAG 2.2 AA standards to ensure inclusion and maintain compliance with regulatory requirements. The interface applies personalized accessibility preferences across the platform for an inclusive and seamless experience.

It is designed to cater to individuals and organizational needs alike for a truly inclusive experience.

Responsible AI

IncLearn adheres to responsible AI standards by never providing user data in an unsanitized manner to the AI agents used by the platform. The summarizer and translator services are designed to take minimal input from the user (their preferred language of instruction) when providing tailored content. This ensures privacy and security for end-user data.

The platform removes intermediary files generated during processing once the executing processes complete, ensuring no sensitive information is retained by the system.

The platform has role-based access control to ensure security and integrity of the system.

Features

Accessible Navigation

  • Keyboard Shortcuts: IncLearn provides comprehensive keyboard navigation via shortcuts, which benefits motor-impaired users, screen reader users who also navigate by keyboard, and power users. This applies to the rich text chapter editor (TipTap) and media player (AblePlayer) integrations as well.
  • Assistive technology support: IncLearn supports screen reader navigation and relies on semantic elements and accessible Material components, with sufficient contrast and focus indicators.
  • Accessibility Preferences: Users can set their accessibility preferences. The currently supported configurations are:
    • Font size and style (default or OpenDyslexic)
    • Preferred learning mode: Text, Sign, Document, Image, Audio, Video
    • Preferred language: English, Malayalam, Hindi, Tamil (currently supported by platform)
    • Theme: Light, dark, high contrast
    • Safe viewing mode for blocking epileptic content
    • Disabled autoplay and reduced animation
    • Accommodation requirements: Extra time needed or sign support.

Course Accessibility Automation

A user can publish their course chapters and modules, which triggers the accessibility evaluator and processor for each chapter.

This provides a summary of the accessibility of materials of different types, covering content structure, semantics and safety against epileptic seizures, and generates needed alternative content (captions, transcripts, alt text and summarized content). Content is evaluated against WCAG success criteria wherever applicable.

This is done using Azure AI services and agents (Accessibility Evaluator and Summarizer), which evaluate the accessibility of textual content and generate a summary that's easier to perceive for users of different cognitive abilities.

Adaptable Learning

  • Translate and Summarize Chapter: A user can access the translated or summarized version of chapters in their preferred language or another supported Indian language. This is aided by the Azure AI Agent and Language Service, which are responsible for processing rich text and unstructured content. The summarized version provides a quick overview of chapters for easier cognition.
  • Preferred mode of learning: IncLearn's interface provides chapters in the preferred mode of learning wherever available, allowing better content processing for learners. Captions and transcriptions for multimedia such as audio or video can be accessed using AblePlayer, while images carry captions with detailed descriptions.

Course Management

  • View Course: A user can view courses they have enrolled in or for which they are an administrator or instructor.

  • Edit Course: A user can edit a course to create modules, each of which can contain several chapters of different types:

    • Text
    • Image
    • Document
    • Video
    • Audio
    • Sign Language Video (ISL, ASL, BSL)

    This allows users with different requirements to access course materials seamlessly. TipTap is used for editing text content, providing an easier and more accessible editing experience.

  • Update and publish modules: Move content out of draft mode to make it accessible to learners, enabling iterative and timely engagement.

Organization and Learner Management

  • Enroll/Unenroll Learners: A user can enroll or unenroll learners in their course using names or email addresses. They can also manage the users engaged with their course.
  • Create and Manage Organizations: Corporate trainers can create and manage learners using organizations and teams for bulk enrollment and tracking.
  • Manage Teams: A user can create a team they want to lead for learning cohorts and add or manage the required users.

Working

Content Upload

Workflow for multimedia upload in chapters

  • A user with write access to a course can create a module, which can contain chapters. Each chapter can use different multimedia formats. The currently supported formats are:
    • Audio: WAV, MP3
    • Video and Sign: MP4
    • Document: PDF, OOXML (MS Office files)
    • Image: JPEG, PNG
  • The client sends the metadata of the file (file size, MIME type, name and category of upload) to the server.
  • The server validates the metadata against the upload limit and allowed types, then generates a Blob Storage SAS URL, which is sent back to the client with limited validity, type and permissions for secure upload.
  • The client uploads the file directly to Azure Blob Storage without server intervention, and the upload is validated using the content-md5 hash of the blob.
  • On success, the client reports the upload status to the server, which triggers the necessary database update to record the file upload.
  • The blob URLs are used later, during course module or chapter publication, for accessibility evaluation.
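The server-side metadata validation step above can be sketched as follows; the size limit and MIME-type lists are assumptions for illustration, not IncLearn's actual configuration:

```python
# Illustrative server-side validation of upload metadata before issuing a
# SAS URL; limits and type lists here are assumptions, not actual values.
ALLOWED_TYPES = {
    "audio": {"audio/wav", "audio/mpeg"},          # WAV, MP3
    "video": {"video/mp4"},
    "sign": {"video/mp4"},
    "document": {"application/pdf",
                 "application/vnd.openxmlformats-officedocument.wordprocessingml.document"},
    "image": {"image/jpeg", "image/png"},
}
MAX_BYTES = 500 * 1024 * 1024  # assumed 500 MB upload limit

def validate_upload(category: str, mime_type: str, size: int) -> bool:
    """Return True if the client-supplied metadata passes basic checks."""
    if size <= 0 or size > MAX_BYTES:
        return False
    return mime_type in ALLOWED_TYPES.get(category, set())

ok = validate_upload("image", "image/png", 2048)
```

Only after this check passes would the server issue a short-lived, write-only SAS URL for the blob.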

Accessibility Preferences

Authenticated users can set their accessibility preferences, which are persisted in the database. Users can configure:

  • Font size and style (default or OpenDyslexic)
  • Preferred learning mode: Text, Sign, Document, Image, Audio, Video
  • Preferred language: English, Malayalam, Hindi, Tamil (currently supported by platform)
  • Theme: Light, Dark, High Contrast (Light and Dark)
  • Safe viewing mode for blocking exposure to potentially photoepileptic content
  • Disabled autoplay and reduced animation
  • Accommodation requirements: Extra time needed or sign support.

These settings are persisted and applied across the platform to ensure a safe and inclusive learning and instructing experience.
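The preference set above could be modelled as a simple record; the field names and defaults below are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass

# Sketch of the preference fields listed above; names and defaults
# are illustrative, not the platform's actual schema.
@dataclass
class AccessibilityPreferences:
    font_style: str = "default"     # "default" or "opendyslexic"
    font_size: int = 16
    learning_mode: str = "text"     # text, sign, document, image, audio, video
    language: str = "en"            # en, ta, hi, ml
    theme: str = "light"            # light, dark, high-contrast-light, high-contrast-dark
    safe_viewing: bool = False      # block potentially photoepileptic content
    autoplay_disabled: bool = True
    reduced_animation: bool = False
    extra_time: bool = False        # accommodation: extra time needed
    sign_support: bool = False      # accommodation: sign support

prefs = AccessibilityPreferences(language="ta", theme="dark", safe_viewing=True)
```

A single record like this keeps every preference in one place, so both the server (persistence) and the client (rendering) can consume the same shape.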

Accessibility Automation and Evaluation

Workflow for content accessibility automation and evaluation

The accessibility automation service is triggered when a course's content (module or chapter) is published from draft. It is responsible for ensuring materials are processed to generate alternative media/content and that accessibility violations are flagged.

Text

Rich text content edited with TipTap is stored in the database and can be further edited in draft mode. Once the course is published, the textual content is processed by the chapter accessibility processing service to:

  • Generate the summary for the textual content for better cognition using Azure AI Agent.
  • Check for WCAG success criteria violation for the HTML content using Azure AI Agent.
  • Generate audio content for the course using the Azure Speech Synthesizer, which produces MP3 output via GStreamer from the inner text extracted from the content.

This is suitable for users who prefer condensed or auditory learning.
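The inner-text extraction step can be sketched with Python's standard library; this is a minimal illustration, not the platform's actual pipeline, which would handle many more edge cases:

```python
from html.parser import HTMLParser

# Minimal sketch of extracting inner text from rich-text HTML before
# sending it to a speech synthesizer; illustrative only.
class InnerText(HTMLParser):
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.parts.append(text)

def extract_text(html: str) -> str:
    parser = InnerText()
    parser.feed(html)
    return " ".join(parser.parts)

text = extract_text("<h2>Photosynthesis</h2><p>Plants convert <em>light</em> into energy.</p>")
# → "Photosynthesis Plants convert light into energy."
```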

Image

Images are processed for dense captions or OCR text using Azure Computer Vision, generating image descriptions and captions used as alternative text, ensuring sufficient image context for users with visual impairments. Processing uses the Blob URL stored in the database.

Document

  • The document's content is retrieved using Azure Document Intelligence (with the prebuilt-layout model), which provides the page count, entities, scanned-document handling and reading order.
  • The extracted text and retrieved structure are used for the accessibility summary, and the accessibility results are stored as metadata for record keeping.

Video and Sign

Video is evaluated for photosensitive epilepsy triggers using the iris-pse-detection library after downloading the video from blob storage. The check covers luminance/red/extended flashes and harmful spatial patterns, and it reports a check status along with any violations. This is used to flag potentially epileptogenic content for users who enable safe mode.
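To illustrate the idea behind flash detection (not the iris-pse-detection API itself), a toy check might count opposing luminance transitions per one-second window against the WCAG 2.3.1 general flash limit of three flashes per second. The thresholds here are simplified assumptions; the real library implements the full criteria, including red flashes, spatial patterns and affected screen area:

```python
# Toy illustration of luminance flash counting; thresholds simplified.
def max_flashes_per_second(luminance, fps, delta=0.1):
    """Largest number of flashes (pairs of opposing luminance transitions)
    inside any one-second window of per-frame luminance values."""
    transition_frames = [i for i in range(1, len(luminance))
                         if abs(luminance[i] - luminance[i - 1]) >= delta]
    best = 0
    for start in transition_frames:
        in_window = sum(1 for f in transition_frames if start <= f < start + fps)
        best = max(best, in_window // 2)  # two opposing transitions = one flash
    return best

frames = [0.1, 0.9] * 20                                 # rapidly alternating frames
violates = max_flashes_per_second(frames, fps=30) > 3    # WCAG 2.3.1 general flash limit
```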

Video is processed using Azure Video Indexer to generate transcriptions, which are stored as WebVTT for compatibility with AblePlayer. Enriched descriptions are generated using the connected Azure OpenAI service, and indexing is performed for the platform's four supported languages for later recognition.

Audio

Audio files that do not have transcriptions have them generated using the Azure Speech Service. This uses the batch transcription API for asynchronous processing, which is optimal for longer audio uploads.

Content Translation

IncLearn supports translation of course materials into multiple Indian languages.

Using Azure Language Services and Azure AI Agent, rich text chapter content can be translated into the following supported languages:

  • Tamil
  • Hindi
  • Malayalam
  • English

This allows learners who are not fluent in the language used in the course to access course materials in their preferred language, aiding vernacular learning.
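As a minimal sketch, the platform's supported languages map onto standard language codes that translation APIs such as Azure Translator accept; the `resolve_target_language` helper below is hypothetical, though the codes themselves are standard:

```python
# Hypothetical helper: map a display-name language to the code a
# translation API expects. Codes are standard; the helper is a sketch.
SUPPORTED_LANGUAGES = {
    "English": "en",
    "Tamil": "ta",
    "Hindi": "hi",
    "Malayalam": "ml",
}

def resolve_target_language(name: str) -> str:
    """Resolve a display name to a language code, defaulting to English."""
    return SUPPORTED_LANGUAGES.get(name, "en")

code = resolve_target_language("Tamil")  # → "ta"
```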

Architecture

High-level architecture of IncLearn

Minimalism

IncLearn is designed to be deployed with minimal configuration; thus, the platform uses a monolithic architecture. As a result, the number of moving components is reduced and the server is kept minimal to reduce maintenance, monitoring and troubleshooting effort.

Concurrency

IncLearn's server uses simple database-backed background tasks to track processing status and information for content accessibility evaluation and processing.

This improves performance by allowing the server to handle other concurrent operations while tasks execute in the background.
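The database-backed task-status pattern can be sketched as follows; the statuses and the in-memory "table" are illustrative stand-ins, not IncLearn's actual schema:

```python
from enum import Enum

# Toy model of database-backed background task tracking: the server records
# a status row per task and the client polls it. Statuses are assumptions.
class TaskStatus(str, Enum):
    PENDING = "pending"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"

tasks = {}  # stand-in for a database table keyed by task id

def enqueue(task_id: str):
    tasks[task_id] = TaskStatus.PENDING

def run(task_id: str, work):
    """Execute a task and record its terminal status for pollers."""
    tasks[task_id] = TaskStatus.RUNNING
    try:
        work()
        tasks[task_id] = TaskStatus.COMPLETED
    except Exception:
        tasks[task_id] = TaskStatus.FAILED

enqueue("chapter-42-eval")
run("chapter-42-eval", lambda: None)  # succeeds
status = tasks["chapter-42-eval"]
```

Because status lives in the database rather than in process memory, any server worker (or the client, via polling) can observe progress without coupling to the worker that ran the task.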

Performance

File uploads are handled directly by Azure Blob Storage for scalable, reliable uploads; clients upload using a SAS URL generated by the backend from the file's metadata after validation.

Technologies Used

Frontend

  • Next.js for developing an accessible, modular and performant web experience with server-side rendering (SSR).
  • Material UI for developing accessible UI components with a consistent and familiar UX.
  • next-intl for internationalization (i18n) of user interface in Indian languages.
  • TipTap for providing headless accessible components for our chapter editor with rich text support and keyboard shortcuts.
  • AblePlayer for providing accessible video and audio player with support for captions, transcriptions and keyboard bindings.

Backend

  • FastAPI for the API server, with automatic OpenAPI integration for a better developer experience via Scalar documentation and scalable backend development with typing by Pydantic.
  • Background Tasks in FastAPI (built on Starlette) for independent, cost-effective computation, and for data cleaning to ensure privacy and to moderate user-generated content for accuracy and toxicity.
  • Azure CosmosDB for PostgreSQL for storage of structured and relational data of users, courses and organizations.
  • Azure SDK for Python for interaction with several deployed Azure services for functionality of the application.
  • iris-pse-detection for photosensitive epilepsy evaluation based on EY's IRIS implementation, for Python.

Infrastructure and SDK

  • Azure AI Foundry for management of Azure AI Services such as Azure OpenAI (GPT-5.2-chat for Azure AI Agent) and Azure Cognitive Services.
    • Azure AI Agent with Azure Agent Framework for accessibility evaluation, summary generation and translation using Language Services agent tool.
  • Azure Cognitive Services for language translation, text analytics, accessibility features such as text-to-speech and speech-to-text.
    • Azure Document Intelligence for document processing and extraction to evaluate the accessibility of documents of different types (structure, semantics and layout).
    • Azure Computer Vision for image processing for generation of alternative text, captions and descriptions.
    • Azure Language Services for translation and summarization of chapter content.
    • Azure Speech Service for audio processing for transcription generation for audio chapters and audio synthesis using audio backends (GStreamer) for textual chapters
    • Azure Video Indexer for video indexing, description with Azure OpenAI and transcription generation.
  • Azure Container Registry for storage of Docker container images for continuous deployment with GitHub Actions to Azure App Service.
  • Azure App Service for containerized, scalable and reliable deployment of web services and management of API server and web application.

Screenshots

Home Page (Multilingual)

Home

Application home page dashboard.

English

Home page interface in English.

Tamil

Home page interface in Tamil.

Hindi

Home page interface in Hindi.

Malayalam

Home page interface in Malayalam.

Course Management

Modules

Modules management page showing course modules.

Courses List

Courses list displaying available courses.

Create Course

Form interface used to create a new course.

Chapter Management

Chapters List

Chapter list showing all chapters in a course.

Chapter Groups

Interface for managing chapter groups.

Text Chapter

Text chapter displaying written learning content.

Create Text Chapter

Editor interface used to create a text chapter.

Video Chapter

Video chapter with embedded learning video.

Create Video Chapter

Form used to add a new video chapter.

Audio Chapter

Audio chapter with playback controls.

Create Audio Chapter

Form used to create an audio chapter.

Document Chapter

Document chapter displaying uploaded document content.

Create Document Chapter

Upload interface used to create a document chapter.

Sign Language Chapter

Sign language chapter displaying sign language video content.

Create Sign Language Chapter

Interface used to create a sign language video chapter.

Organization & Teams

Create Organization

Form used to create a new organization.

Teams

Teams management page listing organization teams.

Create Team

Form used to create a team within an organization.

Settings & Accessibility

Accessibility Guide

Accessibility guide explaining platform accessibility features.

Settings

Application settings page.

Edit Settings

Interface used to edit application settings.

Themes

OpenDyslexic Font

Application interface using the OpenDyslexic accessibility font.

Light Theme

Application interface displayed in light theme.

Dark Theme

Application interface displayed in dark theme.

Light High Contrast

Application interface using a light high-contrast theme.

Dark High Contrast

Application interface using a dark high-contrast theme.

Get Started

  1. Clone the repository

git clone https://github.com/inlibre/inclearn
cd inclearn

  2. Fill in the necessary credentials for the server by following the instructions in the server's README
  3. Fill in the necessary credentials for the web application by following the instructions in the web application's README
  4. Run the database alone to apply migrations, as specified in the server setup instructions
  5. Run the needed services

docker compose up

The web application can be accessed at http://localhost:3000.

The server can be accessed at http://localhost:8000.

Challenges

  • Cross-platform AT testing and compatibility: Ensuring compatibility across different environments (operating system, AT, and browsers) by accessibility testing.
  • Ensuring accessibility during development: Accessibility linters and AI assistants were used to produce accessible components, which required care during integration to avoid redundant ARIA labels and other accessibility misconfigurations.
  • Performance with background AI processors: Spawning BackgroundTasks and polling via the database proved useful at smaller scale and aligned with our goal of reducing moving components. However, as workers and task volume increased, we saw latency spikes with database connection pooling and memory issues in containerized environments. This remains a challenge for the monolithic deployment.
  • External dependencies: Currently, several multimedia libraries are used for evaluation of accessibility and processing content, which increases application complexity.
  • Theming accessibility: Ensuring every component's color combination meets WCAG 2.2 AA success criteria while supporting multiple themes became harder as the number of interface components grew.
  • Standardizing Accessibility Metrics: Accessibility evaluation is performed using multiple tools and AI services, which produce results in different formats and scoring systems. To avoid inconsistencies, the current system stores these evaluation results as structured metadata in the database rather than rendering them directly in the UI. A standardized accessibility reporting service that normalizes these metrics across tools is currently under development.

Accessibility

If you encounter any accessibility barriers while accessing the interface, please write to us at support@inlibre.io

IncLearn is an experimental platform under heavy iteration. While we have tested core workflows with assistive technologies, accessibility is an ongoing process, and some accessibility issues and requirements may remain unaddressed.

We welcome feedback for improving accessibility of our platform for all users.

Roadmap

IncLearn is currently in a rapid iteration stage and is not suitable for production use.

Our current priorities are:

Phase 1: Feature completion (May 2026)

  • Support assignment and test submissions.
  • Store generated video index and transcriptions for the currently supported Indian languages for ease of usage.
  • Increase accessibility accommodations.
  • Support accessibility profiles for preferences, for a better UX.
  • Comprehensive document accessibility coverage using VeraPDF.
  • Standardize internationalization for supported Indian languages.
  • Introduce course announcements for instructors.
  • Support themes for color vision deficiency and color blindness.
  • Performance improvement for background processing.
  • Perform accessibility testing and integrate continuous accessibility mechanism for development.

Phase 2: Pilot Testing (July 2026)

  • Conduct pilot testing and iteration across two deaf schools, a learning cohort, and blind schools in Kerala and Tamil Nadu.
  • Research more on people with multiple disabilities and functional disabilities.
  • Conduct focus groups for users with learning disorders.

Long Term Goals

  • Support dynamic Indian Sign Language generation, by leveraging existing work of AI4Bharat and other organizations.
  • Develop a learning studio application for inclusive authoring workflow for IncLearn.

Contributing

IncLearn, in spirit, aims to be an inclusive platform for users of all abilities and welcomes contributions to develop open technologies for India. You can help us in the following ways:

  • Spread the word on social media and your network.
  • If you're an educator or a learner with a disability or linguistic barriers, you can provide insights on accessibility and challenges in learning environments.
  • If you're an accessibility developer, tester or assistive technology user, you can provide feedback on accessibility of the platform.
  • If you're a native speaker of any of the Indian languages used on our platform, you can help with translations.
  • If you're a developer, you can propose features, file bugs you encounter and fix issues with the platform after discussing the changes with the maintainers. For contributing code or documentation, check our CONTRIBUTING guide.
  • Contribute to our platform usage documentation, hosted at https://docs.inclearn.inlibre.io

By contributing or engaging with this project, you are expected to comply with our code of conduct.

License

IncLearn is licensed under the GNU General Public License version 3 or later.

For more information on licensing, check the LICENSE file.