A RAG-powered academic assistant delivering contextual, course-specific support within a Learning Management System (LMS).
Traditional Learning Management Systems lack intelligent, context-aware academic support. Students often struggle to find relevant information within course materials, and general-purpose AI assistants provide answers that aren't grounded in the specific curriculum. The challenge was to build a system that could answer student questions using only their module-specific learning materials while maintaining academic integrity through governance controls.
The solution centres on Retrieval-Augmented Generation (RAG), a technique that grounds AI responses in the actual course documents. The pipeline covers document upload and parsing (PDF/DOCX), automated chunking and tokenisation, vector embedding generation using Ollama, and semantic similarity search using pgvector on Neon PostgreSQL. A governance layer detects and refuses requests that would violate academic integrity, such as asking the assistant to complete an assignment directly.
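The chunking step above can be sketched in a few lines. This is a minimal illustration, not the project's actual implementation: it splits on words with a configurable overlap, whereas a production pipeline would typically use a model-specific tokeniser so chunk sizes match the embedding model's token limits.

```typescript
// Split a document into overlapping word-based chunks.
// Overlap preserves context that would otherwise be lost at chunk
// boundaries, improving retrieval for answers that span two chunks.
export function chunkText(text: string, chunkSize = 200, overlap = 50): string[] {
  if (overlap >= chunkSize) throw new Error("overlap must be smaller than chunkSize");
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  const step = chunkSize - overlap; // advance by chunkSize minus the overlap
  for (let start = 0; start < words.length; start += step) {
    chunks.push(words.slice(start, start + chunkSize).join(" "));
    if (start + chunkSize >= words.length) break; // final chunk reached
  }
  return chunks;
}
```

Each chunk is then sent to the embedding model independently, and the resulting vectors are stored alongside the chunk text for later similarity search.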
Built with Next.js 15 and TypeScript for a scalable full-stack architecture. The database layer uses Neon PostgreSQL with the pgvector extension for vector similarity search, managed through Drizzle ORM. Ollama handles local AI inference and embedding generation, so the system runs without external API dependencies. The frontend uses Tailwind CSS for responsive, accessible interfaces across all devices.
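The semantic search itself runs in SQL, where pgvector's `<=>` operator ranks rows by cosine distance. As a hedged sketch of what that operator computes, the same quantity in TypeScript is:

```typescript
// pgvector's `<=>` operator returns cosine distance, i.e.
// 1 - cosine similarity. This function mirrors that maths so the
// ranking criterion of the SQL search is explicit.
//
// In SQL the equivalent top-k retrieval is (illustrative; table and
// column names are assumptions, not the project's actual schema):
//   SELECT content FROM chunks ORDER BY embedding <=> $1 LIMIT 5;
export function cosineDistance(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

A distance of 0 means the query and chunk embeddings point in the same direction (maximally similar); larger values mean less related content, so the lowest-distance chunks are passed to the model as context.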
The system demonstrates how RAG can be applied in educational contexts, providing accurate, curriculum-grounded responses while enforcing academic integrity. The modular architecture supports future integration with existing LMS platforms, and the governance framework provides a foundation for institutionally controlled AI use in education.