AI Assignments That Meet Students Inside Your LMS

Build role-play simulations and LLM-interaction assignments with rubric-aligned grading, few-shot exemplar calibration, and instructor review — delivered seamlessly through LTI 1.3 in any compliant LMS.

See LTISim in Action

Watch how instructors create AI simulation assignments and how students experience them — all within their LMS.

Demo video coming soon

One platform. Two ways to learn with AI.

Whether students are role-playing with an AI persona or learning to use AI as a tool, LTISim runs both through the same rubric grading, exemplar calibration, and instructor review pipeline.

ROLE-PLAY WITH AN AI PERSONA

Simulations

Students hold a multi-turn conversation with an instructor-defined AI character. The full transcript is captured and graded against your rubric — so assessment reflects how students actually handled the interaction, not just whether they reached a conclusion.

  • Instructor-authored persona, scenario, and learning objectives
  • Grounded in your course materials and guardrails
  • Full transcript graded against a rubric you control

Example

A nursing student conducts an intake interview with a patient persona — graded on active listening, clinical reasoning, and bedside manner.
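Conceptually, a captured simulation is just an ordered list of turns attached to a scenario. A minimal sketch of that shape (the class and field names here are illustrative, not LTISim's internal data model):

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    speaker: str  # "student" or "persona"
    text: str

@dataclass
class SimulationTranscript:
    scenario: str
    turns: list[Turn] = field(default_factory=list)

    def student_turns(self) -> list[Turn]:
        # Grading focuses on what the student said, in context of the full exchange.
        return [t for t in self.turns if t.speaker == "student"]

t = SimulationTranscript("Patient intake interview")
t.turns.append(Turn("persona", "I've had chest pain since Tuesday."))
t.turns.append(Turn("student", "Can you describe the pain for me?"))
print(len(t.student_turns()))  # 1
```

Because the whole exchange is retained, the rubric can score process criteria (listening, reasoning, tone) rather than only a final answer.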

[UI preview: a "Delivering Difficult Feedback" simulation. The student converses with Alex, a team-member persona ("I felt rushed at the end of the project. Can we talk?" … "The scope kept shifting, and I didn't feel I could push back…"), alongside a rubric editor (Active Listening, 25 pts: Excellent 25 · Good 18 · Developing 10 · Not Met 0; Exemplars (2) calibrate grading) and a pending-review session (#284) scored Professionalism 22/25, Constructive Framing 18/25, Active Listening 25/25, Action Planning 20/25.]

Key Capabilities

Purpose-built for higher education — every feature designed around instructor needs and institutional requirements.

LTI 1.3 Native Integration

Launches directly from any LTI 1.3 compliant LMS. Students stay in their course. No separate platform to manage.
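Under the hood, an LTI 1.3 launch delivers a signed JWT whose claims identify the message type, LTI version, deployment, and intended audience. A minimal sketch of claim validation (the claim URIs come from the LTI 1.3 specification; the payload values and client ID are invented, and signature verification against the platform's JWKS is assumed to have happened already):

```python
# Required claims in an LTI 1.3 resource-link launch (per the LTI 1.3 spec).
MESSAGE_TYPE = "https://purl.imsglobal.org/spec/lti/claim/message_type"
VERSION = "https://purl.imsglobal.org/spec/lti/claim/version"
DEPLOYMENT_ID = "https://purl.imsglobal.org/spec/lti/claim/deployment_id"

def validate_launch_claims(claims: dict, expected_client_id: str) -> list[str]:
    """Return a list of problems with a decoded launch payload (empty = valid)."""
    problems = []
    if claims.get(MESSAGE_TYPE) != "LtiResourceLinkRequest":
        problems.append("not a resource-link launch")
    if claims.get(VERSION) != "1.3.0":
        problems.append("unsupported LTI version")
    if not claims.get(DEPLOYMENT_ID):
        problems.append("missing deployment_id")
    if claims.get("aud") != expected_client_id:
        problems.append("audience mismatch")
    if not claims.get("iss"):
        problems.append("missing issuer")
    return problems

# Illustrative launch payload — not from a real LMS:
launch = {
    "iss": "https://lms.example.edu",
    "aud": "ltisim-client-id",
    MESSAGE_TYPE: "LtiResourceLinkRequest",
    VERSION: "1.3.0",
    DEPLOYMENT_ID: "deploy-42",
}
print(validate_launch_claims(launch, "ltisim-client-id"))  # []
```

Because the LMS passes identity and context in the launch itself, the tool never needs separate student accounts or rosters.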

Instructor-Configurable Scenarios

Instructors define the simulation context, AI persona, supporting documents, learning objectives, and assessment criteria. No technical expertise required.

AI-Powered Personas

Students interact with AI that stays in character, references course materials, and responds dynamically to student decisions within pedagogically structured guardrails.

Few-Shot Exemplar Calibration

Anchor AI grading to concrete examples. Promote real student work into rubric exemplars with one click, and the grader uses them as calibration anchors on every future evaluation.
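In practice, few-shot calibration means the grading prompt carries the rubric plus a handful of instructor-approved scored examples. A sketch of how such a prompt could be assembled (the `Exemplar` class and `build_grading_prompt` function are illustrative names, not LTISim's actual grading API):

```python
from dataclasses import dataclass

@dataclass
class Exemplar:
    transcript_excerpt: str
    criterion: str
    score: int
    rationale: str

def build_grading_prompt(rubric: str, exemplars: list[Exemplar], transcript: str) -> str:
    """Assemble a grading prompt anchored to instructor-promoted exemplars."""
    parts = [f"Grade the transcript against this rubric:\n{rubric}"]
    for i, ex in enumerate(exemplars, 1):
        # Each exemplar shows the grader what a given score looks like and why.
        parts.append(
            f"Calibration exemplar {i} ({ex.criterion}, scored {ex.score}):\n"
            f"{ex.transcript_excerpt}\nRationale: {ex.rationale}"
        )
    parts.append(f"Transcript to grade:\n{transcript}")
    return "\n\n".join(parts)
```

Promoting a new exemplar is then just appending to the list — every subsequent evaluation sees the updated anchors.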

Instructor Review & Regrade

Route AI grades through a pending-review queue. Edit rubrics, add exemplars, and bulk-regrade drafts before any score reaches the gradebook.
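The review queue is essentially a small state machine: AI-produced scores start in a pending state, regrades return there, and only instructor approval releases a score. A sketch under those assumptions (state names and classes are illustrative):

```python
from enum import Enum

class GradeStatus(Enum):
    PENDING_REVIEW = "pending_review"
    RELEASED = "released"

class GradedSession:
    def __init__(self, scores: dict[str, int]):
        self.scores = scores
        self.status = GradeStatus.PENDING_REVIEW  # AI grades start in the queue

    def regrade(self, new_scores: dict[str, int]) -> None:
        # A regrade returns to the review queue — never straight to the gradebook.
        if self.status == GradeStatus.RELEASED:
            raise ValueError("released scores must be re-opened, not regraded in place")
        self.scores = new_scores
        self.status = GradeStatus.PENDING_REVIEW

    def approve(self) -> None:
        # Only instructor approval pushes a score toward the LMS gradebook.
        self.status = GradeStatus.RELEASED

session = GradedSession({"Active Listening": 25, "Professionalism": 22})
session.regrade({"Active Listening": 25, "Professionalism": 24})
session.approve()
print(session.status.value)  # released
```

Bulk regrade is then a loop over pending sessions after a rubric or exemplar change, with every result landing back in the queue for review.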

Human-in-the-Loop Design

Instructors maintain full oversight. AI assessment supports but does not replace instructor judgment. Review workflows ensure quality and accountability.

Built for Experiential Learning Across Disciplines

AI-powered simulations that meet learners where the stakes are highest.

AI Literacy & Prompt Engineering

Students across disciplines learn to use LLMs responsibly through graded prompting tasks — every prompt, iteration, and final artifact captured for review.

Healthcare Education

Nursing and medical students practice patient interactions, clinical decision-making, and bedside manner in realistic AI-powered scenarios.

Business & Management

Students navigate negotiations, stakeholder meetings, leadership challenges, and ethical dilemmas with AI personas representing clients, executives, or team members.

Social Work & Counseling

Practice intake interviews, crisis intervention, and client communication in safe, repeatable simulation environments.

Teacher Preparation

Pre-service teachers practice parent-teacher conferences, classroom management conversations, and IEP meetings.

Compliance & Professional Training

Workforce learners practice regulatory scenarios, customer interactions, and safety protocols with immediate AI-driven feedback.

Evidence-Based Development

LTISim is committed to rigorous evaluation of AI-driven assessment and simulation effectiveness. We are currently conducting validation studies comparing AI rubric-aligned assessment with human grader evaluation.

These studies measure the impact of few-shot exemplar calibration on AI–human grader agreement across both simulation and LLM-interaction assignments.

We welcome partnerships with institutions interested in contributing to the evidence base for AI-powered experiential learning.

Validation Studies

Comparing AI rubric-aligned assessment with human grader evaluation across multiple disciplines and assignment types.

Research Partnerships

Collaborating with institutions to build the evidence base for AI-powered experiential learning.

Partner With Us

We're seeking institutional partners for research collaborations. If you're interested in bringing AI-powered simulations to your learners, we'd love to hear from you.