The Platform
Built for India's Reality
A 3D virtual campus accessible from a browser, a WebXR device, or a VR headset. Under 10MB. Runs on low-end hardware. Designed for bandwidth constraints.
Designing for 131st Place
India has 624 million active internet users. It ranks 131st out of 139 countries in mobile internet speed.
Power outages average 5.2 hours per month in some regions. The VR headset market remains early-stage globally. Most affordable laptops have limited CPU and GPU power.
We built for India’s constraints from the first line of code. The question was never “What is the most impressive experience we can build?” It was “What is the most powerful experience we can deliver on the hardware and bandwidth that actually exist in Indian students’ homes?”
PME: Accessibility as Architecture
The Progressive Metaverse Experience is our core technical framework. It delivers the maximum possible experience at every bandwidth and hardware tier.
Differential Video
Traditional video conferencing sends the same stream to all participants regardless of connection speed. This causes lag, buffering, and degraded experience for anyone without strong bandwidth. PME uses differential video delivery. The instructor and active speaker’s video is prioritized and relayed first. Peer video is delivered only when available bandwidth allows. Every student sees and hears the person speaking, regardless of connection quality.
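The prioritization described above can be sketched as a pure selection function: always forward the instructor and the active speaker, then add peer streams only while the estimated downlink budget allows. This is an illustrative sketch, not Invact’s actual relay logic; the interface names and bitrate figures are assumptions.

```typescript
// Hypothetical sketch of differential video delivery. Instructor and
// active-speaker streams are always forwarded; peer streams are added
// only while bandwidth remains in the budget.

interface Participant {
  id: string;
  role: "instructor" | "student";
  speaking: boolean;
  bitrateKbps: number; // cost of forwarding this participant's video
}

function selectVideoStreams(
  participants: Participant[],
  downlinkKbps: number
): string[] {
  // Rank: instructor first, then active speakers, then remaining peers.
  const score = (p: Participant) =>
    (p.role === "instructor" ? 2 : 0) + (p.speaking ? 1 : 0);
  const ranked = [...participants].sort((a, b) => score(b) - score(a));

  const selected: string[] = [];
  let budget = downlinkKbps;
  for (const p of ranked) {
    const mustSend = p.role === "instructor" || p.speaking;
    if (mustSend || p.bitrateKbps <= budget) {
      selected.push(p.id);
      budget -= p.bitrateKbps;
    }
  }
  return selected;
}
```

In practice a relay server would re-run this selection whenever a bandwidth estimate or the active speaker changes.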
Non-Photorealistic Design
A deliberate engineering choice, not a budget constraint. Photorealistic environments require more triangles per scene, more draw calls, more GPU demand. A metaverse that only runs on gaming PCs excludes the students who need it most. Invact’s visual design is intentionally simple and stylized. Fewer polygons. Fewer draw calls. Lower GPU requirements.
Custom Physics Engine
Off-the-shelf physics engines (Unity, Unreal) are built for realistic collision, gravity, and particle effects. A virtual campus needs none of that. We built a custom physics engine from scratch. It handles avatar movement, spatial boundaries, and simple interactions. Nothing more. Every saved CPU cycle goes toward smooth rendering and network performance.
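The scope of that engine can be illustrated with a minimal sketch: advance an avatar by a velocity step and clamp it to the walkable bounds. No gravity, no collision response, no particles. The names here are illustrative, not the engine’s real API.

```typescript
// Minimal sketch of campus physics: one movement tick with boundary clamping.

interface Vec2 {
  x: number;
  y: number;
}

interface Bounds {
  min: Vec2;
  max: Vec2;
}

const clamp = (v: number, lo: number, hi: number) =>
  Math.min(Math.max(v, lo), hi);

// Advance an avatar one tick and keep it inside the room boundary.
function step(pos: Vec2, velocity: Vec2, dt: number, room: Bounds): Vec2 {
  return {
    x: clamp(pos.x + velocity.x * dt, room.min.x, room.max.x),
    y: clamp(pos.y + velocity.y * dt, room.min.y, room.max.y),
  };
}
```

A handful of operations like this, run per frame, is the entire simulation budget; everything else the CPU saves goes to rendering and networking.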
Lightweight Build (< 10MB)
Traditional metaverse experiences require downloads of hundreds of megabytes. For students with limited storage and slow connections, this is a hard barrier. The Invact Metaversity environment is under 10MB. Browser caching prevents re-downloads on return visits. A student on a ₹30,000 laptop with a patchy connection can load the full campus without hitting storage or bandwidth walls.
Metaverse Lite
For students on the lowest-end devices or poorest connections, even the optimized full experience may not perform well. Metaverse Lite is an automatic fallback that strips the 3D layer entirely. It delivers the core experience through audio, video, and text chat. Students in Lite mode participate in the same sessions, hear the same content, and interact with the same peers.
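The initial full-vs-Lite decision might look like the sketch below: start in Lite mode when the device cannot create a WebGL context or when early device and network signals look too weak. The thresholds and signal names are assumptions for illustration; Invact’s actual heuristics are not described in this document.

```typescript
// Hedged sketch of an initial mode decision for Metaverse Lite fallback.

type Mode = "full3d" | "lite";

interface DeviceSignals {
  webglSupported: boolean;
  deviceMemoryGb: number | null; // e.g. from navigator.deviceMemory, when exposed
  effectiveType: string | null;  // e.g. from the Network Information API: "2g", "4g"
}

function chooseInitialMode(s: DeviceSignals): Mode {
  if (!s.webglSupported) return "lite";
  if (s.deviceMemoryGb !== null && s.deviceMemoryGb < 2) return "lite";
  if (s.effectiveType === "slow-2g" || s.effectiveType === "2g") return "lite";
  return "full3d";
}
```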
Real-Time Performance Monitoring
The platform monitors frames per second on the client device. If FPS drops below a defined threshold, the system recommends switching to Lite mode. This is a real-time adaptive system. It responds to the actual performance of each student’s hardware at that moment.
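A watchdog of this kind can be sketched as below: feed it frame timestamps (for example, from `requestAnimationFrame`) and it reports when sustained FPS falls below a threshold, at which point the UI can suggest Lite mode. The window size and threshold here are assumptions, not Invact’s published values.

```typescript
// Illustrative FPS watchdog backing the Lite-mode recommendation.

class FpsMonitor {
  private timestamps: number[] = [];

  constructor(
    private readonly windowMs = 2000, // measure over the last 2 seconds
    private readonly minFps = 20      // below this, recommend Lite mode
  ) {}

  // Call once per rendered frame with a millisecond timestamp.
  onFrame(nowMs: number): void {
    this.timestamps.push(nowMs);
    const cutoff = nowMs - this.windowMs;
    while (this.timestamps.length && this.timestamps[0] < cutoff) {
      this.timestamps.shift();
    }
  }

  currentFps(): number {
    if (this.timestamps.length < 2) return 0;
    const span =
      this.timestamps[this.timestamps.length - 1] - this.timestamps[0];
    return span > 0 ? ((this.timestamps.length - 1) * 1000) / span : 0;
  }

  shouldRecommendLite(): boolean {
    return this.timestamps.length >= 2 && this.currentFps() < this.minFps;
  }
}
```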
Why WebGL Over Unity or Unreal
Building a metaverse requires choosing a rendering platform.
Unity — Widespread adoption, large asset library, flexible. Requires an application download or plugin in many configurations. A majority of VR/AR applications are built with Unity.
Unreal Engine — Superior graphics fidelity, advanced physics. 7.5 million users worldwide. Demands high-end hardware for its core value: photorealistic rendering.
WebGL — Browser-native 3D rendering. An open, royalty-free standard. No plugins, no downloads. Used on 2.3 million+ websites. Lower graphical ceiling, but zero friction to access.
Our choice: WebGL. For a platform targeting Indian students on mixed hardware with variable internet, the access model is everything. WebGL runs in a browser tab. No app store. No download. No installation. A student types a URL and enters the campus.
This decision trades graphical ceiling for universal access. Community and presence matter more than visual fidelity. A campus every student can access beats a stunning environment half of them can’t load.
One Campus. Three Entry Points.
| Tier | Access Method | Experience | Hardware Required |
|---|---|---|---|
| Browser | Any modern web browser | Full 3D campus, spatial audio, avatars, all interactions | Any laptop or desktop with a browser |
| WebXR | WebXR-compatible browser | Enhanced immersion, spatial tracking | WebXR-supported device and browser |
| VR Headset | Oculus Quest or compatible | Full spatial presence, hand tracking, room-scale | VR headset (Meta Quest etc.) |
The experience scales up with better hardware, but the core campus is fully accessible from a browser tab. A student in a tier-3 city with a ₹25,000 laptop and a mobile hotspot gets the same classrooms, breakout rooms, spatial audio, and community as a student with a Quest headset.
Sound That Creates Space
Spatial audio transforms a 3D environment from a visual novelty into a social space.
Sound is directionally located and dynamically responsive to the user’s position and movement. Walk toward a group conversation and the voices get louder. Walk away and they fade. Turn your head and the sound shifts.
This mimics encountering conversations on a physical campus. The hallway chatter, the study group you pass, the debate in the next room. It creates the conditions for overhearing something interesting and choosing to walk toward it.
Each audio source has a spatial position in the 3D environment. The client calculates relative distance and direction in real time, adjusting volume, panning, and attenuation. Audio behaves like sound in a real space, not a flat conference call.
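The per-source math can be sketched as a pure function: derive a distance-based gain (inverse attenuation) and a left/right pan from the listener’s position and heading. In a browser client this work is typically handed to the Web Audio API’s `PannerNode`; the rolloff constants below are illustrative, not Invact’s tuning.

```typescript
// Minimal sketch of spatializing one audio source for one listener.

interface Vec2 {
  x: number;
  y: number;
}

function spatialize(
  listener: Vec2,
  facingRad: number, // listener's heading, in radians
  source: Vec2,
  refDistance = 1,   // within this distance, full volume
  maxDistance = 30   // beyond this distance, silence
): { gain: number; pan: number } {
  const dx = source.x - listener.x;
  const dy = source.y - listener.y;
  const dist = Math.hypot(dx, dy);

  // Inverse-distance attenuation, clamped to silence beyond maxDistance.
  const gain =
    dist >= maxDistance ? 0 : refDistance / Math.max(dist, refDistance);

  // Pan in [-1, 1] across the stereo field: project the direction to the
  // source onto the axis perpendicular to the listener's heading. The sign
  // convention depends on the coordinate system's handedness.
  const angleToSource = Math.atan2(dy, dx);
  const pan = Math.sin(angleToSource - facingRad);

  return { gain, pan };
}
```

Run per frame for every audible source, this is what makes walking toward a conversation raise its volume and turning your head shift it across the stereo field.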
Proof It Works on Real Hardware
93/127
Rated experience 4 or 5 out of 5
77%
Preferred Metaversity over video conferencing
38,477
Total sessions (March 2022 – March 2023)
| Metric | Value |
|---|---|
| Unique users | 268 |
| Total sessions | 38,477 |
| Messages sent | 1,512 |
| Most-used action | Mic On/Off |
| Total time spent | 521 hours |
The most-used action being microphone toggle is the key data point. Students are actively speaking, not passively watching. The platform creates conditions for dialogue.