PranAIR is an intelligent, real-time medical emergency response platform that uses drones, live telemetry, interactive maps, and AI-assisted decision support to prioritize and reach patients faster during critical situations.
The system simulates a next-generation emergency workflow where patient SOS signals, GPS locations, and severity levels are processed to help operators and doctors make informed, time-critical decisions.
- One-tap SOS triggers emergency workflows
- Captures live GPS location instantly
- Initiates downstream medical & operator pipelines
- Real SOS patient + nearby patients using latitude & longitude
- Interactive maps with priority levels
- Blue navigation path between drone and patient (Zomato-style routing)
- Click-to-focus navigation for individual patients
- Patients ranked using:
  - Medical severity level
  - Distance from drone
- Determines optimal dispatch order automatically
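The ranking above can be sketched as a simple sort key — higher severity first, with nearer patients breaking ties. The field names and weighting here are illustrative assumptions, not PranAIR's actual formula:

```python
# Illustrative dispatch ranking: severity dominates, distance breaks ties.
# Field names and weighting are assumptions, not PranAIR's exact logic.
def priority_key(patient):
    return (-patient["severity"], patient["distance_km"])

patients = [
    {"id": "P1", "severity": 5, "distance_km": 0.4},
    {"id": "P2", "severity": 9, "distance_km": 2.1},
    {"id": "P3", "severity": 9, "distance_km": 0.8},
]

dispatch_order = [p["id"] for p in sorted(patients, key=priority_key)]
# Both critical patients come first; the nearer one leads.
```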
- Battery percentage
- Altitude
- Drone status (idle, en route, airborne)
- Continuous real-time streaming
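A telemetry tick like the ones streamed to the dashboard can be simulated in a few lines; the drain and jitter rates below are assumptions for illustration:

```python
import random

def step(telemetry):
    """Advance the simulated drone state by one tick (rates are illustrative)."""
    telemetry["battery"] = round(max(0.0, telemetry["battery"] - 0.05), 2)
    telemetry["altitude"] = round(telemetry["altitude"] + random.uniform(-2, 2), 1)
    telemetry["status"] = "AIRBORNE" if telemetry["altitude"] > 0 else "IDLE"
    return telemetry

state = step({"battery": 98.5, "altitude": 120.0, "status": "AIRBORNE"})
```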
- Converts speech → text
- Sends patient input to Google Gemini
- Responds with natural AI voice
- CPU-only pipeline (no CUDA, no Whisper)
- Focused medical insights
- Injury severity interpretation
- Clean UI without operator clutter
- Optimized for fast clinical decisions
- Sends patient live location & coordinates on SOS
- Designed to notify:
  - Nearby hospitals
  - Emergency responders
  - Control operators
PranAIR uses the Haversine Formula to calculate the real-world surface distance between the drone and patients using GPS coordinates.
This ensures:
- Accurate distance estimation
- Realistic navigation paths
- Correct handling of Earth's curvature
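In Python the formula takes only a few lines — a standalone sketch, not the project's exact implementation:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS coordinates."""
    R = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Drone base to a patient ~0.01° away on both axes (Delhi area): roughly 1.5 km
d = haversine_km(28.61, 77.20, 28.62, 77.21)
```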
- React / Next.js
- Interactive Maps (Leaflet / Mapbox-style logic)
- Framer Motion animations
- Modern glassmorphism UI
- FastAPI (Python)
- Gemini API for conversational AI
- Real-time telemetry simulation
- REST-based architecture
- CPU-only, lightweight, demo-friendly
PranAIR demonstrates how AI + drones + real-time geospatial intelligence can significantly reduce emergency response time and improve decision-making in life-critical scenarios.
Built for:
- Hackathons
- Research demos
- Smart city simulations
- Emergency response innovation
This project is a simulation and prototype intended for research, demonstration, and educational purposes only.
It is not a production-ready medical or emergency response system.
- Visual Triage: Uses Hugging Face's Qwen2-VL-7B-Instruct model to analyze emergency scenes
- Smart Reporting: Generates hospital-ready emergency reports using Gemini 1.5 Flash
- Real-time Telemetry: Simulated drone status monitoring
- Secure: API key management with environment variables
- Flight Controller: ArduCopter APM 2.8
- Onboard Computer: Raspberry Pi 4B (Edge processing & telemetry)
- Motors: 4 × Brushless DC Motors (1000 KV) with ESCs
- Frame: F450 / Q450 Quadcopter Frame
- Power: LiPo Battery with real-time monitoring
- Navigation: GPS module for live latitude & longitude tracking
- Camera: Forward-facing camera (simulated live feed)
AI-powered medical drone dispatch system analyzing emergency scenes for intelligent automated triage response.
A comprehensive FastAPI backend for autonomous medical emergency response using computer vision (BLIP), intelligent triage, quantum-inspired route optimization, and patient voice assistance.
- AI Vision Analysis: BLIP image captioning model for deterministic emergency scene analysis
- Medical Triage System: Rule-based severity scoring (1-9 scale) with keyword detection
- Patient Voice Assistant: Gemini-powered conversational AI for patient interaction
- Quantum Route Optimization: QUBO-based TSP solver for multi-location emergency dispatch
- Real-time Telemetry: Live drone status monitoring with battery and altitude simulation
- Secure Architecture: CPU-only inference, environment variable management, CORS configuration
- Python: 3.8 or higher (Python 3.10+ recommended)
- Operating System: Windows, macOS, or Linux
- RAM: Minimum 4GB (8GB recommended for BLIP model)
- Storage: ~2GB free space for models and dependencies
- Google Gemini API Key: For patient voice assistant functionality
- Sign up at: https://makersuite.google.com/app/apikey
```bash
git clone https://github.com/NightCrawler909/PranAIR-AI-Enhanced.git
cd PranAIR-AI-Enhanced
```

Windows (PowerShell):

```powershell
python -m venv .venv
.\.venv\Scripts\Activate.ps1
```

macOS/Linux:

```bash
python3 -m venv .venv
source .venv/bin/activate
```

Core dependencies:

```bash
pip install --upgrade pip
pip install -r requirements.txt
```

Optional dependencies (for full functionality) — note the quotes, which stop the shell from treating `>=` as a redirection:

```bash
# Quantum Route Optimizer (optional)
pip install "qiskit>=1.0.0" "qiskit-optimization>=0.6.0" "qiskit-algorithms>=0.3.0" "networkx>=3.0"

# Text-to-Speech (optional)
pip install edge-tts
```

Create a `.env` file in the root directory:

```
# .env file
GOOGLE_API_KEY=your_gemini_api_key_here
```

To get your Gemini API key:

- Visit https://makersuite.google.com/app/apikey
- Sign in with your Google account
- Click "Create API Key"
- Copy the key and paste it into `.env`
Check that all core packages are installed:

```bash
python -c "import fastapi, uvicorn, transformers, PIL; print('Core dependencies installed')"
```

Check BLIP model availability:

```bash
python -c "from transformers import BlipForConditionalGeneration; print('BLIP model available')"
```

Method 1: Direct Python execution

```bash
python main.py
```

Method 2: Using Uvicorn

```bash
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```

On a successful start the console shows:

```
======================================================================
  PranAIR Medical Drone Backend Starting
======================================================================
  AI Model: Salesforce/blip-image-captioning-base
  Device: CPU (CUDA disabled)
  Mode: AI
  BLIP Model: LOADED (Deterministic inference enabled)
  Pipeline: READY
  Patient Router: True
  Quantum Optimizer: True
  Server: http://0.0.0.0:8000
  Docs: http://0.0.0.0:8000/docs
======================================================================
```
- API Server: http://localhost:8000
- Interactive API Docs: http://localhost:8000/docs
- Alternative Docs: http://localhost:8000/redoc
- Health Check: http://localhost:8000/health
Analyzes emergency scene image and returns medical triage assessment.
Request:

```bash
curl -X POST "http://localhost:8000/dispatch" \
  -F "file=@emergency_scene.jpg" \
  -F "source=uploaded_image"
```

Response:

```json
{
  "analysis": {
    "injury_type": "SEVERE - Person on ground, immediate response needed",
    "severity_score": 8,
    "confidence": 0.90,
    "mode": "AI",
    "source": "uploaded_image",
    "caption": "a person lying on the ground"
  },
  "telemetry": {
    "battery": 98.5,
    "altitude": 120.0,
    "status": "AIRBORNE",
    "lat": 28.61,
    "lng": 77.20
  }
}
```

Severity Scale:
- 9: CRITICAL - Unconscious, severe bleeding, cardiac arrest
- 8: SEVERE - Person on ground, blood visible
- 7: HIGH - Lying down, fallen, potential fracture
- 6: MODERATE-HIGH - Visible injury, medical attention required
- 5: MODERATE - Minor injury, monitoring recommended
- 3-4: LOW - Person in mild distress
- 1-2: MINIMAL - No visible emergency
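A keyword-driven scorer over this scale can be sketched as follows; the keyword lists and function name are illustrative assumptions, not the project's actual rules:

```python
# Illustrative caption-keyword → severity mapping (not PranAIR's exact rules)
SEVERITY_RULES = [
    (9, ["unconscious", "severe bleeding", "cardiac arrest"]),
    (8, ["on the ground", "blood"]),
    (7, ["lying down", "fallen", "fracture"]),
    (6, ["wound", "broken"]),
    (5, ["minor injury", "bruise"]),
]

def score_caption(caption: str) -> int:
    """Return the highest matching severity; default 1 (no visible emergency)."""
    text = caption.lower()
    for severity, keywords in SEVERITY_RULES:
        if any(k in text for k in keywords):
            return severity
    return 1

score_caption("a person lying on the ground")  # -> 8
```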
Returns current drone telemetry with simulated updates.
Response:

```json
{
  "battery": 98.45,
  "altitude": 121.3,
  "status": "AIRBORNE",
  "lat": 28.61,
  "lng": 77.20,
  "speed": 15.0
}
```

Quantum-inspired route optimization for multiple emergency locations.
Request:

```json
{
  "current_location": {"lat": 28.61, "lng": 77.20, "id": "drone_base"},
  "targets": [
    {"lat": 28.62, "lng": 77.21, "id": "emergency_1"},
    {"lat": 28.63, "lng": 77.22, "id": "emergency_2"}
  ]
}
```

Response:
```json
{
  "status": "success",
  "optimization_engine": "QUBO/Ising (Classical Simulator)",
  "optimized_route": [...],
  "metrics": {
    "total_distance_km": 5.3,
    "estimated_time_min": 12.5
  }
}
```

Patient interaction with the Gemini-powered voice assistant.
Request:

```json
{
  "message": "I'm having chest pain",
  "conversation_id": "patient_001"
}
```

Simple health check endpoint.
Response:

```json
{
  "status": "healthy",
  "ai_ready": true,
  "mode": "AI"
}
```

```
+---------------------------------------------------------------+
|                       Frontend (React)                        |
|         TacticalMapGrid + CommandCenter + Dashboard           |
+-------------------------------+-------------------------------+
                                | HTTP REST API
+-------------------------------+-------------------------------+
|                  FastAPI Backend (main.py)                    |
+---------------------------------------------------------------+
|  Image Analysis               |  Patient Assistant            |
|  (BLIP CPU inference)         |  (Gemini API)                 |
+-------------------------------+-------------------------------+
|  Route Optimizer              |  Telemetry Simulation         |
|  (QUBO/Ising TSP)             |  (Battery, Altitude, GPS)     |
+---------------------------------------------------------------+
```
- FastAPI: Modern async web framework for Python
- BLIP: Salesforce's image-to-text model for scene understanding
- Transformers: Hugging Face library for AI model inference
- Qiskit: Quantum computing framework for route optimization
- Google Gemini: LLM for conversational patient assistance
- Pillow: Image processing library
- Uvicorn: ASGI server for production deployment
- Image Upload → Drone camera captures emergency scene
- BLIP Analysis → AI generates caption ("person lying on ground")
- Triage Logic → Keywords mapped to severity score (1-9)
- Response → Frontend displays severity + recommended action
- Route Optimization → Quantum solver finds fastest multi-patient path
- Patient Communication → Gemini assistant provides medical guidance
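For the two or three targets in the demo, the route-optimization step can even be brute-forced classically. A sketch of that idea — the planar distance approximation and function names are illustrative, not the project's QUBO implementation:

```python
import math
from itertools import permutations

def dist_km(a, b):
    """Approximate planar distance in km between nearby GPS points."""
    dy = (b["lat"] - a["lat"]) * 111.32
    dx = (b["lng"] - a["lng"]) * 111.32 * math.cos(math.radians(a["lat"]))
    return math.hypot(dx, dy)

def best_route(start, targets):
    """Try every visiting order and keep the shortest — a classical
    stand-in for the QUBO solver, fine for a handful of targets."""
    def total(order):
        stops = [start, *order]
        return sum(dist_km(p, q) for p, q in zip(stops, stops[1:]))
    return list(min(permutations(targets), key=total))

base = {"lat": 28.61, "lng": 77.20, "id": "drone_base"}
targets = [
    {"lat": 28.63, "lng": 77.22, "id": "emergency_2"},
    {"lat": 28.62, "lng": 77.21, "id": "emergency_1"},
]
route = [p["id"] for p in best_route(base, targets)]  # nearer target first
```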
| Variable | Required | Default | Description |
|---|---|---|---|
| `GOOGLE_API_KEY` | Yes | - | Google Gemini API key for patient assistant |
| `CUDA_VISIBLE_DEVICES` | No | `""` | CUDA device configuration (disabled by default) |
Edit the main.py startup section to customize:

```python
uvicorn.run(
    app,
    host="0.0.0.0",    # Listen on all interfaces
    port=8000,         # Port number
    log_level="info"   # Logging level
)
```

For production, restrict CORS origins in main.py:

```python
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://yourdomain.com"],  # Specific domains
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```

Symptoms:
```
BLIP Model: NOT LOADED (Using SIMULATION fallback)
```

Solutions:

- Check the transformers installation: `pip install transformers --upgrade`
- Verify the PyTorch installation: `pip install torch --index-url https://download.pytorch.org/whl/cpu`
- Check available disk space (~1GB needed for the model download)
Error:

```
ERROR: [Errno 48] Address already in use
```

Solution:

```bash
# Find and kill the process using port 8000
lsof -ti:8000 | xargs kill -9     # macOS/Linux
netstat -ano | findstr :8000      # Windows

# Or use a different port
uvicorn main:app --port 8080
```

Error:

```
ModuleNotFoundError: No module named 'fastapi'
```

Solution:

```bash
# Activate the virtual environment first
.\.venv\Scripts\Activate.ps1      # Windows
source .venv/bin/activate         # macOS/Linux

# Reinstall dependencies
pip install -r requirements.txt
```

Problem: BLIP analysis takes >10 seconds per image
Solutions:
- Reduce image size before uploading (640x480 recommended)
- Use simulation mode for development (set `AI_MODE = "SIMULATION"`)
- Upgrade RAM (8GB+ recommended for optimal performance)
Error:

```
google.api_core.exceptions.PermissionDenied: 403 API key not valid
```

Solution:

- Verify the API key in the `.env` file
- Check that the API key is enabled at https://makersuite.google.com
- Ensure there are no extra spaces in `.env`: `GOOGLE_API_KEY=your_key_here`
```
DroneModel/
├── main.py                        # Main FastAPI backend
├── patient_gemini_assistant.py    # Patient voice assistant module
├── quantum_route_optimizer.py     # Route optimization module
├── requirements.txt               # Python dependencies
├── .env                           # Environment variables (create this)
├── src/                           # React frontend source
│   ├── CommandCenter.jsx          # Main operator interface
│   ├── Dashboard.jsx              # Analytics dashboard
│   └── LandingPage.jsx            # Landing page
└── README.md                      # This file
```
```bash
# Backend with auto-reload
uvicorn main:app --reload --port 8000

# Frontend (in a separate terminal)
npm run dev
```

Use the interactive API documentation at http://localhost:8000/docs to test all endpoints with a built-in interface.