Harpoon 1.1 Integration Documentation

Complete technical guide for integrating AI drone detection into existing defense systems

v1.1.0 Last updated: June 2025

Overview

Harpoon 1.1 is a production-ready AI model for real-time drone detection that integrates seamlessly into existing surveillance infrastructure without requiring system replacement.

🎯 96.6% Accuracy

State-of-the-art detection performance with 93.8% recall rate

⚡ 83ms Inference

Real-time processing at 12.2 FPS with multi-drone tracking

🔧 Drop-in Integration

Add to existing camera systems with minimal code changes

🚀 Multiple Formats

ONNX, PyTorch, and TensorRT support across platforms

Simple Integration Example

# Add AI drone detection to your existing system
from harpoon import DroneDetector

# Initialize detector with your preferred model format
detector = DroneDetector("harpoon_v1.1.onnx")

# Process camera frames (your existing video pipeline)
while True:
    frame = camera.get_frame()  # Your existing camera code
    
    # Add drone detection with one line
    detections = detector.detect(frame, confidence=0.5)
    
    # Integrate results into your system
    for detection in detections:
        alert_system.send_threat_alert(detection)
        tracking_system.update_targets(detection)

System Requirements

Minimum Requirements

  • CPU: Intel Core i5 / AMD Ryzen 5 (4+ cores)
  • RAM: 8GB DDR4
  • Storage: 1GB available space
  • OS: Linux, Windows 10+, macOS 10.15+
  • Python: 3.8+ (for Python integration)

Recommended (GPU Accelerated)

  • GPU: NVIDIA RTX 3060+ / Tesla T4+
  • VRAM: 4GB+ dedicated
  • CUDA: 11.8+ / TensorRT 8.5+
  • RAM: 16GB+ DDR4
  • Network: Gigabit Ethernet (for streaming)

Performance by Hardware

Hardware               Inference Time   Throughput   Use Case
CPU Only (i7-10700K)   ~80ms            12 FPS       Edge deployment, low-power
RTX 3060               ~25ms            40 FPS       Single-camera real-time
RTX 3080               ~18ms            55 FPS       Multi-camera deployment
Tesla V100             ~12ms            80+ FPS      Enterprise server deployment

Installation

🐍 Python Package (Recommended)

# Install via pip
pip install harpoon-ai

# Or install with GPU support
pip install harpoon-ai[gpu]

# Verify installation
python -c "import harpoon; print(harpoon.__version__)"

📦 Direct Model Download

# Download models directly
wget https://releases.chiliadresearch.com/harpoon/v1.1/models.zip
unzip models.zip

# Available formats:
# - harpoon_v1.1.onnx (43MB) - Cross-platform
# - harpoon_v1.1.pt (21MB) - PyTorch native
# - harpoon_v1.1.trt (optimized) - NVIDIA TensorRT

🔧 C++ SDK

# Download C++ SDK
wget https://releases.chiliadresearch.com/harpoon/v1.1/cpp-sdk.tar.gz
tar -xzf cpp-sdk.tar.gz

# Build and install
cd harpoon-cpp-sdk
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j$(nproc)
sudo make install

Installation Verification

# Test your installation
from harpoon import DroneDetector
import numpy as np

# Create test detector
detector = DroneDetector("harpoon_v1.1.onnx")

# Test with dummy image
test_image = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)
detections = detector.detect(test_image)

print(f"✅ Installation successful!")
print(f"Model loaded: {detector.model_info}")
print(f"Inference device: {detector.device}")

🐳 Docker Deployment

Production-ready Docker containers for instant deployment across any infrastructure.

🎯 Complete System (Webcam + GUI)

Full real-time drone detection with green bounding boxes and webcam support.

# Pull and run complete system
docker run -it --rm \
  --device=/dev/video0 \
  --privileged \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
  -v /dev/shm:/dev/shm \
  christiankhoury05/harpoon-1.1-final:latest

# Performance: 12.2 FPS, 83ms inference, multi-drone tracking

Performance at a glance:

  • FPS: 12.2
  • Inference: 83ms
  • Multi-drone: up to 4 simultaneous
  • Accuracy: 96.6%

🧠 Model-Only API

Lightweight API server for integration into existing systems.

# Pull and run API server
docker run -d -p 8080:8080 \
  christiankhoury05/harpoon-model-api:1.1

# Test the API
curl -X POST -F "file=@drone_image.jpg" \
  http://localhost:8080/detect

# Health check
curl http://localhost:8080/health

API endpoints:

  • POST /detect - Upload image for detection
  • GET /health - API health status
  • GET /models - Available model information
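From Python, the detection endpoint can be called with a small helper like the one below. This is a minimal sketch using the `requests` library; the JSON response schema (a `"detections"` list) is an assumption, so check your API server's actual output format before relying on it.

```python
import requests

def detect_via_api(image_path, base_url="http://localhost:8080"):
    """Upload an image to the Harpoon API and return parsed detections.

    Note: the "detections" key in the response is assumed here, not
    confirmed by the API docs -- inspect resp.json() for the real schema.
    """
    with open(image_path, "rb") as f:
        resp = requests.post(f"{base_url}/detect", files={"file": f})
    resp.raise_for_status()
    return resp.json().get("detections", [])

# Usage (requires a running API container):
# detections = detect_via_api("drone_image.jpg")
```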

⚙️ Custom Integration

Use Docker containers as base for your custom implementations.

# Dockerfile for custom integration
FROM christiankhoury05/harpoon-model-api:1.1

# Add your custom code
COPY your_integration.py /app/
COPY your_config.yaml /app/config/

# Install additional dependencies
RUN pip install your-requirements

# Custom entrypoint
CMD ["python", "your_integration.py"]

Production Deployment Guide

1. System Requirements
  • Docker Engine 20.10+
  • GPU support: NVIDIA Docker runtime (optional)
  • Webcam access: --device=/dev/video0
  • GUI display: X11 forwarding setup
2. Platform Compatibility
  • ✅ Linux: Native support with X11
  • ⚠️ macOS: Requires XQuartz for GUI
  • ⚠️ Windows: Requires X11 server (VcXsrv)
  • ✅ Cloud: API-only mode works everywhere
3. Security Considerations
  • Use --privileged flag only for webcam access
  • API containers don't require privileged mode
  • Consider network policies for production
  • Monitor resource usage and set limits

Quick Start Guide

Get drone detection running in your system in under 10 minutes.

Step 1: Initialize Detector

from harpoon import DroneDetector

# Choose your model format and device
detector = DroneDetector(
    model_path="harpoon_v1.1.onnx",
    device="cuda",  # or "cpu"
    confidence_threshold=0.5
)
Step 2: Process Single Image

import cv2

# Load image from your camera system
image = cv2.imread("camera_frame.jpg")

# Detect drones
detections = detector.detect(image)

# Process results
for detection in detections:
    print(f"Drone detected at {detection.bbox}")
    print(f"Confidence: {detection.confidence:.2f}")
    print(f"Class: {detection.class_name}")
Step 3: Real-time Video Processing

# Your existing camera code
cap = cv2.VideoCapture(0)  # or your IP camera URL

while True:
    ret, frame = cap.read()
    if not ret:
        break
    
    # Add drone detection
    detections = detector.detect(frame)
    
    # Draw bounding boxes
    for det in detections:
        x1, y1, x2, y2 = det.bbox
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f"Drone {det.confidence:.2f}", 
                   (x1, y1-10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
    
    cv2.imshow('Drone Detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()

Integration Tips

🔧 Existing Systems

Replace your current object detection call with detector.detect()

⚡ Performance

Batch multiple frames for higher throughput on GPU
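Batching can be sketched as stacking equally sized frames into one array before a single model call. The `detect_batch` method shown in the comment is hypothetical; check the SDK for the actual batch API name and signature.

```python
import numpy as np

def make_batch(frames):
    """Stack equally sized HxWx3 frames into an NxHxWx3 batch array."""
    return np.stack(frames, axis=0)

# Collect a few frames (dummy data here), then run one batched inference call
frames = [np.zeros((640, 640, 3), dtype=np.uint8) for _ in range(4)]
batch = make_batch(frames)
print(batch.shape)  # (4, 640, 640, 3)

# Hypothetical batched call -- verify the real method name in the SDK:
# detections_per_frame = detector.detect_batch(batch, confidence=0.5)
```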

🎯 Accuracy

Adjust confidence threshold based on your false positive tolerance
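The trade-off can be seen by filtering the same raw detections at different cutoffs. The detection values below are made up purely for illustration, not real model output:

```python
# Illustrative raw detections (confidence scores invented for this example)
raw = [
    {"bbox": (120, 80, 220, 160), "confidence": 0.92},
    {"bbox": (400, 300, 460, 350), "confidence": 0.55},
    {"bbox": (50, 50, 90, 90),    "confidence": 0.31},
]

def filter_detections(detections, threshold):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in detections if d["confidence"] >= threshold]

print(len(filter_detections(raw, 0.5)))  # → 2: more alerts, more false positives
print(len(filter_detections(raw, 0.9)))  # → 1: fewer alerts, may miss real drones
```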

📊 Monitoring

Use built-in performance metrics for system monitoring
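If the built-in metrics don't cover your needs, an external monitor can wrap any detect call and track rolling latency and FPS. This is a self-contained sketch; `dummy_detect` stands in for your real `detector.detect` call.

```python
import time
from collections import deque

class LatencyMonitor:
    """Track a rolling window of per-call latencies and derive average FPS."""
    def __init__(self, window=100):
        self.samples = deque(maxlen=window)

    def timed(self, fn, *args, **kwargs):
        """Call fn, record how long it took, and return its result."""
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        self.samples.append(time.perf_counter() - start)
        return result

    @property
    def avg_latency_ms(self):
        return 1000 * sum(self.samples) / len(self.samples)

    @property
    def fps(self):
        return len(self.samples) / sum(self.samples)

# Stand-in for detector.detect -- swap in the real call in your pipeline
def dummy_detect(frame):
    time.sleep(0.01)
    return []

monitor = LatencyMonitor()
for _ in range(5):
    monitor.timed(dummy_detect, None)
print(f"avg latency: {monitor.avg_latency_ms:.1f} ms, ~{monitor.fps:.1f} FPS")
```

In production, replace `dummy_detect` with `detector.detect` (e.g. `detections = monitor.timed(detector.detect, frame)`) and export the two properties to your metrics system.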