Quick Start
Get started with DeepStream by running your first video analytics application.
Prerequisites
- DeepStream SDK installed (see Installation)
- NVIDIA GPU with appropriate drivers
- Sample video files or RTSP stream
Running Sample Applications
DeepStream includes several pre-built sample applications to help you get started.
Sample 1: Basic Object Detection
Run the default sample application with 4 video streams:
cd /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app
deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
This will:
- Decode 4 video streams simultaneously
- Perform object detection using ResNet
- Track detected objects
- Display results in a tiled view
Controls:
- q or Ctrl+C: Quit application
- p: Pause/Resume
- Mouse: Navigate tiled view
Sample 2: Single Stream with Custom Input
Use your own video file:
# Navigate to samples directory
cd /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app
# Copy and modify config
cp source1_usb_dec_infer_resnet_int8.txt custom_config.txt
# Edit the config file to point to your video
nano custom_config.txt
Modify the [source0] section:
[source0]
enable=1
type=3  # 1=Camera (V4L2), 2=URI, 3=MultiURI, 4=RTSP
uri=file:///path/to/your/video.mp4
num-sources=1
gpu-id=0
Run with custom config:
deepstream-app -c custom_config.txt
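Instead of editing the config by hand, the same change can be scripted. Below is a minimal sketch using Python's configparser; note that configparser drops the `#` comments when it rewrites the file, and `set_source_uri` is a hypothetical helper name, not part of DeepStream:

```python
import configparser

def set_source_uri(config_path, uri, out_path=None):
    """Point the [source0] group of a deepstream-app config at a new URI.

    Caveat: configparser does not preserve comments from the original file.
    """
    config = configparser.ConfigParser()
    config.read(config_path)
    config['source0']['type'] = '3'          # 3 = MultiURI
    config['source0']['uri'] = uri
    config['source0']['num-sources'] = '1'
    with open(out_path or config_path, 'w') as f:
        # deepstream-app expects key=value without surrounding spaces
        config.write(f, space_around_delimiters=False)
```

This is handy when generating many per-camera configs from a single template.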
Sample 3: RTSP Stream
Process an RTSP camera stream. Save the following configuration as rtsp_config.txt:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
[source0]
enable=1
type=4  # RTSP
uri=rtsp://username:password@camera_ip:port/stream
num-sources=1
gpu-id=0
cudadec-memtype=0
[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
live-source=1
[sink0]
enable=1
type=2 # EGL sink (display)
sync=0
source-id=0
[primary-gie]
enable=1
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
batch-size=1
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
interval=0
gie-unique-id=1
config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt
Run with RTSP config:
deepstream-app -c rtsp_config.txt
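RTSP URIs embed the camera credentials, and characters like `@` or `/` in a password will break the URI unless they are percent-encoded. A small illustrative helper (`make_rtsp_uri` is not part of DeepStream):

```python
from urllib.parse import quote

def make_rtsp_uri(username, password, host, port=554, path="stream"):
    """Build an RTSP URI, percent-encoding the credentials so that
    special characters in the password don't corrupt the URI."""
    user = quote(username, safe="")
    pwd = quote(password, safe="")
    return f"rtsp://{user}:{pwd}@{host}:{port}/{path}"
```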
Your First Python Application
Create a simple DeepStream Python application to detect objects in a video.
Step 1: Create Python Script
#!/usr/bin/env python3
import sys
sys.path.append('/opt/nvidia/deepstream/deepstream/lib')

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst
import pyds


def bus_call(bus, message, loop):
    """Callback for GStreamer bus messages."""
    t = message.type
    if t == Gst.MessageType.EOS:
        print("End-of-stream")
        loop.quit()
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        print(f"Error: {err}: {debug}")
        loop.quit()
    return True


def osd_sink_pad_buffer_probe(pad, info, u_data):
    """Probe function to access frame and object metadata."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # Retrieve batch metadata from the GstBuffer
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        frame_number = frame_meta.frame_num
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            # Print label, confidence, and bounding box for each object
            print(f"Frame {frame_number}: {obj_meta.obj_label} "
                  f"[{obj_meta.confidence:.2f}] "
                  f"bbox: ({obj_meta.rect_params.left:.0f}, "
                  f"{obj_meta.rect_params.top:.0f}, "
                  f"{obj_meta.rect_params.width:.0f}, "
                  f"{obj_meta.rect_params.height:.0f})")
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK


def main(args):
    # Check input arguments
    if len(args) < 2:
        print("Usage: python3 first_deepstream_app.py <video_file>")
        return 1

    # Initialize GStreamer
    Gst.init(None)

    # Create Pipeline
    print("Creating Pipeline")
    pipeline = Gst.Pipeline()
    if not pipeline:
        print("Unable to create Pipeline")
        return 1

    # Create elements
    print("Creating Source")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    print("Creating H264Parser")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    print("Creating Decoder")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    print("Creating Streammux")
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    print("Creating Pgie")
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    print("Creating nvvidconv")
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    print("Creating nvosd")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    print("Creating EGLSink")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")

    if not all([source, h264parser, decoder, streammux, pgie,
                nvvidconv, nvosd, sink]):
        print("Unable to create elements")
        return 1

    # Set element properties
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)
    pgie.set_property('config-file-path',
                      '/opt/nvidia/deepstream/deepstream/samples/configs/'
                      'deepstream-app/config_infer_primary.txt')

    # Add elements to the pipeline
    print("Adding elements to Pipeline")
    for element in (source, h264parser, decoder, streammux,
                    pgie, nvvidconv, nvosd, sink):
        pipeline.add(element)

    # Link elements; the decoder feeds a request pad on the stream muxer
    print("Linking elements in the Pipeline")
    source.link(h264parser)
    h264parser.link(decoder)
    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        print("Unable to get the sink pad of streammux")
        return 1
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        print("Unable to get source pad of decoder")
        return 1
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    nvosd.link(sink)

    # Create event loop and watch for bus messages
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Add a probe on the OSD sink pad to read inference metadata
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        print("Unable to get sink pad of nvosd")
    else:
        osdsinkpad.add_probe(Gst.PadProbeType.BUFFER,
                             osd_sink_pad_buffer_probe, 0)

    # Start playback and listen for events
    print("Starting pipeline")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except KeyboardInterrupt:
        pass

    # Cleanup
    pipeline.set_state(Gst.State.NULL)
    print("Pipeline stopped")
    return 0


if __name__ == '__main__':
    sys.exit(main(sys.argv))
Step 2: Run the Application
# Make script executable
chmod +x first_deepstream_app.py
# Run with a sample video
python3 first_deepstream_app.py /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Understanding the Pipeline
The basic DeepStream pipeline consists of:
[Input] → [Decode] → [Mux] → [Inference] → [OSD] → [Display/Save]
Key Components:
- filesrc: Reads video file from disk
- h264parse: Parses H.264 video stream
- nvv4l2decoder: Hardware-accelerated video decoding
- nvstreammux: Batches frames from multiple sources
- nvinfer: Runs AI inference using TensorRT
- nvvideoconvert: Color space conversion
- nvdsosd: On-screen display for bounding boxes and text
- nveglglessink: Display output on screen
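For quick experiments outside Python, the same chain can be written as a gst-launch-1.0 pipeline description. Below is a sketch that assembles one; the paths are placeholders and most element properties are abbreviated:

```python
def gst_launch_string(video_path, config_path):
    """Assemble a gst-launch-1.0 style description of the basic pipeline.

    Useful for checking element availability from the shell:
        gst-launch-1.0 <this string>
    """
    stages = [
        f"filesrc location={video_path}",
        "h264parse",
        "nvv4l2decoder",
        # nvstreammux takes input on a request pad (sink_0) and needs
        # batch and resolution settings
        "mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080",
        f"nvinfer config-file-path={config_path}",
        "nvvideoconvert",
        "nvdsosd",
        "nveglglessink",
    ]
    return " ! ".join(stages)

print(gst_launch_string("/tmp/sample.h264", "/tmp/config_infer_primary.txt"))
```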
Performance Monitoring
Enable performance metrics to monitor your application:
# In config file, add:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
This prints the current and average FPS for each stream at every interval, for example:
**PERF:  FPS 0 (Avg)
**PERF:  30.01 (29.87)
For GPU utilization and memory usage, run nvidia-smi (or tegrastats on Jetson) alongside the application.
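To track these numbers over time, the perf lines can be scraped from the application's output. A sketch that assumes log lines of the form `**PERF:  30.01 (29.87)` (current FPS, then running average in parentheses; the exact format varies between DeepStream versions):

```python
import re

# Matches perf lines like: **PERF:  30.01 (29.87)
PERF_RE = re.compile(r"\*\*PERF:\s+([\d.]+)\s+\(([\d.]+)\)")

def parse_perf_line(line):
    """Return (current_fps, avg_fps) from a perf log line, or None
    when the line is not a perf measurement."""
    m = PERF_RE.search(line)
    if not m:
        return None
    return float(m.group(1)), float(m.group(2))
```

Pipe the application's stdout through this to build an FPS time series.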
Using Pre-trained Models
DeepStream includes several pre-trained models:
# List available models
ls /opt/nvidia/deepstream/deepstream/samples/models/
# Common models:
# - Primary_Detector: Object detection
# - Secondary_CarColor: Car color classification
# - Secondary_CarMake: Car make classification
# - Secondary_VehicleTypes: Vehicle type classification
Sample Data
DeepStream provides sample videos for testing:
cd /opt/nvidia/deepstream/deepstream/samples/streams/
ls
# Available samples:
# - sample_720p.h264
# - sample_1080p_h264.mp4
# - sample_qHD.mp4
Next Steps
- Basic Usage: Deep dive into pipeline architecture
- Python Bindings: Learn Python API in detail
- Model Deployment: Use custom AI models
- Best Practices: Optimize performance
Troubleshooting
Issue: Black screen or no display
# Check if X server is running
echo $DISPLAY
# If empty, set display
export DISPLAY=:0
Issue: Cannot find GStreamer plugins
# Set plugin path
export GST_PLUGIN_PATH=/opt/nvidia/deepstream/deepstream/lib/gst-plugins:$GST_PLUGIN_PATH
# Verify plugins are loaded
gst-inspect-1.0 nvinfer
Issue: Python import errors
# Ensure Python path is set
export PYTHONPATH=/opt/nvidia/deepstream/deepstream/lib:$PYTHONPATH
# Verify pyds installation
python3 -c "import pyds; print(pyds.__version__)"
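When launching DeepStream Python apps from another script, the same environment setup can be done programmatically instead of via shell exports. A small illustrative helper (`prepend_path` is not part of DeepStream):

```python
import os

def prepend_path(var, path):
    """Prepend `path` to a colon-separated environment variable
    (e.g. PYTHONPATH or GST_PLUGIN_PATH) unless it is already present."""
    current = os.environ.get(var, "")
    parts = current.split(":") if current else []
    if path in parts:
        return current
    os.environ[var] = ":".join([path] + parts) if parts else path
    return os.environ[var]

# Example: set up paths before spawning a DeepStream child process
prepend_path("PYTHONPATH", "/opt/nvidia/deepstream/deepstream/lib")
prepend_path("GST_PLUGIN_PATH",
             "/opt/nvidia/deepstream/deepstream/lib/gst-plugins")
```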