ZHOU Lijun, LIU Yu, BAI Lu, LIU Fei, WANG Yawei. Using TensorRT for deep learning and inference applications[J]. Journal of Applied Optics, 2020, 41(2): 337-341. DOI: 10.5768/JAO202041.0202007

Using TensorRT for deep learning and inference applications

TensorRT is a high-performance deep learning inference platform. It includes a deep learning inference optimizer and a runtime that deliver low latency and high throughput for inference applications. An example of using TensorRT to quickly build a computational pipeline for a typical intelligent video analysis application was presented. The example processed four concurrent video streams, using the on-chip decoder for decoding, the on-chip scaler for video scaling, and the GPU for computing. For simplicity of presentation, only one channel used NVIDIA TensorRT to perform object identification and generate bounding boxes around the identified objects. The example also used the video converter functions for various format conversions, and EGLImage to demonstrate buffer sharing and image display. Finally, a V100 GPU card was used to test the TensorRT acceleration of the ResNet network. The results show that TensorRT can improve inference throughput by about 15 times.
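As a rough illustration of the workflow the abstract describes, the sketch below (not taken from the paper) builds a TensorRT engine from an ONNX model and runs one inference with the TensorRT 8.x Python API. The model file name "resnet50.onnx", the 1×3×224×224 input shape, and the 1000-class output size are hypothetical placeholders, not values given by the authors.

```python
# Minimal TensorRT inference sketch (assumes TensorRT 8.x and pycuda).
# "resnet50.onnx", the 1x3x224x224 input shape, and the 1000-class output
# are illustrative placeholders, not values taken from the paper.
import numpy as np
import pycuda.autoinit          # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path: str) -> trt.ICudaEngine:
    """Parse an ONNX model and build an FP16 TensorRT engine."""
    builder = trt.Builder(TRT_LOGGER)
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("ONNX parsing failed")
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)   # use FP16 kernels where supported
    return builder.build_engine(network, config)

def infer(engine: trt.ICudaEngine, image: np.ndarray) -> np.ndarray:
    """Run one synchronous inference on a preprocessed NCHW float32 image."""
    context = engine.create_execution_context()
    output = np.empty(1000, dtype=np.float32)        # placeholder class count
    d_input = cuda.mem_alloc(image.nbytes)
    d_output = cuda.mem_alloc(output.nbytes)
    cuda.memcpy_htod(d_input, np.ascontiguousarray(image))
    context.execute_v2([int(d_input), int(d_output)])
    cuda.memcpy_dtoh(output, d_output)
    return output

if __name__ == "__main__":
    engine = build_engine("resnet50.onnx")
    frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in frame
    scores = infer(engine, frame)
    print("top-1 class:", int(scores.argmax()))
```

In the pipeline described in the abstract, the input frame would come from the on-chip decoder and scaler rather than random data, but the engine build and execution steps are the same.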