Bottom-Up: Stereo Vision Algorithm

The stereo vision algorithm uses images from two cameras that are displaced horizontally from each other. This provides two different views of the scene from different vantage points, similar to human vision. Relative depth information can be obtained by comparing the two images to build a disparity map. The disparity map encodes, for each pixel, the horizontal offset between the positions of the corresponding object in the two images; these values are inversely proportional to the scene depth at the corresponding pixel location.
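To make the disparity computation concrete, the following is a minimal block-matching sketch for a single image row. It is an illustration only, not the actual stereo_remap_bm implementation: for each pixel in the left image it slides a window along the same row of the right image and picks the horizontal shift (disparity) with the smallest sum of absolute differences (SAD). The function name, window size, and search range are assumptions for this example.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Hypothetical single-row block matcher (not the actual stereo_remap_bm
// function). For each pixel x in the left row, try disparities d = 0..max_disp
// and keep the d whose window SAD against the right row is smallest.
std::vector<int> disparity_row(const std::vector<uint8_t>& left,
                               const std::vector<uint8_t>& right,
                               int win, int max_disp) {
    int n = static_cast<int>(left.size());
    std::vector<int> disp(n, 0);
    for (int x = win; x < n - win; ++x) {
        int best_d = 0;
        long best_sad = -1;
        // A scene point at left-image column x appears at column x - d in
        // the right image (d = disparity, larger d means a closer object).
        for (int d = 0; d <= max_disp && x - d >= win; ++d) {
            long sad = 0;
            for (int k = -win; k <= win; ++k)
                sad += std::abs(int(left[x + k]) - int(right[x - d + k]));
            if (best_sad < 0 || sad < best_sad) { best_sad = sad; best_d = d; }
        }
        disp[x] = best_d;
    }
    return disp;
}
```

A production implementation adds refinements such as left-right consistency checks and sub-pixel interpolation, but the inner SAD search above is the core of block matching.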

The bottom-up methodology starts with a fully optimized hardware design, already synthesized using Vivado HLS, and then integrates the pre-optimized hardware function with software in the SDSoC environment. This flow allows hardware designers already familiar with HLS to build and optimize the entire hardware function using advanced HLS features first, and software programmers to then leverage this existing work.

The following section uses the stereo vision design example to take you through the steps of starting with an optimized hardware function in Vivado HLS and building an application that integrates the full system, with hardware and software running on the board, using the SDSoC environment. The following figure shows the final system to be realized and highlights the existing hardware function stereo_remap_bm to be incorporated into the SDSoC environment.



In the bottom-up flow, the general optimization methodology for the SDSoC environment, as detailed in this guide, is reversed: you start with an already optimized hardware function, then incorporate it into the SDSoC environment and optimize the data transfers between it and the software.
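As a sketch of what that data-transfer optimization looks like, the interface of the imported hardware function can be annotated with SDS pragmas so the SDSoC compiler selects efficient data movers. The pragmas below (`SDS data access_pattern` and `SDS data copy`) are real SDSoC directives, but the argument names, element type, and frame dimensions are assumptions for illustration, not the actual stereo_remap_bm signature.

```cpp
#include <cstdint>

// Assumed frame dimensions for this sketch only.
#define ROWS 1080
#define COLS 1920

// Declaring SEQUENTIAL access lets the SDSoC compiler infer a streaming,
// DMA-friendly data mover instead of random-access shared memory; the copy
// pragma states exactly how many elements to transfer per call. A standard
// compiler ignores these pragmas, so the code still builds for software-only
// emulation.
#pragma SDS data access_pattern(left_frame:SEQUENTIAL, right_frame:SEQUENTIAL, disp_frame:SEQUENTIAL)
#pragma SDS data copy(left_frame[0:ROWS*COLS], right_frame[0:ROWS*COLS], disp_frame[0:ROWS*COLS])
void stereo_remap_bm(const uint16_t left_frame[ROWS * COLS],
                     const uint16_t right_frame[ROWS * COLS],
                     uint16_t disp_frame[ROWS * COLS]);
```

This is an interface fragment only; the function body is the pre-optimized HLS design imported from Vivado HLS.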