The performance of the MSDK V4 decoder has been verified to be worse than that of DJI Pilot. It has several defects:
- Latency, mosaic artifacts, and packet loss occur frequently.
- You cannot render to a SurfaceView and output YUV data simultaneously.
- You cannot receive a video stream in the background.
- You have to reassign the video source every time you power on the aircraft.
- The video stream received from the getPrimaryVideoFeed interface is DJI-customized H.264, so you also need to send it to DJICodecManager to reassemble the frames. Only then can you receive a standard H.264 video stream from the providedTranscodedVideoFeed interface.
The video stream and decoder architecture have been refactored. Its features are:
- There are 3 VideoChannelType values, which are self-adaptive in terms of bandwidth. Normally, the main camera is assigned to the PRIMARY_STREAM_CHANNEL, while the FPV and PSDK cameras use the other 2 channels.
PRIMARY_STREAM_CHANNEL: The primary stream channel has the highest priority among the stream channels. Its bandwidth is fulfilled first when bandwidth is low.
SECONDARY_STREAM_CHANNEL: The secondary stream channel has the second-highest priority.
EXTENDED_STREAM_CHANNEL: The extended stream channel has the lowest priority. It may lose packets when the stream channel bandwidth is low.
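The priority scheme above can be illustrated with a small sketch. This is not SDK code, and the SDK's real allocation algorithm is not public; the class and method names here are hypothetical. The idea is simply that each channel's demand is granted in priority order, so the extended channel is starved first when the budget runs out:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration only: models the priority-fill behavior of the three channels.
// The SDK's actual bandwidth-allocation logic is an assumption here.
class ChannelBandwidthSketch {
    /** Grants each channel its demand in priority order until the budget runs out. */
    static Map<String, Integer> allocate(int totalKbps, LinkedHashMap<String, Integer> demandsByPriority) {
        Map<String, Integer> granted = new LinkedHashMap<>();
        int remaining = totalKbps;
        for (Map.Entry<String, Integer> entry : demandsByPriority.entrySet()) {
            // A channel never receives more than what is left over
            // after all higher-priority channels were served.
            int grant = Math.min(entry.getValue(), Math.max(remaining, 0));
            granted.put(entry.getKey(), grant);
            remaining -= grant;
        }
        return granted;
    }
}
```

With an 8000 kbps budget and each channel demanding 4000 kbps, the primary and secondary channels are fully served while the extended channel receives nothing, which mirrors the packet loss described above.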
- It is easy to use. You only need to find the available stream sources and video channels, then bind them together. A flow chart diagram is drawn below.
- When you use the IVideoDecoder to decode, you need to call its constructor. VideoDecoder provides 7 constructors. If you need to live stream, call this constructor:
```java
/**
 * 1. If you customize the IVideoDecoder through its constructor, make sure the width and height are both multiples of 16.
 * 2. We suggest using a SurfaceView as the outputSurface.
 * 3. If you want to render to the outputSurface and receive YUV data simultaneously, create 2 VideoDecoders with the same VideoChannelType, setting one DecoderOutputMode to SURFACE_MODE and the other to YUV_MODE.
 * 4. isForLiveStream determines whether the LiveStreamManager gathers the video stream from this VideoDecoder.
 */
public VideoDecoder(Context context, VideoChannelType channelType, DecoderOutputMode outputMode, Object outputSurface, int width, int height, boolean isForLiveStream)
```
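The multiple-of-16 requirement in note 1 can be enforced before calling the constructor. A minimal sketch, assuming you want to round odd dimensions up rather than reject them (the helper class and method names are hypothetical, not part of the SDK):

```java
// Hypothetical helper, not part of the SDK: rounds a dimension up to the
// nearest multiple of 16 before it is passed to the VideoDecoder constructor.
class DecoderDimensions {
    static int alignTo16(int dimension) {
        // Adding 15 before integer division rounds up to the next multiple of 16.
        return ((dimension + 15) / 16) * 16;
    }
}
```

For example, 1920 is already aligned and passes through unchanged, while 1080 would be rounded up to 1088.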
- You can call addYuvDataListener to get the YUV data; its data type is YUV420Planar.
- You can call addStreamDataListener to get the standard H.264 video stream data.
- After getting the standard H.264 video stream data, you can feed it into an AI library or develop your own live stream feature.
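A standard H.264 elementary stream delimits NAL units with Annex-B start codes (00 00 01 or 00 00 00 01), so the first step when processing it yourself is usually to locate those boundaries. A minimal sketch, assuming the listener delivers an Annex-B byte stream (the class name is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: finds the offsets of Annex-B start codes (00 00 01 / 00 00 00 01)
// in an H.264 elementary stream so it can be split into NAL units.
class AnnexBScanner {
    static List<Integer> startCodeOffsets(byte[] stream) {
        List<Integer> offsets = new ArrayList<>();
        for (int i = 0; i + 2 < stream.length; i++) {
            if (stream[i] == 0 && stream[i + 1] == 0) {
                if (stream[i + 2] == 1) {
                    offsets.add(i);          // 3-byte start code
                    i += 2;
                } else if (i + 3 < stream.length && stream[i + 2] == 0 && stream[i + 3] == 1) {
                    offsets.add(i);          // 4-byte start code
                    i += 3;
                }
            }
        }
        return offsets;
    }
}
```

Each NAL unit then spans from one start code to the next, which is the granularity most AI or streaming pipelines expect as input.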
- You can receive YUV data from 2 cameras and render to 2 Surfaces at the same time.
- The YUV data can be converted to a standard JPEG and saved as a screenshot when the callback is returned.
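On Android you would typically convert the frame through android.graphics.YuvImage, but the underlying YUV-to-RGB step can be sketched in plain Java. This sketch assumes BT.601 full-range coefficients and uses desktop java.awt/javax.imageio types only so the example is self-contained; the class name is hypothetical:

```java
import java.awt.image.BufferedImage;

// Sketch: converts a YUV420 planar frame to RGB using BT.601 full-range
// coefficients. The resulting BufferedImage can then be written as JPEG,
// e.g. with javax.imageio.ImageIO.write(image, "jpeg", file).
class Yuv420ToJpeg {
    static BufferedImage toImage(byte[] yuv, int width, int height) {
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        int ySize = width * height;     // full-resolution luma plane
        int uvSize = ySize / 4;         // each chroma plane is quarter size
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int y = yuv[row * width + col] & 0xFF;
                int uvIndex = (row / 2) * (width / 2) + (col / 2);
                int u = (yuv[ySize + uvIndex] & 0xFF) - 128;
                int v = (yuv[ySize + uvSize + uvIndex] & 0xFF) - 128;
                int r = clamp(y + (int) (1.402 * v));
                int g = clamp(y - (int) (0.344 * u) - (int) (0.714 * v));
                int b = clamp(y + (int) (1.772 * u));
                image.setRGB(col, row, (r << 16) | (g << 8) | b);
            }
        }
        return image;
    }

    private static int clamp(int value) {
        return Math.max(0, Math.min(255, value));
    }
}
```

For a screenshot feature, you would run this conversion inside the YUV data callback and write the image out on a background thread.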