Abstract This paper briefly introduces DirectShow technology and studies video capture, compression, and network transmission based on it. A network transmission system for MPEG-4 video data is built around a third-party codec. The system transmits video data over the RTP protocol and also provides remote control of the frame rate.
Keywords video; acquisition; compression; DirectShow; MPEG-4; RTP
1 Introduction In recent years, with the development of the national economy, demand for video surveillance systems has grown in many sectors of society. However, many current monitoring systems are tied to specific hardware and must be supported by dedicated acquisition cards. It is therefore desirable to develop a general-purpose video surveillance system that can capture video with an ordinary camera.
Development based on DirectShow allows flexible control over audio and video processing, and choosing such an extensible technology helps future application upgrades. In addition, to implement transmission-control strategies for streaming media, streaming transmission and playback must also be solved. Microsoft's DirectShow is built on Component Object Model (COM) technology; it tolerates a wide variety of media formats and provides high-quality playback of multimedia streams. It enables client-side processing of streaming media on an ordinary PC and improves the versatility and scalability of the system.
For video data transmission, the compression ratio is a factor that must be considered. MPEG-4 is an international standard for the coding of moving pictures and associated audio developed by the ISO/IEC MPEG group. It offers a range of new or improved features in content-based interactivity, compression ratio, and universal access. MPEG-4 video achieves a high compression ratio while providing good image quality, and is therefore well suited as an image compression standard for transmission.
2 Related Technology
2.1 Introduction to DirectShow Technology DirectShow is a low-level application programming interface (API) developed by Microsoft for building high-performance multimedia applications, and is one of the core members of the DirectX family. DirectShow itself is an architecture that controls and processes multimedia data through filters, either built into the system or developed by the programmer. The architecture defines how multimedia data flows within and between filters. Each filter has input pins, output pins, or both.
The filter is the basic component of DirectShow and the smallest functional module in a Filter Graph. DirectShow divides the processing of multimedia data into separate steps, each handled by a corresponding filter; by connecting different filters together, we can process multimedia data to meet our requirements. By function, filters fall roughly into three categories:
(1) Source Filters. A source filter is responsible for obtaining raw media data. Sources of such data include media files on a local hard drive or on the network, various capture cards, and so on.
(2) Transform Filters. A transform filter processes the data received from other filters and passes the result on to the next filter. A codec is a typical transform filter.
(3) Rendering Filters. A rendering filter performs the final processing on received data: saving media data to a file, sending data to the network, displaying video, playing audio, and so on.
Above the DirectShow system sits the application. The application builds the Filter Graph appropriate to the function it implements, and then controls the processing of the data through the Filter Graph Manager. While the Filter Graph is running, DirectShow can receive various events and forward them to the application as messages, allowing interaction between the application and the DirectShow system.
2.2 Introduction to the RTP/RTCP Protocol The Real-time Transport Protocol (RTP) is a transport protocol for multimedia data streams on the Internet. In 1996, the Audio/Video Transport (AVT) working group of the IETF (Internet Engineering Task Force) published it as RFC 1889; after continuous refinement, RFC 3550 superseded RFC 1889 in July 2003. RTP embodies the design idea of application-layer framing, characteristic of modern communication protocols, allowing its users to understand, adjust, and even devise their own packetization schemes for continuous media. The protocol is widely used for transmitting real-time media such as VoIP and video. RTP comprises two closely related sub-protocols, RTP and RTCP (RTP Control Protocol):
(1) RTP protocol - transmitting data with real-time characteristics;
(2) RTCP protocol - monitors quality of service (QoS) and conveys information about the participants in a session.
RTP (Real-time Transport Protocol) usually runs on top of UDP: it receives multimedia streams (such as MPEG-4 video) from the upper layer, assembles them into RTP packets, and passes them down to UDP. Functionally it corresponds to the OSI session layer, providing synchronization and sequencing services. RTP is therefore well suited to carrying continuous data such as video and audio, and has some ability to adapt to delay and errors introduced by the network. RTCP is the companion real-time control protocol, managing control information such as monitoring network delay and bandwidth. When the bandwidth available to the multimedia stream changes, the receiving end notifies the sender, which can adjust its encoding parameters to maintain transmission quality. In addition, RTP supports multicast delivery of data to multiple destination endpoints if the underlying network supports multicast.
The RTP protocol has the following characteristics:
(1) Flexibility. RTP data packets and control messages use different ports, separating the data stream from the control stream; this greatly improves the protocol's flexibility and keeps processing simple.
(2) Multicast support. If the underlying network supports multicast, RTP can use it.
(3) Scalability. RTP usually serves a specific application and is implemented within that application's process rather than as a separate layer in the OSI architecture. RTP provides only a protocol framework, which developers can extend freely according to the specific requirements of their application.
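As context for the packet format the protocol described above uses, the 12-byte RTP fixed header of RFC 3550 can be sketched as a small C++ structure. The field names follow the RFC, but this struct and its serializer are illustrative only and are not part of the system described in this paper.

```cpp
#include <cstdint>
#include <vector>

// Sketch of the RFC 3550 RTP fixed header (12 bytes, before any CSRC list).
struct RtpHeader {
    uint8_t  version = 2;       // always 2 for RFC 3550
    bool     padding = false;
    bool     extension = false;
    uint8_t  csrcCount = 0;
    bool     marker = false;    // often set on the last packet of a video frame
    uint8_t  payloadType = 0;   // e.g. a dynamic type (96-127) for MPEG-4 video
    uint16_t sequenceNumber = 0;
    uint32_t timestamp = 0;
    uint32_t ssrc = 0;          // synchronization source identifier

    // Serialize the header to network byte order (big-endian).
    std::vector<uint8_t> serialize() const {
        std::vector<uint8_t> out(12);
        out[0]  = uint8_t((version << 6) | (padding << 5) | (extension << 4) | (csrcCount & 0x0F));
        out[1]  = uint8_t((marker << 7) | (payloadType & 0x7F));
        out[2]  = uint8_t(sequenceNumber >> 8);
        out[3]  = uint8_t(sequenceNumber & 0xFF);
        out[4]  = uint8_t(timestamp >> 24);
        out[5]  = uint8_t(timestamp >> 16);
        out[6]  = uint8_t(timestamp >> 8);
        out[7]  = uint8_t(timestamp & 0xFF);
        out[8]  = uint8_t(ssrc >> 24);
        out[9]  = uint8_t(ssrc >> 16);
        out[10] = uint8_t(ssrc >> 8);
        out[11] = uint8_t(ssrc & 0xFF);
        return out;
    }
};
```

The sequence number and timestamp fields are what give RTP its sequencing and synchronization services; RTCP reports travel on a separate port, which is the port separation noted in characteristic (1).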
3 Key technology implementation
The sending end is implemented as follows: data is captured from a USB camera, encoded as MPEG-4 by the DivX 5.1.1 codec, and then passed to a sending Filter that transmits the encoded data. Its Filter Graph is shown in Figure 1:
Figure 1 Filter Graph at the sender
The receiving end is implemented as follows: the data sent by the transmitting end is received by a receiving Filter and decoded by the DivX Decoder Filter; finally, the decoded data is played back by the Video Renderer. Its Filter Graph is shown in Figure 2:
Figure 2 Filter Graph at the receiving end
3.1 Data acquisition and coding implementation
3.1.1 The Capture Filter Graph The Filter Graph of a capture application is generally more complicated, and it is sometimes difficult to build it directly with the IGraphBuilder interface of the Filter Graph Manager. For this reason DirectShow provides a helper component, the Capture Graph Builder, to simplify the construction of such a Filter Graph.
The first step is to create the Filter Graph Manager component; the core code is as follows:
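The original code listing did not survive in this copy of the article. A typical sketch of this step, using the standard DirectShow COM interfaces (the variable and function names here are illustrative, not the author's), looks like this:

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

IGraphBuilder         *pGraph   = NULL;  // Filter Graph Manager
ICaptureGraphBuilder2 *pBuilder = NULL;  // Capture Graph Builder helper

HRESULT InitCaptureGraph()
{
    // Create the Filter Graph Manager.
    HRESULT hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IGraphBuilder, (void **)&pGraph);
    if (FAILED(hr)) return hr;

    // Create the Capture Graph Builder helper component.
    hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC_SERVER,
                          IID_ICaptureGraphBuilder2, (void **)&pBuilder);
    if (FAILED(hr)) return hr;

    // Attach the helper to the Filter Graph it should build into.
    return pBuilder->SetFiltergraph(pGraph);
}
```

CoInitialize must have been called on the thread beforehand, as with any COM-based DirectShow code.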
3.1.2 Adding the Capture Filter
3.1.3 Adding MPEG-4 Encoder Filter
Here we use the codec Filter provided by DivX. DivX's Encoder Filter and Decoder Filter are installed automatically with DivX.Pro.v5.1.1 (note: the Decoder Filter is used at the receiving end). To add the DivX Encoder Filter to the program, enumerate the filter named "DivX Pro(tm) 5.1.1 Codec" in the Video Compressors category and add it to the Filter Graph.
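A sketch of this enumeration, using the standard ICreateDevEnum idiom (function and variable names are our own, not from the original listing), might look like the following:

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Find "DivX Pro(tm) 5.1.1 Codec" in the Video Compressors category
// and add it to the graph. pGraph is an already-created IGraphBuilder.
HRESULT AddDivxEncoder(IGraphBuilder *pGraph, IBaseFilter **ppEncoder)
{
    ICreateDevEnum *pDevEnum = NULL;
    IEnumMoniker   *pEnum    = NULL;
    HRESULT hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                                  IID_ICreateDevEnum, (void **)&pDevEnum);
    if (FAILED(hr)) return hr;
    hr = pDevEnum->CreateClassEnumerator(CLSID_VideoCompressorCategory, &pEnum, 0);
    pDevEnum->Release();
    if (hr != S_OK) return E_FAIL;       // category is empty

    IMoniker *pMoniker = NULL;
    while (pEnum->Next(1, &pMoniker, NULL) == S_OK) {
        IPropertyBag *pBag = NULL;
        if (SUCCEEDED(pMoniker->BindToStorage(0, 0, IID_IPropertyBag, (void **)&pBag))) {
            VARIANT var; VariantInit(&var);
            if (SUCCEEDED(pBag->Read(L"FriendlyName", &var, 0)) &&
                wcscmp(var.bstrVal, L"DivX Pro(tm) 5.1.1 Codec") == 0) {
                // Found it: instantiate the filter and add it to the graph.
                hr = pMoniker->BindToObject(0, 0, IID_IBaseFilter, (void **)ppEncoder);
                if (SUCCEEDED(hr))
                    hr = pGraph->AddFilter(*ppEncoder, L"DivX Encoder");
                VariantClear(&var);
                pBag->Release(); pMoniker->Release(); pEnum->Release();
                return hr;
            }
            VariantClear(&var);
            pBag->Release();
        }
        pMoniker->Release();
    }
    pEnum->Release();
    return E_FAIL;                       // filter not found
}
```

The capture Filter of section 3.1.2 can be added with the same idiom by enumerating CLSID_VideoInputDeviceCategory instead of CLSID_VideoCompressorCategory.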
3.2 Transmission and reception of data
3.2.1 Data transmission Filter implementation
To transmit data, a sending Filter must be developed. For programming convenience it is implemented inside the program as a filter class, rather than built as a separate .ax component that must be registered before use. We define a class CFilterMpeg4Sender that inherits from CBaseFilter. This class must implement the following:
(1) Define mInputPin, the instance of the pin on CFilterMpeg4Sender, as a member of the class.
(2) Override CBaseFilter::GetPin, which returns a pointer to each pin object on the Filter.
(3) Override CBaseFilter::GetPinCount, which returns the number of pins on the Filter.
Define a class CMpeg4InputPin that inherits from CRenderedInputPin to implement the input pin of CFilterMpeg4Sender. Through this input pin the sending Filter receives the data produced by the encoder Filter, and then sends it according to certain rules.
This class must implement the following:
(1) Override the method EndOfStream.
(2) Implement the two functions IPin::BeginFlush and IPin::EndFlush.
(3) Override the method CBasePin::CheckMediaType to check the media type when connecting.
(4) Override the method Receive to accept each media sample (Sample) and send it.
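Item (4) might be sketched as follows, assuming the DirectShow base-class library; the member names m_pReceiveLock and m_pFilter and the helper SendFrame are hypothetical, since the original listing is not reproduced here:

```cpp
// Sketch: the input pin hands each encoded MPEG-4 sample to the network sender.
STDMETHODIMP CMpeg4InputPin::Receive(IMediaSample *pSample)
{
    CheckPointer(pSample, E_POINTER);
    CAutoLock lock(m_pReceiveLock);          // serialize with flush/EndOfStream

    BYTE *pData = NULL;
    HRESULT hr = pSample->GetPointer(&pData); // raw buffer of the encoded frame
    if (FAILED(hr)) return hr;

    long len = pSample->GetActualDataLength();
    // Hand the encoded frame to the RTP sending routine of the owning filter
    // (SendFrame is a hypothetical helper, not a base-class method).
    m_pFilter->SendFrame(pData, len);
    return S_OK;
}
```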
3.2.2 Data Reception Implementing the receiving Filter actually means writing a Source Filter. This Source Filter class is named CFilterMpeg4Receiver and also inherits from CBaseFilter, so its implementation is somewhat similar to that of the sending Filter. One thing to note is the setting of the MediaType of the Filter's output.
A byte array MediaType holds the media data type; it is obtained by writing the media type to a file at the sending end. The MediaType of the output data is then set by the statement CFilterMpeg4Receiver::SetupMediaType((char *)MediaType, 88).
CFilterMpeg4Receiver::SetupMediaType then calls CMpeg4OutPin::SetupMediaType() to set the format of the received media data.
3.2.3 Implementation of network transmission of data
For data transmission we use the RTP protocol stack provided by the open-source library JRTPLIB. The latest JRTPLIB encapsulates an implementation of RFC 3550, so developers who understand the RTP protocol can build high-quality audio and video transmission programs on top of it. To use JRTPLIB, one simply inherits from the RTPSession class and overrides the appropriate callback functions to receive video data.
When network bandwidth is low (a dozen or so KB/s), frame loss is severe and has a great impact on image quality. We therefore split each frame into chunks of at most 1400 bytes before sending, which reduces the frame loss rate. After the receiving end receives the data, the chunks belonging to the same video frame are regrouped.
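The splitting scheme described above can be sketched in portable C++. Real packets would also carry a sequence number and a frame identifier so the receiver can detect loss and group chunks by frame; this sketch shows only the chunking and reassembly logic:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Maximum payload per packet, as described in the text.
const std::size_t kMaxChunk = 1400;

// Sender side: cut one encoded video frame into chunks of at most 1400 bytes.
std::vector<std::vector<uint8_t>> SplitFrame(const std::vector<uint8_t> &frame)
{
    std::vector<std::vector<uint8_t>> chunks;
    for (std::size_t off = 0; off < frame.size(); off += kMaxChunk) {
        std::size_t n = std::min(kMaxChunk, frame.size() - off);
        chunks.emplace_back(frame.begin() + off, frame.begin() + off + n);
    }
    return chunks;
}

// Receiver side: concatenate the chunks of one frame, in order, back into a frame.
std::vector<uint8_t> ReassembleFrame(const std::vector<std::vector<uint8_t>> &chunks)
{
    std::vector<uint8_t> frame;
    for (const auto &c : chunks)
        frame.insert(frame.end(), c.begin(), c.end());
    return frame;
}
```

Keeping each chunk under a typical Ethernet MTU avoids IP-level fragmentation, which is why losing one UDP datagram then costs only part of a frame rather than the whole frame.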
The network sending and receiving program flow chart is shown in Figure 3:
Figure 3 Network sending and receiving program flow chart
The description of the program flow chart is as follows:
(1) At the sending end, each encoded video frame is split into chunks of at most 1400 bytes, which are sent in sequence.
(2) At the receiving end, the chunks belonging to the same video frame are regrouped, and the reassembled frame is then passed to the decoding Filter.
Tests (over a CDMA 1X network) show that this frame-splitting transmission method has a lower packet loss rate than sending frames directly, and transmission quality is greatly improved.
3.3 Data decoding and playback The decoding Filter uses the decoder provided by DivX. It is connected after the receiving Filter, and finally the Video Renderer plays back the received data.
3.4 Realizing frame rate control Frame rate control is achieved by inserting a frame-rate control Filter between the capture device and the encoder Filter (DivX Pro(tm) 5.1.1 Codec). This Filter acts as a video frame counter: each time a frame is received, it is not immediately passed to the downstream encoder Filter; instead the counter is incremented by 1. When the counter reaches its maximum value, the currently received frame is sent on. By sending a frame-rate control command from the receiving end to the sending end, the frame rate can conveniently be controlled remotely.
The program fragment is as follows:
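The original program fragment is not reproduced in this copy, but the counter logic described above can be sketched as follows (class and member names are our own):

```cpp
// Sketch of the counter inside the frame-rate control filter: of every
// `interval` frames received from the capture device, only one is passed
// downstream to the encoder; the rest are dropped. Changing `interval` at
// run time (e.g. on a command from the receiver) changes the frame rate.
class FrameRateController {
public:
    explicit FrameRateController(int interval) : interval_(interval), count_(0) {}

    // Called once per captured frame; returns true if this frame
    // should be delivered to the downstream encoder filter.
    bool OnFrame() {
        if (++count_ >= interval_) {
            count_ = 0;      // counter reached its maximum: deliver and reset
            return true;
        }
        return false;        // drop this frame
    }

    // Remote frame-rate command: a larger interval means a lower frame rate.
    void SetInterval(int interval) { interval_ = interval; count_ = 0; }

private:
    int interval_;  // maximum counter value, i.e. frames per delivered frame
    int count_;     // frames seen since the last delivered frame
};
```

In the real Filter, OnFrame would be evaluated inside the pin's Receive method, and SetInterval would be invoked when a control command arrives from the receiving end.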
The Filter Graph with the frame rate control Filter is added as shown in Figure 4:
Figure 4 Filter Graph for frame rate control
4 Conclusion The system uses DirectShow technology to transmit MPEG-4 video data over the RTP protocol, with remote control of the frame rate, and can easily be ported to image transmission systems for future 3G networks. The next steps are further research on codecs and an implementation based on H.264, as well as deeper work on transmission quality (QoS).