ONVIF Development: Video Streamer --- ONVIF Features Implemented and Lessons Learned


Contents

  • 1. Generating the ONVIF source framework
  • Generating the C header from the WSDL files
  • Generating the source framework from the header
  • 2. Creating the SOAP runtime environment
  • 3. Hooking up the RTSP video stream
  • Implementing GetCapabilities
  • Implementing GetServices
  • Implementing GetVideoSources
  • Implementing GetProfiles
  • GetVideoSourceConfiguration and GetVideoEncoderConfiguration
  • GetVideoEncoderConfigurationOptions
  • 4. Running the live555MediaServer
  • 5. Testing with ONVIF Device Manager
  • Live video
  • Video streaming
  • Profiles
  • Finally, the running live555 RTSP server
    With the groundwork laid in the previous posts, we can now start the actual ONVIF implementation. One of the most important parts is hooking up the video stream: an ONVIF-compliant surveillance client must be able to receive the RTSP video stream sent by the device (NVT). The client software used here is ONVIF Device Manager v2.2. [From http://blog.csdn.net/ghostyu]

    The ONVIF Profile S Specification describes a profile that the device (NVT) and the client can implement. "Profile" is a very common term in computing; think of it as a scheme, a configuration, or a framework.

    The specification describes what the device and the client must support in order to implement video streaming; a device that meets every requirement in the document can claim conformance to Profile S.

    If all you want is video streaming, only the following commands need to be implemented.

    1. GetProfiles
    2. GetStreamUri
    Fill in the RTSP path, e.g. rtsp://192.168.1.201/petrov.m4e (a sketch of this handler follows this list)
    3. Media streaming using RTSP
    The open-source live555 is used here to provide the RTSP functionality
    4. GetVideoEncoderConfiguration
    5. GetVideoEncoderConfigurationOptions
    6. GetCapabilities
    The command the NVC uses to query which features the NVT supports
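
    The article does not show GetStreamUri itself, so here is a minimal hedged sketch of what that handler could look like. The structure and member names (tt__MediaUri, Uri, InvalidAfterConnect, InvalidAfterReboot, Timeout) follow the gSOAP-generated media binding; LARGE_INFO_LENGTH, DBG and _false are the same helpers used elsewhere in this article, and the RTSP URL is the live555 test path used later on. Treat it as a sketch, not the original author's code.

    int __trt__GetStreamUri(struct soap *soap,
                            struct _trt__GetStreamUri *trt__GetStreamUri,
                            struct _trt__GetStreamUriResponse *trt__GetStreamUriResponse)
    {
        DBG("__trt__GetStreamUri\n");
        trt__GetStreamUriResponse->MediaUri =
            (struct tt__MediaUri *)soap_malloc(soap, sizeof(struct tt__MediaUri));
        /* soap_malloc does not zero memory, so clear the unused members */
        memset(trt__GetStreamUriResponse->MediaUri, 0, sizeof(struct tt__MediaUri));

        trt__GetStreamUriResponse->MediaUri->Uri =
            (char *)soap_malloc(soap, sizeof(char) * LARGE_INFO_LENGTH);
        /* hand back the RTSP path served by live555 */
        strcpy(trt__GetStreamUriResponse->MediaUri->Uri, "rtsp://192.168.1.201/petrov.m4e");
        trt__GetStreamUriResponse->MediaUri->InvalidAfterConnect = _false;
        trt__GetStreamUriResponse->MediaUri->InvalidAfterReboot = _false;
        /* Timeout is an xsd:duration; its C type depends on typemap.dat (assumed char* here) */
        trt__GetStreamUriResponse->MediaUri->Timeout = "PT60S";
        return SOAP_OK;
    }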

    Reference documents:

    1. ONVIF Profile S Specification
    Describes what Profile S is and how to implement it
    2. Reference_of_ONVIF_Development_v1.01.02
    A design reference for an ONVIF NVT; it points out the road but gives no concrete details
    3. ONVIF-Media-Service-Spec-v220
    Introduction to the ONVIF Media service
    4. http://www.onvif.org/onvif/ver20/util/operationIndex.html
    Detailed documentation of almost every ONVIF command; extremely important. It explains what each structure member means and how to fill it in. ONVIF development is essentially an exercise in filling in structures.

    1. Generating the ONVIF source framework

    1) Generating the C header from the WSDL files

    wsdl2h -o onvif.h -c -s -t .\typemap.dat http://www.onvif.org/onvif/ver10/device/wsdl/devicemgmt.wsdl http://www.onvif.org/onvif/ver10/event/wsdl/event.wsdl http://www.onvif.org/onvif/ver10/display.wsdl http://www.onvif.org/onvif/ver10/deviceio.wsdl http://www.onvif.org/onvif/ver20/imaging/wsdl/imaging.wsdl http://www.onvif.org/onvif/ver10/media/wsdl/media.wsdl http://www.onvif.org/onvif/ver20/ptz/wsdl/ptz.wsdl http://www.onvif.org/onvif/ver10/receiver.wsdl http://www.onvif.org/onvif/ver10/recording.wsdl http://www.onvif.org/onvif/ver10/search.wsdl http://www.onvif.org/onvif/ver10/network/wsdl/remotediscovery.wsdl http://www.onvif.org/onvif/ver10/replay.wsdl http://www.onvif.org/onvif/ver20/analytics/wsdl/analytics.wsdl http://www.onvif.org/onvif/ver10/analyticsdevice.wsdl http://www.onvif.org/onvif/ver10/schema/onvif.xsd http://www.onvif.org/ver10/actionengine.wsdl

    The only difference from the previous post on discovery is that many more WSDL files are listed here, so that this time the complete ONVIF code framework is generated.

    2) Generating the source framework from the header

    soapcpp2 -c onvif.h -x -I /root/onvif/gsoap-2.8/gsoap/import -I /root/onvif/gsoap-2.8/gsoap/

    The generated C files are quite large, the biggest well over ten megabytes, mostly because very little of the generated content is shared or reused.

    2. Creating the SOAP runtime environment

    int main(int argc, char **argv)
    {
        int m, s;
        struct soap add_soap;
        int server_udp;

        /* UDP socket and Probe thread for WS-Discovery (see the previous post) */
        server_udp = create_server_socket_udp();
        //bind_server_udp1(server_udp);
        pthread_t thrHello;
        pthread_t thrProbe;
        //pthread_create(&thrHello, NULL, main_Hello, server_udp);
        //sleep(2);
        pthread_create(&thrProbe, NULL, main_Probe, server_udp);

        soap_init(&add_soap);
        soap_set_namespaces(&add_soap, namespaces);

        /* the HTTP/SOAP port is hardcoded to 80 here; argc/argv are not used */
        m = soap_bind(&add_soap, NULL, 80, 100);
        if (m < 0) {
            soap_print_fault(&add_soap, stderr);
            exit(-1);
        }
        fprintf(stderr, "Socket connection successful: master socket = %d\n", m);

        for (;;) {
            s = soap_accept(&add_soap);
            if (s < 0) {
                soap_print_fault(&add_soap, stderr);
                exit(-1);
            }
            fprintf(stderr, "Socket connection successful: slave socket = %d\n", s);
            soap_serve(&add_soap);  /* dispatch the request to the generated ONVIF skeletons */
            soap_end(&add_soap);    /* free deserialized data and soap_malloc'd memory */
        }
        return 0;
    }

    Note that the service is bound to port 80 here. ONVIF uses HTTP requests that carry XML, so the proper approach is to integrate ONVIF into the device's web server: ordinary HTTP requests are handled by the web server, while ONVIF HTTP requests are handed to the SOAP engine. The approach used here also works, except that the device's normal web interface becomes unavailable because we occupy port 80.
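
    If port 80 is already occupied by a real web server, the same code can bind the ONVIF SOAP service to a different port, as long as every service address (XAddr) the device hands out carries that port. A hedged sketch; the port 8000 and the /onvif/media_service path are illustrative assumptions, not values from the original code:

    /* Hypothetical alternative in main(): bind the SOAP service to port 8000 instead of 80 */
    m = soap_bind(&add_soap, NULL, 8000, 100);

    /* Every service address returned to the client must then carry the port,
       e.g. when filling XAddr in __tds__GetCapabilities: */
    sprintf(tds__GetCapabilitiesResponse->Capabilities->Media->XAddr,
            "http://%s:%d/onvif/media_service", _IPv4Address, 8000);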

    3. Hooking up the RTSP video stream

    1) Implementing GetCapabilities

    The client sends GetCapabilities to learn what the device can do, and then decides its next steps based on the response.

    In __tds__GetCapabilities we only need to fill in the Media section plus a few other required fields.
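
    One thing to keep in mind: soap_malloc does not zero the memory it returns, so before the Media fields below are filled in, the Capabilities container itself has to be allocated and its unused members cleared. A hedged sketch of that allocation (member names follow the generated tt__Capabilities structure; Analytics, Device, Events, Imaging and PTZ are simply left NULL here):

    tds__GetCapabilitiesResponse->Capabilities =
        (struct tt__Capabilities *)soap_malloc(soap, sizeof(struct tt__Capabilities));
    /* clear everything first; Media and Extension are then filled in below,
       the remaining capability categories stay NULL */
    memset(tds__GetCapabilitiesResponse->Capabilities, 0, sizeof(struct tt__Capabilities));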

    // To stream RTSP video, the Media capability must be filled in
    tds__GetCapabilitiesResponse->Capabilities->Media = (struct tt__MediaCapabilities*)soap_malloc(soap, sizeof(struct tt__MediaCapabilities));
    tds__GetCapabilitiesResponse->Capabilities->Media->XAddr = (char *) soap_malloc(soap, sizeof(char) * LARGE_INFO_LENGTH);
    strcpy(tds__GetCapabilitiesResponse->Capabilities->Media->XAddr, _IPv4Address);
    tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities = (struct tt__RealTimeStreamingCapabilities*)soap_malloc(soap, sizeof(struct tt__RealTimeStreamingCapabilities));
    tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities->RTPMulticast = (int *)soap_malloc(soap, sizeof(int));
    *tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities->RTPMulticast = _false;
    tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities->RTP_USCORETCP = (int *)soap_malloc(soap, sizeof(int));
    *tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities->RTP_USCORETCP = _true;
    tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities->RTP_USCORERTSP_USCORETCP = (int *)soap_malloc(soap, sizeof(int));
    *tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities->RTP_USCORERTSP_USCORETCP = _true;
    tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities->Extension = NULL;
    tds__GetCapabilitiesResponse->Capabilities->Media->Extension = NULL;
    tds__GetCapabilitiesResponse->Capabilities->Media->__size = 0;
    tds__GetCapabilitiesResponse->Capabilities->Media->__any = 0;

    The following also must be filled in:
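
    The Extension and DeviceIO containers referenced in the next chunk also need to be allocated before use; a minimal hedged sketch (the struct type names are assumptions based on the generated onvif.h, while the member names match the code below):

    tds__GetCapabilitiesResponse->Capabilities->Extension =
        (struct tt__CapabilitiesExtension *)soap_malloc(soap, sizeof(struct tt__CapabilitiesExtension));
    memset(tds__GetCapabilitiesResponse->Capabilities->Extension, 0, sizeof(struct tt__CapabilitiesExtension));
    tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO =
        (struct tt__DeviceIOCapabilities *)soap_malloc(soap, sizeof(struct tt__DeviceIOCapabilities));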

    // Important: since only the video stream is implemented here, VideoSources must be set
    tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO->VideoSources = TRUE;
    tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO->VideoOutputs = FALSE;
    tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO->AudioSources = FALSE;
    tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO->AudioOutputs = FALSE;
    tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO->RelayOutputs = FALSE;
    tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO->__size = 0;
    tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO->__any = NULL;

    tds__GetCapabilitiesResponse->Capabilities->Extension->Display = NULL;
    tds__GetCapabilitiesResponse->Capabilities->Extension->Recording = NULL;
    tds__GetCapabilitiesResponse->Capabilities->Extension->Search = NULL;
    tds__GetCapabilitiesResponse->Capabilities->Extension->Replay = NULL;
    tds__GetCapabilitiesResponse->Capabilities->Extension->Receiver = NULL;
    tds__GetCapabilitiesResponse->Capabilities->Extension->AnalyticsDevice = NULL;
    tds__GetCapabilitiesResponse->Capabilities->Extension->Extensions = NULL;
    tds__GetCapabilitiesResponse->Capabilities->Extension->__size = 0;
    tds__GetCapabilitiesResponse->Capabilities->Extension->__any = NULL;

    2) Implementing GetServices

    int __tds__GetServices(struct soap* soap, struct _tds__GetServices *tds__GetServices, struct _tds__GetServicesResponse *tds__GetServicesResponse)
    {
        DBG("__tds__GetServices\n");
        /* This handler is required */
        char _IPAddr[INFO_LENGTH];
        sprintf(_IPAddr, "http://%d.%d.%d.%d/onvif/services", 192, 168, 1, 233);
        tds__GetServicesResponse->__sizeService = 1;

        tds__GetServicesResponse->Service = (struct tds__Service *)soap_malloc(soap, sizeof(struct tds__Service));
        tds__GetServicesResponse->Service[0].XAddr = (char *)soap_malloc(soap, sizeof(char) * INFO_LENGTH);
        tds__GetServicesResponse->Service[0].Namespace = (char *)soap_malloc(soap, sizeof(char) * INFO_LENGTH);
        strcpy(tds__GetServicesResponse->Service[0].Namespace, "http://www.onvif.org/ver10/events/wsdl");
        strcpy(tds__GetServicesResponse->Service[0].XAddr, _IPAddr);
        tds__GetServicesResponse->Service[0].Capabilities = NULL;
        tds__GetServicesResponse->Service[0].Version = (struct tt__OnvifVersion *)soap_malloc(soap, sizeof(struct tt__OnvifVersion));
        tds__GetServicesResponse->Service[0].Version->Major = 0;
        tds__GetServicesResponse->Service[0].Version->Minor = 3;
        /* two __any entries are filled below, so allocate room for two pointers */
        tds__GetServicesResponse->Service[0].__any = (char **)soap_malloc(soap, sizeof(char *) * 2);
        tds__GetServicesResponse->Service[0].__any[0] = (char *)soap_malloc(soap, sizeof(char) * INFO_LENGTH);
        strcpy(tds__GetServicesResponse->Service[0].__any[0], "why1");
        tds__GetServicesResponse->Service[0].__any[1] = (char *)soap_malloc(soap, sizeof(char) * INFO_LENGTH);
        strcpy(tds__GetServicesResponse->Service[0].__any[1], "why2");
        tds__GetServicesResponse->Service[0].__size = 0;
        tds__GetServicesResponse->Service[0].__anyAttribute = NULL;
        return SOAP_OK;
    }
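
    The response above advertises only the events namespace. A client that discovers services through GetServices (instead of GetCapabilities) may also expect entries for the other services it uses, such as the media service; a hedged sketch of an additional media entry, reusing the exact pattern above (the namespace URI is the standard ONVIF media namespace, and the version number simply mirrors the Media spec v2.20 referenced earlier):

    /* Hypothetical second entry for the media service */
    tds__GetServicesResponse->__sizeService = 2;
    tds__GetServicesResponse->Service =
        (struct tds__Service *)soap_malloc(soap, sizeof(struct tds__Service) * 2);
    memset(tds__GetServicesResponse->Service, 0, sizeof(struct tds__Service) * 2);
    /* ... fill Service[0] for the events service exactly as above ... */
    tds__GetServicesResponse->Service[1].Namespace = (char *)soap_malloc(soap, sizeof(char) * INFO_LENGTH);
    strcpy(tds__GetServicesResponse->Service[1].Namespace, "http://www.onvif.org/ver10/media/wsdl");
    tds__GetServicesResponse->Service[1].XAddr = (char *)soap_malloc(soap, sizeof(char) * INFO_LENGTH);
    strcpy(tds__GetServicesResponse->Service[1].XAddr, _IPAddr);
    tds__GetServicesResponse->Service[1].Version =
        (struct tt__OnvifVersion *)soap_malloc(soap, sizeof(struct tt__OnvifVersion));
    tds__GetServicesResponse->Service[1].Version->Major = 2;
    tds__GetServicesResponse->Service[1].Version->Minor = 20;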

    3) Implementing GetVideoSources

    int __tmd__GetVideoSources(struct soap* soap, struct _trt__GetVideoSources *trt__GetVideoSources, struct _trt__GetVideoSourcesResponse *trt__GetVideoSourcesResponse)
    {
        DBG("__tmd__GetVideoSources\n");

        int size1 = 1;
        trt__GetVideoSourcesResponse->__sizeVideoSources = size1;
        trt__GetVideoSourcesResponse->VideoSources = (struct tt__VideoSource *)soap_malloc(soap, sizeof(struct tt__VideoSource) * size1);
        trt__GetVideoSourcesResponse->VideoSources[0].Framerate = 30;
        trt__GetVideoSourcesResponse->VideoSources[0].Resolution = (struct tt__VideoResolution *)soap_malloc(soap, sizeof(struct tt__VideoResolution));
        trt__GetVideoSourcesResponse->VideoSources[0].Resolution->Height = 720;
        trt__GetVideoSourcesResponse->VideoSources[0].Resolution->Width = 1280;
        trt__GetVideoSourcesResponse->VideoSources[0].token = (char *)soap_malloc(soap, sizeof(char)*INFO_LENGTH);
        strcpy(trt__GetVideoSourcesResponse->VideoSources[0].token, "GhostyuSource_token"); /* must match the SourceToken in GetProfiles */

        trt__GetVideoSourcesResponse->VideoSources[0].Imaging = (struct tt__ImagingSettings*)soap_malloc(soap, sizeof(struct tt__ImagingSettings));
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Brightness = (float*)soap_malloc(soap, sizeof(float));
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Brightness[0] = 128;
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->ColorSaturation = (float*)soap_malloc(soap, sizeof(float));
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->ColorSaturation[0] = 128;
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Contrast = (float*)soap_malloc(soap, sizeof(float));
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Contrast[0] = 128;
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->IrCutFilter = (int *)soap_malloc(soap, sizeof(int));
        *trt__GetVideoSourcesResponse->VideoSources[0].Imaging->IrCutFilter = 0;
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Sharpness = (float*)soap_malloc(soap, sizeof(float));
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Sharpness[0] = 128;
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->BacklightCompensation = (struct tt__BacklightCompensation*)soap_malloc(soap, sizeof(struct tt__BacklightCompensation));
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->BacklightCompensation->Mode = 0;
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->BacklightCompensation->Level = 20;
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Exposure = NULL;
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Focus = NULL;
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->WideDynamicRange = (struct tt__WideDynamicRange*)soap_malloc(soap, sizeof(struct tt__WideDynamicRange));
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->WideDynamicRange->Mode = 0;
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->WideDynamicRange->Level = 20;
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->WhiteBalance = (struct tt__WhiteBalance*)soap_malloc(soap, sizeof(struct tt__WhiteBalance));
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->WhiteBalance->Mode = 0;
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->WhiteBalance->CrGain = 0;
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->WhiteBalance->CbGain = 0;
        trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Extension = NULL;
        trt__GetVideoSourcesResponse->VideoSources[0].Extension = NULL;
        return SOAP_OK;
    }

    The most important part of __tmd__GetVideoSources is the token: it must be identical to the SourceToken used in the profile below so that the client can match the video source.

    4) Implementing GetProfiles

    size = 1;
    trt__GetProfilesResponse->Profiles =(struct tt__Profile *)soap_malloc(soap, sizeof(struct tt__Profile) * size);
    trt__GetProfilesResponse->__sizeProfiles = size;

    i=0;
    trt__GetProfilesResponse->Profiles[i].Name = (char *)soap_malloc(soap,sizeof(char)*MAX_PROF_TOKEN);
    strcpy(trt__GetProfilesResponse->Profiles[i].Name,"my_profile");
    trt__GetProfilesResponse->Profiles[i].token= (char *)soap_malloc(soap,sizeof(char)*MAX_PROF_TOKEN);
    strcpy(trt__GetProfilesResponse->Profiles[i].token,"token_profile");
    trt__GetProfilesResponse->Profiles[i].fixed = _false;
    trt__GetProfilesResponse->Profiles[i].__anyAttribute = NULL;

    Besides the basic information above, two larger items still have to be filled in: VideoSourceConfiguration and VideoEncoderConfiguration. The first describes the video source, the second describes how the video is encoded.

    First allocate the VideoSourceConfiguration:

    trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration = (struct tt__VideoSourceConfiguration *)soap_malloc(soap,sizeof(struct tt__VideoSourceConfiguration ));
    trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->Name = (char *)soap_malloc(soap,sizeof(char)*MAX_PROF_TOKEN);
    trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->token = (char *)soap_malloc(soap,sizeof(char)*MAX_PROF_TOKEN);
    trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->SourceToken = (char *)soap_malloc(soap,sizeof(char)*MAX_PROF_TOKEN);
    trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->Bounds = (struct tt__IntRectangle *)soap_malloc(soap,sizeof(struct tt__IntRectangle));


    然后在填充它

    /* Note the SourceToken */
    strcpy(trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->Name,"VS_Name");
    strcpy(trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->token,"VS_Token");
    strcpy(trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->SourceToken,"GhostyuSource_token"); /* must be identical to the token in __tmd__GetVideoSources */
    trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->UseCount = 1;
    trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->Bounds->x = 1;
    trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->Bounds->y = 1;
    trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->Bounds->height = 720;
    trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->Bounds->width = 1280;

    Any member that is a pointer must first be allocated with soap_malloc before it can be assigned.

    Next comes the VideoEncoderConfiguration:

    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration = (struct tt__VideoEncoderConfiguration *)soap_malloc(soap,sizeof(struct tt__VideoEncoderConfiguration));
    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Name = (char *)soap_malloc(soap,sizeof(char)*MAX_PROF_TOKEN);
    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->token= (char *)soap_malloc(soap,sizeof(char)*MAX_PROF_TOKEN);
    strcpy(trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Name,"VE_Name1");
    strcpy(trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->token,"VE_token1");
    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->UseCount = 1;
    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Quality = 10;
    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Encoding = 1;//JPEG = 0, MPEG4 = 1, H264 = 2;
    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Resolution = (struct tt__VideoResolution *)soap_malloc(soap, sizeof(struct tt__VideoResolution));
    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Resolution->Height = 720;
    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Resolution->Width = 1280;
    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->RateControl = (struct tt__VideoRateControl *)soap_malloc(soap, sizeof(struct tt__VideoRateControl));
    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->RateControl->FrameRateLimit = 30;
    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->RateControl->EncodingInterval = 1;
    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->RateControl->BitrateLimit = 500;
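
    Since soap_malloc returns uninitialized memory, it is safer to explicitly NULL the profile and encoder members that are not being used before returning. A hedged sketch; the member names are taken from the generated onvif.h and may differ slightly between gSOAP versions:

    /* Hypothetical clean-up of unused members */
    trt__GetProfilesResponse->Profiles[i].AudioSourceConfiguration = NULL;
    trt__GetProfilesResponse->Profiles[i].AudioEncoderConfiguration = NULL;
    trt__GetProfilesResponse->Profiles[i].VideoAnalyticsConfiguration = NULL;
    trt__GetProfilesResponse->Profiles[i].PTZConfiguration = NULL;
    trt__GetProfilesResponse->Profiles[i].MetadataConfiguration = NULL;
    trt__GetProfilesResponse->Profiles[i].Extension = NULL;
    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->MPEG4 = NULL;
    trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->H264 = NULL;
    /* Multicast and SessionTimeout may also need to be set, depending on the client */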

    5) GetVideoSourceConfiguration and GetVideoEncoderConfiguration

    int __trt__GetVideoSourceConfiguration(struct soap* soap, struct _trt__GetVideoSourceConfiguration *trt__GetVideoSourceConfiguration, struct _trt__GetVideoSourceConfigurationResponse *trt__GetVideoSourceConfigurationResponse)
    {
        DBG("__trt__GetVideoSourceConfiguration\n");
        // This handler is required: live video does not work without it
        return SOAP_OK;
    }

    int __trt__GetVideoEncoderConfiguration(struct soap* soap, struct _trt__GetVideoEncoderConfiguration *trt__GetVideoEncoderConfiguration, struct _trt__GetVideoEncoderConfigurationResponse *trt__GetVideoEncoderConfigurationResponse)
    {
        DBG("__trt__GetVideoEncoderConfiguration\n");
        return SOAP_OK;
    }

    6) GetVideoEncoderConfigurationOptions

    int __trt__GetVideoEncoderConfigurationOptions(struct soap* soap, struct _trt__GetVideoEncoderConfigurationOptions *trt__GetVideoEncoderConfigurationOptions, struct _trt__GetVideoEncoderConfigurationOptionsResponse *trt__GetVideoEncoderConfigurationOptionsResponse)
    {
        DBG("__trt__GetVideoEncoderConfigurationOptions\n");
        // This handler is required: video streaming does not work without it
        return SOAP_OK;
    }

    The handlers in parts 5 and 6 can simply return SOAP_OK. Strictly speaking their responses should be filled in, but that does not affect the RTSP video stream, so they are left alone for now.
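
    For reference, a minimal hedged sketch of what filling GetVideoEncoderConfigurationOptions could look like, in case a client insists on a non-empty response. The structure names tt__VideoEncoderConfigurationOptions and tt__IntRange are assumptions based on the generated onvif.h; only the quality range is populated and the per-codec option blocks are left NULL:

    /* Hypothetical minimal Options response */
    trt__GetVideoEncoderConfigurationOptionsResponse->Options =
        (struct tt__VideoEncoderConfigurationOptions *)soap_malloc(soap,
            sizeof(struct tt__VideoEncoderConfigurationOptions));
    memset(trt__GetVideoEncoderConfigurationOptionsResponse->Options, 0,
           sizeof(struct tt__VideoEncoderConfigurationOptions));
    trt__GetVideoEncoderConfigurationOptionsResponse->Options->QualityRange =
        (struct tt__IntRange *)soap_malloc(soap, sizeof(struct tt__IntRange));
    trt__GetVideoEncoderConfigurationOptionsResponse->Options->QualityRange->Min = 0;
    trt__GetVideoEncoderConfigurationOptionsResponse->Options->QualityRange->Max = 100;
    /* JPEG / MPEG4 / H264 option blocks stay NULL in this sketch */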

    4. Running the live555MediaServer

    The live555 website provides plenty of test files. The one used here is an MPEG4 test file; its path is rtsp://192.168.1.201/petrov.m4e.

    5. Testing with ONVIF Device Manager

    One caveat: ONVIF Device Manager cannot discover the device automatically (the ONVIF Test Tool can), but fortunately it allows devices to be added manually.

    Click Add and enter: http://192.168.1.233/onvif/device_service

    Note that two IP addresses are hardcoded in the program: Linux 192.168.1.233 and Windows 192.168.1.201; adjust them to your own environment.

    Test screenshots:

    1) Live video

    2) Video streaming

    3) Profiles

    Finally, the running live555 RTSP server

    The DEBUG messages printed in the terminal

    Source code download: http://download.csdn.net/detail/ghostyu/4796093

    http://blog.csdn.net/ghostyu/article/details/8208428

    http://blog.chinaunix.net/uid-23381466-id-3799058.html
