OpenGL ES _ Building a VR Panorama Player, Step by Step
OpenGL ES _ Getting Started _ 01
OpenGL ES _ Getting Started _ 02
OpenGL ES _ Getting Started _ 03
OpenGL ES _ Getting Started _ 04
OpenGL ES _ Getting Started _ 05
OpenGL ES _ Beginner Exercises _ 01
OpenGL ES _ Beginner Exercises _ 02
OpenGL ES _ Beginner Exercises _ 03
OpenGL ES _ Beginner Exercises _ 04
OpenGL ES _ Beginner Exercises _ 05
OpenGL ES _ Beginner Exercises _ 06
OpenGL ES _ Shaders _ Introduction
OpenGL ES _ Shaders _ Programs
OpenGL ES _ Shaders _ Syntax
OpenGL ES _ Shaders _ Texture Images
OpenGL ES _ Shaders _ Preprocessing
OpenGL ES _ Shaders _ Vertex Shaders in Depth
OpenGL ES _ Shaders _ Fragment Shaders in Depth
OpenGL ES _ Shaders _ In Practice 01
OpenGL ES _ Shaders _ In Practice 02
OpenGL ES _ Shaders _ In Practice 03
In Practice 02, I covered the principles and implementation of multi-screen display in detail. Today we continue our OpenGL journey. However good your skills are, you still have to keep learning — and learning should be fun!
Learning Goals
Build panoramic video playback, plus the dual-screen display framework that VR headsets need!
What You Should Know
(Image from the web: the principle of panoramic display)
Putting it plainly: imagine the red region is your phone's screen. As you rotate the phone, we rotate the sphere in the opposite direction, so you get to see the picture mapped onto the sphere.
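In matrix terms, that opposite rotation is just a negated angle applied to the sphere's model-view matrix. A minimal sketch with GLKit (deviceYaw is a hypothetical stand-in for the sensor or gesture reading; the real code appears later in this post):

#import <GLKit/GLKit.h>

// Rotate the sphere by the negative of the device's rotation, so the
// screen appears to pan across the panorama in the expected direction.
GLKMatrix4 modelView = GLKMatrix4Identity;
modelView = GLKMatrix4RotateY(modelView, -deviceYaw); // deviceYaw: hypothetical yaw angle, in radians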
Preparation
Find a panoramic video and add it to the project.
- Implementation Steps
1. Create a sphere model.
2. Grab every frame of video data, convert it to RGB, and render it onto the sphere.
3. Change the sphere's model-view matrix in response to gestures.
4. In VR mode, read the user's head movement from the motion sensors and adjust the view matrix.
Features Implemented
- Normal video playback
- Panoramic video playback
- VR dual-screen display mode
- Fast-forward and rewind
- Play and pause
- Ad display while paused
Core Code Walkthrough
If you want to be able to write this code from scratch like I did, make sure you have a grounding in OpenGL ES 2.0 and the basics of GLSL. If you don't, no problem — I've already written an OpenGL tutorial series and a GLSL tutorial; head over there first. Now let's get into it.
- Video Capture
Two files in the project, XJVRPlayerViewController.h and XJVRPlayerController.m, are responsible for capturing the video data; the UI layout can be changed in XJVRPlayerViewController. They mainly use the AVFoundation framework. I won't cover that part today — I'll write a dedicated tutorial on video capture later.
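That said, so the CVPixelBufferRef frames used later in this post don't appear out of thin air, here is a hedged sketch of one common way to pull frames out of AVFoundation, using AVPlayerItemVideoOutput. The project's actual capture code lives in the files named above and may well differ; videoURL is a placeholder.

#import <AVFoundation/AVFoundation.h>
#import <QuartzCore/QuartzCore.h>

// Ask for bi-planar video-range 4:2:0 ("420v"), the format handled below.
NSDictionary *attrs = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                         @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) };
AVPlayerItemVideoOutput *videoOutput =
    [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:attrs];

AVPlayerItem *item = [AVPlayerItem playerItemWithURL:videoURL]; // videoURL: placeholder
[item addOutput:videoOutput];
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
[player play];

// Later, once per display refresh (e.g. in a CADisplayLink callback):
CMTime t = [videoOutput itemTimeForHostTime:CACurrentMediaTime()];
if ([videoOutput hasNewPixelBufferForItemTime:t]) {
    CVPixelBufferRef pixelBuffer =
        [videoOutput copyPixelBufferForItemTime:t itemTimeForDisplay:NULL];
    // ... hand pixelBuffer to the renderer, then CVBufferRelease(pixelBuffer);
}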
Model Creation
a. The panorama player generates the vertex and texture coordinates of a sphere.
b. The normal player generates the vertex and texture coordinates of a rectangle.
Both generator functions live in OSShere.h.
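To give you a feel for what such a generator does, here is a minimal UV-sphere sketch in C. The names and signature are illustrative, not the actual OSShere.h API: it walks rings of latitude and longitude, maps longitude to the s texture coordinate and latitude to t (exactly what an equirectangular panorama expects), and emits two triangles per quad.

#include <math.h>
#include <stdint.h>
#include <stdlib.h>

// Illustrative sphere generator (modeled on the classic esGenSphere):
// returns the number of indices; fills vertex, texcoord and index arrays.
int genSphere(int numSlices, float radius,
              float **vertices, float **texCoords, uint16_t **indices)
{
    int numParallels = numSlices / 2;
    int numVertices  = (numParallels + 1) * (numSlices + 1);
    int numIndices   = numParallels * numSlices * 6;
    float angleStep  = (2.0f * M_PI) / (float)numSlices;

    *vertices  = malloc(sizeof(float) * 3 * numVertices);
    *texCoords = malloc(sizeof(float) * 2 * numVertices);
    *indices   = malloc(sizeof(uint16_t) * numIndices);

    for (int i = 0; i < numParallels + 1; i++) {
        for (int j = 0; j < numSlices + 1; j++) {
            int v = (i * (numSlices + 1) + j) * 3;
            // Spherical -> Cartesian coordinates
            (*vertices)[v + 0] = radius * sinf(angleStep * i) * sinf(angleStep * j);
            (*vertices)[v + 1] = radius * cosf(angleStep * i);
            (*vertices)[v + 2] = radius * sinf(angleStep * i) * cosf(angleStep * j);
            // Equirectangular mapping: longitude -> s, latitude -> t
            int t = (i * (numSlices + 1) + j) * 2;
            (*texCoords)[t + 0] = (float)j / (float)numSlices;
            (*texCoords)[t + 1] = (float)i / (float)numParallels;
        }
    }

    // Two triangles per quad on the latitude/longitude grid
    uint16_t *idx = *indices;
    for (int i = 0; i < numParallels; i++) {
        for (int j = 0; j < numSlices; j++) {
            *idx++ = i * (numSlices + 1) + j;
            *idx++ = (i + 1) * (numSlices + 1) + j;
            *idx++ = (i + 1) * (numSlices + 1) + (j + 1);
            *idx++ = i * (numSlices + 1) + j;
            *idx++ = (i + 1) * (numSlices + 1) + (j + 1);
            *idx++ = i * (numSlices + 1) + (j + 1);
        }
    }
    return numIndices;
}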
Uploading the Data to the GPU
// Upload the vertex index data
glGenBuffers(1, &_indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, _numIndices*sizeof(GLushort), _indices, GL_STATIC_DRAW);

// Upload the vertex coordinates
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, numVertices*strideNum*sizeof(GLfloat), _vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, strideNum, GL_FLOAT, GL_FALSE, strideNum*sizeof(GLfloat), NULL);

// Upload the texture coordinates
glGenBuffers(1, &_textureCoordBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _textureCoordBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*2*numVertices, _texCoords, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 2*sizeof(GLfloat), NULL);
The functions above were all covered in earlier tutorials, so I won't go over them again here.
-
Shader Programs
I split the shaders into two kinds: one renders panoramic video, the other renders normal video. There is little difference between them — the panorama shader just adds a view-transform matrix. (Panorama shader: ShadePanorama; normal shader: ShaderNormal.)
Below is the code for the panorama shader.
a. Vertex shader

attribute vec4 position;                 // vertex position attribute
attribute vec2 texCoord0;                // texture coordinate
varying vec2 texCoordVarying;            // output to the fragment shader, carries the texture coordinate
uniform mat4 modelViewProjectionMatrix;  // model-view-projection matrix

void main()
{
    texCoordVarying = texCoord0;
    gl_Position = modelViewProjectionMatrix * position;
}

b. Fragment shader

precision mediump float;      // default float precision
varying vec2 texCoordVarying;
uniform sampler2D sam2DY;     // texture sampler for Y
uniform sampler2D sam2DUV;    // texture sampler for UV

void main()
{
    mediump vec3 yuv;
    lowp vec3 rgb;
    // YUV -> RGB conversion matrix (column-major, as GLSL requires)
    mediump mat3 convert = mat3(1.164,  1.164, 1.164,
                                0.0,   -0.213, 2.112,
                                1.793, -0.533, 0.0);
    yuv.x  = texture2D(sam2DY,  texCoordVarying).r  - (16.0 / 255.0);
    yuv.yz = texture2D(sam2DUV, texCoordVarying).rg - vec2(0.5, 0.5);
    rgb = convert * yuv;
    gl_FragColor = vec4(rgb, 1);
}

If you want to learn more about the shading language, see my GLSL tutorial.
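As a quick sanity check on those constants — they are the video-range BT.709 set, and remember GLSL's mat3 constructor is column-major — you can run the same arithmetic on the CPU. A pure white pixel (Y = 235, U = V = 128) should come out as RGB ≈ (1, 1, 1). A sketch:

// Mirrors the shader math in plain C (video range: Y in [16,235], U/V centered on 128).
float y = 235.0f/255.0f - 16.0f/255.0f;   // luma with the offset removed
float u = 128.0f/255.0f - 0.5f;           // = 0.0
float v = 128.0f/255.0f - 0.5f;           // = 0.0
float r = 1.164f*y + 0.0f*u   + 1.793f*v; // ≈ 1.0
float g = 1.164f*y - 0.213f*u - 0.533f*v; // ≈ 1.0
float b = 1.164f*y + 2.112f*u + 0.0f*v;   // ≈ 1.0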
-
Creating the Shader Program
/**
 *  Create and compile the shader program
 *
 *  @param vshName vertex shader file name
 *  @param fshName fragment shader file name
 */
- (void)createShaderProgramVertexShaderName:(NSString *)vshName FragmentShaderName:(NSString *)fshName
{
    self.shaderManager = [[OSShaderManager alloc] init];

    // Compile the two shader files
    GLuint vertexShader, fragmentShader;
    NSURL *vertexShaderPath   = [[NSBundle mainBundle] URLForResource:vshName withExtension:@"vsh"];
    NSURL *fragmentShaderPath = [[NSBundle mainBundle] URLForResource:fshName withExtension:@"fsh"];
    if (![self.shaderManager compileShader:&vertexShader type:GL_VERTEX_SHADER URL:vertexShaderPath] ||
        ![self.shaderManager compileShader:&fragmentShader type:GL_FRAGMENT_SHADER URL:fragmentShaderPath]) {
        return;
    }

    // Note: attribute locations must be bound BEFORE linking the program.
    // The location values are up to you — just remember them, you'll need them later.
    [self.shaderManager bindAttribLocation:GLKVertexAttribPosition andAttribName:"position"];
    [self.shaderManager bindAttribLocation:GLKVertexAttribTexCoord0 andAttribName:"texCoord0"];

    // Link the two compiled shader objects into the program
    if (![self.shaderManager linkProgram]) {
        [self.shaderManager deleteShader:&vertexShader];
        [self.shaderManager deleteShader:&fragmentShader];
    }

    _textureBufferY  = [self.shaderManager getUniformLocation:"sam2DY"];
    _textureBufferUV = [self.shaderManager getUniformLocation:"sam2DUV"];
    _modelViewProjectionMatrixIndex = [self.shaderManager getUniformLocation:"modelViewProjectionMatrix"];

    [self.shaderManager detachAndDeleteShader:&vertexShader];
    [self.shaderManager detachAndDeleteShader:&fragmentShader];

    // Use the shader program
    [self.shaderManager useProgram];
}
The point of creating the shader program is to compile the shader source we just wrote, and to tie the shader's variables to our application code.
The OSShaderManager class used above is a thin wrapper I wrote around the usual compile/link plumbing for shader programs; its interface is shown below.
/**
 *  Compile a shader
 *  @param shader shader handle
 *  @param type   shader type
 *  @param URL    local path of the shader source
 *  @return whether compilation succeeded
 */
- (BOOL)compileShader:(GLuint *)shader type:(GLenum)type URL:(NSURL *)URL;

/**
 *  Link the program
 *  @return whether linking succeeded
 */
- (BOOL)linkProgram;

/**
 *  Validate the program
 *  @return whether validation succeeded
 */
- (BOOL)validateProgram;

/**
 *  Bind a shader attribute location
 *  @param index the attribute's location in the shader program
 *  @param name  attribute name
 */
- (void)bindAttribLocation:(GLuint)index andAttribName:(GLchar *)name;

/**
 *  Delete a shader
 */
- (void)deleteShader:(GLuint *)shader;

/**
 *  Get the location of a uniform
 *  @param name uniform name
 *  @return the location
 */
- (GLint)getUniformLocation:(const GLchar *)name;

/**
 *  Detach and delete a shader
 *  @param shader shader handle
 */
- (void)detachAndDeleteShader:(GLuint *)shader;

/**
 *  Use the program
 */
- (void)useProgram;

For the concrete implementations of these methods, please read the project files.
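To make the wrapper less of a black box, here is a minimal sketch of what a compileShader:type:URL: method like this typically does (the project's real implementation may differ in details):

// Hedged sketch: load GLSL source from disk, compile it, report failure.
- (BOOL)compileShader:(GLuint *)shader type:(GLenum)type URL:(NSURL *)URL
{
    NSError *error = nil;
    NSString *source = [NSString stringWithContentsOfURL:URL
                                                encoding:NSUTF8StringEncoding
                                                   error:&error];
    if (!source) {
        NSLog(@"Failed to load shader source: %@", error);
        return NO;
    }
    const GLchar *cSource = (const GLchar *)[source UTF8String];

    *shader = glCreateShader(type);             // GL_VERTEX_SHADER or GL_FRAGMENT_SHADER
    glShaderSource(*shader, 1, &cSource, NULL); // hand the source to the driver
    glCompileShader(*shader);

    GLint status = 0;
    glGetShaderiv(*shader, GL_COMPILE_STATUS, &status);
    if (status == 0) {
        glDeleteShader(*shader);
        return NO;
    }
    return YES;
}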
-
Pointing the Texture Samplers
glUniform1i(_textureBufferY, 0);  // 0 stands for GL_TEXTURE0
glUniform1i(_textureBufferUV, 1); // 1 stands for GL_TEXTURE1
A necessary reminder here: these two calls must come after the shader program has been linked successfully (and made current); call them any earlier and they simply have no effect.
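Put differently, the safe ordering looks like the sketch below — glUniform1i writes into the program that is currently in use, so useProgram must already have been called as well:

// Sketch of the required ordering:
[self.shaderManager linkProgram];   // 1. link — uniform locations exist only after this
[self.shaderManager useProgram];    // 2. glUseProgram(...) — make the program current
glUniform1i(_textureBufferY, 0);    // 3. now this sticks: sampler Y  -> GL_TEXTURE0
glUniform1i(_textureBufferUV, 1);   //    and this:        sampler UV -> GL_TEXTURE1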
-
So how do we split the YUV data apart and load it into the two samplers? Here we turn once again to a framework we've used before: CoreVideo. What is it for? Handling pixel data — exactly what we need. The frames we get from the video are of type CVPixelBufferRef:
<CVPixelBuffer 0x7fa27962c9c0 width=2048 height=1024 pixelFormat=420v iosurface=0x0 planes=2>
<Plane 0 width=2048 height=1024 bytesPerRow=2048>
<Plane 1 width=1024 height=512 bytesPerRow=2048>
<attributes=<CFBasicHash 0x7fa279623910 [0x10296ba40]>{type = immutable dict, count = 4, entries =>
  1 : <CFString 0x102d183b8 [0x10296ba40]>{contents = "PixelFormatType"} = <CFArray 0x7fa27c414bc0 [0x10296ba40]>{type = mutable-small, count = 1, values = (
        0 : <CFNumber 0xb000000343230763 [0x10296ba40]>{value = +875704438, type = kCFNumberSInt64Type}
      )}
  2 : <CFString 0x102d17e78 [0x10296ba40]>{contents = "Height"} = <CFNumber 0xb000000000004002 [0x10296ba40]>{value = +1024, type = kCFNumberSInt32Type}
  5 : <CFString 0x102d17d38 [0x10296ba40]>{contents = "PropagatedAttachments"} = <CFBasicHash 0x7fa27c51c590 [0x10296ba40]>{type = mutable dict, count = 4, entries =>
        0 : <CFString 0x102d18058 [0x10296ba40]>{contents = "CVImageBufferYCbCrMatrix"} = <CFString 0x102d18098 [0x10296ba40]>{contents = "ITU_R_601_4"}
        1 : <CFString 0x102d181b8 [0x10296ba40]>{contents = "CVImageBufferTransferFunction"} = <CFString 0x102d18078 [0x10296ba40]>{contents = "ITU_R_709_2"}
        2 : <CFString 0x106eadc88 [0x10296ba40]>{contents = "ColorInfoGuessedBy"} = <CFString 0x106eadca8 [0x10296ba40]>{contents = "VideoToolbox"}
        5 : <CFString 0x102d18138 [0x10296ba40]>{contents = "CVImageBufferColorPrimaries"} = <CFString 0x102d18178 [0x10296ba40]>{contents = "SMPTE_C"}}
  6 : <CFString 0x102d17e58 [0x10296ba40]>{contents = "Width"} = <CFNumber 0xb000000000008002 [0x10296ba40]>{value = +2048, type = kCFNumberSInt32Type}}
propagatedAttachments=<CFBasicHash 0x7fa27962caa0 [0x10296ba40]>{type = mutable dict, count = 10, entries =>
  0 : <CFString 0x106eadc88 [0x10296ba40]>{contents = "ColorInfoGuessedBy"} = <CFString 0x106eadca8 [0x10296ba40]>{contents = "VideoToolbox"}
  1 : <CFString 0x102d18058 [0x10296ba40]>{contents = "CVImageBufferYCbCrMatrix"} = <CFString 0x102d18098 [0x10296ba40]>{contents = "ITU_R_601_4"}
  2 : <CFString 0x102d17ed8 [0x10296ba40]>{contents = "CVFieldCount"} = <CFNumber 0xb000000000000012 [0x10296ba40]>{value = +1, type = kCFNumberSInt32Type}
  3 : <CFString 0x102d17f98 [0x10296ba40]>{contents = "CVPixelAspectRatio"} = <CFBasicHash 0x7fa279728c10 [0x10296ba40]>{type = immutable dict, count = 2, entries =>
        1 : <CFString 0x102d17fb8 [0x10296ba40]>{contents = "HorizontalSpacing"} = <CFNumber 0xb000000000000012 [0x10296ba40]>{value = +1, type = kCFNumberSInt32Type}
        2 : <CFString 0x102d17fd8 [0x10296ba40]>{contents = "VerticalSpacing"} = <CFNumber 0xb000000000000012 [0x10296ba40]>{value = +1, type = kCFNumberSInt32Type}}
  4 : <CFString 0x102d17d78 [0x10296ba40]>{contents = "QTMovieTime"} = <CFBasicHash 0x7fa27c51db40 [0x10296ba40]>{type = immutable dict, count = 2, entries =>
        0 : <CFString 0x102d17d98 [0x10296ba40]>{contents = "TimeValue"} = <CFNumber 0xb000000000000003 [0x10296ba40]>{value = +0, type = kCFNumberSInt64Type}
        1 : <CFString 0x102d17db8 [0x10296ba40]>{contents = "TimeScale"} = <CFNumber 0xb000000000075302 [0x10296ba40]>{value = +30000, type = kCFNumberSInt32Type}}
  5 : <CFString 0x102d18138 [0x10296ba40]>{contents = "CVImageBufferColorPrimaries"} = <CFString 0x102d18178 [0x10296ba40]>{contents = "SMPTE_C"}
  8 : <CFString 0x102d181b8 [0x10296ba40]>{contents = "CVImageBufferTransferFunction"} = <CFString 0x102d18078 [0x10296ba40]>{contents = "ITU_R_709_2"}
  9 : <CFString 0x102d18318 [0x10296ba40]>{contents = "CVImageBufferChromaSubsampling"} = <CFString 0x102d18278 [0x10296ba40]>{contents = "TopLeft"}
  10 : <CFString 0x102d18218 [0x10296ba40]>{contents = "CVImageBufferChromaLocationBottomField"} = <CFString 0x102d18338 [0x10296ba40]>{contents = "4:2:0"}
  12 : <CFString 0x102d181f8 [0x10296ba40]>{contents = "CVImageBufferChromaLocationTopField"} = <CFString 0x102d18338 [0x10296ba40]>{contents = "4:2:0"}}
nonPropagatedAttachments=<CFBasicHash 0x7fa27962ca60 [0x10296ba40]>{type = mutable dict, count = 0, entries => } >
First, let's look at the format of our pixel data.
From the log output above, we pick out the following:

<CVPixelBuffer 0x7fa27962c9c0 width=2048 height=1024 pixelFormat=420v iosurface=0x0 planes=2>
<Plane 0 width=2048 height=1024 bytesPerRow=2048>
<Plane 1 width=1024 height=512 bytesPerRow=2048>

The information we can read off is:
Pixel format: 420v
Data planes: 2
Plane 0: width=2048, height=1024
Plane 1: width=1024, height=512
From this we can conclude that the data is laid out as YY....YY....UV....UV: 2048×1024 Y samples followed by 1024×512 UV pairs. And from bytesPerRow we can tell that each Y, U, and V component occupies one byte.
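Rather than eyeballing the log, you can also query the plane layout at runtime with CoreVideo's plane accessors. A small sketch:

// Verify the plane layout of a CVPixelBufferRef programmatically.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
size_t planeCount = CVPixelBufferGetPlaneCount(pixelBuffer); // 2 for 420v
for (size_t i = 0; i < planeCount; i++) {
    NSLog(@"plane %zu: %zux%zu, bytesPerRow=%zu", i,
          CVPixelBufferGetWidthOfPlane(pixelBuffer, i),
          CVPixelBufferGetHeightOfPlane(pixelBuffer, i),
          CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, i));
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);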
Next up: how to load this data into our texture buffers.
The function that does this — CVOpenGLESTextureCacheCreateTextureFromImage, which you'll see in the code below — creates a texture object from a CVImageBufferRef. Its parameters:
allocator: the default, kCFAllocatorDefault, is fine
textureCache: a texture cache object, which we have to create ourselves
sourceImage: our CVImageBufferRef data
textureAttributes: texture attributes; may be NULL
target: the texture type (GL_TEXTURE_2D or GL_RENDERBUFFER)
internalFormat: the internal data format — how many components each texel carries
width: the width of the texture
height: the height of the texture
format: the format of the pixel data
type: the data type
planeIndex: the plane index
Now let's look at our code:
// Activate texture unit 0 (Y plane)
glActiveTexture(GL_TEXTURE0);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                   _videoTextureCache,
                                                   pixelBuffer,
                                                   NULL,
                                                   GL_TEXTURE_2D,
                                                   GL_RED_EXT,
                                                   width,
                                                   height,
                                                   GL_RED_EXT,
                                                   GL_UNSIGNED_BYTE,
                                                   0,
                                                   &_lumaTexture);
if (err) {
    NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}
glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// UV plane: activate texture unit 1 (half resolution)
glActiveTexture(GL_TEXTURE1);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                   _videoTextureCache,
                                                   pixelBuffer,
                                                   NULL,
                                                   GL_TEXTURE_2D,
                                                   GL_RG_EXT,
                                                   width / 2,
                                                   height / 2,
                                                   GL_RG_EXT,
                                                   GL_UNSIGNED_BYTE,
                                                   1,
                                                   &_chromaTexture);
if (err) {
    NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}
glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

GL_RED_EXT means one component per texel and GL_RG_EXT means two; UV carries two components per texel, which is why we choose GL_RG_EXT.
As mentioned just now, the parameters require a texture cache, so next we create one ourselves:
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, self.eagContext, NULL, &_videoTextureCache);

With all the groundwork done, the only thing left is to display it.
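One housekeeping detail before drawing, following the pattern in Apple's CVOpenGLESTextureCache sample code: each frame, release the previous frame's texture objects and flush the cache before creating new ones, otherwise the cache keeps stale textures alive. A sketch:

// Per-frame cleanup before creating the new Y/UV textures.
if (_lumaTexture) {
    CFRelease(_lumaTexture);
    _lumaTexture = NULL;
}
if (_chromaTexture) {
    CFRelease(_chromaTexture);
    _chromaTexture = NULL;
}
// Flush the texture cache once per frame
CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);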
-
Rendering and Drawing
// Clear the color buffer
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);

if (_isVR) {
    // VR: draw the scene twice, side by side
    glViewport(0, 0, self.view.bounds.size.width, self.view.bounds.size.height*2);
    glDrawElements(GL_TRIANGLES, _numIndices, GL_UNSIGNED_SHORT, 0);
    glViewport(self.view.bounds.size.width, 0, self.view.bounds.size.width, self.view.bounds.size.height*2);
    glDrawElements(GL_TRIANGLES, _numIndices, GL_UNSIGNED_SHORT, 0);
} else {
    // Single screen
    glViewport(0, 0, self.view.bounds.size.width*2, self.view.bounds.size.height*2);
    glDrawElements(GL_TRIANGLES, _numIndices, GL_UNSIGNED_SHORT, 0);
}
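A note on the *2 factors above: glViewport works in pixels, while UIView bounds are measured in points, so the hard-coded 2 presumably assumes a Retina scale factor of 2. A more portable sketch would query the view's scale factor instead:

// Derive the pixel-space viewport from the view's scale factor.
CGFloat scale = self.view.contentScaleFactor; // 2.0 on most Retina devices
GLsizei w = (GLsizei)(self.view.bounds.size.width  * scale);
GLsizei h = (GLsizei)(self.view.bounds.size.height * scale);
if (_isVR) {
    glViewport(0,     0, w / 2, h); // left eye
    glDrawElements(GL_TRIANGLES, _numIndices, GL_UNSIGNED_SHORT, 0);
    glViewport(w / 2, 0, w / 2, h); // right eye
    glDrawElements(GL_TRIANGLES, _numIndices, GL_UNSIGNED_SHORT, 0);
} else {
    glViewport(0, 0, w, h);
    glDrawElements(GL_TRIANGLES, _numIndices, GL_UNSIGNED_SHORT, 0);
}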
At this point, the video can be displayed.
-
Initializing the View Matrices
- (void)initModelViewProjectMatrix
{
    // Create the projection matrix
    float aspect = fabs(self.view.bounds.size.width / self.view.bounds.size.height);
    _projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(OSVIEW_CORNER), aspect, 0.1f, 400.0f);
    _projectionMatrix = GLKMatrix4Rotate(_projectionMatrix, ES_PI, 1.0f, 0.0f, 0.0f);

    // Create the model matrix
    _modelViewMatrix = GLKMatrix4Identity;
    float scale = OSSphereScale;
    _modelViewMatrix = GLKMatrix4Scale(_modelViewMatrix, scale, scale, scale);

    // The final matrix handed to GLSL
    _modelViewProjectionMatrix = GLKMatrix4Multiply(_projectionMatrix, _modelViewMatrix);
    glUniformMatrix4fv(_modelViewProjectionMatrixIndex, 1, GL_FALSE, _modelViewProjectionMatrix.m);
}

-
Panorama: Single-Screen Mode
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (self.isVR || self.vedioType == OSNormal) return;
    UITouch *touch = [touches anyObject];
    float distX = [touch locationInView:touch.view].x - [touch previousLocationInView:touch.view].x;
    float distY = [touch locationInView:touch.view].y - [touch previousLocationInView:touch.view].y;
    distX *= -0.005;
    distY *= -0.005;
    self.fingerRotationX += distY * OSVIEW_CORNER / 100;
    self.fingerRotationY -= distX * OSVIEW_CORNER / 100;

    _modelViewMatrix = GLKMatrix4Identity;
    float scale = OSSphereScale;
    _modelViewMatrix = GLKMatrix4Scale(_modelViewMatrix, scale, scale, scale);
    _modelViewMatrix = GLKMatrix4RotateX(_modelViewMatrix, self.fingerRotationX);
    _modelViewMatrix = GLKMatrix4RotateY(_modelViewMatrix, self.fingerRotationY);
    _modelViewProjectionMatrix = GLKMatrix4Multiply(_projectionMatrix, _modelViewMatrix);
    glUniformMatrix4fv(_modelViewProjectionMatrixIndex, 1, GL_FALSE, _modelViewProjectionMatrix.m);
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (self.isVR || self.vedioType == OSNormal) return;
    for (UITouch *touch in touches) {
        [self.currentTouches removeObject:touch];
    }
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    for (UITouch *touch in touches) {
        [self.currentTouches removeObject:touch];
    }
}
Gestures steer the model-view matrix.
-
Panorama: VR Mode
- (void)startMotionManager
{
    self.motionManager = [[CMMotionManager alloc] init];
    self.motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;
    self.motionManager.gyroUpdateInterval = 1.0f / 60;
    self.motionManager.showsDeviceMovementDisplay = YES;
    [self.motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXArbitraryCorrectedZVertical];
    self.referenceAttitude = nil;
    [self.motionManager startGyroUpdatesToQueue:[[NSOperationQueue alloc] init]
                                    withHandler:^(CMGyroData * _Nullable gyroData, NSError * _Nullable error) {
        if (self.isVR) {
            [self calculateModelViewProjectMatrixWithDeviceMotion:self.motionManager.deviceMotion];
        }
    }];
    self.referenceAttitude = self.motionManager.deviceMotion.attitude;
}

- (void)calculateModelViewProjectMatrixWithDeviceMotion:(CMDeviceMotion *)deviceMotion
{
    _modelViewMatrix = GLKMatrix4Identity;
    float scale = OSSphereScale;
    _modelViewMatrix = GLKMatrix4Scale(_modelViewMatrix, scale, scale, scale);

    if (deviceMotion != nil) {
        CMAttitude *attitude = deviceMotion.attitude;
        if (self.referenceAttitude != nil) {
            [attitude multiplyByInverseOfAttitude:self.referenceAttitude];
        } else {
            self.referenceAttitude = deviceMotion.attitude;
        }
        float cRoll  = attitude.roll;
        float cPitch = attitude.pitch;
        _modelViewMatrix = GLKMatrix4RotateX(_modelViewMatrix, -cRoll);
        _modelViewMatrix = GLKMatrix4RotateY(_modelViewMatrix, -cPitch * 3);
        _modelViewProjectionMatrix = GLKMatrix4Multiply(_projectionMatrix, _modelViewMatrix);
        // This call must run on the main thread.
        dispatch_async(dispatch_get_main_queue(), ^{
            glUniformMatrix4fv(_modelViewProjectionMatrixIndex, 1, GL_FALSE, _modelViewProjectionMatrix.m);
        });
    }
}
The motion sensors steer the matrix.
I don't want to dig into manipulating matrices just yet; later I'll write dedicated posts on matrix transforms and on using the motion sensors, because both matter enormously in games, VR, and AR alike. That's it for today — here are a few screenshots to enjoy.
(Screenshots: a normal video in panorama mode · a normal video in dual-screen mode · a panoramic video in VR mode · panorama.gif)
If you need the code, it's here and here.
Panorama Player — Approach 2
You can also build a panorama player with SceneKit; if you'd like to know more, see here.
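For the curious, here is a hedged sketch of that SceneKit route (names and sizes are illustrative; videoURL is a placeholder): put the camera at the center of a sphere and texture the sphere's inside with the video through a SpriteKit video node.

#import <AVFoundation/AVFoundation.h>
#import <SceneKit/SceneKit.h>
#import <SpriteKit/SpriteKit.h>

SCNScene *scene = [SCNScene scene];

// Camera at the center of the sphere
SCNNode *cameraNode = [SCNNode node];
cameraNode.camera = [SCNCamera camera];
cameraNode.position = SCNVector3Make(0, 0, 0);
[scene.rootNode addChildNode:cameraNode];

// Play the video into an SKScene, which SceneKit accepts as material contents
AVPlayer *player = [AVPlayer playerWithURL:videoURL]; // videoURL: placeholder
SKScene *skScene = [SKScene sceneWithSize:CGSizeMake(2048, 1024)];
SKVideoNode *videoNode = [SKVideoNode videoNodeWithAVPlayer:player];
videoNode.position = CGPointMake(skScene.size.width / 2, skScene.size.height / 2);
videoNode.size = skScene.size;
[skScene addChild:videoNode];

// Sphere textured on the inside with the video
SCNSphere *sphere = [SCNSphere sphereWithRadius:10.0];
sphere.firstMaterial.diffuse.contents = skScene;
sphere.firstMaterial.doubleSided = YES; // render the inside faces too
[scene.rootNode addChildNode:[SCNNode nodeWithGeometry:sphere]];
[videoNode play];

// Finally, assign `scene` to an SCNView's scene property, and drive the
// camera from gestures or CoreMotion just as in the OpenGL version.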