Running Hadoop Benchmarks from LoadRunner over an SSH Connection to Linux
版權(quán)聲明:本文為博主原創(chuàng)文章,未經(jīng)博主允許不得轉(zhuǎn)載。歡迎訪問(wèn)我的博客 https://blog.csdn.net/smooth00/article/details/73796622
Having LoadRunner call Hadoop's test jars over an SSH connection to run benchmarks may look like a trick, and the practical value is perhaps limited. Still, the approach shows how much life LoadRunner has in it, and that it can also be put to work testing Linux systems and open-source software, so treat this post as a starting point.
1. Create a new script in LoadRunner (this post uses LoadRunner 11), choosing the protocol type Java -> Java Vuser.
2、在Run-time Settings設(shè)置JDK路徑,由于LoadRunner11不支持jdk1.8,本次測(cè)試是拷貝了一份低版本的JDK1.6,所以路徑選擇固定路徑模式(Use specified JDK),當(dāng)然也可以將JDK1.6配到環(huán)境變量中,LoadRunner就可以直接調(diào)了。
3、上網(wǎng)下載個(gè)jsch-0.1.41.jar,然后在LoadRunner中加載jsch的jar包
4、在Loadrunner中以Java Vuser協(xié)議創(chuàng)建腳本,腳本樣例如下:
/*
 * LoadRunner Java script. (Build: _build_number_)
 *
 * Script Description: run Hadoop benchmark jobs on a remote Linux host over SSH.
 */
import lrapi.lr;

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.Vector;

import com.jcraft.jsch.Channel;
import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;

public class Actions
{
    private String ipAddress;
    private String username;
    private String password;
    public static final int DEFAULT_SSH_PORT = 22;
    private Vector<String> stdout;
    private Session session = null;
    private Channel channel = null;

    // Open an SSH session to the target host and create an exec channel.
    public void SSHCommandExecutor(final String ipAddress, final String username, final String password) {
        this.ipAddress = ipAddress;
        this.username = username;
        this.password = password;
        stdout = new Vector<String>();
        JSch jsch = new JSch();
        try {
            session = jsch.getSession(username, ipAddress, DEFAULT_SSH_PORT);
            session.setPassword(password);
            session.setConfig("StrictHostKeyChecking", "no");
            session.connect();
            // Create the exec channel; it is connected later, once a command is set.
            channel = session.openChannel("exec");
        } catch (JSchException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // Run a shell command on the remote host and collect its output into stdout.
    public int CommandExecute(final String command) {
        int returnCode = 0;
        try {
            ((ChannelExec) channel).setCommand(command);
            channel.setInputStream(null);
            BufferedReader input = new BufferedReader(new InputStreamReader(channel.getInputStream()));
            channel.connect();
            System.out.println("The remote command is: " + command);
            // Read the output of the remote command.
            String line;
            while ((line = input.readLine()) != null) {
                stdout.add(line);
            }
            input.close();
            // The exit status is only valid after the channel has closed.
            if (channel.isClosed()) {
                returnCode = channel.getExitStatus();
            }
            // Disconnecting is deferred to end().
            // channel.disconnect();
            // session.disconnect();
        } catch (JSchException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return returnCode;
    }

    public int init() throws Throwable {
        SSHCommandExecutor("172.17.2.12", "root", "123456");
        return 0;
    } // end of init

    public int action() throws Throwable {
        lr.start_transaction("exe_command");
        // ---------------- TestDFSIO: write two 1 MB files ----------------
        /*
        String commandStr = "yarn jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.7.1.2.3.2.0-2950-tests.jar TestDFSIO -write -nrFiles 2 -fileSize 1";
        String commandLog = "cat -b /root/TestDFSIO_results.log |tail -n10";
        */
        // ---------------- TestDFSIO -clean ----------------
        // ---- NNBench: create 12 files using 12 mappers and 6 reducers ----
        String commandStr = "yarn jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.7.1.2.3.2.0-2950-tests.jar nnbench \\";
        commandStr += "\n -operation create_write -maps 12 -reduces 6 -blockSize 1 \\";
        commandStr += "\n -bytesToWrite 0 -numberOfFiles 12 -replicationFactorPerFile 3 \\";
        commandStr += "\n -readFileAfterOpen true -baseDir /benchmarks/NNBench-`hostname -s`";
        String commandLog = "cat -b /root/NNBench_results.log |tail -n30";
        // ---- mrbench: run a small job 10 times with 2 maps and 1 reduce ----
        /*
        String commandStr = "yarn jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.7.1.2.3.2.0-2950-tests.jar mrbench -numRuns 10 -maps 2 -reduces 1";
        String commandLog = ""; // no log file; this job prints its log as it runs
        */
        // commandStr is the Hadoop command to run; commandLog prints the result log afterwards.
        CommandExecute(commandStr + " \n " + commandLog);
        for (String str : stdout) {
            System.out.println(str);
        }
        lr.end_transaction("exe_command", lr.AUTO);
        return 0;
    } // end of action

    public int end() throws Throwable {
        try {
            channel.disconnect();
            session.disconnect();
        } catch (Exception e) {
            System.out.println(e);
        }
        return 0;
    } // end of end
}

The core of this script is CommandExecute, which executes shell commands on Linux over SSH. Normally a successful run prints its log to standard output, but some of the Hadoop benchmark jars write their logs to files in the background. Appending a command such as cat -b /root/NNBench_results.log |tail -n30 (you need to know the exact path of the log file) prints the log of the run that just finished; for NNBench, the last 30 lines of the file are the current run's log.
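If you want the script to report benchmark figures rather than dump raw log text, the tailed log lines collected in stdout can be parsed. Below is a minimal sketch that is not part of the original script; it assumes NNBench result lines follow a "label: value" shape, and the sample lines in main are illustrative, not real benchmark output:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NNBenchLogParser {
    // Split each "label: value" line at its last colon and collect the pairs.
    // Lines without a colon (or with nothing after it) are skipped.
    static Map<String, String> parse(Iterable<String> lines) {
        Map<String, String> metrics = new LinkedHashMap<String, String>();
        for (String line : lines) {
            int idx = line.lastIndexOf(':');
            if (idx > 0 && idx < line.length() - 1) {
                metrics.put(line.substring(0, idx).trim(),
                            line.substring(idx + 1).trim());
            }
        }
        return metrics;
    }

    public static void main(String[] args) {
        // Hypothetical sample lines in the assumed "label: value" format.
        java.util.List<String> sample = java.util.Arrays.asList(
                "           TPS: Create/Write/Close: 42",
                "Avg exec time (ms): Create/Write/Close: 118.5");
        System.out.println(parse(sample));
    }
}
```

The parsed values could then be fed to lr.user_data_point inside action() so the figures show up in LoadRunner's analysis graphs alongside the transaction times.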
5、運(yùn)行腳本,測(cè)試通過(guò)的輸出如下所示:
6、對(duì)于Loadrunner通過(guò)C的方式遠(yuǎn)程連接linux,也是有方法的,具體可以參考別人的文章:
http://blog.csdn.net/hualusiyu/article/details/8530319