How to Easily Recognize CAPTCHAs with Serverless?

Published: 2025/3/20

Author | 江昱
Source | Serverless official account

Preface

Serverless has drawn attention ever since the concept was proposed, and in recent years it has shown unprecedented vitality: engineers in every field are trying to combine the Serverless architecture with their own work to capture the "technical dividend" it brings.

CAPTCHA stands for "Completely Automated Public Turing test to tell Computers and Humans Apart": a fully automated public program that distinguishes human users from computers. It helps prevent password brute-forcing, ticket scalping, and forum spam, and effectively stops an attacker from making endless automated login attempts against a particular registered account. Most websites use CAPTCHAs today, and the mechanism is simple to implement: the challenge is generated and graded by a computer, but only a human is supposed to be able to solve it, so whoever answers correctly is assumed to be human. In short, a CAPTCHA is a "code" that verifies whether a visitor is a person or a machine.

**So what sparks fly when CAPTCHA recognition, a classic AI problem, meets the Serverless architecture?** This article implements CAPTCHA recognition with a Serverless architecture and a convolutional neural network (CNN).

A Brief Look at CAPTCHAs

CAPTCHAs have evolved quickly: from plain digit codes, to digits plus letters, to digits, letters, and Chinese characters, and on to image-based challenges, the raw material keeps growing. Their forms vary just as widely: typing, clicking, dragging, SMS codes, voice codes, and more.

Bilibili's login CAPTCHA alone includes several modes, for example sliding a puzzle piece into place:

or clicking characters in a specified order:

Baidu Tieba, Zhihu, Google, and other sites each use different CAPTCHAs, such as selecting the characters printed upright, selecting the images that contain a specified object, or clicking the characters in an image in order.

How a CAPTCHA is recognized depends on its type; the simplest is probably the original text CAPTCHA:

Even text CAPTCHAs differ widely: plain digits, digits plus letters, Chinese characters, CAPTCHAs that embed an arithmetic problem, and simple CAPTCHAs made complex by added noise.

CAPTCHA Recognition

1. Simple CAPTCHA recognition

CAPTCHA recognition is an old research area; in short, it is the process of turning the characters in an image into text. In recent years, with the growth of big data, crawler engineers battling anti-scraping measures have pushed the requirements for CAPTCHA recognition ever higher. In the era of simple CAPTCHAs, recognition mostly targeted text images: the image was segmented so that each character could be cropped out, each crop was compared against known samples for the most likely match, and the matches were concatenated. For example, take a CAPTCHA:

binarize it (and apply similar preprocessing):

then segment it:

and finally recognize each segment and concatenate the results. Because each character is recognized on its own, this approach is comparatively easy.
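The binarize-and-segment pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not code from the article: it thresholds a grayscale array and splits characters wherever the vertical projection (per-column ink count) drops to zero.

```python
import numpy as np

def binarize(gray, threshold=128):
    """Map a grayscale image (H, W) to 0/1: 1 where the pixel is darker than threshold."""
    return (gray < threshold).astype(np.uint8)

def split_columns(binary):
    """Split a binarized CAPTCHA into per-character slices using the
    vertical projection: columns containing no ink separate characters."""
    ink_per_column = binary.sum(axis=0)
    segments, start = [], None
    for x, ink in enumerate(ink_per_column):
        if ink and start is None:
            start = x              # a character begins
        elif not ink and start is not None:
            segments.append(binary[:, start:x])  # a character ends
            start = None
    if start is not None:          # character touching the right edge
        segments.append(binary[:, start:])
    return segments

# Toy 4x9 "image": two bright blobs separated by blank columns
demo = np.array([
    [0, 255, 255, 0, 0, 0, 255, 0, 0],
    [0, 255, 255, 0, 0, 0, 255, 0, 0],
    [0, 255, 255, 0, 0, 0, 255, 0, 0],
    [0,   0,   0, 0, 0, 0,   0, 0, 0],
], dtype=np.uint8)
# Here "ink" is bright, so invert before thresholding
chars = split_columns(binarize(255 - demo))
print(len(chars))  # 2
```

Each slice would then be compared against stored templates and the best matches concatenated.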

As time went on and these simple CAPTCHAs could no longer settle the "human or machine" question, CAPTCHAs got a small upgrade: interference lines were added, or the characters were heavily distorted with strong color-block noise on top, as in the CAPTCHA on the Dynadot website:

With distorted, overlapping glyphs plus interference lines and color blocks, simple segment-and-match recognition struggles to perform well; this is where deep learning can deliver good results instead.

2. CAPTCHA recognition with a CNN

A convolutional neural network (Convolutional Neural Network, CNN) is a feed-forward neural network whose artificial neurons respond to surrounding units, which makes it well suited to large-scale image processing. A CNN includes convolutional layers and pooling layers.

As the figure shows, the left side is a traditional neural network, whose basic structure is an input layer, hidden layers, and an output layer. The right side is a convolutional neural network, built from an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer. A CNN is really an extension of the plain neural network; structurally, a plain CNN is no different from a plain NN (though complex CNNs with special structures diverge considerably). Compared with a traditional network, a CNN greatly reduces the number of parameters, so we can train a better model with fewer parameters, a classic case of getting more for less, while also effectively avoiding overfitting. Likewise, because filter parameters are shared, a feature can still be recognized after the image is shifted; this "translation invariance" makes the model more robust.
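The parameter savings from weight sharing can be made concrete with a quick back-of-the-envelope calculation. The layer sizes below are illustrative choices (a 60×160 grayscale input, 1024 hidden units, a 3×3 filter bank with 32 output channels), not the article's exact model:

```python
# Fully connected: every input pixel connects to every hidden unit.
height, width, hidden = 60, 160, 1024
dense_params = height * width * hidden + hidden  # weights + biases

# Convolutional: one 3x3 filter bank shared across all spatial positions.
in_channels, out_channels, k = 1, 32, 3
conv_params = k * k * in_channels * out_channels + out_channels

print(dense_params)  # 9831424
print(conv_params)   # 320
```

Tens of thousands of times fewer parameters for the first layer, which is exactly the "fewer parameters, better model" point made above.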

1) CAPTCHA generation

Generating CAPTCHAs is a crucial step: these images will serve as our training and test sets, and they also determine what kind of CAPTCHA the final model will be able to recognize.

```python
# coding:utf-8
import random
import numpy as np
from PIL import Image
from captcha.image import ImageCaptcha

# Digits plus lower- and upper-case letters
CAPTCHA_LIST = [eve for eve in "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"]
CAPTCHA_LEN = 4        # CAPTCHA length
CAPTCHA_HEIGHT = 60    # CAPTCHA height
CAPTCHA_WIDTH = 160    # CAPTCHA width

randomCaptchaText = lambda char=CAPTCHA_LIST, size=CAPTCHA_LEN: "".join(
    [random.choice(char) for _ in range(size)])

def genCaptchaTextImage(width=CAPTCHA_WIDTH, height=CAPTCHA_HEIGHT, save=None):
    image = ImageCaptcha(width=width, height=height)
    captchaText = randomCaptchaText()
    if save:
        image.write(captchaText, './img/%s.jpg' % captchaText)
    return captchaText, np.array(Image.open(image.generate(captchaText)))

print(genCaptchaTextImage(save=True))
```

Running the code above generates simple digit-and-letter CAPTCHAs.

2) Model training

The training code follows (parts of it come from the Internet).

util.py holds the shared helper methods:

```python
# -*- coding:utf-8 -*-
import numpy as np
from captcha_gen import genCaptchaTextImage
from captcha_gen import CAPTCHA_LIST, CAPTCHA_LEN, CAPTCHA_HEIGHT, CAPTCHA_WIDTH

# Convert the image to grayscale: 3 channels down to 1
convert2Gray = lambda img: np.mean(img, -1) if len(img.shape) > 2 else img
# Convert a CAPTCHA vector back to text
vec2Text = lambda vec, captcha_list=CAPTCHA_LIST: ''.join([captcha_list[int(v)] for v in vec])

def text2Vec(text, captchaLen=CAPTCHA_LEN, captchaList=CAPTCHA_LIST):
    """Convert CAPTCHA text to a one-hot vector"""
    vector = np.zeros(captchaLen * len(captchaList))
    for i in range(len(text)):
        vector[captchaList.index(text[i]) + i * len(captchaList)] = 1
    return vector

def getNextBatch(batchCount=60, width=CAPTCHA_WIDTH, height=CAPTCHA_HEIGHT):
    """Generate a batch of training images"""
    batchX = np.zeros([batchCount, width * height])
    batchY = np.zeros([batchCount, CAPTCHA_LEN * len(CAPTCHA_LIST)])
    for i in range(batchCount):
        text, image = genCaptchaTextImage()
        image = convert2Gray(image)
        # Flatten the image; the label goes into the same row of the second array
        batchX[i, :] = image.flatten() / 255
        batchY[i, :] = text2Vec(text)
    return batchX, batchY

# print(getNextBatch(batchCount=1))
```
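To make the one-hot scheme used by `text2Vec` and `vec2Text` concrete, here is a self-contained round trip that follows the same layout (character `i` occupies slots `[i*N, (i+1)*N)`); the shortened digit-only charset is purely for illustration:

```python
import numpy as np

CHARSET = list("0123456789")  # shortened for illustration; the article uses digits + letters
CAP_LEN = 4

def text_to_vec(text):
    """One-hot encode: position i of the text sets one slot in block i."""
    n = len(CHARSET)
    vec = np.zeros(CAP_LEN * n)
    for i, ch in enumerate(text):
        vec[i * n + CHARSET.index(ch)] = 1
    return vec

def vec_to_text(vec):
    """Decode by taking the argmax inside each length-N block."""
    n = len(CHARSET)
    blocks = vec.reshape(CAP_LEN, n)
    return "".join(CHARSET[int(i)] for i in blocks.argmax(axis=1))

v = text_to_vec("3092")
print(vec_to_text(v))  # 3092
```

The decoder only needs an argmax per block, which is exactly how the prediction step later reads the network's output.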

model_train.py trains the model. It defines the model's basic shape: a three-layer convolutional network with a 60×160 input image. After the first convolution the feature map is still 60×160, and the first pooling halves it to 30×80; the second convolution keeps 30×80 and the second pooling gives 15×40; the third convolution keeps 15×40 and the third pooling (with 'SAME' padding) gives 8×20. After the three convolution/pooling stages, the image has become an 8×20×64 volume that is flattened for the fully connected layer. During training, the project runs a test batch every 100 steps to measure accuracy:
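The shape bookkeeping can be double-checked in a few lines. With 'SAME' padding, a stride-2 max pool yields ceil(n/2) outputs per dimension, which is why the odd height 15 pools to 8. This is a sketch of the arithmetic only, not TensorFlow itself:

```python
import math

def same_pool(h, w, stride=2):
    """Output size of a stride-2 max pool with 'SAME' padding (TensorFlow's rule)."""
    return math.ceil(h / stride), math.ceil(w / stride)

shape = (60, 160)
for layer in range(3):
    # A 3x3 'SAME' convolution preserves spatial size; only pooling shrinks it
    shape = same_pool(*shape)
    print("after pool", layer + 1, shape)
# after pool 1 (30, 80)
# after pool 2 (15, 40)
# after pool 3 (8, 20)

flat = shape[0] * shape[1] * 64  # 64 feature maps after the third convolution
print(flat)  # 10240
```

This matches the dynamic computation in `cnnGraph` below, which reads the height and width off `hDrop3.shape` rather than hard-coding them.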

```python
# -*- coding:utf-8 -*-
import tensorflow.compat.v1 as tf
from datetime import datetime
from util import getNextBatch
from captcha_gen import CAPTCHA_HEIGHT, CAPTCHA_WIDTH, CAPTCHA_LEN, CAPTCHA_LIST

tf.compat.v1.disable_eager_execution()

variable = lambda shape, alpha=0.01: tf.Variable(alpha * tf.random_normal(shape))
conv2d = lambda x, w: tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
maxPool2x2 = lambda x: tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
optimizeGraph = lambda y, y_conv: tf.train.AdamOptimizer(1e-3).minimize(
    tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=y_conv)))
hDrop = lambda image, weight, bias, keepProb: tf.nn.dropout(
    maxPool2x2(tf.nn.relu(conv2d(image, variable(weight, 0.01)) + variable(bias, 0.1))), keepProb)

def cnnGraph(x, keepProb, size, captchaList=CAPTCHA_LIST, captchaLen=CAPTCHA_LEN):
    """Three-layer convolutional neural network"""
    imageHeight, imageWidth = size
    xImage = tf.reshape(x, shape=[-1, imageHeight, imageWidth, 1])
    hDrop1 = hDrop(xImage, [3, 3, 1, 32], [32], keepProb)
    hDrop2 = hDrop(hDrop1, [3, 3, 32, 64], [64], keepProb)
    hDrop3 = hDrop(hDrop2, [3, 3, 64, 64], [64], keepProb)
    # Fully connected layer
    imageHeight = int(hDrop3.shape[1])
    imageWidth = int(hDrop3.shape[2])
    wFc = variable([imageHeight * imageWidth * 64, 1024], 0.01)  # 64 feature maps in, 1024 units out
    bFc = variable([1024], 0.1)
    hDrop3Re = tf.reshape(hDrop3, [-1, imageHeight * imageWidth * 64])
    hFc = tf.nn.relu(tf.matmul(hDrop3Re, wFc) + bFc)
    hDropFc = tf.nn.dropout(hFc, keepProb)
    # Output layer
    wOut = variable([1024, len(captchaList) * captchaLen], 0.01)
    bOut = variable([len(captchaList) * captchaLen], 0.1)
    yConv = tf.matmul(hDropFc, wOut) + bOut
    return yConv

def accuracyGraph(y, yConv, width=len(CAPTCHA_LIST), height=CAPTCHA_LEN):
    """Accuracy graph: compare predicted values against labels"""
    maxPredictIdx = tf.argmax(tf.reshape(yConv, [-1, height, width]), 2)
    maxLabelIdx = tf.argmax(tf.reshape(y, [-1, height, width]), 2)
    correct = tf.equal(maxPredictIdx, maxLabelIdx)  # element-wise equality
    return tf.reduce_mean(tf.cast(correct, tf.float32))

def train(height=CAPTCHA_HEIGHT, width=CAPTCHA_WIDTH, ySize=len(CAPTCHA_LIST) * CAPTCHA_LEN):
    """CNN training loop"""
    accRate = 0.95
    x = tf.placeholder(tf.float32, [None, height * width])
    y = tf.placeholder(tf.float32, [None, ySize])
    keepProb = tf.placeholder(tf.float32)
    yConv = cnnGraph(x, keepProb, (height, width))
    optimizer = optimizeGraph(y, yConv)
    accuracy = accuracyGraph(y, yConv)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())  # initialize variables
        step = 0  # training step
        while True:
            batchX, batchY = getNextBatch(64)
            sess.run(optimizer, feed_dict={x: batchX, y: batchY, keepProb: 0.75})
            # Run a test batch every 100 training steps
            if step % 100 == 0:
                batchXTest, batchYTest = getNextBatch(100)
                acc = sess.run(accuracy, feed_dict={x: batchXTest, y: batchYTest, keepProb: 1.0})
                print(datetime.now().strftime('%c'), ' step:', step, ' accuracy:', acc)
                # Accuracy target reached: save the model
                if acc > accRate:
                    modelPath = "./model/captcha.model"
                    saver.save(sess, modelPath, global_step=step)
                    accRate += 0.01
                    if accRate > 0.90:
                        break
            step = step + 1

train()
```

With this in place we can train the model on a local machine. To speed up training, I set the accRate section of the code to:

```python
if accRate > 0.90:
    break
```

That is, once accuracy exceeds 90%, the system stops automatically and saves the model.

Now training can begin:

Training may take quite a while. When it finishes, we can plot the results and watch how accuracy changes as the step count grows:

The horizontal axis is the training step; the vertical axis is accuracy.

3. CAPTCHA recognition on the Serverless architecture

Now we consolidate the code above and rewrite it to fit the Function Compute handler conventions:

```python
# -*- coding:utf-8 -*-
# Core backend service
import base64
import json
import uuid
import tensorflow as tf
import random
import numpy as np
from PIL import Image
from captcha.image import ImageCaptcha

# Response wrapper
class Response:
    def __init__(self, start_response, response, errorCode=None):
        self.start = start_response
        responseBody = {
            'Error': {"Code": errorCode, "Message": response},
        } if errorCode else {
            'Response': response
        }
        # Attach a uuid to every response to ease later debugging
        responseBody['ResponseId'] = str(uuid.uuid1())
        print("Response: ", json.dumps(responseBody))
        self.response = json.dumps(responseBody)

    def __iter__(self):
        status = '200'
        response_headers = [('Content-type', 'application/json; charset=UTF-8')]
        self.start(status, response_headers)
        yield self.response.encode("utf-8")

# Digits plus lower- and upper-case letters (must match the training charset)
CAPTCHA_LIST = [eve for eve in "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"]
CAPTCHA_LEN = 4        # CAPTCHA length
CAPTCHA_HEIGHT = 60    # CAPTCHA height
CAPTCHA_WIDTH = 160    # CAPTCHA width

# Random string for temporary file names
randomStr = lambda num=5: "".join(random.sample('abcdefghijklmnopqrstuvwxyz', num))
randomCaptchaText = lambda char=CAPTCHA_LIST, size=CAPTCHA_LEN: "".join(
    [random.choice(char) for _ in range(size)])
# Convert the image to grayscale: 3 channels down to 1
convert2Gray = lambda img: np.mean(img, -1) if len(img.shape) > 2 else img
# Convert a CAPTCHA vector back to text
vec2Text = lambda vec, captcha_list=CAPTCHA_LIST: ''.join([captcha_list[int(v)] for v in vec])

variable = lambda shape, alpha=0.01: tf.Variable(alpha * tf.random_normal(shape))
conv2d = lambda x, w: tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
maxPool2x2 = lambda x: tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
optimizeGraph = lambda y, y_conv: tf.train.AdamOptimizer(1e-3).minimize(
    tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=y_conv)))
hDrop = lambda image, weight, bias, keepProb: tf.nn.dropout(
    maxPool2x2(tf.nn.relu(conv2d(image, variable(weight, 0.01)) + variable(bias, 0.1))), keepProb)

def genCaptchaTextImage(width=CAPTCHA_WIDTH, height=CAPTCHA_HEIGHT, save=None):
    image = ImageCaptcha(width=width, height=height)
    captchaText = randomCaptchaText()
    if save:
        image.write(captchaText, save)
    return captchaText, np.array(Image.open(image.generate(captchaText)))

def text2Vec(text, captcha_len=CAPTCHA_LEN, captcha_list=CAPTCHA_LIST):
    """Convert CAPTCHA text to a one-hot vector"""
    vector = np.zeros(captcha_len * len(captcha_list))
    for i in range(len(text)):
        vector[captcha_list.index(text[i]) + i * len(captcha_list)] = 1
    return vector

def getNextBatch(batch_count=60, width=CAPTCHA_WIDTH, height=CAPTCHA_HEIGHT):
    """Generate a batch of training images"""
    batch_x = np.zeros([batch_count, width * height])
    batch_y = np.zeros([batch_count, CAPTCHA_LEN * len(CAPTCHA_LIST)])
    for i in range(batch_count):
        text, image = genCaptchaTextImage()
        image = convert2Gray(image)
        # Flatten the image; the label goes into the same row of the second array
        batch_x[i, :] = image.flatten() / 255
        batch_y[i, :] = text2Vec(text)
    return batch_x, batch_y

def cnnGraph(x, keepProb, size, captchaList=CAPTCHA_LIST, captchaLen=CAPTCHA_LEN):
    """Three-layer convolutional neural network"""
    imageHeight, imageWidth = size
    xImage = tf.reshape(x, shape=[-1, imageHeight, imageWidth, 1])
    hDrop1 = hDrop(xImage, [3, 3, 1, 32], [32], keepProb)
    hDrop2 = hDrop(hDrop1, [3, 3, 32, 64], [64], keepProb)
    hDrop3 = hDrop(hDrop2, [3, 3, 64, 64], [64], keepProb)
    # Fully connected layer
    imageHeight = int(hDrop3.shape[1])
    imageWidth = int(hDrop3.shape[2])
    wFc = variable([imageHeight * imageWidth * 64, 1024], 0.01)  # 64 feature maps in, 1024 units out
    bFc = variable([1024], 0.1)
    hDrop3Re = tf.reshape(hDrop3, [-1, imageHeight * imageWidth * 64])
    hFc = tf.nn.relu(tf.matmul(hDrop3Re, wFc) + bFc)
    hDropFc = tf.nn.dropout(hFc, keepProb)
    # Output layer
    wOut = variable([1024, len(captchaList) * captchaLen], 0.01)
    bOut = variable([len(captchaList) * captchaLen], 0.1)
    yConv = tf.matmul(hDropFc, wOut) + bOut
    return yConv

def captcha2Text(image_list):
    """Recognize CAPTCHA images and return their text"""
    with tf.Session() as sess:
        saver.restore(sess, tf.train.latest_checkpoint('model/'))
        predict = tf.argmax(tf.reshape(yConv, [-1, CAPTCHA_LEN, len(CAPTCHA_LIST)]), 2)
        vector_list = sess.run(predict, feed_dict={x: image_list, keepProb: 1})
        vector_list = vector_list.tolist()
        text_list = [vec2Text(vector) for vector in vector_list]
        return text_list

x = tf.placeholder(tf.float32, [None, CAPTCHA_HEIGHT * CAPTCHA_WIDTH])
keepProb = tf.placeholder(tf.float32)
yConv = cnnGraph(x, keepProb, (CAPTCHA_HEIGHT, CAPTCHA_WIDTH))
saver = tf.train.Saver()

def handler(environ, start_response):
    try:
        request_body_size = int(environ.get('CONTENT_LENGTH', 0))
    except ValueError:
        request_body_size = 0
    requestBody = json.loads(environ['wsgi.input'].read(request_body_size).decode("utf-8"))
    imageName = randomStr(10)
    imagePath = "/tmp/" + imageName
    print("requestBody: ", requestBody)
    reqType = requestBody.get("type", None)
    if reqType == "get_captcha":
        genCaptchaTextImage(save=imagePath)
        with open(imagePath, 'rb') as f:
            data = base64.b64encode(f.read()).decode()
        return Response(start_response, {'image': data})
    if reqType == "get_text":
        # Fetch the uploaded picture
        print("Get picture")
        imageData = base64.b64decode(requestBody["image"])
        with open(imagePath, 'wb') as f:
            f.write(imageData)
        # Run prediction on the image just written to /tmp
        img = Image.open(imagePath)
        img = img.resize((160, 60), Image.ANTIALIAS)
        img = img.convert("RGB")
        img = np.asarray(img)
        image = convert2Gray(img)
        image = image.flatten() / 255
        return Response(start_response, {'result': captcha2Text([image])})
```

This function exposes two interfaces:

- Get a CAPTCHA: generates a CAPTCHA, for user testing
- Get a recognition result: recognizes a submitted CAPTCHA
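The request/response contract of these two interfaces can be exercised without deploying anything. Below is a hedged sketch of the payloads the handler expects, with field names taken from the code; the image bytes here are a fake placeholder, not a real CAPTCHA:

```python
import base64
import json

# "get_captcha" request body
get_captcha_req = json.dumps({"type": "get_captcha"})

# The backend returns the image bytes base64-encoded; simulate that here
fake_png = b"\x89PNG...not a real image"  # placeholder bytes
image_b64 = base64.b64encode(fake_png).decode()

# "get_text" request body: send the base64 image back for recognition
get_text_req = json.dumps({"type": "get_text", "image": image_b64})

# The handler decodes with base64.b64decode; the round trip is lossless
decoded = base64.b64decode(json.loads(get_text_req)["image"])
print(decoded == fake_png)  # True
```

The anonymous HTTP trigger configured later means these JSON bodies can be POSTed directly to the function's URL.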

The dependencies this code needs are:

```
tensorflow==1.13.1
numpy==1.19.4
scipy==1.5.4
pillow==8.0.1
captcha==0.3
```

In addition, to make the demo easier to try, a test page is provided. Its backing service uses the Python web framework Bottle:

```python
# -*- coding:utf-8 -*-
import os
import json
from bottle import route, run, static_file, request
import urllib.request

url = "http://" + os.environ.get("url")

@route('/')
def index():
    return static_file("index.html", root='html/')

@route('/get_captcha')
def getCaptcha():
    data = json.dumps({"type": "get_captcha"}).encode("utf-8")
    reqAttr = urllib.request.Request(data=data, url=url)
    return urllib.request.urlopen(reqAttr).read().decode("utf-8")

@route('/get_captcha_result', method='POST')
def getCaptchaResult():
    data = json.dumps({"type": "get_text",
                       "image": json.loads(request.body.read().decode("utf-8"))["image"]}).encode("utf-8")
    reqAttr = urllib.request.Request(data=data, url=url)
    return urllib.request.urlopen(reqAttr).read().decode("utf-8")

run(host='0.0.0.0', debug=False, port=9000)
```

該后端服務(wù),所需依賴:

```
bottle==0.12.19
```

The front-end page code:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>驗證碼識別測試系統</title>
    <link href="https://www.bootcss.com/p/layoutit/css/bootstrap-combined.min.css" rel="stylesheet">
    <script>
        var image = undefined
        function getCaptcha() {
            const xmlhttp = window.XMLHttpRequest ? new XMLHttpRequest() : new ActiveXObject("Microsoft.XMLHTTP");
            xmlhttp.open("GET", '/get_captcha', false);
            xmlhttp.onreadystatechange = function () {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    image = JSON.parse(xmlhttp.responseText).Response.image
                    document.getElementById("captcha").src = "data:image/png;base64," + image
                    document.getElementById("getResult").style.visibility = 'visible'
                }
            }
            xmlhttp.setRequestHeader("Content-type", "application/json");
            xmlhttp.send();
        }
        function getCaptchaResult() {
            const xmlhttp = window.XMLHttpRequest ? new XMLHttpRequest() : new ActiveXObject("Microsoft.XMLHTTP");
            xmlhttp.open("POST", '/get_captcha_result', false);
            xmlhttp.onreadystatechange = function () {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    document.getElementById("result").innerText = "識別結果:" + JSON.parse(xmlhttp.responseText).Response.result
                }
            }
            xmlhttp.setRequestHeader("Content-type", "application/json");
            xmlhttp.send(JSON.stringify({"image": image}));
        }
    </script>
</head>
<body>
<div class="container-fluid" style="margin-top: 10px">
    <div class="row-fluid">
        <div class="span12">
            <center><h3>驗證碼識別測試系統</h3></center>
        </div>
    </div>
    <div class="row-fluid">
        <div class="span2"></div>
        <div class="span8">
            <center>
                <img src="" id="captcha"/>
                <br><br>
                <p id="result"></p>
            </center>
            <fieldset>
                <legend>操作:</legend>
                <button class="btn" onclick="getCaptcha()">獲取驗證碼</button>
                <button class="btn" onclick="getCaptchaResult()" id="getResult" style="visibility: hidden">識別驗證碼</button>
            </fieldset>
        </div>
        <div class="span2"></div>
    </div>
</div>
</body>
</html>
```

With the code ready, we write the deployment file:

```yaml
Global:
  Service:
    Name: ServerlessBook
    Description: Serverless圖書案例
    Log: Auto
    Nas: Auto

ServerlessBookCaptchaDemo:
  Component: fc
  Provider: alibaba
  Access: release
  Extends:
    deploy:
      - Hook: s install docker
        Path: ./
        Pre: true
  Properties:
    Region: cn-beijing
    Service: ${Global.Service}
    Function:
      Name: serverless_captcha
      Description: 驗證碼識別
      CodeUri:
        Src: ./src/backend
        Excludes:
          - src/backend/.fun
          - src/backend/model
      Handler: index.handler
      Environment:
        - Key: PYTHONUSERBASE
          Value: /mnt/auto/.fun/python
      MemorySize: 3072
      Runtime: python3
      Timeout: 60
      Triggers:
        - Name: ImageAI
          Type: HTTP
          Parameters:
            AuthType: ANONYMOUS
            Methods:
              - GET
              - POST
              - PUT
            Domains:
              - Domain: Auto

ServerlessBookCaptchaWebsiteDemo:
  Component: bottle
  Provider: alibaba
  Access: release
  Extends:
    deploy:
      - Hook: pip3 install -r requirements.txt -t ./
        Path: ./src/website
        Pre: true
  Properties:
    Region: cn-beijing
    CodeUri: ./src/website
    App: index.py
    Environment:
      - Key: url
        Value: ${ServerlessBookCaptchaDemo.Output.Triggers[0].Domains[0]}
    Detail:
      Service: ${Global.Service}
      Function:
        Name: serverless_captcha_website
```

The overall directory structure:

```
| - src                        # project directory
|   | - backend                # project backend, core interfaces
|   |   | - index.py           # backend core code
|   |   | - requirements.txt   # backend dependencies
|   | - website                # project front end, for testing
|   |   | - html               # front-end pages
|   |   |   | - index.html     # front-end page
|   |   | - index.py           # front end's backing service (Bottle)
|   |   | - requirements.txt   # front-end service dependencies
```

Once everything is in place, we can deploy the project from the project directory:

```shell
s deploy
```

Once deployment completes, open the returned page address:

Click "Get CAPTCHA" to generate a CAPTCHA online:

Then click "Recognize CAPTCHA" to run recognition:

Since the model was trained to a target accuracy of 90%, we can expect the overall accuracy to be around 90% across a large number of CAPTCHAs of the same type.

Summary

Serverless is developing fast, and building a CAPTCHA recognition tool on it is, I think, a very cool thing. A good CAPTCHA recognizer will be essential in future data-collection work. Of course, CAPTCHAs come in many kinds, and recognizing each different type remains a very challenging task.
