Efficient string matching in Apache Spark
I wouldn't use Spark in the first place, but if you are really committed to this particular stack, you can combine a bunch of ML transformers to get best matches. You will need a Tokenizer (or split):
import org.apache.spark.ml.feature.RegexTokenizer

val tokenizer = new RegexTokenizer()
  .setPattern("")          // empty pattern splits the text into single characters
  .setMinTokenLength(1)
  .setInputCol("text")
  .setOutputCol("tokens")
an NGram (for example, a 3-gram):
import org.apache.spark.ml.feature.NGram
val ngram = new NGram().setN(3).setInputCol("tokens").setOutputCol("ngrams")
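To see what the Tokenizer plus NGram stages produce, here is a minimal pure-Scala sketch (no Spark needed, and the helper name `charTrigrams` is ours): RegexTokenizer lowercases by default and, with an empty pattern, emits one token per character; NGram then joins consecutive tokens with a space.

```scala
// Sketch of what RegexTokenizer(pattern = "") + NGram(n = 3) emit for one string.
def charTrigrams(text: String): Seq[String] =
  text.toLowerCase
    .split("")            // one token per character, like setPattern("")
    .filter(_.nonEmpty)
    .sliding(3)           // 3-grams over the token stream
    .map(_.mkString(" ")) // NGram joins the tokens of each n-gram with a space
    .toSeq

charTrigrams("Spark!")
// returns Seq("s p a", "p a r", "a r k", "r k !")
```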
a Vectorizer (for example, CountVectorizer or HashingTF):
import org.apache.spark.ml.feature.HashingTF
val vectorizer = new HashingTF().setInputCol("ngrams").setOutputCol("vectors")
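HashingTF uses the hashing trick: each n-gram is mapped straight to a column index by hashing, so no vocabulary needs to be fitted. A simplified sketch of the idea (Spark's HashingTF has its own hash implementation and defaults; the MurmurHash3 call and the vector width here are illustrative assumptions, not Spark internals):

```scala
// Simplified hashing trick: term -> column index in a fixed-width vector.
// The hash function and the 2^18 default width are assumptions for illustration.
def termIndex(term: String, numFeatures: Int = 1 << 18): Int = {
  val raw = scala.util.hashing.MurmurHash3.stringHash(term)
  ((raw % numFeatures) + numFeatures) % numFeatures // force a non-negative index
}

// Term frequencies keyed by hashed index, i.e. a sparse count vector.
def termFrequencies(terms: Seq[String], numFeatures: Int = 1 << 18): Map[Int, Int] =
  terms.groupBy(termIndex(_, numFeatures)).map { case (i, ts) => i -> ts.size }
```

Collisions (two n-grams landing on the same index) are possible by design; a larger numFeatures makes them rarer.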
and LSH:
import org.apache.spark.ml.feature.{MinHashLSH, MinHashLSHModel}
// Increase numHashTables in practice.
val lsh = new MinHashLSH().setInputCol("vectors").setOutputCol("lsh")
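MinHashLSH approximates Jaccard similarity: each hash table takes the minimum of a random affine hash over the set of non-zero vector indices, and sets that collide on a minimum are likely similar. A hedged pure-Scala sketch of one signature (the coefficients are fixed made-up values for reproducibility; Spark draws them randomly per model):

```scala
// MinHash sketch: signature of a set of indices under k affine hashes.
// Coefficient pairs (a, b) are illustrative assumptions, not Spark's values.
val prime = 2038074743L
val hashes: Seq[(Long, Long)] = Seq((787L, 10007L), (1523L, 20011L), (2011L, 30011L))

def signature(indices: Set[Int]): Seq[Long] =
  hashes.map { case (a, b) =>
    indices.map(i => (a * (i + 1) + b) % prime).min
  }

// The fraction of slots where two signatures agree estimates the
// Jaccard similarity of the underlying sets; more hash tables give
// a better estimate, hence the "increase numHashTables" comment above.
```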
Combine everything in a Pipeline:
import org.apache.spark.ml.Pipeline
val pipeline = new Pipeline().setStages(Array(tokenizer, ngram, vectorizer, lsh))
which can be fit on, for example, the database side:
val query = Seq("Hello there 7l | real|y like Spark!").toDF("text")
val db = Seq(
  "Hello there ! I really like Spark ??!",
  "Can anyone suggest an efficient algorithm"
).toDF("text")
val model = pipeline.fit(db)
Transform both:
val dbHashed = model.transform(db)
val queryHashed = model.transform(query)
and join:
model.stages.last.asInstanceOf[MinHashLSHModel]
.approxSimilarityJoin(dbHashed, queryHashed, 0.75).show
+--------------------+--------------------+------------------+
| datasetA| datasetB| distCol|
+--------------------+--------------------+------------------+
|[Hello there ! ...|[Hello there 7l |...|0.5106382978723405|
+--------------------+--------------------+------------------+
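distCol is the Jaccard distance, 1 − |A∩B| / |A∪B|, between the two rows' sets of hashed n-gram indices. A quick pure-Scala sketch on raw character 3-gram sets (an approximation of what Spark computes on the hashed vectors, so the exact value is not guaranteed to match 0.5106... above):

```scala
// Jaccard distance on character 3-gram sets, mirroring MinHashLSH's key distance.
def trigrams(s: String): Set[String] =
  s.toLowerCase.sliding(3).toSet

def jaccardDistance(a: String, b: String): Double = {
  val (x, y) = (trigrams(a), trigrams(b))
  1.0 - x.intersect(y).size.toDouble / x.union(y).size
}

jaccardDistance("Hello there 7l | real|y like Spark!",
                "Hello there ! I really like Spark ??!")
// the noisy copy still lands well under the 0.75 threshold used in the join
```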