spark UDAF
Thanks to my colleague Li Zhen for walking me through UDAFs.
Most of what I found online is code with no explanation, and while the official API docs do explain things, I found them hard to follow. So I'm writing up my own notes; maybe they'll help someone else too.
The following uses a geometric-mean aggregate as the example to show how to implement your own UDAF.
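As a quick reminder of the math the UDAF will implement: the geometric mean of n values is the n-th root of their product. A minimal plain-Scala check (no Spark needed; the values here are just illustrative):

```scala
// Geometric mean: the n-th root of the product of the values.
val xs = Seq(2.0, 8.0)
val gm = math.pow(xs.product, 1.0 / xs.size)
// gm is 4.0: the square root of 2 * 8 = 16
```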
First, the required imports:

```scala
import org.apache.spark.sql.expressions.MutableAggregationBuffer
import org.apache.spark.sql.expressions.UserDefinedAggregateFunction
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._
```

A UDAF is written by extending the abstract class UserDefinedAggregateFunction; all of the overrides in the sections below live inside this class:

```scala
class GeometricMean extends UserDefinedAggregateFunction {
  // overrides go here, one per section below
}
```
inputSchema declares the types of the input columns. An aggregate can take more than one column; the multi-column form looks like this:

```scala
StructType(StructField("slot", IntegerType) :: StructField("score", IntegerType) :: Nil)
```

The geometric mean takes a single Double column:

```scala
// This is the input fields for your aggregate function.
override def inputSchema: org.apache.spark.sql.types.StructType =
  StructType(StructField("value", DoubleType) :: Nil)
```
bufferSchema describes the intermediate buffer that holds the partial aggregation state:

```scala
// This is the internal fields you keep for computing your aggregate.
override def bufferSchema: StructType = StructType(
  StructField("count", LongType) ::
  StructField("product", DoubleType) :: Nil
)
```
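A side note on this buffer design (my own observation, not from the original post): keeping a raw running product can overflow or underflow on long columns. An equivalent buffer could store the sum of logarithms instead, since exp(mean(log x)) equals the geometric mean for positive values:

```scala
// Equivalent, overflow-safe formulation of the geometric mean
// (assumes all input values are > 0).
val xs = Seq(2.0, 8.0, 4.0)
val gmViaLogs = math.exp(xs.map(math.log).sum / xs.size)
// gmViaLogs ≈ 4.0, the cube root of 2 * 8 * 4 = 64
```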
dataType is the type of the final result; the geometric mean returns a single Double value:

```scala
// This is the output type of your aggregation function.
override def dataType: DataType = DoubleType
```
deterministic declares whether the function always produces the same result for the same input (the original author wasn't sure about this one either; true is correct here, since the geometric mean depends only on the input values):

```scala
override def deterministic: Boolean = true
```
initialize sets the starting values of the aggregation buffer: the count starts at 0 and the running product at 1.0, the identity elements for addition and multiplication:

```scala
// This is the initial value for your buffer schema.
override def initialize(buffer: MutableAggregationBuffer): Unit = {
  buffer(0) = 0L
  buffer(1) = 1.0
}
```
update defines how each new input row is folded into the buffer: increment the count and multiply the running product by the new value:

```scala
// This is how to update your buffer schema given an input.
override def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
  buffer(0) = buffer.getAs[Long](0) + 1
  buffer(1) = buffer.getAs[Double](1) * input.getAs[Double](0)
}
```
Spark splits the data into partitions, aggregates each partition independently, and then combines the per-partition buffers with merge. The source post was cut off mid-snippet here; the bodies below (summing the counts, multiplying the products, plus the closing evaluate, which turns the final buffer into the result) follow the standard geometric-mean UDAF example:

```scala
// This is how to merge two objects with the bufferSchema type.
override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
  buffer1(0) = buffer1.getAs[Long](0) + buffer2.getAs[Long](0)
  buffer1(1) = buffer1.getAs[Double](1) * buffer2.getAs[Double](1)
}

// This is where you output the final value, given the final buffer
// contents: the count-th root of the product.
override def evaluate(buffer: Row): Any = {
  math.pow(buffer.getDouble(1), 1.0 / buffer.getLong(0))
}
```
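To convince yourself that the partition-then-merge logic gives the same answer as a single pass, the buffer algebra can be simulated in plain Scala without a cluster (Buf, upd, mrg, and eval below are stand-ins of my own that mirror the overrides; they are not Spark API):

```scala
// (count, product) buffer, mirroring bufferSchema.
case class Buf(count: Long, product: Double)
val init = Buf(0L, 1.0)                                                  // initialize
def upd(b: Buf, x: Double) = Buf(b.count + 1, b.product * x)             // update
def mrg(a: Buf, b: Buf) = Buf(a.count + b.count, a.product * b.product)  // merge
def eval(b: Buf) = math.pow(b.product, 1.0 / b.count)                    // evaluate

// Two "partitions" aggregated independently, then merged:
val p1 = Seq(2.0, 8.0).foldLeft(init)(upd)
val p2 = Seq(4.0).foldLeft(init)(upd)
val result = eval(mrg(p1, p2))
// result ≈ 4.0, the geometric mean of 2, 8, 4
```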
Reposted from: https://www.cnblogs.com/earendil/p/8510680.html