In order to keep a fair amount of consistency between the Scala and Java APIs, some of the features that allow a higher level of expressiveness in Scala have been left out of the standard APIs for both batch and streaming.
If you want to enjoy the full Scala experience, you can choose to opt in to extensions that enhance the Scala API via implicit conversions.
To use all the available extensions, you can just add a simple import for the DataSet API:

```scala
import org.apache.flink.api.scala.extensions._
```

or for the DataStream API:

```scala
import org.apache.flink.streaming.api.scala.extensions._
```
Alternatively, you can import individual extensions à la carte to only use those you prefer.
Accept partial functions
Normally, neither the DataSet nor the DataStream API accepts anonymous pattern matching functions to deconstruct tuples, case classes, or collections, like the following:

```scala
val data: DataSet[(Int, String, Double)] = // [...]
data.map {
  case (id, name, temperature) => // [...]
  // The previous line causes the following compilation error:
  // "The argument types of an anonymous function must be fully known. (SLS 8.5)"
}
```
This extension introduces new methods in both the DataSet and DataStream Scala APIs that have a one-to-one correspondence with methods in the extended API. These delegating methods do support anonymous pattern matching functions.
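For contrast with the failing snippet above, here is a minimal sketch of the same kind of transformation written with the delegating method; the `readings` data set and its contents are illustrative additions, not part of the original:

```scala
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.extensions._

val env = ExecutionEnvironment.getExecutionEnvironment
// Illustrative data set of (id, name, temperature) readings.
val readings: DataSet[(Int, String, Double)] =
  env.fromElements((1, "sensor-a", 21.5), (2, "sensor-b", 19.0))

// mapWith delegates to map and accepts the anonymous pattern
// matching function that plain map rejects with the SLS 8.5 error.
val labels: DataSet[String] = readings.mapWith {
  case (id, name, temperature) => s"$name ($id): $temperature"
}
```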
DataSet API
| Method | Original | Example |
|---|---|---|
| mapWith | map (DataSet) | `data.mapWith { case (_, value) => value.toString }` |
| mapPartitionWith | mapPartition (DataSet) | `data.mapPartitionWith { case head #:: _ => head }` |
| flatMapWith | flatMap (DataSet) | `data.flatMapWith { case (_, name, visitTimes) => visitTimes.map(name -> _) }` |
| filterWith | filter (DataSet) | `data.filterWith { case Train(_, isOnTime) => isOnTime }` |
| reduceWith | reduce (DataSet, GroupedDataSet) | `data.reduceWith { case ((_, amount1), (_, amount2)) => amount1 + amount2 }` |
| reduceGroupWith | reduceGroup (GroupedDataSet) | `data.reduceGroupWith { case id #:: value #:: _ => id -> value }` |
| groupingBy | groupBy (DataSet) | `data.groupingBy { case (id, _, _) => id }` |
| sortGroupWith | sortGroup (GroupedDataSet) | `grouped.sortGroupWith(Order.ASCENDING) { case House(_, value) => value }` |
| combineGroupWith | combineGroup (GroupedDataSet) | `grouped.combineGroupWith { case header #:: amounts => amounts.sum }` |
| projecting | apply (JoinDataSet, CrossDataSet) | `data1.join(data2).whereClause { case (pk, _) => pk }.isEqualTo { case (_, fk) => fk }.projecting { case ((pk, tx), (products, fk)) => tx -> products }`<br>`data1.cross(data2).projecting { case ((a, _), (_, b)) => a -> b }` |
| projecting | apply (CoGroupDataSet) | `data1.coGroup(data2).whereClause { case (pk, _) => pk }.isEqualTo { case (_, fk) => fk }.projecting { case (head1 #:: _, head2 #:: _) => head1 -> head2 }` |
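To make the grouping entries of the table concrete, the following is a minimal sketch chaining `groupingBy` with `reduceWith`; the `sales` data set and its contents are our own illustration:

```scala
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.extensions._

val env = ExecutionEnvironment.getExecutionEnvironment
// Illustrative (product id, amount) pairs.
val sales: DataSet[(Int, Double)] =
  env.fromElements((1, 10.0), (2, 5.0), (1, 2.5))

// groupingBy delegates to groupBy and reduceWith to reduce; both accept
// anonymous pattern matching functions that deconstruct the tuples.
val totals: DataSet[(Int, Double)] = sales
  .groupingBy { case (id, _) => id }
  .reduceWith { case ((id, amount1), (_, amount2)) => (id, amount1 + amount2) }
```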
DataStream API
| Method | Original | Example |
|---|---|---|
| mapWith | map (DataStream) | `data.mapWith { case (_, value) => value.toString }` |
| mapPartitionWith | mapPartition (DataStream) | `data.mapPartitionWith { case head #:: _ => head }` |
| flatMapWith | flatMap (DataStream) | `data.flatMapWith { case (_, name, visits) => visits.map(name -> _) }` |
| filterWith | filter (DataStream) | `data.filterWith { case Train(_, isOnTime) => isOnTime }` |
| keyingBy | keyBy (DataStream) | `data.keyingBy { case (id, _, _) => id }` |
| mapWith | map (ConnectedDataStream) | `data.mapWith(map1 = { case (_, value) => value.toString }, map2 = { case (_, _, value, _) => value + 1 })` |
| flatMapWith | flatMap (ConnectedDataStream) | `data.flatMapWith(flatMap1 = { case (_, json) => parse(json) }, flatMap2 = { case (_, _, json, _) => parse(json) })` |
| keyingBy | keyBy (ConnectedDataStream) | `data.keyingBy(key1 = { case (_, timestamp) => timestamp }, key2 = { case (id, _, _) => id })` |
| reduceWith | reduce (KeyedStream, WindowedStream) | `data.reduceWith { case ((_, sum1), (_, sum2)) => sum1 + sum2 }` |
| foldWith | fold (KeyedStream, WindowedStream) | `data.foldWith(User(bought = 0)) { case (User(b), (_, items)) => User(b + items.size) }` |
| applyWith | apply (WindowedStream) | `data.applyWith(0)(foldFunction = { case (sum, amount) => sum + amount }, windowFunction = { case (k, w, sum) => /* [...] */ })` |
| projecting | apply (JoinedStream) | `data1.join(data2).whereClause { case (pk, _) => pk }.isEqualTo { case (_, fk) => fk }.projecting { case ((pk, tx), (products, fk)) => tx -> products }` |
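Similarly, a minimal sketch for the streaming side, combining `keyingBy` on a DataStream with `reduceWith` on the resulting KeyedStream; the `counts` stream and the job name are our own illustration:

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.scala.extensions._

val env = StreamExecutionEnvironment.getExecutionEnvironment
// Illustrative stream of (word, count) pairs.
val counts: DataStream[(String, Int)] =
  env.fromElements(("flink", 1), ("scala", 1), ("flink", 1))

// keyingBy delegates to keyBy and reduceWith to reduce; both accept
// anonymous pattern matching functions.
counts
  .keyingBy { case (word, _) => word }
  .reduceWith { case ((word, count1), (_, count2)) => (word, count1 + count2) }
  .print()

env.execute("keyingBy and reduceWith sketch")
```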
For more information on the semantics of each method, please refer to the DataSet and DataStream API documentation.
To use this extension exclusively, you can add the following import:

```scala
import org.apache.flink.api.scala.extensions.acceptPartialFunctions
```

for the DataSet extensions, and:

```scala
import org.apache.flink.streaming.api.scala.extensions.acceptPartialFunctions
```
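Since these are ordinary imports, their scope can also be narrowed to exactly where the delegating methods are needed, for example inside a single method. A minimal sketch of this; the `names` helper is our own illustration:

```scala
import org.apache.flink.api.scala._

def names(data: DataSet[(Int, String)]): DataSet[String] = {
  // The implicit conversions are only in scope inside this method.
  import org.apache.flink.api.scala.extensions.acceptPartialFunctions
  data.mapWith { case (_, name) => name }
}
```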
The following snippet shows a minimal example of how to use these extension methods together (with the DataSet API):

```scala
object Main {
  import org.apache.flink.api.scala._
  import org.apache.flink.api.scala.extensions._

  case class Point(x: Double, y: Double)

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val ds = env.fromElements(Point(1, 2), Point(3, 4), Point(5, 6))
    ds.filterWith {
      case Point(x, _) => x > 1
    }.reduceWith {
      case (Point(x1, y1), Point(x2, y2)) => Point(x1 + x2, y1 + y2)
    }.mapWith {
      case Point(x, y) => (x, y)
    }.flatMapWith {
      case (x, y) => Seq("x" -> x, "y" -> y)
    }.groupingBy {
      case (id, value) => id
    }
  }
}
```
