I just want to know what the difference is between an RDD and a DataFrame in Apache Spark (in Spark 2.0.0, DataFrame is a mere type alias for Dataset[Row])?

Can you convert one to the other?


Current answer

Apache Spark - RDD, DataFrame, and Dataset

Spark RDD –

RDD stands for Resilient Distributed Dataset. It is a read-only, partitioned collection of records. RDD is the fundamental data structure of Spark. It allows a programmer to perform in-memory computations on large clusters in a fault-tolerant manner, thus speeding up tasks.

Spark DataFrame –

Unlike an RDD, data is organized into named columns, like a table in a relational database. It is an immutable distributed collection of data. A DataFrame in Spark allows developers to impose a structure onto a distributed collection of data, allowing higher-level abstraction.

Spark Dataset –

A Dataset in Apache Spark is an extension of the DataFrame API that provides a type-safe, object-oriented programming interface. A Dataset takes advantage of Spark's Catalyst optimizer by exposing expressions and data fields to the query planner.
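To make the three abstractions concrete, here is a minimal Scala sketch; the Person case class and the sample rows are made up for illustration:

```scala
import org.apache.spark.sql.SparkSession

// Made-up record type for the typed Dataset example
case class Person(name: String, age: Int)

object ThreeAbstractions {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("rdd-df-ds")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // RDD: a partitioned collection of JVM objects; Spark sees no schema
    val rdd = spark.sparkContext.parallelize(Seq(Person("Alice", 30), Person("Bob", 25)))

    // DataFrame: the same data organized into named columns (Dataset[Row])
    val df = rdd.toDF()
    df.printSchema()

    // Dataset: named columns plus compile-time type safety
    val ds = df.as[Person]
    val adults = ds.filter(_.age >= 18) // the compiler knows each element is a Person

    adults.show()
    spark.stop()
  }
}
```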

Other answers

Googling "DataFrame definition" gives a good definition of a DataFrame:

A data frame is a table, or two-dimensional array-like structure, in which each column contains measurements on one variable, and each row contains one case.

So, due to its tabular format, a DataFrame has additional metadata, which allows Spark to run certain optimizations on the finalized query.

An RDD, on the other hand, is merely a Resilient Distributed Dataset; it is more of a black box of data that cannot be optimized, as the operations that can be performed against it are not as constrained.

However, you can go from a DataFrame to an RDD via its rdd method, and you can go from an RDD to a DataFrame (if the RDD is in a tabular format) via the toDF method, as sketched below.
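A minimal sketch of both conversions; the column names and sample rows are made up:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("conversion").master("local[*]").getOrCreate()
import spark.implicits._ // enables toDF on RDDs of tuples and case classes

// An RDD of tuples already has a tabular shape
val rdd = spark.sparkContext.parallelize(Seq(("Alice", 30), ("Bob", 25)))

// RDD -> DataFrame via toDF, supplying column names
val df = rdd.toDF("name", "age")

// DataFrame -> RDD via the rdd method (yields an RDD[Row])
val backToRdd = df.rdd
```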

In general, it is recommended to use a DataFrame where possible, due to the built-in query optimization.

A DataFrame is an RDD that has a schema. You can think of it as a relational database table, in that each column has a name and a known type. The power of DataFrames comes from the fact that, when you create a DataFrame from a structured dataset (JSON, Parquet, ...), Spark is able to infer a schema by making a pass over the entire dataset being loaded. Then, when calculating the execution plan, Spark can use the schema and do substantially better computation optimizations. Note that DataFrame was called SchemaRDD before Spark v1.3.0.
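For instance, a minimal sketch of schema inference; the file path people.json is hypothetical:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("inference").master("local[*]").getOrCreate()

// Spark scans the JSON input and infers column names and types
val df = spark.read.json("people.json") // hypothetical path

df.printSchema() // the inferred schema
df.explain(true) // the plan Catalyst builds using that schema
```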

A DataFrame is an RDD of Row objects, each representing a record. A DataFrame also knows the schema (i.e., data fields) of its rows. While DataFrames look like regular RDDs, internally they store data in a more efficient manner, taking advantage of their schema. In addition, they provide new operations not available on RDDs, such as the ability to run SQL queries. DataFrames can be created from external data sources, from the results of queries, or from regular RDDs.
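A sketch of running SQL against a DataFrame; the view name people and the sample rows are made up:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("sql-on-df").master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq(("Alice", 30), ("Bob", 17)).toDF("name", "age")

// Register the DataFrame as a temp view; plain RDDs offer no SQL entry point
df.createOrReplaceTempView("people")
val adults = spark.sql("SELECT name FROM people WHERE age >= 18")
adults.show()
```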

Reference: Zaharia M., et al., Learning Spark (O'Reilly, 2015)

RDD vs DataFrame from a usage perspective:

- RDDs are amazing, as they give us all the flexibility to deal with almost any kind of data: unstructured, semi-structured, and structured. Since data is often not ready to be fit into a DataFrame (even JSON), RDDs can be used to preprocess the data so that it can fit into a DataFrame. RDDs are the core data abstraction in Spark.
- Not all transformations that are possible on an RDD are possible on a DataFrame; for example, subtract() is for RDDs while except() is for DataFrames.
- Since DataFrames are like relational tables, they follow strict rules when applying set/relational-theory transformations. For example, if you want to union two DataFrames, both must have the same number of columns and matching column datatypes; column names can be different. These rules don't apply to RDDs. Here is a good tutorial explaining these facts, and see the sketch after this list.
- There are performance gains when using DataFrames, as others have already explained in depth.
- Using DataFrames, you don't need to pass arbitrary functions as you do when programming with RDDs.
- You need an SQLContext/HiveContext to program DataFrames, as they live in the Spark SQL area of the Spark ecosystem, but for RDDs you only need a SparkContext/JavaSparkContext, which lives in the Spark Core libraries.
- You can create a DataFrame from an RDD if you can define a schema for it. You can also convert a DataFrame to an RDD and an RDD to a DataFrame.
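A minimal sketch of the set-operation rules and the subtract()/except() pairing noted above; all column names and values are made up:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("set-ops").master("local[*]").getOrCreate()
import spark.implicits._

// union: both DataFrames need the same column count and datatypes;
// column names may differ (resolution is by position)
val df1 = Seq((1, "a"), (2, "b")).toDF("id", "label")
val df2 = Seq((2, "b"), (3, "c")).toDF("id", "tag")
val unioned = df1.union(df2)

// except() on DataFrames corresponds to subtract() on RDDs
val onlyInDf1 = df1.except(df2)

val rdd1 = spark.sparkContext.parallelize(Seq(1, 2, 3))
val rdd2 = spark.sparkContext.parallelize(Seq(2, 3))
val diff = rdd1.subtract(rdd2) // RDD containing 1
```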

I hope this helps!

A DataFrame is equivalent to a table in an RDBMS and can also be manipulated in ways similar to the "native" distributed collections in RDDs. Unlike RDDs, DataFrames keep track of their schema and support various relational operations, which leads to more optimized execution. Each DataFrame object represents a logical plan, but because of their "lazy" nature no execution occurs until the user calls a specific "output operation".
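A minimal sketch of this laziness; the sample data is made up:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("lazy").master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq((1, "a"), (2, "b"), (3, "c")).toDF("id", "label")

// Transformations only extend the logical plan; no job runs yet
val plan = df.filter($"id" > 1).select("label")
plan.explain() // inspect the still-unexecuted plan

// An "output operation" such as show() triggers actual execution
plan.show()
```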