I just want to know what the difference is between an RDD and a DataFrame in Apache Spark (in Spark 2.0.0, DataFrame is just a type alias for Dataset[Row])?

Can you convert one to the other?


Current answer

A DataFrame is an RDD that has a schema. You can think of it as a relational database table, in that each column has a name and a known type. The power of DataFrames comes from the fact that, when you create a DataFrame from a structured dataset (JSON, Parquet, ...), Spark is able to infer a schema by making a pass over the dataset being loaded. Then, when calculating the execution plan, Spark can use the schema to apply substantially better optimizations. Note that DataFrame was called SchemaRDD before Spark v1.3.0.
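A minimal sketch of that inference (the HDFS path is hypothetical; a SQLContext named sqlContext is assumed):

// Spark scans the JSON input and infers column names and types
val df = sqlContext.read.json("hdfs://localhost:9000/jsondata.json")
df.printSchema()  // prints the inferred schema as a tree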

Other answers


Simply put, an RDD is the core component, while DataFrame is an API introduced in Spark 1.3.0.

RDD

An RDD is a collection of data partitions. These RDDs must satisfy several properties:

Immutable, fault-tolerant, distributed, and more.

An RDD can hold structured or unstructured data, as the sketch below shows.
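A minimal sketch (a SparkContext named sc is assumed; the paths and values are made up):

val unstructured = sc.textFile("hdfs://localhost:9000/logs.txt")    // RDD[String]: raw lines, no schema
val structured = sc.parallelize(Seq(("a", 1), ("b", 2)))            // RDD[(String, Int)]: typed pairs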

DataFrame

DataFrame is an API available in Scala, Java, Python and R that allows the processing of any kind of structured and semi-structured data. A DataFrame is a distributed collection of data organized into named columns. The RDDs underlying a DataFrame can be optimized easily. With DataFrames you can process JSON data, Parquet data, and HiveQL data in the same way.

// Read the raw JSON lines as an RDD[String]
// (note: sqlContext.jsonFile would already return a DataFrame, not an RDD)
val sampleRDD = sc.textFile("hdfs://localhost:9000/jsondata.json")

// Parse the JSON and infer a schema, producing a DataFrame
val sample_DF = sqlContext.read.json(sampleRDD)

Here sample_DF is a DataFrame, while sampleRDD (the raw data) is an RDD.

Most of the answers are correct; I just want to add one point.

In Spark 2.0, the two APIs (DataFrame and Dataset) have been unified into a single API.

"Unifying DataFrame and Dataset: in Scala and Java, DataFrame and Dataset have been unified, i.e. DataFrame is just a type alias for Dataset of Row. In Python and R, given the lack of type safety, DataFrame is the main programming interface."
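In the Spark 2.x sources this is literally a one-line alias, declared in the org.apache.spark.sql package object:

type DataFrame = Dataset[Row]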

Datasets are similar to RDDs; however, instead of using Java serialization or Kryo, they use a specialized Encoder to serialize objects for processing or transmission over the network.
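A small sketch of an Encoder in action (a SparkSession named spark is assumed; the data is made up):

import spark.implicits._                      // brings implicit Encoders into scope
val ds = Seq(("Ann", 34L)).toDS()             // Dataset[(String, Long)], serialized by an Encoder, not Java/Kryo

import org.apache.spark.sql.Encoders
val longEncoder = Encoders.scalaLong          // the same kind of encoder, obtained explicitly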

Spark SQL supports two different methods for converting existing RDDs into Datasets. The first method uses reflection to infer the schema of an RDD that contains specific types of objects. This reflection-based approach leads to more concise code and works well when you already know the schema while writing your Spark application.
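A minimal sketch of the reflection-based route (a SparkSession named spark and a hypothetical people.txt of name,age lines are assumed):

case class Person(name: String, age: Long)

import spark.implicits._
val peopleDS = spark.sparkContext
  .textFile("hdfs://localhost:9000/people.txt")
  .map(_.split(","))
  .map(attrs => Person(attrs(0), attrs(1).trim.toLong))
  .toDS()                                     // schema inferred from Person's fields via reflection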

The second method for creating Datasets is through a programmatic interface that allows you to construct a schema and then apply it to an existing RDD. While this method is more verbose, it lets you construct Datasets when the columns and their types are not known until runtime.
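A sketch of the programmatic route (the schema string and input path are hypothetical; spark is again a SparkSession):

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, StringType}

// Column names arrive only at runtime, e.g. from configuration
val schemaString = "name age"
val schema = StructType(schemaString.split(" ").map(StructField(_, StringType, nullable = true)))

val rowRDD = spark.sparkContext
  .textFile("hdfs://localhost:9000/people.txt")
  .map(_.split(","))
  .map(attrs => Row(attrs(0), attrs(1).trim))

val peopleDF = spark.createDataFrame(rowRDD, schema)   // schema applied to the RDD[Row]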

Here you can find the answer on RDD-to-DataFrame conversion:

How to convert rdd object to dataframe in spark

RDD vs DataFrame from a usage perspective:

- RDDs are amazing, as they give us all the flexibility to deal with almost any kind of data: unstructured, semi-structured and structured. Since data is often not ready to fit into a DataFrame (even JSON), RDDs can be used to preprocess the data so that it fits into a DataFrame.
- RDDs are the core data abstraction in Spark.
- Not all transformations that are possible on an RDD are possible on a DataFrame; for example, subtract() is for RDDs while except() is for DataFrames.
- Since DataFrames are like a relational table, they follow strict rules when using set/relational-theory transformations. For example, if you want to union two DataFrames, both must have the same number of columns with matching column datatypes; the column names can differ. These rules don't apply to RDDs. Here is a good tutorial explaining these facts.
- There are performance gains when using DataFrames, as others have already explained in depth.
- Using DataFrames, you don't need to pass arbitrary functions as you do when programming with RDDs.
- You need a SQLContext/HiveContext to program DataFrames, as they live in the SparkSQL area of the Spark ecosystem, but for RDDs you only need a SparkContext/JavaSparkContext, which live in the Spark Core libraries.
- You can create a DataFrame from an RDD if you can define a schema for it. You can also convert a DataFrame to an RDD and an RDD to a DataFrame, as sketched below.
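A minimal sketch of those last few points (a local SparkSession is assumed; all names and data are hypothetical):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("rdd-df").master("local[*]").getOrCreate()
import spark.implicits._

// Two DataFrames with the same column count and types, as union/except require
val df1 = Seq(("a", 1), ("b", 2)).toDF("key", "value")
val df2 = Seq(("b", 2)).toDF("key", "value")

df1.union(df2).show()    // column names may differ; positions and types must match
df1.except(df2).show()   // the DataFrame counterpart of RDD's subtract()

// DataFrame -> RDD[Row], and back to a DataFrame once a schema can be defined
val asRdd = df1.rdd
val backToDf = asRdd.map(r => (r.getString(0), r.getInt(1))).toDF("key", "value")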

I hope this helps!

A DataFrame is the equivalent of a table in an RDBMS, and it can also be manipulated in ways similar to the "native" distributed collections of RDDs. Unlike RDDs, DataFrames keep track of a schema and support various relational operations, which enables more optimized execution. Each DataFrame object represents a logical plan, but because of their "lazy" nature, no execution occurs until the user invokes a specific "output operation".
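A small sketch of that laziness (a SparkSession named spark is assumed; the data is made up):

import spark.implicits._

val people = Seq(("Ann", 34), ("Bob", 19)).toDF("name", "age")
val adults = people.filter($"age" > 21)   // only builds a logical plan; nothing runs yet
adults.explain()                          // inspect the optimized plan Spark derived
println(adults.count())                   // count() is an output operation: execution happens here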