Class SparkSessionMethods

com.github.mrpowers.spark.daria.sql.SparkSessionExt.SparkSessionMethods

implicit class SparkSessionMethods extends AnyRef

Linear Supertypes
AnyRef, Any
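
Since SparkSessionMethods is an implicit class, importing the members of the enclosing SparkSessionExt object makes its methods available directly on a SparkSession. A minimal sketch (the local-mode builder call is only for illustration):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._
import com.github.mrpowers.spark.daria.sql.SparkSessionExt._

val spark = SparkSession.builder().master("local").getOrCreate()

// `spark` is implicitly wrapped in SparkSessionMethods, so the
// createDF and createEmptyDF methods can be called on it directly.
val df = spark.createDF(
  List(("a", 1), ("b", 2)),
  List(("letter", StringType, true), ("number", IntegerType, false))
)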

Instance Constructors

  1. new SparkSessionMethods(spark: SparkSession)

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. def createDF[U, T <: Product](rowData: List[U], fields: List[T]): DataFrame

    Creates a DataFrame, similar to createDataFrame, but with better syntax. spark-daria defines createDF to combine the terse syntax of toDF with the control of createDataFrame.

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types._

    spark.createDF(
      List(
        ("bob", 45),
        ("liz", 25),
        ("freeman", 32)
      ),
      List(
        ("name", StringType, true),
        ("age", IntegerType, false)
      )
    )

    The createDF method can also be used with lists of Row and StructField objects.

    spark.createDF(
      List(
        Row("bob", 45),
        Row("liz", 25),
        Row("freeman", 32)
      ),
      List(
        StructField("name", StringType, true),
        StructField("age", IntegerType, false)
      )
    )
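
    For comparison, building the same DataFrame with Spark's built-in createDataFrame is noticeably more verbose. A sketch using the standard Spark API (not part of spark-daria):

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types._

    val df = spark.createDataFrame(
      spark.sparkContext.parallelize(
        List(Row("bob", 45), Row("liz", 25), Row("freeman", 32))
      ),
      StructType(
        List(
          StructField("name", StringType, true),
          StructField("age", IntegerType, false)
        )
      )
    )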

  7. def createEmptyDF[T <: Product](fields: List[T]): DataFrame

    Creates an empty DataFrame from the given schema fields.

    This is a handy fallback when a read from a data source fails:

    import scala.util.Try
    import org.apache.spark.sql.types._

    val schema = List(StructField("col1", IntegerType))

    val df = Try {
      spark.read.parquet("non-existent-path")
    }.getOrElse(spark.createEmptyDF(schema))
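
    The fallback DataFrame has the declared schema but no rows; nullable defaults to true when omitted from StructField. A quick check:

    df.printSchema()
    // root
    //  |-- col1: integer (nullable = true)

    df.count() // 0 when the read failed and the fallback was used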

  8. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  10. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  12. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  13. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  14. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  15. final def notify(): Unit

    Definition Classes
    AnyRef
  16. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  17. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  18. def toString(): String

    Definition Classes
    AnyRef → Any
  19. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  20. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  21. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
