Text

The Text file type is:

  • Easy to read from, write to, and share.
  • Compatible with many programs, making it easy to exchange data.

Parameters

  • Location (Location tab): File path to read the Text file from or write it to.
  • Schema (Properties tab): Schema to apply to the loaded data. In the Source gem, you can define or edit the schema visually or in JSON code. In the Target gem, you can view the schema visually or as JSON code.

Source

The Source gem reads data from Text files and allows you to optionally specify the following additional properties.

Source properties

  • Description: Description of your dataset. Default: None
  • Enforce schema: Whether to use the schema you define. Default: true
  • Read file as single row: Whether to read each file from the input path as a single row. Default: false
  • Line Separator: Line separator to use when parsing the file. The separator can be one or more characters. Default: \r, \r\n, and \n
  • Recursive File Lookup: Whether to recursively load files and disable partition inferring. If the data source explicitly specifies the partitionSpec when recursiveFileLookup is true, the Source gem throws an exception. Default: false

Example

Generated Code

tip

To see the generated source code of your project, switch to the Code view in the project header.

def read_text(spark: SparkSession) -> DataFrame:
    return spark.read \
        .format("text") \
        .text("dbfs:/FileStore/customers.txt", wholetext=False, lineSep="\n")


Target

The Target gem writes data to Text files and allows you to optionally specify the following additional properties.

Target properties

  • Description: Description of your dataset. Default: None
  • Write Mode: How to handle existing data. For a list of the possible values, see Supported write modes. Default: error
  • Partition Columns: List of columns to partition the Text files by. The Text file type only supports a single data column apart from the partition columns; if the input DataFrame contains more than one column apart from the partition columns, the Target gem throws an AnalysisException error. Default: None
  • Compression Codec: Compression codec to use when writing the Text file. The Text file type supports the following codecs: none, bzip2, gzip, lz4, snappy, and deflate. Default: None
  • Line Separator: Line separator to use when writing the file. Default: \n

Supported write modes

  • error: If the data already exists, throw an exception.
  • overwrite: If the data already exists, overwrite it with the contents of the DataFrame.
  • append: If the data already exists, append the contents of the DataFrame to it.
  • ignore: If the data already exists, do nothing with the contents of the DataFrame. This is similar to the CREATE TABLE IF NOT EXISTS clause in SQL.

Example

Generated Code

tip

To see the generated source code of your project, switch to the Code view in the project header.

def write_text(spark: SparkSession, in0: DataFrame):
    in0.write \
        .format("text") \
        .mode("overwrite") \
        .text("dbfs:/FileStore/customers.txt", compression="gzip", lineSep="\n")