Technology:Databricks

From Codex

Background

Friday, Sept 27th: decided to learn what Databricks is

Setup

https://docs.databricks.com/en/getting-started/onboarding-account.html
Plan: upload files to a volume, process them with a notebook
  1. create stack - done
  2. create compute resource - done / started serverless starter warehouse
  3. connect workspace to data sources
    • s3://.../374226360171826
  4. added volume/directory
    • test20/default/inventory/all-stock
    • 78 files I think, each about 65MB of JSON (roughly 5GB total)
  5. Created Notebook
    • now attempting to read the files, and perhaps I loaded too many... this is taking a while for one node to set up
    • referenced the volume properly, now waiting for the notebook to process the records
    • reference https://www.databricks.com/glossary/pyspark
    • spent 20 minutes fighting what Spark thought was corrupt JSON; turns out this worked: `df = spark.read.option("multiline", "true").json(file_path)`
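For context on why Spark flagged the files as corrupt: by default `spark.read.json` expects JSON Lines (one complete JSON object per line), so a pretty-printed document that spreads one object across many lines fails line-by-line parsing unless `multiline` is enabled. A minimal stdlib-only sketch of the distinction (the sample records here are made up, not from the actual inventory files):

```python
import json

# JSON Lines: one complete object per line (Spark's default expectation).
json_lines = '{"sku": "A1", "qty": 3}\n{"sku": "B2", "qty": 7}\n'

# "Multiline" JSON: one object spread across several lines.
multiline = '{\n  "sku": "A1",\n  "qty": 3\n}\n'

def parse_json_lines(text):
    """Parse each non-empty line as its own JSON document,
    roughly how spark.read.json treats input without multiline."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

# Line-by-line parsing handles JSON Lines cleanly...
print(parse_json_lines(json_lines))

# ...but fails on a multiline document, since no single line is valid JSON.
try:
    parse_json_lines(multiline)
except json.JSONDecodeError as err:
    print("line-by-line parse fails:", err.msg)

# Parsing the whole file as one document is what multiline=true enables.
print(json.loads(multiline))
```

This is why Spark reported `_corrupt_record`s rather than erroring outright: each line that failed to parse became a corrupt row.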