Data Lake Knowledge Center
Platform
Apache Spark
Apache Spark is a good platform for both batch-based and streaming-based data processing; a short sketch of the two modes follows the list below. Advantages:
- Scalable
- Well supported (Databricks backs the project)
- Well adopted
- Supported by many cloud providers (AWS EMR, Azure HDInsight, GCP Dataproc, OCI Data Flow)
- Instead of building your own data lake, you can use the Lakehouse platform provided by Databricks, which supports AWS, Azure, and GCP.
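A minimal sketch of what batch plus streaming looks like in practice with the PySpark DataFrame API; the S3 paths and the event_type column are hypothetical placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("demo").getOrCreate()

# Batch: read a static set of Parquet files. (Hypothetical path.)
batch_df = spark.read.parquet("s3://my-bucket/events/")
batch_df.groupBy("event_type").count().show()

# Streaming: the same DataFrame API over a continuously arriving source.
stream_df = (
    spark.readStream
    .format("json")
    .schema(batch_df.schema)           # streaming sources require an explicit schema
    .load("s3://my-bucket/incoming/")  # hypothetical path
)
query = (
    stream_df.groupBy("event_type").count()
    .writeStream
    .outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```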
Data Ingestion
Always save a copy of raw data
When you do data ingestion, you want to save the raw data for the following reasons (a sketch follows the list):
- Your ingestion pipeline may have bugs; saving the raw data allows you to fix the bugs and re-populate the data.
- Raw data may not meet your data quality requirements, in which case you may discard it; keeping the raw data allows you to check what kind of data quality problems it has, and sometimes you can ask the data producer to fix them.
- Raw data is owned by the data source team, which has its own retention policy -- the raw data is not always accessible afterwards.
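A minimal sketch of an ingestion step that lands an immutable raw copy before any parsing; the bucket names and paths are hypothetical:

```python
from datetime import datetime, timezone
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest").getOrCreate()

run_ts = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
source_path = "s3://source-team-bucket/export/*.json"  # hypothetical source
raw_path = f"s3://lake/raw/events/run_ts={run_ts}/"    # hypothetical raw zone

# 1. Copy the raw files verbatim -- no parsing, no filtering.
raw = spark.read.text(source_path)  # each line kept as an opaque string
raw.write.mode("errorifexists").text(raw_path)

# 2. Parse from our own raw copy, not from the source, so the same bytes
#    can be re-processed after a bug fix or a source-side retention deletion.
parsed = spark.read.json(raw_path)
parsed.write.mode("overwrite").parquet("s3://lake/structured/events/")
```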
Use data connectors to manage data ingestion
- Create highly reusable "data connectors" to manage data ingestion, as sketched below
- An anti-pattern is creating too much one-time, custom-written, poorly documented data ingestion code
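One possible shape for such a connector, sketched in PySpark; the DataConnector and JdbcConnector class names and the config keys are hypothetical illustrations, not an established library:

```python
from abc import ABC, abstractmethod
from pyspark.sql import DataFrame, SparkSession


class DataConnector(ABC):
    """One documented, tested entry point per source system."""

    def __init__(self, spark: SparkSession, config: dict):
        self.spark = spark
        self.config = config

    @abstractmethod
    def read(self) -> DataFrame:
        ...


class JdbcConnector(DataConnector):
    """Reusable across every relational source; only the config differs."""

    def read(self) -> DataFrame:
        return (
            self.spark.read.format("jdbc")
            .option("url", self.config["url"])
            .option("dbtable", self.config["table"])
            .option("user", self.config["user"])
            .option("password", self.config["password"])
            .load()
        )


# Usage: a new source becomes a config entry, not new ingestion code.
# spark = SparkSession.builder.getOrCreate()
# orders = JdbcConnector(spark, {
#     "url": "jdbc:postgresql://db:5432/shop",  # hypothetical connection
#     "table": "orders", "user": "u", "password": "p",
# }).read()
```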
Data Governance
Keep good structure of your data
- Raw data: you stage the raw data (to be ingested) here; sometimes this data is unstructured.
- Structured data: structured, e.g. in Parquet format. It captures all the information you are interested in from the raw data. It may not be well organized -- the purpose is to capture all raw information with minimum processing.
- Modeled data: well modeled, maybe around a subject model (a fact table with a bunch of dimension tables); a sketch of this layout follows the list.
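A minimal PySpark sketch of data flowing through these three zones; the zone paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("zones").getOrCreate()

# Raw -> structured: capture everything with minimum processing.
raw = spark.read.json("s3://lake/raw/orders/")  # hypothetical raw zone
raw.write.mode("append").parquet("s3://lake/structured/orders/")

# Structured -> modeled: organize around a subject -- one fact table
# plus dimension tables.
orders = spark.read.parquet("s3://lake/structured/orders/")
dim_customer = (
    orders.select("customer_id", "customer_name")
    .dropDuplicates(["customer_id"])
)
fact_orders = orders.select(
    "order_id", "customer_id", "amount",
    F.to_date("created_at").alias("order_date"),
)
dim_customer.write.mode("overwrite").parquet("s3://lake/modeled/dim_customer/")
fact_orders.write.mode("overwrite").parquet("s3://lake/modeled/fact_orders/")
```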