Revision as of 19:00, 25 November 2025
Data Lake Knowledge Center
Data Tiers
(Diagram omitted)
Bronze Tier:
The purpose of the Bronze Tier is to store data downloaded from the external world into the data lake, so that all the tools inside the data lake can be used to process it further.
- raw data
- no uniform format; could be CSV, JSON, Avro, Parquet, binary, anything
- may even be unstructured
- no data quality assurance
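A Bronze landing step can be sketched as a verbatim copy into a date-partitioned path. This is a minimal stdlib-only sketch; the function name `land_in_bronze`, the `ingest_date=` partition layout, and the local-filesystem paths are illustrative assumptions, not part of the original design.

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path

def land_in_bronze(src: Path, bronze_root: Path, source_name: str) -> Path:
    """Copy a downloaded file into the Bronze tier verbatim.

    No parsing, no validation: the file keeps its original format
    (CSV, JSON, binary, ...) and is only partitioned by ingestion date.
    """
    ingest_date = datetime.now(timezone.utc).date().isoformat()
    dest_dir = bronze_root / source_name / f"ingest_date={ingest_date}"
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # byte-for-byte copy: raw data stays raw
    return dest
```

The point is what the function does *not* do: no format conversion and no quality checks happen at this tier.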
Silver Tier:
The purpose of the Silver Tier is to allow the data ingestion application to sanitize data and verify its quality.
- Data quality is assured.
- Data may not be normalized. One table may use UTC for a timestamp column while another uses a timestamp without a timezone. There is no standardization of column names.
- Data format is uniform; it is usually stored in whatever format best fits the downstream ETL process, for example Parquet.
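The sanitize-and-verify step can be sketched as a row filter that rejects malformed records but deliberately does not rename columns or rewrite timezones. The column names (`created_at`, `amount`) and the dict-per-row representation are hypothetical; a real pipeline would operate on Spark DataFrames.

```python
from datetime import datetime

def to_silver(rows):
    """Sanitize Bronze rows for the Silver tier.

    Quality is enforced (bad rows are quarantined), but column names
    and timezone conventions are left as the source produced them --
    normalization happens later, on the way to Gold.
    """
    clean, rejected = [], []
    for row in rows:
        try:
            ts = datetime.fromisoformat(row["created_at"])
            amount = float(row["amount"])
        except (KeyError, ValueError):
            rejected.append(row)  # quarantine instead of loading bad data
            continue
        clean.append({"created_at": ts, "amount": amount})
    return clean, rejected
```

Keeping the rejects, rather than silently dropping them, is what makes the quality assurance auditable.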
Gold Tier:
Tables for the star schema
- dimension tables and fact tables that form the star schema
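A minimal star-schema example, with sqlite3 standing in for the lake's SQL engine. The table and column names (`dim_product`, `fact_sales`) are hypothetical; the shape is the standard one: a fact table of measurements keyed to dimension tables of descriptive attributes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- one dimension table and one fact table form a (tiny) star
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, qty INTEGER,
                              FOREIGN KEY (product_id) REFERENCES dim_product);
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO fact_sales  VALUES (1, 10), (1, 5), (2, 7);
""")
# A typical Gold-tier query: join facts to dimensions, then aggregate.
rows = conn.execute("""
    SELECT d.name, SUM(f.qty)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.name ORDER BY d.name
""").fetchall()
```

In the lake these would be Spark tables, but the join-then-aggregate query pattern is the same.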
Platinum Tier:
Query results answering specific business questions, materialized as tables
- Various query results for specific business questions are materialized in tables
- Tables may be replicated to an RDBMS for BI tools to access (sometimes you can expose them directly, e.g. via Spark Thrift Server)
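Materialization means precomputing the answer to one business question and storing it as its own table. A sketch with sqlite3 standing in for the lake engine; the table names (`fact_sales`, `platinum_revenue_by_region`) are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fact_sales (region TEXT, amount REAL);
    INSERT INTO fact_sales VALUES ('EU', 100.0), ('EU', 50.0), ('US', 80.0);
""")
# Materialize the answer to one business question as its own table,
# so BI tools read a small precomputed result instead of scanning facts.
conn.execute("""
    CREATE TABLE platinum_revenue_by_region AS
    SELECT region, SUM(amount) AS revenue
    FROM fact_sales GROUP BY region
""")
result = conn.execute(
    "SELECT region, revenue FROM platinum_revenue_by_region ORDER BY region"
).fetchall()
```

The BI tool then queries `platinum_revenue_by_region` directly, which is why this tier is a good candidate for replication to an RDBMS.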
ETL
ETL Flow
- 1: The user pushes ETL code into the ETL Code Repo
- 2: The Airflow Scheduler triggers a DAG (the DAG is generated from metadata)
- The ETL job is a task within an Airflow DAG
- 3: The ETL executor pulls code from the ETL Code Repo onto local disk
- 4: The ETL executor uses the dbt library to submit the job to Apache Spark via a JDBC interface (e.g. via the Thrift Server)
- 5: The Thrift Server takes the SQL and passes it to Apache Spark for execution
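The "DAG is generated based on metadata" step can be sketched without Airflow: metadata declares tasks and their dependencies, and the generator orders them so every task runs after its upstreams. The `ETL_METADATA` structure and field names are assumptions for illustration; in the real flow this would drive Airflow task creation instead of returning a list.

```python
# Hypothetical metadata describing ETL tasks and their dependencies.
ETL_METADATA = [
    {"task": "load_orders", "depends_on": []},
    {"task": "load_items",  "depends_on": []},
    {"task": "build_gold",  "depends_on": ["load_orders", "load_items"]},
]

def topo_order(metadata):
    """Return tasks in an order that respects declared dependencies,
    i.e. the order in which a scheduler could run them serially."""
    done, order = set(), []
    pending = {m["task"]: set(m["depends_on"]) for m in metadata}
    while pending:
        ready = sorted(t for t, deps in pending.items() if deps <= done)
        if not ready:
            raise ValueError("cycle in ETL metadata")
        for t in ready:
            order.append(t)
            done.add(t)
            del pending[t]
    return order
```

Generating the DAG from metadata (rather than hand-writing it) means adding a new table to the pipeline is a metadata change, not a code change.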
BI Connection
Using MPP Engine
- BI tools access Gold Tier and Platinum Tier data via a JDBC interface exposed by an MPP engine
- Why? An MPP engine provides better interactive SQL query speed than the Spark Thrift Server
Using RDBMS
- Gold Tier and Platinum Tier data are replicated to an RDBMS, such as Oracle DB
- BI tools access Gold Tier and Platinum Tier data via a JDBC interface exposed by the RDBMS
- This pattern does not work for a very large data lake, since the Gold Tier and Platinum Tier are too large to be replicated to an RDBMS
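The replication step can be sketched as a batched table copy. sqlite3 stands in for both sides here; in practice the destination would be Oracle behind JDBC and the source a lake query engine. The function name, the `batch_size` parameter, and the untyped destination schema are simplifying assumptions.

```python
import sqlite3

def replicate_table(src: sqlite3.Connection, dst: sqlite3.Connection,
                    table: str, batch_size: int = 1000) -> int:
    """Copy one Gold/Platinum table into the BI-facing RDBMS in batches,
    so memory use stays bounded regardless of table size. Returns the
    number of rows copied."""
    cur = src.execute(f"SELECT * FROM {table}")
    cols = [d[0] for d in cur.description]
    dst.execute(f"CREATE TABLE IF NOT EXISTS {table} ({', '.join(cols)})")
    placeholders = ", ".join("?" for _ in cols)
    copied = 0
    while True:
        batch = cur.fetchmany(batch_size)
        if not batch:
            break
        dst.executemany(f"INSERT INTO {table} VALUES ({placeholders})", batch)
        copied += len(batch)
    return copied
```

The batching is also where the scalability limit noted above shows up: row-by-row replication of a multi-terabyte Gold tier is exactly what makes this pattern impractical for very large lakes.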