Datalake Connections
Datalake is a framework for data orchestration that integrates with data stores such as MySQL, Elasticsearch, and Redis. It simplifies data ingestion, processing, and transformation through pipelines, and keeps configuration secure by reading credentials from environment variables. This guide walks through setup, connection management, and usage with step-by-step instructions.
1. Inheriting from Datalake
The DatalakeConnection class inherits from the Datalake class, gaining access to all the parent class's methods. These methods include:
Creating pipelines.
Adding connections.
Executing tasks.
This inheritance allows developers to leverage pre-built functionality while extending capabilities specific to their workflows.
2. Configuring Connections
Connections represent data sources like MySQL, Elasticsearch, and Redis. Configuration for each connection is fetched dynamically from a Config file or environment variables.
SQL databases (e.g., MySQL, PostgreSQL): identified with the sql type.
Elasticsearch (ES): identified with the es type.
Redis: identified with the redis type.
Each connection type is clearly categorized to ensure proper handling and integration into the pipeline.
3. Creating the Pipeline
A pipeline is a central structure for managing data workflows. The create_pipeline method from the Datalake class initializes a pipeline with a unique name.
The pipeline coordinates multiple connections and their respective tasks, ensuring a structured and cohesive execution process.
4. Adding Connections
Connections are added to the pipeline using unique identifiers. Each connection requires:
A name (e.g., sql_connection, es_connection).
A configuration containing the necessary connection details (e.g., host, port, authentication credentials).
This modular approach allows for easy referencing and management of multiple data sources within the same pipeline.
5. Executing Connections
Once all the connections are added to the pipeline, the execute_all method is used to run them concurrently.
This method employs threading to ensure efficient execution across multiple connections.
Results from each connection are collected and can be processed further.
6. Storing Connections
All connections added to the pipeline are stored in a connections dictionary.
Each connection can be accessed by its unique identifier (e.g., sql_connection, es_connection).
This storage mechanism simplifies reuse and modularity within complex workflows.
Here's the example code for execution and connections:
Step 1. Create the .env File
In your project directory, create a .env file and add your configuration variables in KEY=VALUE format.
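A minimal .env sketch, assuming the key names used in the config.py example below; adjust the names and values to your own environment:

```
# Example environment variables (illustrative names and values)
MYSQL_HOST=localhost
MYSQL_PORT=3306
MYSQL_USER=datalake_user
MYSQL_PASSWORD=change-me
MYSQL_DB=datalake_db

ES_HOST=localhost
ES_PORT=9200

REDIS_HOST=localhost
REDIS_PORT=6379
```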
Step 2. Create the config.py File
In your config.py file, use the dotenv package to load the .env file and assign the configuration values.
Example config file
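A sketch of what config.py could look like, assuming python-dotenv is installed and the key names from the .env example above; the exact keys groclake expects in each connection config (including the connection_type field) may differ, so verify them against the library's reference:

```python
# config.py -- loads .env values and groups them per connection (names are illustrative)
import os
from dotenv import load_dotenv

load_dotenv()  # read KEY=VALUE pairs from the .env file into the process environment

MYSQL_CONFIG = {
    "connection_type": "sql",  # assumed type identifier, see section 2
    "host": os.getenv("MYSQL_HOST", "localhost"),
    "port": int(os.getenv("MYSQL_PORT", "3306")),
    "user": os.getenv("MYSQL_USER"),
    "password": os.getenv("MYSQL_PASSWORD"),
    "database": os.getenv("MYSQL_DB"),
}

ES_CONFIG = {
    "connection_type": "es",
    "host": os.getenv("ES_HOST", "localhost"),
    "port": int(os.getenv("ES_PORT", "9200")),
}

REDIS_CONFIG = {
    "connection_type": "redis",
    "host": os.getenv("REDIS_HOST", "localhost"),
    "port": int(os.getenv("REDIS_PORT", "6379")),
}
```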
Step 3. Import Required Modules
The Datalake class from groclake.datalake provides the core functionality for managing data pipelines. The config module (config.py) contains the configuration details for MySQL, Elasticsearch (ES), and Redis.
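The imports for the walkthrough could look like this; the config names come from the sketch above and are assumptions rather than groclake requirements:

```python
# Datalake provides pipeline and connection management
from groclake.datalake import Datalake

# Connection settings defined in config.py (illustrative names)
from config import MYSQL_CONFIG, ES_CONFIG, REDIS_CONFIG
```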
Step 4. Define DatalakeConnection Class
The DatalakeConnection class extends Datalake and adds specific data connections to a pipeline.
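A sketch of the class based on the behaviour described in sections 3 to 5; the add_connection call, the pipeline name, and the connection identifiers are assumptions about groclake's API, so check the library's reference for the exact signatures:

```python
# Continues from the imports in Step 3
class DatalakeConnection(Datalake):
    """Pipeline wrapper that wires up MySQL, Elasticsearch, and Redis connections."""

    def __init__(self):
        super().__init__()

        # Create a pipeline with a unique name (section 3)
        self.pipeline = self.create_pipeline(name="datalake_pipeline")

        # Register each data source under a unique identifier (section 4).
        # add_connection and the config shape are assumptions -- verify them
        # against groclake's documentation.
        self.pipeline.add_connection(name="sql_connection", config=MYSQL_CONFIG)
        self.pipeline.add_connection(name="es_connection", config=ES_CONFIG)
        self.pipeline.add_connection(name="redis_connection", config=REDIS_CONFIG)

        # Run all registered connections concurrently (section 5)
        self.execute_all()
```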
Step 5. Initialize the DatalakeConnection Class
Here we create an instance of the DatalakeConnection class. When the class is instantiated, it automatically creates the pipeline, adds the connections (MySQL, Elasticsearch, Redis), and executes them concurrently.
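A minimal sketch of the instantiation, using the class defined in Step 4:

```python
# Creating the instance builds the pipeline, registers the three connections,
# and runs them concurrently via execute_all()
datalake = DatalakeConnection()
```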
Step 6. Accessing a Specific Connection
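A sketch of looking a connection up by the identifier it was registered under, assuming the connections dictionary described in section 6 is exposed on the instance (it may live on the pipeline instead, depending on the groclake version):

```python
# Fetch individual connection objects by their unique identifiers
sql_conn = datalake.connections["sql_connection"]
es_conn = datalake.connections["es_connection"]
redis_conn = datalake.connections["redis_connection"]
```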
Example Response
Example use of MySQL connection
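A rough sketch only: execute_query below is a placeholder for whatever query interface the groclake MySQL connection actually exposes, so substitute the real method name from the library's docs:

```python
# Hypothetical query call against the MySQL connection
rows = sql_conn.execute_query("SELECT id, name FROM users LIMIT 10")
for row in rows:
    print(row)
```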
Example use of Elasticsearch connection
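Another hedged sketch: the search call mirrors the standard Elasticsearch client's interface and is not a confirmed groclake method name:

```python
# Hypothetical search call against the Elasticsearch connection
response = es_conn.search(
    index="products",
    body={"query": {"match": {"title": "datalake"}}},
)
print(response)
```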
Example use of Redis connection
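A hedged sketch assuming the Redis connection exposes get/set-style methods; verify the names against groclake's reference:

```python
# Hypothetical key/value calls against the Redis connection
redis_conn.set("datalake:last_run", "2024-01-01T00:00:00Z")
print(redis_conn.get("datalake:last_run"))
```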
Example use of S3 Connection
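The steps above only registered MySQL, Elasticsearch, and Redis; an S3 connection would need its own config and identifier when it is added to the pipeline. The identifier and upload_file method below are placeholders:

```python
# Hypothetical S3 usage -- assumes an "s3_connection" was added to the pipeline
s3_conn = datalake.connections["s3_connection"]
s3_conn.upload_file("local/report.csv", bucket="my-bucket", key="reports/report.csv")
```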
Example use of MongoDB Connection
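Likewise for MongoDB: the identifier and the insert_one call are placeholders for whatever the groclake MongoDB connection actually provides:

```python
# Hypothetical MongoDB usage -- assumes a "mongo_connection" was added to the pipeline
mongo_conn = datalake.connections["mongo_connection"]
mongo_conn.insert_one("events", {"type": "pipeline_run", "status": "ok"})
```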
Example use of GCP Connection
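And for GCP (e.g., Cloud Storage): the identifier and method name are placeholders; check groclake's docs for the supported operations:

```python
# Hypothetical GCP Cloud Storage usage -- assumes a "gcp_connection" was added to the pipeline
gcp_conn = datalake.connections["gcp_connection"]
gcp_conn.upload_file("local/report.csv", bucket="my-gcs-bucket", key="reports/report.csv")
```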