Nirav (RID : 6v6vlw5za92l)

  • Big Data Engineer
  • Jaipur

Rate

₹ 253,000 (Monthly)

Experience

7 Years

Availability

Immediate

Work From

Any

Skills

Azure Data Lake, Databricks, RDBMS, SQL Server 2014/2016/2017/2019, NoSQL, Data Modeling, Dimension Modeling, AWS Glue, SAP BO, Python NumPy, Microsoft Azure, .Net, Node.js, REST API

Description

Nirav – 6+ Years – Big Data Engineer

SUMMARY:

 Big Data Engineer / ETL Developer / Data Warehouse Engineer / Data Analyst with a 6+ year demonstrated history of working in the US Healthcare software industry.

 Experience in SQL Server development, BI development, Data Modeling, ETL, Data Warehousing, report development (SSRS, Crystal Reports, Power BI) and support.

 Experience migrating SQL databases to Azure Data Lake, Azure Data Lake Analytics, Azure SQL Database, Databricks and Azure SQL Data Warehouse; controlling and granting database access; and migrating on-premises databases to Azure Data Lake Store using Azure Data Factory.

 Experience developing Spark applications using Spark-SQL in Databricks for data extraction, transformation and aggregation from multiple file formats for analysis.

 Experience in database design and development with Business Intelligence using SQL Server 2014/2016, Integration Services (SSIS), DTS packages, SQL Server Analysis Services (SSAS), DAX, OLAP cubes, star schema and snowflake schema.
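Purely as an illustration of the star-schema pattern named above (table and column names are hypothetical, not taken from any listed project), a fact table joined to a dimension table and aggregated can be sketched in Python with pandas, which appears under the Python skills below:

```python
import pandas as pd

# Hypothetical dimension table: one row per product.
dim_product = pd.DataFrame({
    "product_key": [10, 20],
    "product_name": ["Widget", "Gadget"],
})

# Hypothetical fact table: one row per sale, keyed to the dimension.
fact_sales = pd.DataFrame({
    "product_key": [10, 10, 20],
    "quantity": [2, 1, 5],
})

# A typical star-schema query: join fact to dimension, then aggregate.
report = (
    fact_sales.merge(dim_product, on="product_key")
    .groupby("product_name", as_index=False)["quantity"].sum()
)
print(report)
```

In a warehouse the same shape is expressed in T-SQL (`JOIN` + `GROUP BY`) over the fact and dimension tables; the pandas version is only a compact stand-in.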

 Excellent communication skills with excellent work ethics and a proactive team player with
a positive attitude.

 Domain knowledge of Finance, Logistics, Healthcare and Health Insurance, with HIPAA compliance.

 Working experience with CI/CD pipelines.

 Strong skills in visualization tools: Power BI, Excel formulas, pivot tables, charts and DAX commands.

 Expertise in various phases of project life cycles (Design, Analysis, Implementation and
testing).

 Experience in Data Warehousing.

 Led database administration and database performance tuning efforts to provide scalability and accessibility in a timely fashion, provide 24/7 availability of data, and solve end-user reporting and accessibility problems.




SKILL SET
 Database Skills:
   Designing database structures and tables
   Query tuning and optimization
   Developing functions, views, and stored procedures
   Relational database management
   SQL Server 2014/2016/2017/2019
   RDBMS
   NoSQL (Redis)
 Data Warehousing Skills:
   Data Modeling (conceptual, logical and physical)
   Dimension Modeling
   Data Modeling using ER Studio
   Data warehouse/data mart operations
   Master Data Management (MDM)
   Snowflake Data Warehouse
 ETL Skills:
   ETL/data lineage architecture and mapping
   Data cleansing and validation
   Audit/meta process
   SSIS, Azure Data Factory, Azure Data Flow
   AWS Glue
   Airflow
 Reporting Skills:
   SSRS, Crystal Reports, SAP BO
   Precision BI
   Power BI dashboards/reports
 Python Skills:
   Data engineering with Python (pandas, NumPy, SciPy, scikit-learn, Matplotlib, seaborn)
 Cloud Skills:
   Microsoft Azure
   Azure Blob Storage, Azure Data Lake
   Azure Synapse Analytics
 Scripting Languages:
   .Net (C#, MVC API), Node.js, HL7, REST API, Unix/Linux shell scripting


EXPERIENCE

Nine Hertz
Role: Senior Big Data Consultant
Responsibilities:
 Analyze, design and build modern data solutions using Azure PaaS services to support visualization of data; understand the current production state of the application and determine the impact of new implementations on existing business processes.
 Extract, transform and load data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL and U-SQL (Azure Data Lake Analytics); ingest data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and process the data in Azure Databricks.
 Created pipelines in ADF using Linked Services/Datasets/Pipelines to extract, transform and load data from different sources such as Azure SQL, Blob Storage and Azure SQL Data Warehouse, and to write data back.
 Developed Spark applications using PySpark and Spark-SQL for data extraction, transformation and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.
 Responsible for estimating cluster size, and for monitoring and troubleshooting the Spark Databricks cluster.
 Experienced in performance tuning.
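As an illustration only (not code from any listed engagement), the extract-transform-aggregate pattern described in the bullets above can be sketched with pandas, named under the Python skills, standing in for the Spark DataFrame API; file contents, column names and event values are all hypothetical:

```python
import pandas as pd

# Hypothetical extract step: in the actual projects this would be a
# spark.read over multiple file formats (CSV, JSON, Parquet, ...).
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "event": ["view", "buy", "view", "view", "buy"],
    "amount": [0.0, 25.0, 0.0, 0.0, 40.0],
})

# Transform: keep purchase events only.
purchases = raw[raw["event"] == "buy"]

# Aggregate: per-customer purchase totals, analogous to the
# Spark-SQL GROUP BY used to study customer usage patterns.
totals = purchases.groupby("customer_id", as_index=False)["amount"].sum()
print(totals)
```

In PySpark the same pipeline would use `filter` and `groupBy(...).sum(...)` on a distributed DataFrame; the logic is identical, only the execution engine differs.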
