Ganga Nayak Latest Resume 2023-3
Self-motivated and hardworking individual with close to 4 years of hands-on experience in Big
Data Engineering technologies such as Spark, SparkSQL, Python, PySpark, and SQL, and cloud
technologies such as GCP, Azure, and AWS. Looking forward to a challenging role as a Big Data
Engineer where I can leverage my skills to create and implement data-driven solutions
and grow professionally.
Work Experience
AWS Data Engineer
Legato Health Technology LLP 2022-11-23 – Present
• Project: IDL
Description: Working on an enterprise data lake to provide quality data for a variety of use
cases, including analytics, processing, storing, and reporting on large volumes of data.
Responsible for maintaining quality reference data at the source by performing operations
such as cleaning and transformation, and for ensuring integrity in a relational environment
by working closely with stakeholders and solution architects. Working with AWS big data
cloud services, including EMR, Glue, Lambda, and Step Functions, to transform and move
large amounts of data into and out of AWS data stores and databases.
Analyst
Deloitte Touche Tohmatsu India LLP, Chennai, 2020-06-15 – 2022-11-18
• Project: IVL GBS
Description: Built the stage, data warehouse, and BI layers, enriching the required metrics
for the BI team to build dashboards and implement business logic. Performed
ETL on AWS Glue and EMR and loaded the data into stage, DW, and BI tables. Executed SQL
scripts from S3 and automated SQL scripts.
• Project: Unilever
Description: Processed data and implemented business logic on Azure
Databricks clusters. Enriched data from various sources provided by
business stakeholders, such as Blob Storage, Nielsen, IRI, and Spin; applied
Spark business logic and ingested the data into the PDS, in turn connecting the
data to SQL DW for analytical purposes.
• Project: Data Migration
Description: Worked closely with internal teams to identify and define data conversion
requirements. Responsible for migrating data from MySQL Server to Google Cloud
BigQuery. Analyzed existing conversion processes, identified the problems impacting
the quality of our data conversions, and implemented fixes with Python and SQL. Managed
complex end-to-end data migration and ETL processes, working to tight deadlines and daily
targets.
• Spark: Built the infrastructure required for optimal extraction, transformation, and
loading of data from a wide variety of data sources using Apache Spark, SparkSQL, and Python.
• Data Pipelines: Designed and developed complex data pipelines and maintained data
quality for both batch and streaming data flowing from Google Cloud Pub/Sub to Google
BigQuery.
Technical Skills
• Platforms: Google Cloud, Microsoft Azure, AWS
• Databases: Oracle, SQL Server, PostgreSQL
• Big Data Tools: Spark, Sqoop
• Programming Languages: Python, PySpark, SparkSQL, SQL, HiveQL
• Cluster: Databricks
Education
• Master of Computer Applications: Computer Science 2016-2019 74%
Chaitanya Bharathi Institute of Technology (CBIT) – Hyderabad
Extra-curricular activities
• Placement coordinator for the years 2015-16 and 2018-19 at CBIT
• Participated in volleyball, handball, and netball at state-level games
• An article of mine, “What Stops You To Reach Your Goal,” was published in the
Nizam College – Osmania University magazine
Hobbies
• Farming
• Agriculture