Akshay Bhardwaj has over 6 years of experience in big data technologies including Spark, Scala, PySpark, Hive, and Hadoop. He currently works as a Data Engineer at IBM, where he builds data pipelines using Spark and PySpark and maintains documentation for data flows and transformations. Previously, he worked as a Big Data Developer at XORIANT, developing an in-house ETL tool using Spark, Hive, and shell scripts, and as a Technology Engineer at Amdocs, working on data extraction, processing, ingestion, and validation projects leveraging technologies such as Oracle, MySQL, HDFS, Hive, and Spark SQL. He also has experience as a Programmer Analyst at Cognizant, where he maintained applications for a regulatory reporting project.


CONTACT

+91-8209466923
[email protected]
LinkedIn

Big Data Spark | Scala | PySpark | Hive based Module Developer

Demonstrated expertise in the Big Data technology stack. Sound base in designing, developing, and maintaining Big Data solutions, data pipelines, and processing frameworks. Over 4 years of development experience on Big Data/Hadoop technologies, with 6.5 years of overall IT experience.

EDUCATION

B.Tech, Computer Science and Engg
Uttarakhand Technical University, Dehradun
2010-2014

SKILLS

Big Data stack: Hadoop (Cloudera distribution), Spark, Scala on Spark, Hive, Sqoop, HBase, Unix shell scripting
Databases: MySQL, Oracle, Teradata
Schedulers: Oozie, Control-M, Autosys
Code repo/version control: Git, Bitbucket, Azure DevOps

CERTIFICATIONS

Microsoft Certified: Azure Fundamentals
Docker Essentials: A Developer Introduction
Containers, K8s and Istio on IBM Cloud

WORK EXPERIENCE

Data Engineer
IBM / March 2020 – Present
Project: BASEL III DDEP
Tech Stack: Spark, Hive, PySpark, Azure DevOps, Control-M, HBase

● Building the data pipeline based on the transformed legacy models
● Data processing with Spark modules; shell scripts used for execution
● Transformation and join modules written in PySpark
● Maintaining end-to-end documentation for data flows and transformations
● Workflow automation and job execution using Control-M
● Following Agile methodology and a CI/CD pipeline for version control and code deployment
● Analyzing and translating business requirements into technical requirements and architecture

Big Data Developer
XORIANT / Oct 2019 – March 2020
Project: AI-EXIT
Tech Stack: Hive, Spark SQL, Shell Script, Autosys, Oracle, Ab Initio

The AI-Exit project aims at developing an in-house ETL tool based on Spark and Hive for our client. The client currently uses Ab Initio for its ETL operations; the AI-Exit team aims to replace this dependency with the new tool.

● Building the data flow according to the existing ETL jobs
● Data processing with Spark modules
● Running ETL modules such as Filter, Transformation, Enrichment, and Target Load at the back end
● Handling the configuration entries and metadata
● Job execution monitoring and job automation using Autosys
● Communicating with onsite coordinators and clients on a regular basis
● Analyzing and translating business requirements into technical requirements and architecture

Technology Engineer
Amdocs / May 2016 – Oct 2019
Project: Big Data Analytics (BDA) & Amdocs Data Hub (ADH)
Tech Stack: Oracle, MySQL, HDFS, Hive, Spark SQL, Shell Script, Oozie

● Data extraction from different servers into the data lake
● Data processing using Python and Spark
● Data ingestion into the Hadoop data lake
● Data validation using quality checks
● Job execution monitoring and job automation using shell scripts
● Analyzing and translating business requirements into technical requirements and architecture
● Attending to production issues, fixing them within the given SLA, and providing permanent solutions for recurring issues
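The quality-check style of data validation mentioned above can be sketched as a small rule-driven checker. This is a hedged illustration only, not the actual project tooling: the column names and rules below are hypothetical stand-ins for whatever checks the real pipeline applied.

```python
# Minimal sketch of rule-driven data-quality checks of the kind used to
# validate records before ingesting them into a data lake.
# Illustrative only: the columns ("id", "amount") and rules are hypothetical.

def run_quality_checks(rows, rules):
    """Apply each named rule to every row; return (row, failed_rule_names) pairs."""
    failures = []
    for row in rows:
        failed = [name for name, check in rules.items() if not check(row)]
        if failed:
            failures.append((row, failed))
    return failures

rules = {
    "id_present": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: isinstance(r.get("amount"), (int, float))
    and r["amount"] >= 0,
}

rows = [
    {"id": 1, "amount": 100.0},
    {"id": None, "amount": 50.0},
    {"id": 3, "amount": -5.0},
]

bad = run_quality_checks(rows, rules)
# Two rows fail: one missing its id, one with a negative amount.
```

In a Spark-based pipeline such checks would normally be expressed as DataFrame filters, but the separation of data from named rules shown here is the same idea.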

Programmer Analyst
Cognizant / Sep 2014 – May 2016
Project: IPB Regulatory Reporting (IPB)

Was an indispensable member of a three-person Negative Interest Rate team at offshore, as well as of the 50-person overall IPB team. Application maintenance activities included large and small changes and enhancements to the system, production support, daily and weekly status updates for the client, regular client interaction, and delivery schedule tracking.

Achievements
● Received the "ACE" award twice for delivering high and consistent performance
● Received the "Pat on the Back" award for 4 out of 6 sprints
● Received the "SPOT" award from the Project Manager for analyzing and fixing critical issues
● Received many appreciations from the client side
