Job Overview

Location
Noida, Uttar Pradesh
Job Type
Full Time
Date Posted
5 months ago

Additional Details

Job ID
26051
Job Views
60

Job Description

Responsibilities
•    Design and deploy scalable, highly available, and fault-tolerant data pipelines using AWS data services (Glue, Lambda, Step Functions, Redshift) 
•    Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.  
•    Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.  
•    Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost 
•    Develop applications using big-data technologies such as Apache Hadoop and Apache Spark together with appropriate cloud platforms such as AWS 
•    Build data pipelines using ETL (Extract-Transform-Load) processes 
•    Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.  
•    Analyse business and functional requirements, including reviewing existing system configurations and operating methodologies and understanding evolving business needs 
•    Analyse requirements/user stories in business meetings, assess the impact of requirements on different platforms/applications, and convert business requirements into technical requirements 
•    Participate in design reviews to provide input on functional requirements, product designs, schedules, and potential problems 
•    Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost, require minimal maintenance, and provide high availability with improved security 
•    Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work unchanged 
•    Coordinate with release management and other supporting teams to deploy changes to the production environment 
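
The ETL responsibilities above can be illustrated with a minimal sketch. This is a simplified, self-contained Python example: the field names are hypothetical, and a real pipeline would target services such as AWS Glue, Spark, or Redshift rather than an in-memory dict.

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV text into a list of record dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(records: list[dict]) -> list[dict]:
    """Transform: normalise types and drop rows with missing values."""
    out = []
    for r in records:
        if not r.get("amount"):
            continue  # skip rows missing the amount field
        out.append({"id": r["id"], "amount": float(r["amount"])})
    return out

def load(records: list[dict], target: dict) -> None:
    """Load: write records into the target store (a dict stands in
    for a warehouse table, e.g. in Redshift, in this sketch)."""
    for r in records:
        target[r["id"]] = r["amount"]

raw = "id,amount\na,10.5\nb,\nc,3.0\n"
warehouse = {}
load(transform(extract(raw)), warehouse)
print(warehouse)  # → {'a': 10.5, 'c': 3.0}
```

The same extract/transform/load structure scales up directly: in production the extract step reads from a legacy database, the transform runs on Spark or Glue, and the load writes to Redshift or DynamoDB.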

Qualifications we seek in you!
Minimum Qualifications
•    Experience designing and implementing data pipelines, building data applications, and performing data migrations on AWS 
•    Strong experience implementing data lakes using AWS services such as Glue, Lambda, Step Functions, and Redshift 
•    Experience with Databricks is an added advantage 
•    Strong experience in Python and SQL 
•    Strong understanding of security principles and best practices for cloud-based environments.  
•    Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.  
•    Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment.  
•    Strong communication and collaboration skills to work effectively with cross-functional teams.  

Preferred Qualifications/ Skills
•    Master’s degree in Computer Science, Electronics, or Electrical Engineering 
•    AWS Data Engineering & Cloud certifications, Databricks certifications 
•    Experience working with Oracle ERP 
•    Experience with multiple data integration technologies and cloud platforms  
•    Knowledge of Change & Incident Management processes 

Qualification

Any Graduate

Experience Requirements

Fresher Experience
