
Cloud Platform Engineer

Gabi

Software Engineering
Hyderabad, Telangana, India
Posted on Oct 10, 2024

Company Description

Experian unlocks the power of data to create opportunities for consumers, businesses and society. During life’s big moments – from buying a home or car, to sending a child to college, to growing a business exponentially by connecting it with new customers – we empower consumers and our clients to manage data with confidence so they can maximize every opportunity. We gather, analyse and process data in ways others can’t. We help individuals take financial control and access financial services, businesses make smarter decisions and thrive, lenders lend more responsibly, and organizations prevent identity fraud and crime. For more than 125 years, we’ve helped consumers and clients prosper, and economies and communities flourish – and we’re not done. Our 20,600 people in 43 countries believe the possibilities for you, and our world, are growing. We’re investing in new technologies, talented people and innovation so we can help create a better tomorrow.

Job Description

As a key aide to both the IT Infrastructure and Development teams, you will help support existing systems 24x7 and be responsible for administering our current Big Data environments. You will manage Big Data cluster environments and work with teammates to maintain, optimize, develop, and integrate working solutions for our big data tech stack. You will support the product development process in line with the product roadmap for product maintenance and enhancement, so that the quality of software deliverables maintains excellent customer relationships and grows the customer base.

If you have the skills and “can do” attitude, we would love to talk to you!

What you’ll be doing

  • Assess existing Cloudera infrastructure, data, and applications to develop a migration strategy to cloud-native solutions.
  • Develop and maintain Terraform modules and infrastructure as code (IaC) for provisioning and managing AWS resources.
  • Apply expert software development lifecycle practices (branch/release strategies, peer review, and merge practices) and deliver innovative CI/CD solutions using emerging technologies.
  • Deploy and maintain Kubernetes (EKS) clusters at an expert level.
  • Automate infrastructure and Big Data technology deployment, build, and configuration using DevOps tools.
  • Implement security best practices for the big data platform (HBase, HDFS, Kafka, Hive, etc.).
  • Maintain clusters, including creating and removing nodes, using tools such as Cloudera Manager Enterprise.
  • Optimize EMR clusters for performance and cost-efficiency.
  • Manage and review Hadoop log files; manage and monitor the file system.
  • Support and maintain HDFS.
  • Team diligently with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
  • Collaborate with application teams to perform Hadoop updates, patches, and version upgrades when required.
  • Bring general operational expertise: strong troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks.
  • Most essential: deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, schedule and configure the cluster, and take backups.
  • Demonstrate a solid understanding of on-premises and cloud network architectures.
  • Apply additional Hadoop-ecosystem skills such as Sentry, Spark, Kafka, and Oozie.
  • Use advanced experience with AD/LDAP security integration with Cloudera, including Sentry and ACL configurations.
  • Configure and support API and open-source integrations.
  • Work in a DevOps environment, developing solutions with tools such as Ansible.
  • Collaborate and communicate with all levels of technical and senior business management.
  • Provide 24x7 on-call support of production systems on a rotation basis with other team members.
  • Proactively evaluate evolving technologies and recommend solutions to business problems.
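To give a flavor of the EMR cost-optimization work described above, here is a minimal Python sketch of the kind of cluster definition such automation manages, trading Spot versus On-Demand capacity for core nodes. The `emr_cluster_config` helper, cluster name, release label, and instance types are all illustrative assumptions, not Experian's actual tooling.

```python
# Illustrative only: assemble a minimal EMR cluster definition in the
# shape boto3's run_job_flow expects. All names and values are hypothetical.
def emr_cluster_config(name, core_nodes, use_spot=True):
    """Build an EMR cluster request with Spot or On-Demand core nodes."""
    market = "SPOT" if use_spot else "ON_DEMAND"
    return {
        "Name": name,
        "ReleaseLabel": "emr-6.15.0",
        "Applications": [{"Name": "Spark"}, {"Name": "Hive"}],
        "Instances": {
            # The master node stays On-Demand for stability; core nodes
            # can run on Spot capacity to cut cost.
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
                 "InstanceCount": 1, "Market": "ON_DEMAND"},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge",
                 "InstanceCount": core_nodes, "Market": market},
            ],
            # Let the cluster terminate itself when no steps remain.
            "KeepJobFlowAliveWhenNoSteps": False,
        },
    }

config = emr_cluster_config("nightly-etl", core_nodes=4)
print(config["Instances"]["InstanceGroups"][1]["Market"])  # SPOT
```

In practice this payload would be passed to `boto3`'s EMR `run_job_flow` call (or expressed as Terraform/IaC), with the Spot/On-Demand split tuned per workload.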

Qualifications

  • Typically requires a bachelor's degree (in Computer Science or related field) or equivalent.
  • 5+ years of DevOps experience working in a multi-cloud environment (AWS preferable)
  • 3+ years of Linux (Red Hat) system administration
  • Hadoop infrastructure administration is a big plus
  • Cloud platforms (IaaS/PaaS): AWS, Azure, VMware
  • Automation skills – Ansible and Terraform.
  • Kerberos administration skills
  • Experience with Cloudera distribution, AWS EMR on EKS, EC2 & serverless
  • Experience in coding languages, especially Python
  • Good experience with CI/CD tools such as Jenkins and Puppet, plus shell scripting
  • Knowledge of DevOps tools
  • Working knowledge of YARN, HBase, Hive, Spark, Kafka, Solr, etc.
  • Strong problem-solving and creative-thinking skills
  • Effective oral and written communication skills
  • Experience working with geographically distributed teams
  • Bachelor’s or master’s degree in Computer Science, or equivalent experience
  • Knowledge and understanding of the business strategy and use of back-office applications.
  • Ability to adapt to multi-lingual and multicultural environment, additional language skills are a bonus.
  • Ability to handle conflicting priorities.
  • Ability to learn.
  • Adaptability.
  • Receptive to change.
  • Ability to communicate with business users at all levels
  • Analytical skills

Additional Information

Experian Careers - Creating a better tomorrow together

Find out what it’s like to work for Experian.