Job Title: Big Data Engineer_N07
Roles and Responsibilities:
- Develop and deploy big data jobs (Informatica and Spark) to build the enterprise data lake.
- Document big data use cases, solutions, and recommendations.
- Help program and project managers design, plan, and govern the implementation of big data projects.
- Perform detailed analysis of business problems and technical environments, and use this analysis to design solutions.
- Ensure adherence to architecture standards and best practices to maintain consistency across the enterprise landscape.
- Ensure the integrity and security of the assigned data architecture.
Required Skills and Experience:
- Minimum 3 years’ experience on big data platforms, especially the Cloudera Hadoop ecosystem: HDFS, Hive, HBase, Impala, Sqoop, Spark, Kafka.
- Minimum 2 years’ experience with Informatica modules (Informatica 10.4 experience preferred): BDM/DEI, DES, PowerCenter, PowerExchange, CDC.
- Strong scripting skills in Linux environments and strong SQL skills.
- Experience in data modeling in Hadoop.
- Good experience with CI/CD and DevOps tools: Nexus, Jenkins, GitHub, SVN, JIRA.
- Good to have: knowledge of cloud data lakes and of visualization tools such as Tableau.
- Good to have: knowledge of Informatica EDC and Axon.
- Ability to prioritize and multitask across numerous work streams.
- Strong interpersonal skills and the ability to work on cross-functional teams; strong verbal and written communication skills.
- Deep knowledge of best practices, gained through relevant experience across data-related disciplines and technologies, particularly enterprise-wide data architectures and data warehousing/BI.