Job Overview:
This position is responsible for designing, developing, and maintaining data analytics platforms for IT operational and security analytics. The role interfaces with stakeholders and teams to design, set up, and monitor the environment, provides exposure to different phases of software development and architecture, and offers the opportunity to work on data processing pipelines and big data platforms.

Key Responsibilities:
- Design, implement, test, and deploy data processing infrastructure based on an understanding of the use cases
- Research and assess the viability of new processing and data storage technologies
- Understand business objectives and goals, and design services that couple business logic with reusable components for future expansion
- Monitor performance and advise on any necessary infrastructure changes
- Communicate effectively with stakeholders as well as engineers
- Deliver on assigned projects

Typical Role Definition:
A seasoned, experienced professional with a full understanding of the area of specialization. Resolves a wide range of issues in creative ways. Has general knowledge of related disciplines and strong competence with the various tools, procedures, and programming languages used to accomplish the job. Usually works with minimal supervision, conferring with a supervisor on unusual matters. May be assisted by (and at times direct) less senior employees. Assignments are broad in nature and require ingenuity and originality to solve. Contributes to moderately complex aspects of a project and may assist more junior staff members with aspects of their job. Works on problems of diverse scope where analysis of data requires evaluation of identifiable factors. May play a role in high-level projects that have an impact on the company's future direction.
Work Experience:
- Experience integrating data from multiple data sources, including processing a diversity of structured and unstructured data
- Strong experience with Python, Scala, or Java
- Has worked with SQL or NoSQL databases (such as MySQL, Cassandra, MongoDB)
- Experience with Hadoop-based big data lakes and ecosystem tools such as Spark, Scalding, and MapReduce
- Experience designing, deploying, and administering enterprise-class big data clusters (Kafka, NiFi, Storm, Spark, HDFS)
- Ability to reason about performance tradeoffs

Skills & Competencies:
- Coaching and mentoring
- Detail oriented, with strong attention to detail
- In-depth knowledge of big data platforms and practices
- Decision making; self-starter