What we’re looking for:

We’re looking for a Data Operations Engineer to help us build and maintain the data backbone and database infrastructure behind our cloud-based environment services. You will work closely with our Data Analytics team to implement, support, and scale our Hadoop/HDFS and MongoDB infrastructure.
 
Here are the values we look for in a team member:
  • You believe in automating everything — configuration management isn’t just a good idea but a requirement
  • You prove your ideas with metrics and testing, no guesswork
  • You can build tools to detect operational issues, but would rather build tools to prevent them
  • You are dedicated to building a robust infrastructure free of single points of failure
 
Here’s what you need:
  • 3-5+ years of experience in operations and Linux systems engineering
  • Experience setting up and maintaining Hadoop/HDFS clusters in production
  • Experience with various Hadoop-related tooling, particularly Spark, Pig, and Hive
  • You’ve built and scaled production MongoDB clusters
  • Strong skills in at least one systems programming language (Python, Ruby, Go, etc.)
  • Experience with Chef or Puppet (we use Chef)
And for bonus points:
  • You’ve worked with the Cloudera Distribution for Hadoop (CDH) stack
  • You’ve built and maintained RabbitMQ clusters in production
  • You’re a master of Chef
  • You’ve helped build out large-scale production systems using Amazon Web Services
  • You’re creative, passionate, curious, and someone we can learn from
