You will be joining Near, one of the fastest-growing Enterprise SaaS companies, and experiencing a true start-up culture with the freedom to experiment and innovate. At Near, we believe that great culture is not just about work; it’s work + life. We not only encourage our employees to dream big, but also give them the freedom and the tools to do so.
Near is looking for a DevOps Engineer to perform day-to-day activities and to support the company’s data centers, software, and application platforms that serve the entire business. It is a demanding role that requires the candidate to work with cross-functional teams to diagnose complex issues across the various platforms.
The ideal candidate should have strong experience as a DevOps/SRE engineer and be able to support the business's sites, software, and applications. The candidate should also have superior troubleshooting skills and knowledge of monitoring and alerting mechanisms.
A Day in the Life
- Manage a large-scale production environment and mission-critical infrastructure.
- Handle stability, automation, scalability, deployment, monitoring, alerting, and security, and ensure maximum availability of Near’s tech infrastructure.
- Manage distributed big data systems comprising Kafka, Hadoop, Spark, Hive, Flink, MongoDB, Elasticsearch, and cloud services such as S3 and EMR.
- Work closely with software and big data developers and other collaborating teams to ensure the infrastructure can serve current and future needs.
- Set up monitoring systems, and create and maintain runbooks.
- Participate in a 24x7 on-call support rotation as needed.
- Influence, create, and contribute to the automation platform.
- Take complete ownership of assigned modules and see them through to completion.
What You Bring to the Role
- Bachelor’s/Master’s degree.
- 4–6 years of overall experience in DevOps (AWS, GCP, Azure, or on-premises).
- Strong understanding of the security, transport, and application layers.
- Prior experience setting up instances in data center and cloud environments, preferably AWS.
- Excellent knowledge of Unix operating systems (CentOS preferred) and very good system troubleshooting skills.
- Experience working with web, internet, load-balancing, and big data technologies.
- Proficient in administering Big Data ecosystems – Apache Spark, Kafka, Airflow – and NoSQL databases such as MongoDB.
- Proficient with monitoring tools such as Nagios, Graphite, Cacti, and Ganglia.
- Experience working with database technologies – Redis, MySQL, MongoDB.
- Must have experience with configuration management tools – Puppet, Chef, Fabric, and Ansible.
- Must have experience in at least one programming language – Python, Ruby, or Perl (Python or Ruby preferred).
- Experience with the software engineering lifecycle and handling large-scale deployments.
- Hands-on experience with Continuous Integration/Continuous Delivery tools such as Jenkins, Nexus, Maven, and Ant.
- Must have exceptional problem-solving, analytical, and organisational skills with a detail-oriented attitude.
- Curiosity to learn new and emerging technologies and frameworks.