You will be joining Near, one of the fastest-growing Enterprise SaaS companies, and experiencing a true start-up culture with the freedom to experiment and innovate. At Near, we believe that great culture is not just about work; it’s work + life. We not only encourage our employees to dream big, but also give them the freedom and the tools to do so.
This role provides an opportunity to be part of the Data Engineering team at Near. You will work with data at huge scale on a cutting-edge tech stack, and leverage your skill set to help us build a high-value, scalable product. You will be responsible for developing techniques to enhance data, and will collaborate with Data Scientists, Software Engineers, and UI Engineers as part of a high-performance team to solve problems.
A Day in the Life
- Design and implement our data processing pipelines for different kinds of data sources, formats and content for the Near Platform. Working with huge Data Lakes, Data Warehouses and Data Marts is part of this challenging role.
- Design and develop solutions which are scalable, generic and reusable.
- Collect, store, process, and analyze huge data sets coming from different sources.
- Develop techniques to analyze and enhance both structured and unstructured data, working with big data tools and frameworks.
- Collaborate closely with Data Scientists and Business Analysts to understand data and functional requirements.
- Design and build new data pipelines, and support existing ones, to standardize, clean and ingest data.
- Participate in product design and development activities supporting Near’s suite of products.
- Liaise with various stakeholders across teams to understand business requirements.
What You Bring to the Role
- You should hold a B.Tech/M.Tech degree.
- You should have 3-6 years of experience, with a minimum of 3 years at a data-driven company/platform. Competency in core Java is a must.
- You should have worked with distributed data processing frameworks like Apache Spark, Apache Flink or Hadoop.
- You should be a team player with an open mind, approaching problems with the right set of tools and technologies and working with the team to solve them the right way.
- You should have knowledge of frameworks and distributed systems, and be strong in algorithms, data structures, and design patterns.
- You should have an in-depth understanding of big data technologies and NoSQL databases (Kafka, HBase, Spark, Cassandra, MongoDB, etc.).
- Work experience with the AWS cloud platform, Spring Boot, and API development will be a plus.
- You should have exceptional problem-solving and analytical abilities, and organisational skills with an eye for detail.