You will be joining Near, one of the fastest-growing Enterprise SaaS companies, where you will experience a true start-up culture with the freedom to experiment and innovate. At Near, we believe that great culture is not just about work; it’s work + life. We not only encourage our employees to dream big but also give them the freedom and the tools to do so.
This role provides an opportunity to be part of the Research & Development team at Near. You will work with data at a huge scale and a cutting-edge tech stack, leveraging your skills to help us build a high-value, scalable product. You will be responsible for developing techniques to enhance data, and you will collaborate with Data Scientists, Software Engineers, and UI Engineers, working as part of a high-performance team to solve problems.
At the moment, we operate on a hybrid model and are actively searching for candidates, preferably located in Bangalore.
A Day in the Life
- Design and implement our data processing pipelines for different kinds of data sources, formats, and content for the Near Platform. Working with huge Data Lakes, Data Warehouses, and Data Marts is part of this challenging role.
- Design and develop solutions that are scalable, generic, and reusable.
- Collect, store, process, and analyze huge sets of data we receive from different sources.
- Develop techniques to analyze and enhance both structured and unstructured data, and work with big data tools and frameworks.
- Collaborate closely with Data Scientists and Business Analysts to understand data and functional requirements.
- Design, build, and support existing data pipelines to standardize, clean, and ingest data.
- Participate in product design and development activities supporting Near’s suite of products.
- Liaise with various stakeholders across teams to understand business requirements.
What You Bring to the Role
- Should hold a Bachelor’s or Master’s degree in Computer Science or a related field.
- Must have 3-5 years of experience, with at least 1 year at a data-driven company/platform.
- Prior experience with distributed data processing frameworks such as Apache Spark, Apache Flink, or Hadoop is a must.
- Demonstrated teamwork is crucial, along with a flexible, collaborative mindset and the judgment to choose the right tools and technologies for each problem.
- Proficiency with distributed systems and their frameworks, strong algorithmic skills, and knowledge of design patterns are expected.
- An in-depth understanding of big data technologies (e.g., Kafka, Spark) and NoSQL databases (e.g., HBase, Cassandra, MongoDB) is necessary.
- Additional experience with the AWS cloud platform, Spring Boot, and API development is a valuable plus.
- Exceptional problem-solving and analytical abilities, coupled with organizational skills and meticulous attention to detail, are essential traits.