You will be joining Near, one of the fastest-growing Enterprise SaaS companies, where you will experience a true start-up culture with the freedom to experiment and innovate. At Near, we believe that great culture is not just about work; it’s work + life. We not only encourage our employees to dream big but also give them the freedom and the tools to do so.
This role provides an opportunity to be a part of the Research & Development team at Near. You will get exposure to working with data at huge scale and a cutting-edge tech stack, and you will leverage your skill set to help us build a high-value, scalable product. You will be responsible for developing techniques to enhance data, and you will collaborate with Data Scientists, Software Engineers, and UI Engineers as part of a high-performance team to solve problems.
A Day in the Life
- Design & implement data exchange workflows with data in Lakehouse, operating on a large volume of data.
- Develop extension modules for Apache Spark - beyond UDFs.
- Design & develop modules with a focus on Privacy & Security - Privacy First Development.
- Actively participate in design, development, testing, and deployment - manage the end-to-end lifecycle of software development.
- Ensure that the platform is operating at its best performance & responsiveness.
- Continuous innovation - finding new and unique ways to solve known and unknown problems.
- Design and develop solutions that are scalable, generic, and reusable.
- Collect, store, process, and analyze huge sets of data coming from different sources.
- Develop techniques to analyze and enhance both structured and unstructured data, working with big data tools and frameworks.
- Design and build new data pipelines, and support existing ones, to standardize, clean, and ingest data.
- Participate in product design and development activities supporting Near’s suite of products.
- Liaise with various stakeholders across teams to understand business requirements.
What You Bring to the Role
- You should hold a B.Tech/M.Tech degree.
- Skills: 2-5 years of experience in Scala/Java.
- At least 2 years of practical, hands-on experience with Apache Spark, Presto/Trino, and HDFS.
- Knowledge of the internals of Apache Spark is a plus.
- Experience working/operating in multi-cloud environments is a plus.