You will join Near, one of the fastest-growing Enterprise SaaS companies, and experience a true start-up culture with the freedom to experiment and innovate. At Near, we believe that great culture is not just about work; it’s work + life. We not only encourage our employees to dream big but also give them the freedom and the tools to do so.
Near is seeking a Data Engineer to join our R&D team as we scale globally. You will be a key member of the Research & Development team and will collaborate with your team members, product managers, data scientists, and data analysts to develop innovative data-driven products. This role requires you to be hands-on in writing code, ideating and developing innovative product features, extracting and building intelligence from data, developing models, and building pipelines along with the rest of the team.
A Day in the Life
- As part of the R&D team, work on research & development of innovative solutions to address new-age challenges within the broader Big Data management space.
- Collaborate with product managers, understand customer requirements, and suggest appropriate solutions from a technical development standpoint.
- Translate business requirements into executable steps from a technical development standpoint and independently execute them.
- Write and optimize code for maximum efficiency, and build reusable code and libraries for future use.
- Develop an understanding of different internal data sources and external data tools and use the best of them as needed.
- Conduct research and create intellectual property for the company that will benefit Near and its partners.
- Stay up to date on emerging technologies, trends, and skills.
- Incorporate customer feedback and build solutions to address market and customer needs.
- Synthesize both quantitative and qualitative data into insights that deepen our understanding of our product performance and user behavior.
What You Bring to the Role
- Bachelor’s or Master’s degree in Engineering from a reputed institute.
- 2–4 years of experience.
- Proficiency in Python, Apache Spark (Java, Scala, or Python) and other big-data technologies, as well as SQL and NoSQL databases (e.g., MongoDB).
- Familiarity with ML approaches and algorithms, and some experience building and deploying models.
- Experience optimizing data pipelines, queries, and compute.
- Strong problem-solving, analytical, and organizational skills with an eye for detail.
- Experience working with AWS cloud services.
- Ability to quickly pick up new tools and technologies from the modern tech/data stack.
- Experience in structured data analytics/BI/reporting/data visualization.
- Passion for learning new technologies and creating products and intellectual property.