Manager – Software Engineering: 10+ yrs (Python / APIs / RabbitMQ / Celery / PostgreSQL / Docker)
A Little About Us
UniCourt is a leader in making court data more accessible and useful with our Legal Data as a Service (LDaaS). We provide real-time access to court data through our APIs and online app for business development and intelligence, litigation analytics, litigation tracking, case research, investigations, background checks, due diligence, compliance, underwriting, machine learning models, and process automation.
We provide access to state and federal court data to a diverse range of clients, from Fortune 500 companies and AmLaw firms to individual consumers, across industries such as legal, insurance, finance, investigations, government, education, and nonprofits.
UniCourt is a legal technology company focused on unlocking the potential of legal data. We are based in both California and Mangalore, India, and our team includes legal professionals, data scientists, physicists, computer engineers, and sales and marketing professionals.
About the Job
We are looking for a Manager – Software Engineering (Data Extraction) to lead and scale UniCourt’s core data extraction and processing systems in the Legal Tech domain. The ideal candidate is a hands-on technical leader with strong expertise in Python-based backend systems, data pipelines, and automation frameworks. You will be responsible for driving innovation in large-scale data ingestion, transformation, and standardization while maintaining the highest levels of accuracy, reliability, and performance. This role requires close collaboration with Product, QA, and DevOps teams to ensure timely delivery, process consistency, and adherence to UniCourt’s values of innovation, transparency, and excellence in data quality.
Our company creates some of the world’s most cutting-edge software solutions in the legal industry. We solve difficult problems, work on innovative technology, and build world-class platforms for people and enterprises to interact with court records and other public data sets. With some of the best minds in the industry, we’re one of the most sought-after learning and career destinations in the world of legal tech. If you’re looking to work at a company with opportunities to forge your career path in technology, UniCourt is the right place for you. Our customers range from individuals who interact with court records a few hours a month to enterprise clients who spend several hours every day on our SaaS platform.
Duties & Responsibilities
- 1. Sprint & Resource Management
- a) Lead sprint planning, prioritization, and re-planning with Product and QA leadership.
- b) Ensure equitable resource allocation and optimize team productivity.
- c) Track sprint progress, manage interdependencies, and ensure timely delivery of data extraction milestones.
- 2. Requirements & Functional Design Ownership
- a) Own and finalize High-Level Requirements Documents (HLRs) in collaboration with Product Managers, CTO, and data stakeholders.
- b) Create and manage Jira Epics, User Stories, and Tasks for data-related projects.
- c) Lead the creation of Functional Design Documents (FDDs) for data workflows, defining success metrics for extraction accuracy, coverage, and timeliness.
- d) Review Interface Design Documents (IDDs) related to API integrations and provide improvement feedback.
- 3. Data Quality & Testing Collaboration
- a) Review and approve QA test plans for data extraction, validation, and transformation workflows.
- b) Define CI/CD pipeline requirements so that regression, integration, and data consistency tests block faulty deployments (see the consistency-check sketch after this list).
- c) Partner with QA and data engineering teams to identify recurring data issues, optimize extraction logic, and improve validation frameworks.
- d) Oversee release management processes, ensuring smooth rollouts and detailed release documentation.
- 4. Delivery Excellence
- a) Ensure that all data extraction projects are delivered on time, meeting quality and scalability standards.
- b) Communicate deviations in requirements or timelines proactively to stakeholders.
- c) Oversee hotfixes and quick-turnaround sprints to resolve critical data or pipeline issues efficiently.
- 5. Incident & Production Management
- a) Escalate and report critical data incidents with detailed root cause analysis and corrective actions.
- b) Maintain dashboards for tracking extraction performance, job failures, latency, and data freshness.
- c) Share periodic reports on system health, coverage, and performance with leadership.
- 6. Process Management & Governance
- a) Define and enforce development and release processes tailored for data extraction pipelines.
- b) Establish checklists for requirements approval, design reviews, coding standards, testing, documentation, and release readiness.
- c) Keep data extraction architecture and process documentation up to date in Confluence or similar tools.
- 7. Cost & Tool Optimization
- a) Monitor cloud (AWS) costs related to crawling, storage, and data pipeline execution.
- b) Evaluate and implement new Python libraries, frameworks, or tools to improve data extraction accuracy and efficiency.
- c) Lead POCs to test and adopt scalable approaches using technologies like Scrapy, FastAPI, Airflow, or AWS Lambda.
- d) Track and prioritize initiatives impacting data quality and client deliverables.
- 8. Team Building & Hiring
- a) Build and structure the team with focus on scalability, performance, and automation.
- b) Collaborate with HR to drive recruitment for Python developers, data engineers, and automation specialists.
- c) Design level-specific technical assessments for Python and data pipeline skills and ensure structured feedback documentation.
- 9. Coaching, Mentoring & Performance Development
- a) Conduct regular 1:1s to guide team members on career growth, technical development, and performance improvements.
- b) Mentor engineers on advanced Python coding practices, data engineering standards, and architectural best practices.
- c) Identify and nurture high-potential employees and build succession plans for key roles.
- 10. Strategic & Analytical Thinking
- a) Apply strong analytical thinking to improve data accuracy, reduce processing time, and enhance maintainability.
- b) Drive initiatives that align with organizational KPIs and client impact metrics.
- c) Anticipate data-related risks and implement proactive solutions at scale.
- 11. Leadership & Communication
- a) Lead with empathy, integrity, and transparency.
- b) Encourage innovation and collaboration across cross-functional teams.
- c) Communicate clearly with stakeholders on goals, challenges, and project updates, maintaining alignment at all levels.
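To make the data-quality gate in duty 3(b) concrete, here is a minimal sketch of the kind of consistency check that could block a faulty deployment in CI. The schema fields, sample record, and `validate_record` helper are illustrative assumptions, not UniCourt’s actual data model or tooling.

```python
# Hypothetical data-consistency gate; in CI this would run against a
# staging extraction batch, and a failure would block the deployment.
from datetime import datetime, timezone

REQUIRED_FIELDS = {"case_number", "court", "filing_date", "parties"}  # assumed schema

def validate_record(record: dict) -> list[str]:
    """Return human-readable problems found in one extracted record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    filing = record.get("filing_date")
    if isinstance(filing, str):
        try:
            parsed = datetime.fromisoformat(filing)
        except ValueError:
            problems.append(f"unparseable filing_date: {filing!r}")
        else:
            if parsed.date() > datetime.now(timezone.utc).date():
                problems.append("filing_date is in the future")
    return problems

def test_sample_batch_is_consistent():
    # In CI the batch would come from the staging pipeline; inlined here.
    batch = [
        {"case_number": "2:24-cv-01234", "court": "cacd",
         "filing_date": "2024-03-01", "parties": ["Roe", "Acme Corp"]},
    ]
    failures = {r["case_number"]: p for r in batch if (p := validate_record(r))}
    assert not failures, f"data consistency check failed: {failures}"
```

Run as a required pytest stage in the pipeline, a non-empty `failures` map fails the build before anything reaches production.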
Qualifications
- 1. Bachelor’s or Master’s degree in Computer Science
Required Skills
- 1. 10+ years of software engineering experience, including 2–3 years in a leadership or management role.
- 2. Strong expertise in Python, with experience in frameworks like FastAPI, Django, Flask, or Scrapy.
- 3. Strong experience containerizing applications with Docker and deploying them on Kubernetes (K8s).
- 4. Hands-on experience with data extraction, web scraping, ETL pipelines, and data processing frameworks.
- 5. Proficiency with AWS (Lambda, EC2, S3, RDS, CloudWatch) and CI/CD tools.
- 6. Familiarity with Airflow, Kafka, or Celery for workflow orchestration and automation (see the Celery worker sketch after this list).
- 7. Strong understanding of Agile, SDLC, and DevOps principles.
- 8. Excellent communication, problem-solving, and leadership skills.
- 9. Experience in Legal Tech, Data Analytics, or SaaS environments is a strong plus.
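Several of the skills above intersect in a typical extraction worker. The sketch below shows a Celery task on a RabbitMQ broker with automatic retries; the broker URL, error type, and fetch/parse stubs are illustrative assumptions, not UniCourt’s actual setup.

```python
from celery import Celery

# Broker URL is the RabbitMQ default for local development; illustrative only.
app = Celery("extraction", broker="amqp://guest:guest@localhost:5672//")

class TransientFetchError(Exception):
    """Retryable failure, e.g. a timeout or 5xx response from a court site."""

def fetch_docket_html(court_id: str, case_number: str) -> str:
    # Stub standing in for the real court-site fetcher.
    return f"<html>{court_id}:{case_number}</html>"

def parse_docket(raw_html: str) -> dict:
    # Stub standing in for the real extraction/normalization logic.
    return {"raw_length": len(raw_html)}

@app.task(autoretry_for=(TransientFetchError,), retry_backoff=True, max_retries=3)
def extract_docket(court_id: str, case_number: str) -> dict:
    """Extract one docket; transient failures retry with exponential backoff."""
    return parse_docket(fetch_docket_html(court_id, case_number))
```

A scheduler or API handler would enqueue work with `extract_docket.delay("cacd", "2:24-cv-01234")`, letting RabbitMQ distribute tasks across worker processes.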