6+ years of experience developing and maintaining scalable software applications using Python and PySpark for large-scale data processing and analytics.
Requirements
- Develop and maintain scalable software applications using Python and PySpark for large-scale data processing and analytics.
- Design and implement efficient Big Data solutions, including distributed data pipelines and batch/stream processing systems.
- Collaborate with cross-functional teams to define requirements, design architecture, and deliver high-quality software components.
- Ensure code quality through best practices such as unit testing, code reviews, and version control, while maintaining clear documentation.
- Monitor, troubleshoot, and optimize application performance, focusing on scalability, reliability, and efficient resource utilization in Big Data environments.
Benefits
- Health Insurance
- Dental Insurance
- Vision Insurance
- 401(k) Matching
- Paid Time Off
To apply for this job, please visit virtusa.taleo.net.