DevOps Engineer

Job Category: Technology and IT
Job Type: Contract
Job Location: USA
Company Name: Vy Systems

Company Overview

Vy Systems, a part of the vy.ventures group, has been delivering exceptional technology consulting, solutions, and managed services across multiple countries since 2002. With over two decades of experience, we’ve developed a unique organizational DNA that blends intellectual acumen with emotional intelligence—striking a thoughtful balance between analytical skill, sound judgment, and human insight.

Our approach is rooted in a people-first culture that values passionate dialogue, open debate, and consensus-driven decision-making. These principles guide us in solving complex challenges and delivering consistent, high-quality service to our clients.

At the core of Vy Systems are values that emphasize transparency, trust, dependability, responsiveness, and authentic, ethical business conduct. These values not only define how we work; they shape the exceptional outcomes we deliver to every stakeholder we serve.

About the Job

We are seeking a skilled and motivated DevOps engineer with hands-on experience in modern data and cloud technologies to join our team. This role requires expertise in building and maintaining continuous integration and continuous deployment (CI/CD) pipelines, working with real-time data streaming platforms, and orchestrating cloud-native applications.

The ideal candidate will have practical experience in the following areas:

  • CI/CD (Continuous Integration and Continuous Deployment): Design, implement, and manage automated pipelines to streamline code integration, testing, and deployment. You’ll ensure rapid and reliable software delivery across development and production environments.

  • Apache Kafka: Develop and maintain robust data streaming solutions using Kafka. You’ll be responsible for handling high-throughput, real-time data pipelines that are critical for data processing and application performance.

  • Apache Spark: Utilize Spark for large-scale data processing and analytics. This includes writing efficient Spark jobs for batch and streaming workloads to transform and analyze data at scale.

  • ETL (Extract, Transform, Load): Design and manage ETL workflows to move and transform data between various sources and destinations. You will work on building scalable, fault-tolerant pipelines that support analytical and operational workloads.

  • Amazon Web Services (AWS): Leverage various AWS services to build scalable, secure, and cost-effective infrastructure. Experience with services like EC2, S3, RDS, Lambda, and IAM is expected.

  • Kubernetes: Deploy and manage containerized applications using Kubernetes. You’ll be involved in cluster management, scaling applications, and ensuring high availability in a cloud-native environment.

  • Docker: Build, deploy, and manage Docker containers to encapsulate application environments, ensuring consistency across development, testing, and production stages.

Apply for this position

Allowed Type(s): .pdf, .doc, .docx