EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.
We are looking for a Data Software Engineer to join our team in Hungary. Learn more about our Data Practice.
Responsibilities
Develop and implement innovative analytical solutions leveraging Cloud Native, Big Data, and NoSQL technologies
Build and deploy Cloud, On-Premise, or Hybrid solutions using leading data frameworks
Collaborate with product and engineering teams to analyze requirements and support decision-making
Coordinate with architects, technical leads, and stakeholders in other functional groups
Conduct business problem and technical environment analyses to implement high-quality solutions
Participate in code reviews and validate solutions' adherence to best practices
Foster and sustain a culture of high-performance engineering
Document projects effectively
Requirements
2+ years of experience in Data Software Engineering
Coding experience with one of the following programming languages: Python/Java/Scala/Kotlin
Understanding of cloud environments with leading providers (AWS, Azure, GCP), including Storage, Compute, Networking, Identity and Security, NoSQL, RDBMS and Cubes, Big Data Processing, Queues and Stream Processing, Serverless, Data Analysis and Visualization, Machine Learning as a Service (e.g., SageMaker, TensorFlow)
Familiarity with cloud-native technologies such as Databricks, Azure Data Factory, AWS Glue, AWS EMR, Athena, GCP Dataproc, or GCP Dataflow
Familiarity with Big Data technologies like Spark Core, Spark SQL, Spark ML, Kafka, Kafka Connect, Airflow, NiFi, or StreamSets
Experience with Linux OS, configuring services, and scripting with basic shell commands, as well as understanding network fundamentals
Competency in SQL and relational algebra
Background in developing software solutions using Big Data technologies, including administration, monitoring, debugging, configuration management, and performance tuning
Knowledge of data ingestion pipelines, Data Warehousing, and Data Lakes concepts
Expertise in data modeling and development experience using modern Big Data components
Knowledge of designing scalable, available, and fault-tolerant systems
Understanding of CI/CD principles and best practices
Analytical problem-solving abilities paired with excellent interpersonal and communication skills
Data-driven mindset combining motivation, independence, and efficiency; ability to thrive under pressure and prioritize effectively
Flexibility to adapt to fast-paced (startup-like) agile environments
Knowledge of container and resource management systems such as Docker and Kubernetes
Proficiency in infrastructure troubleshooting, performance tuning, and resolving bottlenecks
Broad exposure to diverse business domains
English language proficiency (B2 level or higher)
Nice to have
Experience with the Snowflake platform
We offer
Dynamic, entrepreneurial corporate environment
Diverse multicultural, multi-functional, and multilingual work environment
Opportunities for personal and career growth in a progressive industry
Global scope, international projects
Widespread training and development opportunities
Unlimited access to LinkedIn learning solutions
Competitive salary and various benefits
Advanced wellbeing and CSR programs, recreation area
Do you know someone interested in starting a career in IT? Share our free online learning resources with them, where they can enhance their knowledge in various fields at no charge.