Strength. Care. Growth.
The A1 Competence Delivery Center (CDC) is a vital component of A1’s telecommunications business. Acting as an expertise hub, the CDC is dedicated to delivering a full range of high-quality IT, network, financial, and other services that support A1’s operations across all OpCos, regardless of location.
By harnessing the power of being OneGroup and leveraging synergies, the CDC enables transparency of resources, expansion of key skills and knowledge, and enhanced personal career growth opportunities, paired with job stability.
This job advertisement is posted as part of preparing the structure that will host the Competence Delivery Centers.
This job can be performed from any country within the A1 footprint.
Role Insights:
- Design, develop, and test scalable Big Data solutions for A1 (batch and near real-time).
- Build and own Airflow DAGs for orchestration, scheduling, and monitoring of data workflows (see the sketch after this list).
- Develop, schedule, and optimize Databricks Jobs (primarily in Python) for data processing.
- Contribute to the construction, architecture, and governance of the central Data Lake.
- Optimize data flows and ensure end-to-end data quality, accuracy, and observability.
- Collaborate closely with Data Scientists and business stakeholders to deliver data products.
- Drive innovation by testing, comparing, and piloting new tools and technologies.
- Document solutions and follow best practices (version control, testing, code reviews).
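For illustration only, here is a minimal sketch of the kind of workflow this role would own: an Airflow DAG that triggers a Databricks job and follows it with a simple data-quality check. All identifiers (DAG id, connection id, Databricks job id, team name) are hypothetical and do not refer to A1’s actual pipelines.

```python
# Illustrative only: a minimal Airflow DAG (Airflow 2.4+) that triggers a
# Databricks job and then runs a post-load data-quality check.
# All ids and names below are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.databricks.operators.databricks import (
    DatabricksRunNowOperator,
)

default_args = {
    "owner": "data-engineering",  # hypothetical team name
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

def check_row_count(**context):
    # Placeholder quality gate: a real check would query the Data Lake
    # and raise an exception if expectations are not met.
    print("Running post-load data-quality checks...")

with DAG(
    dag_id="daily_batch_ingest",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",            # daily batch run
    catchup=False,
    default_args=default_args,
    tags=["big-data", "databricks"],
):
    run_databricks_job = DatabricksRunNowOperator(
        task_id="run_databricks_job",
        databricks_conn_id="databricks_default",  # assumes a configured connection
        job_id=12345,                             # hypothetical Databricks job id
    )

    quality_check = PythonOperator(
        task_id="quality_check",
        python_callable=check_row_count,
    )

    # Run the quality gate only after the Databricks job succeeds.
    run_databricks_job >> quality_check
```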
What Makes You Unique:
- You have 3+ years of experience with Linux scripting and SQL.
- You bring strong Python skills and proven experience in designing and building data pipelines.
- You have hands-on experience with Apache Airflow, including DAG authoring, scheduling, and monitoring.
- You are experienced in creating and operating Databricks Jobs, including notebooks, clusters, and job orchestration.
- You have a background in Big Data platforms (e.g., Cloudera/Hadoop) and/or data warehousing.
- You are knowledgeable in batch and (near) real-time data processing patterns.
- You demonstrate excellent written and spoken English skills.
Nice to Have:
- You have experience with Spark and SQL.
- You have worked across both on-premises environments and at least one major cloud platform (Azure, AWS, or GCP).
- You are familiar with streaming technologies (e.g., Kafka), lakehouse concepts, and CI/CD for data.
Job code: P3
Job classification (AT): 11 (Global Level)