58 Hadoop Jobs in Mexico
Senior (Big) Data Engineer
Posted 7 days ago
Job Description
Key responsibilities
+ Architect, design, and optimize scalable big data solutions for batch and real-time processing.
+ Develop and maintain ETL/ELT pipelines to ingest, transform, and synchronize data from diverse sources.
+ Integrate data from cloud applications, on-prem systems, APIs, and streaming sources into centralized data repositories.
+ Implement and manage **data lake** and **data warehouse** solutions on cloud infrastructure.
+ Ensure **data consistency, quality, and compliance** with governance and security standards.
+ Collaborate with data architects, data engineers, and business stakeholders to align integration solutions with organizational needs.
Core qualifications
+ Proficiency in **Python, Java, or Scala** for big data processing.
+ **Big Data Frameworks:** Strong expertise in **Apache Spark**, Hadoop, Hive, Flink, or Kafka.
+ Hands-on experience with data modeling, data lakes (**Delta Lake**, Iceberg, Hudi), and data warehouses (**Snowflake**, Redshift, BigQuery).
+ **ETL/ELT Development:** Expertise with tools like Informatica, Talend, SSIS, Apache NiFi, dbt, or custom Python-based frameworks (see the sketch after this list).
+ **APIs & Integration:** Strong hands-on experience with REST, SOAP, and GraphQL APIs, and with integration platforms (MuleSoft, Dell Boomi, SnapLogic).
+ **Data Pipelines:** Proficiency in batch and real-time integration (Kafka, AWS Kinesis, Azure Event Hubs, or GCP Pub/Sub).
+ **Databases:** Deep knowledge of SQL (Oracle, PostgreSQL, SQL Server) and NoSQL (MongoDB, Cassandra, DynamoDB) systems.
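For context, a minimal sketch of the kind of batch ETL job these qualifications describe, using PySpark; the bucket paths, column names, and app name are invented for illustration, and a real pipeline would add validation, logging, and orchestration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest: read raw JSON events from a landing zone (path is illustrative).
raw = spark.read.json("s3a://landing/orders/2024/*.json")

# Transform: enforce types, derive a partition column, drop bad rows and dupes.
orders = (
    raw.withColumn("amount", F.col("amount").cast("decimal(12,2)"))
       .withColumn("order_date", F.to_date("created_at"))
       .dropna(subset=["order_id", "amount"])
       .dropDuplicates(["order_id"])
)

# Load: write partitioned Parquet into the curated zone of the data lake.
orders.write.mode("overwrite").partitionBy("order_date").parquet("s3a://curated/orders/")
```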
Preferred experience
+ Expertise with at least one major cloud platform (AWS, Azure, GCP).
+ Experience with data services such as AWS EMR/Glue, GCP Dataflow/Dataproc, or Azure Data Factory.
+ Familiarity with containerization (Docker) and orchestration (Kubernetes).
+ Knowledge of CI/CD pipelines for data engineering.
+ Experience with OCI and Oracle Database (including JSON/REST, sharding) and/or Oracle microservices tooling.
How we'll assess
+ Systems design interview: architect a scalable service; justify data models, caching, and failure handling.
+ Coding exercise: implement and optimize a core algorithm/data‑structure problem; discuss trade‑offs.
+ Code review: evaluate readability, testing, error handling, and security considerations.
+ Practical discussion: walk through a past end‑to‑end project, metrics/SLOs, incidents, and learnings.
Career Level - IC3
**About Us**
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity.
We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing or by calling in the United States.
Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Senior AI/Big Data Engineer
Posted 2 days ago
Job Description
Key responsibilities
+ Architect, design, and optimize scalable big data solutions for batch and real-time processing.
+ Develop and maintain ETL/ELT pipelines to ingest, transform, and synchronize data from diverse sources.
+ Integrate data from cloud applications, on-prem systems, APIs, and streaming sources into centralized data repositories.
+ Implement and manage **data lake** and **data warehouse** solutions on cloud infrastructure.
+ Ensure **data consistency, quality, and compliance** with governance and security standards.
+ Collaborate with data architects, data engineers, and business stakeholders to align integration solutions with organizational needs.
Core qualifications
+ Proficiency in **Python, Java, or Scala** for big data processing.
+ **Big Data Frameworks:** Strong expertise in **Apache Spark**, Hadoop, Hive, Flink, or Kafka.
+ Hands-on experience with data modeling, data lakes (**Delta Lake**, Iceberg, Hudi), and data warehouses (**Snowflake**, Redshift, BigQuery).
+ **ETL/ELT Development:** Expertise with tools like Informatica, Talend, SSIS, Apache NiFi, dbt, or custom Python-based frameworks.
+ **APIs & Integration:** Strong hands-on experience with REST, SOAP, and GraphQL APIs, and with integration platforms (MuleSoft, Dell Boomi, SnapLogic).
+ **Data Pipelines:** Proficiency in batch and real-time integration (Kafka, AWS Kinesis, Azure Event Hubs, or GCP Pub/Sub).
+ **Databases:** Deep knowledge of SQL (Oracle, PostgreSQL, SQL Server) and NoSQL (MongoDB, Cassandra, DynamoDB) systems.
Preferred experience
+ Expertise with at least one major cloud platform (AWS, Azure, GCP).
+ Experience with data services such as AWS EMR/Glue, GCP Dataflow/Dataproc, or Azure Data Factory.
+ Familiarity with containerization (Docker) and orchestration (Kubernetes).
+ Knowledge of CI/CD pipelines for data engineering.
+ Experience with OCI and Oracle Database (including JSON/REST, sharding) and/or Oracle microservices tooling.
How we'll assess
+ Systems design interview: architect a scalable service; justify data models, caching, and failure handling.
+ Coding exercise: implement and optimize a core algorithm/data‑structure problem; discuss trade‑offs.
+ Code review: evaluate readability, testing, error handling, and security considerations.
+ Practical discussion: walk through a past end‑to‑end project, metrics/SLOs, incidents, and learnings.
Career Level - IC3
**About Us**
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity.
We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing or by calling in the United States.
Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Senior AI/Big Data Engineer
Posted 2 days ago
Job Description
We are seeking a highly skilled **Senior Big Data Engineer** to design, develop, and manage enterprise-grade data integration solutions. The ideal candidate will have extensive experience with ETL/ELT processes, API-driven integrations, and enterprise data platforms.
Key responsibilities
+ Architect, design, and optimize scalable big data solutions for batch and real-time processing.
+ Develop and maintain ETL/ELT pipelines to ingest, transform, and synchronize data from diverse sources.
+ Integrate data from cloud applications, on-prem systems, APIs, and streaming sources into centralized data repositories.
+ Implement and manage **data lake** and **data warehouse** solutions on cloud infrastructure.
+ Ensure **data consistency, quality, and compliance** with governance and security standards.
+ Collaborate with data architects, data engineers, and business stakeholders to align integration solutions with organizational needs.
Core qualifications
+ Proficiency in **Python, Java, or Scala** for big data processing.
+ **Big Data Frameworks:** Strong expertise in **Apache Spark**, Hadoop, Hive, Flink, or Kafka.
+ Hands-on experience with data modeling, data lakes (**Delta Lake**, Iceberg, Hudi), and data warehouses (**Snowflake**, Redshift, BigQuery).
+ **ETL/ELT Development:** Expertise with tools like Informatica, Talend, SSIS, Apache NiFi, dbt, or custom Python-based frameworks.
+ **APIs & Integration:** Strong hands-on experience with REST, SOAP, and GraphQL APIs, and with integration platforms (MuleSoft, Dell Boomi, SnapLogic).
+ **Data Pipelines:** Proficiency in batch and real-time integration (Kafka, AWS Kinesis, Azure Event Hubs, or GCP Pub/Sub).
+ **Databases:** Deep knowledge of SQL (Oracle, PostgreSQL, SQL Server) and NoSQL (MongoDB, Cassandra, DynamoDB) systems.
Preferred experience
+ Expertise with at least one major cloud platform (AWS, Azure, GCP).
+ Experience with data services such as AWS EMR/Glue, GCP Dataflow/Dataproc, or Azure Data Factory.
+ Familiarity with containerization (Docker) and orchestration (Kubernetes).
+ Knowledge of CI/CD pipelines for data engineering.
+ Experience with OCI and Oracle Database (including JSON/REST, sharding) and/or Oracle microservices tooling.
How we'll assess
+ Systems design interview: architect a scalable service; justify data models, caching, and failure handling.
+ Coding exercise: implement and optimize a core algorithm/data‑structure problem; discuss trade‑offs.
+ Code review: evaluate readability, testing, error handling, and security considerations.
+ Practical discussion: walk through a past end‑to‑end project, metrics/SLOs, incidents, and learnings.
Career Level - IC4
**About Us**
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity.
We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing or by calling in the United States.
Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Data Engineer
Posted 14 days ago
Job Description
We are looking for:
**Data Engineer**
Main purpose:
Develop and optimize data workflows as part of an information ecosystem, aiming to create reliable and efficient data solutions that support enterprise reporting, self-service analytics, and machine learning initiatives.
**Total rewards:**
Benefits above the statutory minimum.
**Requirements:**
+ Bachelor's degree in Computer Science, Data Science, Information Systems, or a related field.
+ 4-5 years of professional experience in data engineering, analytics engineering, or data platform development.
+ Hands-on experience with analytics tools and platforms such as Snowflake or Informatica Cloud.
+ Expert in SQL, SnowSQL scripting, ETL, and data warehousing (see the sketch after this list).
+ Experience with conceptual architecture, data integration design, and wireframing.
+ Familiarity with DevOps and Agile technology environments (preferred).
+ English (advanced level).
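As a point of reference, a minimal sketch of the kind of SnowSQL/ELT step the requirements above mention, using the official Snowflake Python connector; the account, credentials, and table names are placeholders, not a real configuration.

```python
import snowflake.connector

# Placeholder connection details; real deployments would use a secrets manager.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # A typical ELT step: merge freshly staged rows into a reporting table.
    cur.execute("""
        MERGE INTO reporting.daily_sales t
        USING staging.sales_raw s ON t.sale_id = s.sale_id
        WHEN MATCHED THEN UPDATE SET t.amount = s.amount
        WHEN NOT MATCHED THEN INSERT (sale_id, amount, sale_date)
            VALUES (s.sale_id, s.amount, s.sale_date)
    """)
finally:
    conn.close()
```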
Knowledge of:
+ AI/ML platforms such as Databricks, SageMaker, AutoML, or TensorFlow.
+ Data integration and ELT tools such as Informatica Cloud, Fivetran, or Azure Data Factory.
+ BI tools: Power BI, Tableau, or MicroStrategy.
+ Working with SAP ERP as a data source.
+ Cloud platforms such as Azure, AWS, or GCP, especially storage, compute, and orchestration services.
Skills/Competencies:
+ Analytical and strategic thinking.
+ Executive communication.
+ Results orientation and problem solving.
+ Adaptability and business acumen.
**Company description:**
We invite you to join Grupo Peñafiel, a Keurig Dr Pepper (KDP) company. KDP is a leading producer and distributor of cold and hot beverages that meet consumption needs anytime, anywhere, with operations in the United States, Canada, Mexico, Europe, and Asia. We have more than 29,000 employees, more than 150 distribution centers, and 30 plants.
In Mexico we operate under the name Grupo Peñafiel, after our most recognizable brand among Mexican consumers, with more than 70 years of history. We have a strong portfolio of carbonated and non-carbonated beverages with leading brands such as Peñafiel Agua Mineral, Naranjada, Limonada, Peñafiel Sabores, Clamato, and Squirt, among others.
Would you like to be part of this great team?
Keurig Dr Pepper is an equal opportunity employer and affirmatively seeks diversity in its workforce. Keurig Dr Pepper recruits qualified applicants and advances in employment its employees without regard to race, color, religion, gender, sexual orientation, gender identity, gender expression, age, disability or association with a person with a disability, medical condition, genetic information, ethnic or national origin, marital status, veteran status, or any other status protected by law.
Data Engineer
Posted 19 days ago
Job Description
ABOUT YOU
To succeed on this team, you should have hands-on experience with cloud platforms (ideally Google Cloud Platform), enjoy working with data pipelines, and be comfortable writing clean SQL and Python code. You're detail-oriented, collaborative, and eager to grow your skills in data engineering.
YOUR DAY-TO-DAY (aka Responsibilities)
+ Support the design and development of big data ETL pipelines using GCP services, DBT, Python, and SQL (see the sketch after the qualification lists below).
+ Help build and maintain data ingestion processes landing data into Google Storage, BigQuery, and Snowflake.
+ Assist in maintaining and improving the analytics data warehouse on BigQuery and Snowflake.
+ Collaborate with engineers and analysts to ensure data quality, consistency, and availability.
+ Troubleshoot and resolve issues in data pipelines and workflows.
YOU HAVE (Required Qualifications, Skills, and Experience)
+ 2-4 years of experience in data engineering or a related field.
+ Strong SQL skills and proficiency in Python.
+ Working knowledge of cloud data environments (preferably GCP).
+ Familiarity with tools like DBT, Airflow/Composer, or similar.
+ Basic understanding of data warehouse concepts (relational and dimensional modeling).
+ Good problem-solving and troubleshooting skills.
+ Experience with Git and version control.
+ Bachelor's degree in Computer Science, Engineering, or a related field.
+ Professional proficiency in English and Spanish.
DESIRED QUALIFICATIONS, SKILLS, AND EXPERIENCE
+ Hands-on experience with Snowflake, BigQuery, and Pub/Sub.
+ Exposure to CI/CD workflows and deployment practices.
+ Interest in adapting quickly to new technologies.
+ Strong communication and collaboration skills.
+ Knowledge of or interest in Univision/ViX content is a plus!
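For a concrete picture of the ingestion work described above, here is a minimal sketch that lands Parquet files from Google Cloud Storage into BigQuery with the official google-cloud-bigquery client; the project, bucket, and table IDs are invented.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # placeholder project

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

# Append one day's Parquet files into the warehouse table.
load_job = client.load_table_from_uri(
    "gs://my-landing-bucket/events/dt=2024-06-01/*.parquet",
    "my-analytics-project.analytics.events",
    job_config=job_config,
)
load_job.result()  # block until the load job finishes
```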
ELIGIBILITY REQUIREMENTS
+ Employment/education will be verified.
+ Applicants must be currently authorized to work in Mexico on a full-time basis.
Univision is an equal opportunity employer committed to equity and equality of opportunity. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.
About TelevisaUnivision
TelevisaUnivision is the world's leading Spanish-language media company. Powered by the largest library of owned Spanish-language content and a prolific production capability, TelevisaUnivision is the top producer of original content in Spanish across news, sports and entertainment verticals. This original content powers all of TelevisaUnivision's platforms, which include market-leading broadcast networks Univision, Las Estrellas, Canal 5 and UniMás, and a portfolio of 38 cable networks, which include TUDN, Galavisión, Distrito Comedia and TL Novelas. The company also operates the leading Mexican movie studio, Videocine, and owns and operates the largest Spanish-language audio platform in the U.S. across 35 terrestrial stations and the Uforia digital platform. TelevisaUnivision is also the owner of ViX, the largest Spanish-language streaming platform in the world. For more information, please visit televisaunivision.com .
Data Engineer
Posted 27 days ago
Job Description
+ Utilize Google Cloud Platform & Data Services to modernize legacy applications.
+ Understand technical business requirements and define engineering solutions that align with Ford Motor & Credit Companies' patterns and standards.
+ Collaborate with global engineering teams to define the analytics cloud platform strategy and build cloud analytics solutions within the enterprise data factory.
+ Provide engineering leadership in the design and delivery of the new unified data platform on GCP.
+ Understand complex data structures in the analytics space as well as interfacing application systems. Develop and maintain conceptual, logical, and physical data models. Design and guide product teams on subject areas and data marts to deliver integrated data solutions.
+ Provide technical guidance for optimal solutions considering regional Regulatory needs.
+ Provide technical assessments on solutions and make recommendations that meet business needs and align with architectural governance and standard.
+ Guide teams through the enterprise processes and advise teams on cloud-based design, development, and data mesh architecture.
+ Provide advisory and technical consulting across all initiatives including PoCs, product evaluations and recommendations, security, architecture assessments, integration considerations, etc.
+ Leverage cloud AI/ML Platforms to deliver business and technical requirements.
**Required Skills and Selection Criteria:**
+ In-depth understanding of Google's product technology (or other cloud platform) and underlying architectures
+ 5+ years of analytics application development experience required
+ 5+ years of SQL development experience
+ 3+ years of Cloud experience (GCP preferred) with solution designed and implemented at production scale
+ Experience working in GCP-based big data deployments (batch/real-time) leveraging Terraform, BigQuery, Bigtable, Google Cloud Storage, Pub/Sub, Data Fusion, Dataflow, Dataproc, Cloud Build, Airflow, Cloud Composer, etc., or equivalent technology (see the sketch after this list).
+ Good understanding of domain-driven design and data mesh principles.
+ Strong understanding of DevOps principles and practices, including continuous integration and deployment (CI/CD) and automated testing and deployment pipelines.
+ Good understanding of cloud security best practices and familiarity with security tools and techniques such as Identity and Access Management (IAM), encryption, and network security. Strong understanding of microservices architecture.
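For illustration, a minimal sketch of the Composer/Airflow-style orchestration this list names, scheduling a BigQuery transformation; the DAG id, dataset names, and SQL are hypothetical, and it assumes the Google provider package on Airflow 2.x.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_sales_rollup",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # One step of the pipeline: run a scheduled BigQuery transformation.
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_sales",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE marts.daily_sales AS
                    SELECT order_date, SUM(amount) AS total_amount
                    FROM curated.orders
                    GROUP BY order_date
                """,
                "useLegacySql": False,
            }
        },
    )
```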
**Nice to Have**
+ Bachelor's degree in Computer Science/Engineering, Data Science, or a related field.
+ Strong leadership, communication, interpersonal, organizing, and problem-solving skills
+ Good presentation skills with ability to communicate architectural proposals to diverse audiences (user groups, stakeholders, and senior management).
+ Experience in the banking and financial regulatory reporting space.
+ Ability to work on multiple projects in a fast-paced, dynamic environment.
+ Exposure to multiple, diverse technologies, platforms, and processing environments.
+ Google Professional Cloud Data Engineering certification.
+ Experience migrating legacy analytics applications to a cloud platform and driving business adoption of these platforms to build insights and dashboards, with deep knowledge of traditional and cloud data lake, warehouse, and mart concepts.
**Requisition ID**: 49474
Data Engineer
Posted 15 days ago
Job Description
**Job Description**
**The Future Begins Here:** At Takeda, we are creating a future-ready organization that uses data and digital to meet the needs of patients, our people, and the planet. We need your help to make this happen. Join our Innovation Capability Center (ICC) in Mexico City, Mexico.
**Our team is growing and for this we need bright minds with creativity and flexibility. What talent do you have?**
**At Takeda's ICC we Unite in Diversity**
Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators' journey at Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.
**The Opportunity**
As a Data Engineer, you will be a key player in transforming the way various teams work with data across all production sites of the Global Manufacturing and Supply Organization (GMS).
In doing so, you will help improve our product quality and reduce losses while increasing efficiency, which has a direct impact on our ability to serve our patients with vital products.
**Responsibilities**
+ Contribute to the data engineering of existing and new IT systems to design, analyze, and implement complex manufacturing data-driven solutions, with an impact on the daily operation of our manufacturing processes and facilities.
+ Enable data analytics (process trending, process modeling, real-time analytics, and predictive analytics) in a GxP environment from an IT perspective for all GMS sites.
+ Develop and maintain scalable data pipelines using AWS-native technologies and Databricks to support increasing data sources, volumes, and complexity (see the sketch after this list).
+ Drive identification of technical issues, solve problems, and escalate appropriately.
+ Ensure digital products and platforms are built efficiently, scale well, and use state-of-the-art technology.
+ Provide guidance to business partners and deliver complex technical changes and implementations.
+ Follow company and departmental policies and procedures, and ensure documentation according to the Takeda Quality Management Systems (QMS), Software Development Life Cycle, and Project Life Cycle standards.
+ Collaborate with analytics and business teams to improve data models that enhance business intelligence tools and dashboards, fostering data-driven decision-making across the organization.
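As an illustration of the pipeline work described above, a minimal sketch of a Databricks Auto Loader ingest; the S3 paths and table name are placeholders, and it assumes a Databricks runtime where `spark` is already defined.

```python
# Incrementally discover and read new JSON files as they land (Auto Loader).
raw_stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "s3://gms-meta/schemas/batch_events")
    .load("s3://gms-landing/batch-events/")
)

# Write to a governed Delta table; availableNow processes the backlog and stops.
(
    raw_stream.writeStream
    .option("checkpointLocation", "s3://gms-meta/checkpoints/batch_events")
    .trigger(availableNow=True)
    .toTable("gms_curated.batch_events")
)
```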
**Skills and Qualifications**
+ Bachelor's degree in Computer Science or in a technical/natural science (with a focus on IT), or equivalent.
+ 1-3 years of experience in data engineering or a related field.
+ In-depth knowledge of SQL and relational databases.
+ Strong expertise in data modeling and modern database technologies (Databricks, Oracle, MongoDB).
+ Basic understanding of batch-based manufacturing processes.
+ Understanding of shop-floor systems (MES, ERP, LIMS, Historian, etc.).
+ Understanding of Agile/Scrum methodologies.
+ Understanding of good engineering practices (DevSecOps, source-code versioning).
+ Excellent written and verbal communication skills, with the ability to collaborate effectively with cross-functional teams.
+ Fluency in English.
**Nice to Have**
+ Experience with one or more of the following analytics applications: Biovia Discoverant, Simca Online, PAS-X Savvy, Siemens gPROMS.
+ Experience working in a highly regulated industry (e.g., the pharmaceutical or chemical industry).
+ Experience with AWS (S3, EC2, Terraform).
+ Experience in Computer Systems Validation (GAMP).
+ Understanding of data science workflows, including machine learning and/or deep learning.
**Locations**
MEX - Santa Fe
**Worker Type**
Employee
**Worker Sub-Type**
Regular
**Time Type**
Full time
Principal Data Engineer
Posted 16 days ago
Job Description
Principal Data Engineer
Location: Guadalajara, Jalisco
Your mission in the Data Engineering team:
At TouchTunes, your work matters. We are seeking a highly skilled and experienced Principal Data Platform Engineer to lead the design, development, and optimization of our data platform. This role is ideal for someone who thrives at the intersection of cloud architecture, big data engineering, and enabling AI/ML capabilities at scale. You will partner closely with data scientists, analysts, DevOps, and engineering teams to drive scalable, secure, and high-performing data solutions.
Your day-to-day:
· Architect and build scalable data platforms using AWS services such as S3, Glue, Lambda, Redshift, EMR, and CloudWatch.
· Design and optimize end-to-end ETL/ELT pipelines using Databricks, PySpark, Python, and SQL to support batch and real-time data workflows.
· Define, build, and maintain data models and warehouse structures optimized for analytics and ML workloads.
· Implement and maintain CI/CD pipelines for data workflows and ML models using Jenkins, Git, and other DevOps tools.
· Build and support real-time data pipelines using tools such as Kafka, Kinesis, or Structured Streaming (see the sketch after this list).
· Drive the adoption of DataOps and MLOps best practices, ensuring robust testing, observability, monitoring, and rollback strategies.
· Partner with machine learning engineers to enable scalable model training, deployment, and monitoring pipelines.
· Establish and enforce data quality, governance, security, and cataloging standards (e.g., Unity Catalog).
· Evaluate and recommend new tools and frameworks that enhance the scalability and reliability of the data ecosystem.
· Mentor junior engineers, promote engineering excellence, and participate in architectural decision-making.
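To make the real-time bullet concrete, a minimal sketch of a Kafka-to-lake Structured Streaming job; the broker address, topic, and bucket paths are invented, and it assumes the Spark Kafka connector is on the classpath.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("plays_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
    .option("subscribe", "song-plays")                  # placeholder topic
    .load()
    # Kafka delivers bytes; decode the payload for downstream parsing.
    .select(F.col("value").cast("string").alias("json_payload"), "timestamp")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://lake/raw/song_plays/")
    .option("checkpointLocation", "s3a://lake/chk/song_plays/")
    .start()
)
```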
Your profile:
· 8+ years of experience in data engineering, with 3+ years in a principal or lead-level role.
· Strong experience with AWS data services (e.g., S3, Glue, Lambda, Redshift, EMR).
· Deep expertise in Databricks (clusters, jobs, Delta Lake, Unity Catalog, notebooks).
· Proficiency in Python and PySpark for developing large-scale data processing jobs.
· Advanced SQL skills, including complex joins, window functions, CTEs, and performance tuning.
· Hands-on experience with Jenkins for CI/CD automation in data/ML workflows.
· Solid understanding of DataOps/MLOps practices, including version control, testing, monitoring, and deployment of data pipelines and models.
· Experience with orchestration tools such as Airflow, dbt, or similar.
· Familiarity with data security, compliance, and governance frameworks.
· Strong problem-solving and communication skills; ability to lead cross-functional technical discussions.
Preferred Qualifications
· Familiarity with AI development tools and environments such as Cursor and other modern code generation, debugging, or co-piloting platforms.
· Experience with containerization and orchestration (e.g., Docker, Kubernetes).
· AWS or Databricks certifications are a plus.
What’s in it for you:
At TouchTunes, your work impacts our customers as part of interesting projects that transform the in-venue entertainment industry. We foster open communication and collaboration across all levels, with approachable leaders that value all voices and empower you to excel and innovate. Our team thrives in an environment where fun meets hard work, and everyone is encouraged to be their authentic selves.
TouchTunes
At TouchTunes, the world’s largest in-venue interactive music and entertainment platform, we’re all about sparking joy and human connection. That moment in a bar where someone at the next table cues the jukebox to play your favorite song? That’s what we do – our platform plays millions of songs daily – but we’re also so much more! We just bought the largest soft-tipped darts business in the United States, we’re reinventing our mobile app for launch later this year, and we’re operating nearly 100,000 connected devices across North America and Europe. We’re innovators, strategic thinkers, people making the future possible today – and what a great time to join our team.
TouchTunes is a proud ally of QueerTech and DiversityJobs.
Analytics Data Engineer
Today
Job Description
At Teradata, we believe that people thrive when empowered with better information. That's why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers, and our customers' customers, to make better, more confident decisions. The world's top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise.
**What You'll Do**
+ As an Analytics Data Engineer on the CTO Data & Analytics team, you will play a pivotal role in helping to architect how data is leveraged throughout the company.
+ You will help design, build, and maintain a scalable data infrastructure that powers business intelligence and analytics initiatives. This role bridges the gap between data engineering and analytics, ensuring data is accessible, reliable, and optimized for analytical and business consumption.
+ You will design and develop ETL pipelines to ingest data from multiple sources into our data warehouse and analytical platforms to support our Customer Engagement Insights initiative.
+ The Analytics Data Engineer will build and maintain data models optimized for analytical queries and business reporting and focus on telemetry insights that will ultimately help drive business decisions.
+ You will develop a strong understanding of the raw data to ensure accuracy when creating business-user views.
+ You will collaborate with data analysts and business intelligence teams to understand requirements and translate them into technical solutions.
+ You will partner with our corporate Data Architecture team (AI Data Hub) to develop a knowledge base for AI agents and more.
+ Success will be determined through the launch and deployment of a business-centric data environment that enables telemetry and customer engagement insights while leveraging an AI data architecture (a sketch of the kind of analytical query work involved follows this list).
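As a small, self-contained illustration of the analytical-query side of this work, the sketch below computes a month-over-month engagement delta with a SQL window function; it uses SQLite purely so it runs anywhere, and the table and metrics are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE engagement (account TEXT, month TEXT, sessions INTEGER);
    INSERT INTO engagement VALUES
        ('acme', '2024-01', 40), ('acme', '2024-02', 55),
        ('zenith', '2024-01', 10), ('zenith', '2024-02', 8);
""")

# LAG() gives the prior month's value per account, enabling a MoM delta.
rows = conn.execute("""
    SELECT account, month, sessions,
           sessions - LAG(sessions) OVER (
               PARTITION BY account ORDER BY month
           ) AS mom_change
    FROM engagement
    ORDER BY account, month
""").fetchall()

for row in rows:
    print(row)  # e.g. ('acme', '2024-02', 55, 15)
```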
**Who You'll Work With**
+ This position resides within the Office of the Chief Technology Officer, yet it will have far-reaching connections with the corporate enterprise architecture team and data stewards throughout the company.
+ As an Analytics Data Engineer, you will be expected to partner with other data owners and their representative organizations (Enterprise Architecture, Product, Marketing, Services, GTM, and OCTO).
+ This role will report to the Senior Director of Data, Analytics and Technical Strategy.
**What Makes You a Qualified Candidate**
+ Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field (or equivalent experience).
+ 3-5 years of experience in data analytics, data modeling, and data architecture.
+ Strong proficiency in SQL and experience with modern data warehouses.
+ Programming experience in Python and/or another language (Java, Scala, etc.)
+ AI & machine learning background and understanding of AI architecture and knowledge bases.
+ Strong skills in SQL for data manipulation.
+ Strong written and verbal communication skills.
**What you will bring**
+ Exposure to data normalization, indexing, and data relationship concepts.
+ Strong proficiency in SQL for query optimization, data manipulation, and database management
+ Background in data structures and data architectures.
+ Proficiency with Microsoft Excel and PowerBI/Tableau.
+ Interest in Analytics, AI, and Large Language Models (LLM).
+ Willingness to collaborate and engage with cross-organizational teams.
Why We Think You'll Love Teradata
We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are committed to actively working to foster an inclusive environment that celebrates people for all of who they are.