158 Google Cloud Professional Data Engineer Job Openings in Mexico

Executive Director - Architecture & Data

México, Mexico Confidential Company

Yesterday

Job Description

We are seeking a senior executive to lead the strategy, design, implementation, and governance of technology architectures and enterprise data management.


This role ensures that IT solutions remain scalable, secure, efficient, and innovative to support long-term business objectives.


Key responsibilities:

  • Define and oversee the technology architecture strategy, establishing the vision and framework to guide the development and evolution of the enterprise ecosystem.
  • Promote the adoption of emerging technologies and business models to enhance customer experience and generate value for the organization.
  • Supervise project execution and daily operations, ensuring performance indicators are consistently achieved.
  • Lead and develop multidisciplinary teams, fostering collaboration, innovation, and alignment with industry best practices and market trends.


What we're looking for:

  • 10-15 years of leadership experience in the technology industry.
  • Master's degree
  • Bilingual: English and Spanish
  • Proven ability to manage cross-functional teams across the organization.
  • Deep knowledge of enterprise technology market trends, and complex problem-solving.
  • Strong executive presence, with excellent communication and strategic thinking skills.


This role offers the opportunity to influence the technology strategy of an international company, delivering innovative, customer-focused solutions that create tangible value for the business.



Data Engineer – Cloud & ETL Pipelines

64000 Monterrey, Nuevo León ITC Infotech

Posted 15 days ago

Job Description

About Us:

ITC Infotech is a leading global technology services and solutions provider, led by Business and Technology Consulting. ITC Infotech provides business-friendly solutions to help clients succeed and be future-ready, by seamlessly bringing together digital expertise, strong industry specific alliances and the unique ability to leverage deep domain expertise from ITC Group businesses. We provide technology solutions and services to enterprises across industries such as Banking & Financial Services, Healthcare, Manufacturing, Consumer Goods, Travel and Hospitality, through a combination of traditional and newer business models, as a long-term sustainable partner.


About the Role

We are looking for a skilled Data Engineer to join our data team based in Monterrey, Mexico. In this role, you will be responsible for designing, building, and maintaining robust, scalable ETL/ELT pipelines and data infrastructure across cloud environments. This position is ideal for someone who enjoys working with big data technologies, transforming raw data into actionable insights, and collaborating across technical and business teams.


Key Responsibilities

  • Design, develop, and manage scalable ETL/ELT pipelines for ingesting and transforming structured and unstructured data from multiple sources.
  • Build and maintain data lakes and data warehouses using modern cloud platforms (e.g., Snowflake, Redshift, BigQuery, Azure Synapse).
  • Write clean, efficient, and reusable code in Python, SQL, or Scala to process and transform raw data into curated datasets.
  • Implement and monitor data quality checks, validation frameworks, and alerting systems to ensure high-quality data delivery.
  • Work closely with data analysts, data scientists, and business stakeholders to understand data needs and deliver high-performing data solutions.
  • Manage batch and real-time data pipelines using orchestration tools like Apache Airflow, Azure Data Factory, or Kafka.
  • Follow best practices in data security, governance, and compliance with relevant standards and regulations.
  • Support CI/CD pipelines and participate in version control practices using Git, Jenkins, or similar tools.
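
The data-quality bullet above can be illustrated with a minimal sketch. The field names and validation rules here are hypothetical, invented for the example, not part of the posting or any specific framework:

```python
# Minimal data-quality check: split incoming rows into valid and rejected,
# recording a reason per rejection (field names and rules are illustrative).

def validate_rows(rows, required_fields=("id", "amount")):
    """Return (valid, rejected) where rejected pairs each row with a reason."""
    valid, rejected = [], []
    for row in rows:
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            rejected.append((row, f"missing fields: {missing}"))
        elif not isinstance(row["amount"], (int, float)):
            rejected.append((row, "amount is not numeric"))
        else:
            valid.append(row)
    return valid, rejected

rows = [
    {"id": 1, "amount": 9.5},
    {"id": 2, "amount": None},    # fails the null check
    {"id": 3, "amount": "oops"},  # fails the type check
]
valid, rejected = validate_rows(rows)
```

In a real pipeline, the rejected rows would typically be routed to a quarantine table and surfaced through the alerting system the bullet mentions.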


Required Skills & Qualifications

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 3+ years of experience in data engineering or similar technical roles.
  • Strong proficiency in:
      • SQL and data modeling
      • ETL tools (e.g., Airflow, dbt, Talend, Azure Data Factory)
      • Programming in Python or Scala
      • Cloud platforms: AWS, Azure, or GCP
  • Experience with data warehouse and data lake architecture.
  • Familiarity with tools like Kafka, Spark, Databricks, or Snowflake.
  • Solid understanding of version control systems and CI/CD pipelines.


Preferred Qualifications

  • Experience in Agile/Scrum environments.
  • Exposure to machine learning pipelines or advanced analytics workflows.
  • Knowledge of data governance, GDPR, or data privacy frameworks.
  • Professional certifications in cloud data services (AWS/GCP/Azure) are a plus.
  • Intermediate to advanced English proficiency (spoken and written).


Why Join Us?

  • Competitive salary and benefits package
  • Career growth opportunities within a global data team
  • Exposure to cutting-edge cloud and big data technologies
  • Flexible hybrid work model (based in Monterrey)
  • Collaborative and inclusive team culture



ITC Infotech is an Equal Opportunity Employer. We believe that no one should be discriminated against because of their differences, such as age, disability, ethnicity, gender, gender identity and expression, religion, or sexual orientation. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. ITC infotech is committed to providing veteran employment opportunities to our service men and women.



Data Engineering Engineer

Takeda Pharmaceuticals

Posted 9 days ago

Job Description

By clicking the "Apply" button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.
**Job Description**
**Objective / Purpose**
The Data Engineer will be a crucial member of the Data Science Institute, contributing to the development and optimization of data pipelines and architectures that support advanced data science initiatives. This role will enhance data accessibility, reliability, and efficiency across the organization.
**Accountabilities**
+ Assist in end-to-end data flow engineering and software development strategies.
+ Develop and manage data pipelines for extracting, transforming, and loading (ETL) data from various sources into data warehouses or data lakes.
+ Monitor the performance of data pipelines and infrastructure, identify bottlenecks, and optimize processes to improve efficiency and reliability.
+ Implement data observability practices to ensure data quality and publish relevant metrics to a catalog or repository.
+ Seek out new perspectives and opportunities to learn and apply skills to develop new talents.
+ Stay alert to industry trends, designs, and alternate views and approaches across technology, science, and operations.
+ Engage in software development, development tools, algorithms, and technologies related to data architectures, data engineering, and data science.
+ Utilize experience with complex analytic areas with diverse data and high dimensionality in life sciences or similarly complex areas.
+ Preferred experience with programming languages like Scala, Java, or Python and tools like Apache Spark, Apache Kafka, or Apache Airflow for building scalable and efficient pipelines.
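As a rough illustration of the data observability accountability listed above (computing quality metrics and publishing them to a catalog or repository), here is a minimal pure-Python sketch. The dataset name, metric set, and in-memory catalog are invented for the example and do not reflect Takeda's actual stack:

```python
# Compute simple observability metrics for a dataset and publish them
# to a catalog (here an in-memory dict stands in for a metrics repository).
from datetime import datetime, timezone

catalog = {}  # illustrative stand-in for a metrics catalog/repository

def publish_metrics(dataset_name, records):
    """Derive row count and null ratio, stamp them, and register them."""
    metrics = {
        "row_count": len(records),
        "null_ratio": sum(1 for r in records if None in r.values())
                      / max(len(records), 1),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    catalog[dataset_name] = metrics
    return metrics

m = publish_metrics("sales_daily", [{"id": 1, "qty": 3}, {"id": 2, "qty": None}])
```

A downstream consumer could then alert when, say, `null_ratio` for a dataset exceeds an agreed threshold.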
**Education & Competencies (Technical and Behavioral)**
+ Bachelor's degree in Computer Science or equivalent.
+ 2+ years of relevant experience.
+ Foundational knowledge of computer science architecture, algorithms, and interface design.
+ Up-to-date specialized knowledge of data engineering, manipulation, and management technologies to affect change across business units, including an understanding of advanced methodologies of data and software development (life sciences experience preferred).
+ Ability to manipulate voluminous data with different degrees of structuring across disparate sources to build and communicate actionable insights for internal or external parties.
+ Software development skills and ability to contribute to the development of new data engineering and analytic services.
+ Knowledge in vibe coding is required to enhance data processing and pipeline efficiency.
+ Possess an attitude to learn and adapt to new technologies and methodologies, fostering continuous personal and professional growth.
+ **Good knowledge of Apache Spark and Scala or Java is required** for building scalable and efficient data pipelines.
**Locations**
MEX - Santa Fe
**Worker Type**
Employee
**Worker Sub-Type**
Regular
**Time Type**
Full time

Sr Data Engineering Engineer

Takeda Pharmaceuticals

Posted 17 days ago

Job Description

By clicking the "Apply" button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.
**Job Description**
**Objective / Purpose**
The Data Engineer will be a crucial member of the Data Science Institute, contributing to the development and optimization of data pipelines and architectures that support advanced data science initiatives. This role will enhance data accessibility, reliability, and efficiency across the organization.
**Accountabilities**
+ Assist in end-to-end data flow engineering and software development strategies.
+ Develop and manage data pipelines for extracting, transforming, and loading (ETL) data from various sources into data warehouses or data lakes.
+ Monitor the performance of data pipelines and infrastructure, identify bottlenecks, and optimize processes to improve efficiency and reliability.
+ Implement data observability practices to ensure data quality and publish relevant metrics to a catalog or repository.
+ Seek out new perspectives and opportunities to learn and apply skills to develop new talents.
+ Stay alert to industry trends, designs, and alternate views and approaches across technology, science, and operations.
+ Engage in software development, development tools, algorithms, and technologies related to data architectures, data engineering, and data science.
+ Utilize experience with complex analytic areas with diverse data and high dimensionality in life sciences or similarly complex areas.
+ Preferred experience with programming languages like Scala, Java, or Python and tools like Apache Spark, Apache Kafka, or Apache Airflow for building scalable and efficient pipelines.
**Education & Competencies (Technical and Behavioral)**
+ Bachelor's degree in Computer Science or equivalent.
+ 4+ years of relevant experience.
+ Foundational knowledge of computer science architecture, algorithms, and interface design.
+ Up-to-date specialized knowledge of data engineering, manipulation, and management technologies to affect change across business units, including an understanding of advanced methodologies of data and software development (life sciences experience preferred).
+ Ability to manipulate voluminous data with different degrees of structuring across disparate sources to build and communicate actionable insights for internal or external parties.
+ Software development skills and ability to contribute to the development of new data engineering and analytic services.
+ Knowledge in vibe coding is required to enhance data processing and pipeline efficiency.
+ Possess an attitude to learn and adapt to new technologies and methodologies, fostering continuous personal and professional growth.
+ **Good knowledge of Apache Spark and Scala or Java is required** for building scalable and efficient data pipelines.
**Locations**
MEX - Santa Fe
**Worker Type**
Employee
**Worker Sub-Type**
Regular
**Time Type**
Full time

Data Engineering Analyst

Takeda Pharmaceuticals

Posted 9 days ago

Job Description

By clicking the "Apply" button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.
**Job Description**
About the role:
The Data Engineering Engineer will work closely with a multidisciplinary Agile team to build high-quality data pipelines that drive analytic solutions. This role aims to generate insights from connected data, enabling advanced data-driven decision-making capabilities.
How you will contribute:
* Design, develop, optimize, and maintain data architecture and pipelines that adhere to ETL principles and business goals
* Define data requirements, gather and mine large-scale structured and unstructured data, and validate data by running various data tools in the Big Data Environment
* Support Data Scientists in data sourcing and preparation to visualize data and synthesize insights of commercial value
* Lead the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering to improve productivity
* Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes
* Partner with business analysts and solutions architects to develop technical architectures for strategic enterprise projects and initiatives
* Solve complex data problems to deliver insights that help our business achieve their goals
Skills and qualifications:
* Bachelor's degree in Computer Science or equivalent
* 1-2 years of experience in data management
* Applies basic SQL commands and utilizes guidelines to perform simple data queries in relational databases
* Develops and optimizes data pipelines in a Big Data environment, ensuring scalability and efficiency
* Utilizes relevant data tools such as Power BI and Databricks
* Hands on experience with AWS. Knowledge in Azure is a plus
* Supports complex data modeling tasks, collaborating with AI/ML Engineers to enhance data product features
* Implements DevOps practices in data pipeline development and maintenance for continuous integration
* Utilizes programming languages like Python, Scala, or Java to develop data processing applications
* Manages cloud-based data solutions, leveraging platforms like Kubernetes for deployment
* Demonstrates understanding of data structures and algorithms to optimize data storage and retrieval
* Integrates multiple systems and ensures consistent and reliable data flow across various platforms
* Engages in problem-solving to address data-related issues and improve existing systems
* Follows established procedures for data quality control and validation to ensure accuracy and reliability of data sources
* Explores new technologies and tools in the field of data engineering to keep up with industry trends
* Provides mentorship and guidance to less experienced colleagues in the field of data engineering
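The basic SQL querying mentioned in the skills list above might look like the following in practice, using Python's built-in sqlite3 module as a stand-in relational database. The table and sample data are made up for illustration:

```python
# Run a simple aggregate query against an in-memory SQLite database,
# the kind of relational-database work the skills list describes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "north", 120.0), (2, "south", 80.0), (3, "north", 45.5)],
)

# Group-by aggregation: total sales per region, ordered for stable output.
rows = conn.execute(
    "SELECT region, SUM(total) FROM orders GROUP BY region ORDER BY region"
).fetchall()
```

The same GROUP BY pattern carries over directly to warehouse engines like Databricks SQL, only at a much larger scale.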
As an entry-level professional, you will tackle challenges within a focused and manageable scope. Your role is pivotal in applying core theories and concepts to practical scenarios, reflecting a seamless transition from academic excellence to professional application. You will harness standard methodologies to evaluate situations and data, cultivating a budding understanding of industry practices. Typically, this role requires a bachelor or college degree or the equivalent professional experience. Your role is characterized by growth and learning, while your journey within Takeda will evolve, fostering valuable internal relationships.
**Locations**
MEX - Santa Fe
**Worker Type**
Employee
**Worker Sub-Type**
Regular
**Time Type**
Part time

Senior Software Engineer, Data Engineering

Ciudad de México, Distrito Federal Recruiting From Scratch

Today

Job Description

**Who is Recruiting from Scratch**:
Recruiting from Scratch is a premier talent firm that focuses on placing the best product managers, software, and hardware talent at innovative companies. Our team is 100% remote and we work with teams across the United States to help them hire. We work with companies funded by the best investors including Sequoia Capital, Lightspeed Ventures, Tiger Global Management, A16Z, Accel, DFJ, and more.

This role will be a senior engineering role at one of our clients, which range from early stage all the way to IPO / private companies.

Our Client

Our Client is the global leader in ecommerce technology, helping companies seize the full potential of every transaction moment to grow revenue and acquire new customers at scale. Live Nation, Groupon, Staples, Lands' End, Fanatics, UrbanStems, GoDaddy, Vistaprint and HelloFresh are among the more than 2,500 leading global businesses and advertisers that are using our client's solutions to drive more value through every transaction by offering highly relevant messages to their customers at the moment they are most likely to convert.

With their December 2021 Series E raise of USD$325M, our client is expanding rapidly and globally - operating in 19 countries across North America, Europe and the Asia-Pacific region with the largest office in NYC and a major R&D hub in Sydney. With annual revenues of more than US$200M and a vibrant company culture, our client has been listed in ‘Great Places to Work’ in the US and Australia. Their award-winning culture is guided by five core values: Smart with Humility, Own the Outcomes, Force for Good, Conquer New Frontiers, and Enjoy the Ride. These values help them attract, engage, and develop the right talent around the globe and ensure the right conditions to do their best work.

The engineering team builds best-in-class ecommerce technology that provides personalized and relevant experiences for customers globally and empowers marketers with sophisticated, AI-driven tooling to better understand consumers. Our bespoke platform handles millions of transactions per day and considers billions of data points which give engineers the opportunity to build technology at scale, collaborate across teams and gain exposure to a wide range of technology. We are expanding rapidly in our major R&D centers in NYC and Sydney. We are passionate about using intelligent systems to improve the transaction moment for retailers everywhere. Come join us and build the future!

**Requirements**:
About the role

We’re building a reporting and analytics platform able to handle huge amounts of data in real time, allowing us to uncover new insights and make better decisions.

Our goal is to unlock data and make it available to various users starting from other engineers to end business users and clients. We value pragmatic solutions and simplicity that help us build reliable and fast systems.

Outcomes & responsibilities
- Build distributed, high-volume data pipelines and storage that power our reporting and analytics
- Work on real-time distributed OLAP custom solutions
- Do it with Spark, Kafka, Airflow, and other open-source technologies
- Work all over the stack, moving fluidly between programming languages: Scala, Python, and more
- You'll help define the processes and infrastructure to transform and make data readily available across the company
- Join a tightly knit team solving hard problems the right way
- Own meaningful parts of our service, have an impact, grow with the company
- Take responsibility for system health, monitoring and alerting, and CI/CD pipelines
- Support and mentor other engineers on best practices, architecture, and quality
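
The real-time aggregation work described above can be sketched, at toy scale, as a tumbling-window counter in plain Python. The event shape, key names, and window size are illustrative, not taken from the client's system:

```python
# Tumbling-window aggregation: bucket (timestamp, key) events into fixed
# windows and count occurrences per key, a core OLAP-style operation.
from collections import defaultdict

def windowed_counts(events, window_seconds=60):
    """Group events into tumbling windows and count per (window, key)."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "view"), (30, "click"), (65, "view"), (70, "view")]
agg = windowed_counts(events)
```

In production this logic would run inside a streaming engine such as Spark Structured Streaming over a Kafka topic, but the windowing idea is the same.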

Capabilities & requirements
- You have built and operated data pipelines for real customers in production systems
- You are fluent in several programming languages (JVM & otherwise)
- You’ve worked with data stores and/or data warehouses, such as AWS Redshift, Snowflake, Clickhouse, or others
- You have hands-on experience with BigData frameworks (Hadoop, Hive, Spark, etc.)
- You’re able to explain advanced technical concepts in a simple manner and cater to your audience
- You enjoy wrangling huge datasets and helping others unlock new insights
- You’re concerned about resiliency, high-availability, data quality, and other aspects of a critical system

**Benefits**
- **Force for Good. **We actively invest in the growth of our people and the strengthening of our communities. Our NYC office is 100% vaccinated to keep our employees and community safe and healthy. We require all rockstars as well as anyone else who will be onsite at the NYC office - clients, contractors, vendors, and suppliers - to show proof of vaccination and their booster shot.
- ** Work with the greatest talent in town. **Our recruiting process is tough. We hold a high bar because we have a high-performing, high-velocity culture - we only want the brightest a

Data Engineering Lead (Data Sharing)

Distrito Federal, Distrito Federal Spin

Posted 8 days ago

Job Description


This role ensures alignment with the data strategy of its assigned Data Project. It combines technical leadership with people management, guiding the team to deliver high-quality data solutions that meet organizational OKRs and drive business value. The lead is responsible for identifying and mitigating risks, setting standards for code quality and best practices, and fostering a secure, compliant, and data-driven culture, and will work closely with other Data Leads and Business Roles to innovate, enhance team capabilities, and establish cohesive data practices across the organization.
Main Responsibilities

  • Receives requirements for Data Products and/or Solutions and is responsible for conducting Understanding sessions to identify the list of ingestions and/or processes necessary to add value to the business where they are assigned.
  • Includes all involved areas in the Understanding stage to determine if the necessary inputs are available to address the requested requirements and to identify the viability of the Data Solution, as well as defining a clear scope to avoid rework due to incomplete definitions.
  • Supports data engineers in reviewing estimates based on their expert judgment considering timelines from all involved areas (architecture, security, SRE, data governance, etc.), including deployment, and ensures that deliveries are of quality, on time, and in form, avoiding rework.
  • Actively contributes to planning backlog tasks when a data project, product, and/or solution is authorized.
  • Provides a sprint status report of the project and/or data solution to the business where they are assigned and ensures that deliveries are on time, in form, and with quality.
  • Warns and informs the business of risks in a timely manner to mitigate them or propose contingencies.
  • Reviews the efforts of the data engineers under their charge and provides guidance in case the development does not meet standards, guidelines, or best practices.
  • Conducts rituals and weekly 1-to-1 follow-ups with the data engineers under their charge to assist them in case of doubts, to identify needs in meeting their performance, and to motivate adherence to best practices and compliance with guidelines.
  • Diligently and proactively contributes to all phases of the data engineering lifecycle with Agile methodology, avoiding reworks and delivering on time, in form, and with quality.
  • Analyzes and proposes technical solutions for data storage using best practices, standards, and data governance guidelines, data privacy strategies, security, and compliance.
  • Ensures the continuity of digital data solutions, insights, dashboards, etc., to build and consolidate a Data-Driven culture.
  • Collaborates and contributes to the development of monitoring processes and data quality metrics to ensure that the data used by the business is reliable, intact, and complete.
  • Promote an autonomous work culture by encouraging self-management, accountability, and proactive problem-solving among team members.
  • Serve as a Spin Culture Ambassador to foster and maintain a positive, inclusive, and dynamic work environment that aligns with the company's values and culture.
Required Knowledge and Experience
  • Minimum 7 years in Data Engineering or related fields, with at least 1-2 years in a technical leadership role overseeing and mentoring Data Engineers. Demonstrates experience in managing complex projects, coordinating team efforts, and ensuring alignment with organizational goals.
  • Applies expert understanding of the Data Engineering Lifecycle, with proficiency in data processing techniques across batch, micro-batch, near real-time, and real-time data solutions.
  • Brings advanced knowledge in cloud computing environments, with proven expertise in using Google Cloud Platform (GCP) and Amazon Web Services (AWS) data stacks to build, deploy, and optimize scalable data infrastructure.
  • Demonstrates strong skills in software development life cycle (SDLC) methodologies and design patterns, ensuring code quality and maintainability.
  • Possesses advanced skills in data architecture and solution design, with the ability to develop and maintain data models and architectures that align with business needs and strategic objectives.
  • Uses extensive knowledge of data privacy, security, and governance best practices to implement and maintain robust compliance and data protection measures.
  • Proficient in dimensional data modeling, ETL/ELT frameworks, and storage solutions for structured, semi-structured, and unstructured data, including handling formats like CSV, JSON, Parquet, and YAML.
  • Exhibits advanced expertise in Python, Java, and SQL, along with hands-on experience in data frameworks and tools for processing, analyzing, and optimizing data workflows.
  • Skilled in version control systems (Git, GitHub, GitLab), ensuring team collaboration and project continuity across data engineering projects.
  • Communicates effectively with both technical and business stakeholders, translating complex technical concepts into actionable insights and ensuring alignment on project objectives.
  • Fosters a collaborative and data-driven culture by guiding team members on best practices, setting code quality standards, and driving continuous improvement in data engineering methodologies.
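The ETL/ELT and data-quality expectations above can be pictured with a minimal, hypothetical sketch (pure Python, standard library only; the column names and the completeness rule are invented for illustration, not Spin's actual pipeline):

```python
import csv
import io
import json

def extract(csv_text):
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: cast types and apply a simple data-quality gate,
    dropping records that fail a completeness check."""
    clean = []
    for row in rows:
        if not row.get("user_id") or not row.get("amount"):
            continue  # reject incomplete records
        clean.append({"user_id": row["user_id"], "amount": float(row["amount"])})
    return clean

def load(rows):
    """Load: serialize to JSON Lines, a common semi-structured target format."""
    return "\n".join(json.dumps(r) for r in rows)

raw = "user_id,amount\nu1,10.5\n,3.0\nu2,7.25\n"
print(load(transform(extract(raw))))  # the incomplete middle row is dropped
```

In a production setting the same extract/transform/load shape would typically be expressed in an orchestration framework with Parquet or a warehouse table as the sink, but the quality-gate idea is the same.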
Spin is committed to a diverse and inclusive workplace.
We are an equal opportunity employer and do not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, disability, age, or any other legally protected status.
If you would like to request an accommodation, please notify your Recruiter.

Seniority level: Not Applicable
Employment type: Full-time
Job function: Information Technology

Senior Data Engineering Specialist

Zapopan, Jalisco beBeeDataEngineer

Today


Job Description

Are you looking for a challenging opportunity in data engineering? We have an exciting role available for a skilled professional to join our team.

The position involves designing, implementing and optimizing large-scale data pipelines, ensuring scalability, reliability and performance. This requires working closely with multiple teams and business stakeholders to deliver cutting-edge data solutions.

The ideal candidate will have strong expertise in Databricks, as well as proficiency in Azure Cloud Services. They will also have a solid understanding of Spark and PySpark for big data processing, and experience in relational databases.

Responsibilities:
  • Design and implement scalable ETL/ELT pipelines using Databricks.
  • Leverage PySpark/Spark and SQL to transform and process large datasets.
  • Integrate data from multiple sources, including Azure Blob Storage, ADLS and other relational/non-relational systems.
Requirements:
  • Strong expertise in Databricks (Delta Lake, Unity Catalog, Lakehouse Architecture, table triggers, Delta Live Tables, Databricks Runtime, etc.)
  • Proficiency in Azure Cloud Services.
  • Solid understanding of Spark and PySpark for big data processing.
  • Experience in relational databases.
  • Knowledge of Databricks Asset Bundles and GitLab.
  • Familiarity with Databricks Runtimes and advanced configurations.
  • Knowledge of streaming frameworks like Spark Streaming.
  • Experience in developing real-time data solutions.
  • Azure Data Engineer Associate or Databricks Certified Data Engineer Associate certification.
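Frameworks such as Spark Streaming (named in the requirements above) process an event stream as a sequence of micro-batches with incrementally maintained state. A rough, hypothetical pure-Python sketch of that idea follows; this is an illustration of the micro-batch model, not the Spark API:

```python
from collections import defaultdict

def micro_batches(events, batch_size):
    """Group an event stream into fixed-size micro-batches,
    the core idea behind Spark's micro-batch execution model."""
    batch = []
    for event in events:
        batch.append(event)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch

def running_counts(batches):
    """Maintain incremental per-key counts across batches,
    analogous to a stateful streaming aggregation."""
    state = defaultdict(int)
    for batch in batches:
        for key in batch:
            state[key] += 1
        yield dict(state)  # snapshot of the state after each micro-batch

stream = ["click", "view", "click", "view", "view"]
snapshots = list(running_counts(micro_batches(stream, batch_size=2)))
print(snapshots[-1])  # prints {'click': 2, 'view': 3}
```

In Spark Structured Streaming the equivalent would be a `groupBy().count()` over a streaming DataFrame, with the engine managing batching and state for you.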
What We Offer:
  • A highly competitive compensation package.
  • A multinational organization with opportunities to work abroad.
  • Laptop/equipment.
  • Paid annual leave and sick leave.
  • Maternity & Paternity leave plans.
  • Comprehensive insurance plan.
  • Retirement savings plans.
  • Higher education certification policy.
  • Extensive training opportunities.
  • Cutting-edge projects at leading financial institutions.
  • A flat and approachable organization culture.