453 Data Analyst Job Offers in Mexico
Big Data Engineer
Today
Job Description
Visa is a world leader in digital payments, facilitating more than 215 billion payments transactions between consumers, merchants, financial institutions and government entities across more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable and secure payments network, enabling individuals, businesses and economies to thrive.
When you join Visa, you join a culture of purpose and belonging - where your growth is priority, your identity is embraced, and the work you do matters. We believe that economies that include everyone everywhere, uplift everyone everywhere. Your work will have a direct impact on billions of people around the world - helping unlock financial access to enable the future of money movement.
**Join Visa: A Network Working for Everyone.**
**Job Description**:
Visa Consulting and Analytics (VCA), the consulting arm of Visa, is a global team of industry experts in strategy, marketing, operations, risk and economics consulting, with decades of experience in the payments industry.
Our VCA teams offer:
- Consulting services customized to the needs of Visa clients' business objectives and strategies
- Business and economic insights and perspectives that impact business and investment decisions
- Self-service digital solutions Visa clients can leverage to improve performance in product, marketing and operations
- Proven data-driven marketing strategies to increase clients' ROI
The ideal candidate has experience using a variety of data mining and data analysis methods, working across a variety of distributed data platforms, and leveraging the latest open-source technologies, along with a proven ability to drive business results through data-based insights. They are adept at creative and critical thinking, able to deconstruct problems and transform insights into large-scale, state-of-the-art solutions.
**Responsibilities**
- Automate and standardize data processes developed by team members.
- Leverage DevOps to create end-to-end, streamlined CI/CD data and ML pipelines.
- Review and manage data pipelines, branching, and deployment process.
- Work with partners on requirements and implementation designs of data solutions.
- Implement a data quality framework at scale using open-source technologies (see the sketch after this list).
- Create data monitoring dashboards with real-time notifications.
- Unify data engineering and machine learning engineering pipelines.
- Document process, designs, test results, and analysis.
- Articulate complex architectures to non-technical audiences, management, and leadership.
- Continuously research industry best practices and technologies.
- Evangelize end-to-end automation and standardization across the organization.
- Partner with functional areas, and regional and global teams to leverage the breadth and depth of Visa’s resources.
- This is a hybrid position. Hybrid employees can alternate time between both remote and office. Employees in hybrid roles are expected to work from the office 2-3 set days a week (determined by leadership/site), with a general guidepost of being in the office 50% or more of the time based on business needs.
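As one hedged illustration of the data quality responsibility above, here is a minimal PySpark sketch that runs two simple checks (a null-rate threshold and a duplicate-key count) and fails loudly when either is breached. The dataset path, column names, and thresholds are invented for illustration; in production, open-source frameworks such as Great Expectations or Deequ would be natural choices.

```python
# Hypothetical sketch: two basic data quality checks in PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-sketch").getOrCreate()

# Invented dataset and column names, for illustration only.
df = spark.read.parquet("/data/transactions")

total = df.count()
if total == 0:
    raise ValueError("dataset is empty")

# Check 1: the null rate of account_id must stay below 1%.
null_rate = df.filter(F.col("account_id").isNull()).count() / total
if null_rate > 0.01:
    raise ValueError(f"account_id null rate too high: {null_rate:.2%}")

# Check 2: transaction_id must be unique.
duplicates = total - df.select("transaction_id").distinct().count()
if duplicates > 0:
    raise ValueError(f"{duplicates} duplicate transaction_id values found")

print("all data quality checks passed")
```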
**Qualifications**:
Basic Qualifications
- BA/BS required, MBA or other relevant Master’s degree preferred (e.g. engineering, computer science, computer engineering, applied mathematics, or other related fields)
Preferred Qualifications
- At least 5 years of experience as a data engineer or data scientist working with open-source tools.
- Experience in retail banking, payments, financial services, and/or technology industries is a plus. Strong interest in the future of payments is a must.
- Strong technical competency and experience with shell-scripting and Linux systems.
- Experience with CI/CD pipelines using Azure DevOps, GitHub Actions, Jenkins, or Airflow.
- Strong coding skills in Spark, Python, and SQL to manipulate big data on distributed platforms (see the sketch after this list).
- Nice to have: experience navigating Linux/Unix and container-based environments such as Docker, Kubernetes, or microservices.
- Knowledge of how to leverage AI assistant tools like ChatGPT for writing and debugging code.
- Ability to interact with big data clusters using Jupyter notebooks, the terminal, or a GUI.
- Demonstrated experience leveraging open-source tools, libraries, and platforms.
- Experience with data visualization and business intelligence tools like Tableau, Power BI, MicroStrategy, or Excel.
- Strong problem-solving ability and a process-creation mindset, with a strategic focus on replicability, scalability, innovation, and governance.
- Proficient with git for version control and code collaboration using branches and pull requests.
- Must be passionate about automation and data and able to deliver high quality work.
- Experience developing as part of an Agile/Scrum team.
- Fluency in English (spoken/written). Portuguese or Spanish is a plus.
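To make the Spark/Python/SQL expectation above concrete, here is a minimal, hypothetical PySpark sketch that registers a DataFrame as a temporary view and runs an aggregation in Spark SQL. The dataset path and column names are invented for illustration.

```python
# Hypothetical sketch: mixing the DataFrame API and Spark SQL on the same data.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-sketch").getOrCreate()

# Invented dataset and columns, for illustration only.
payments = spark.read.parquet("/data/payments")
payments.createOrReplaceTempView("payments")

# The aggregation expressed in SQL against the registered view.
top_merchants = spark.sql("""
    SELECT merchant_id, SUM(amount) AS total_amount
    FROM payments
    GROUP BY merchant_id
    ORDER BY total_amount DESC
    LIMIT 10
""")
top_merchants.show()
```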
Additional Information
Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, or protected veteran status.
Big Data Lead
Today
Job Description
**BIG DATA LEAD**
**Requirements**:
Bachelor's degree in Computer Science, Information Systems, or a related field
**ADVANCED** spoken and written English
**Experience**:
8+ years of experience in Information Technology; 6+ years in Data Warehouse and ETL development; and 4+ years of solid experience designing and implementing a fully operational solution on the Snowflake Data Warehouse. Deep knowledge of data warehousing, ETL concepts, and modeling principles.
Excellent understanding of Snowflake internals and of integrating Snowflake with other data processing and reporting technologies. Hands-on experience with Snowflake utilities such as SnowSQL and Snowpipe; experience with Snowflake administration; experience loading data from the cloud (Azure) and via APIs, etc. Knowledge of Snowflake architecture. SQL experience is a must. Experience working with semi-structured data.
Experience building dbt (data build tool) models for Snowflake. Experience with data engineering platform components such as data pipelines, data orchestration, data quality, data governance, and analytics. Hands-on experience implementing large-scale data intelligence solutions around the Snowflake DW. Experience with scripting languages such as Python or Scala. Understanding of RESTful API design. Passion for industry best practices and computer programming. (See the connection sketch below.)
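As a hedged illustration of the Snowflake work described above, the following minimal Python sketch connects with the official snowflake-connector-python package and counts yesterday's rows in a hypothetical Snowpipe-fed landing table. Account, credentials, warehouse, and table names are invented placeholders, not details from the posting.

```python
# Hypothetical sketch: querying Snowflake from Python with the official connector.
import snowflake.connector

# All connection details below are invented placeholders.
conn = snowflake.connector.connect(
    account="xy12345.west-europe.azure",
    user="ETL_USER",
    password="***",
    warehouse="ANALYTICS_WH",
    database="RAW",
    schema="SALES",
)

try:
    cur = conn.cursor()
    # Count yesterday's loaded rows in a (hypothetical) Snowpipe-fed table.
    cur.execute(
        "SELECT COUNT(*) FROM ORDERS_LANDING "
        "WHERE LOAD_DATE = DATEADD(day, -1, CURRENT_DATE())"
    )
    (row_count,) = cur.fetchone()
    print(f"rows loaded yesterday: {row_count}")
finally:
    conn.close()
```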
**We Offer**:
Direct hire
100% payroll (nómina) scheme
Statutory benefits and above
Position: HYBRID
If you are interested, send your CV in English through this channel
Job type: Full-time
Salary: $70,000.00 - $75,000.00 per month
Schedule:
- Day shift
- 8-hour shift
Benefits:
- Referral program
- Major medical expenses insurance
- Life insurance
- Grocery vouchers
Language:
- English (Required)
Work location: In-person
Big Data Architect
Today
Job Description
**The Mission**:
At Caylent, a Big Data Architect works as an integral part of a cross-functional delivery team to design and implement data management solutions on the AWS cloud for our customers. You will design and document the big data and NoSQL solutions, and provide guidance to the engineers performing the hands-on implementation of your design. You will participate in daily standup meetings with your team and bi-weekly agile ceremonies with the customer. Your manager will have a weekly 1:1 with you to help guide you in your career and make the most of your time at Caylent.
**Your Assignment**:
- Work with a team to deliver top-quality data solutions on AWS for customers
- Participate in daily standup meetings and address technical issues
- Design, optimize, and migrate web-scale data processing operations
- Lead and support engineers without needing direct supervision
**Your Qualifications**:
- Design and implementation of at least two of these:
- ETL, Orchestration and CI/CD pipelines
- Data Lakes, Data Warehouses
- Analytics and visualization
- Design and implementation of at least two of these:
- Data processing: e.g., Hadoop, Spark, EMR
- Streaming/messaging: e.g., Kafka, RabbitMQ, Kinesis
- NoSQL DBs: e.g., key-value stores, document databases, graph databases
- Caching: e.g., Redis, Memcached
- Search: e.g., Elasticsearch, Solr
- Design and implementation of at least one of these:
- Security, access controls, and governance on the cloud
- Experience with IaC tools such as CloudFormation, CDK, Terraform, and CI/CD tools
- Experience with AWS Glue, Lambda, and the AWS SDK (see the sketch after this list)
- Excellent written and verbal communication skills
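As a hedged illustration of the Glue/SDK experience listed above, this minimal Python sketch starts an AWS Glue job with boto3 and polls until the run finishes. The job name, region, and job arguments are hypothetical placeholders, not details from the posting.

```python
# Hypothetical sketch: starting an AWS Glue ETL job and polling its status with boto3.
import time

import boto3

glue = boto3.client("glue", region_name="us-east-1")  # assumed region

# Kick off a run of a pre-existing Glue job (name and arguments are invented).
run = glue.start_job_run(
    JobName="nightly-orders-etl",
    Arguments={"--target_bucket": "s3://example-curated-zone"},
)

# Poll until the run reaches a terminal state.
while True:
    status = glue.get_job_run(JobName="nightly-orders-etl", RunId=run["JobRunId"])
    state = status["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print(f"Job finished with state: {state}")
        break
    time.sleep(30)
```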
**Benefits**:
- 100% remote work
- Medical Insurance for you and eligible dependents
- Generous holidays and flexible PTO
- Competitive phantom equity
- Paid for exams and certifications
- Peer bonus awards
- State-of-the-art laptop and tools
- Equipment & Office Stipend
- Individual professional development plan
- Annual stipend for Learning and Development
- Work with an amazing worldwide team and in an incredible corporate culture
Caylent is a place where everyone belongs. We celebrate diversity and are committed to creating an inclusive environment for all employees. Our approach helps us to build a winning team that represents a variety of backgrounds, perspectives, and abilities. So, regardless of how your diversity expresses itself, you can find a home here at Caylent.
Big Data Developer
Today
Job Description
- **Activities**:
- Create stored procedures and views
- Optimize queries and improve performance (a tuning sketch follows the requirements list below)
- Validate developments in the production environment
- Maintain and update developed modules
- **Required**:
- **3+ years** of experience
- **Completed** bachelor's or engineering degree
- **Scala**
- Control-M integration
- **Cloudera**
- Java or Python development
- Process automation
- **Apache Spark**
- Knowledge of SQL databases
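To make the query-optimization duty concrete, here is a minimal, hypothetical PySpark sketch of two routine tuning steps: inspecting a physical plan with explain() and caching a reused intermediate result. The dataset path and column names are invented for illustration.

```python
# Hypothetical sketch: routine Spark SQL tuning steps.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Invented dataset and columns, for illustration only.
orders = spark.read.parquet("/data/orders")
filtered = orders.filter(orders.status == "PAID")

# Inspect the physical plan before committing to a query shape.
filtered.explain()

# Cache an intermediate result that multiple downstream queries reuse,
# so Spark does not recompute the filter for every action.
filtered.cache()

daily = filtered.groupBy("order_date").count()
daily.show()
```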
**Note: 100% on-site**
Job type: Full-time
Salary: $38,000.00 - $40,000.00 per month
Shift type:
- 8-hour shift
Ability to commute/relocate:
- Tlalpan, CDMX: Reliably commute or plan to relocate before starting work (Required)
Application question(s):
- The work arrangement for this vacancy is fully on-site in Tlalpan, CDMX, with no remote or hybrid option. Do you agree to this requirement?
Education:
- Completed bachelor's degree (Required)
Experience:
- Apache Spark: 3 years (Required)
- Scala: 3 years (Required)
- Cloudera: 3 years (Required)
Work location: In-person
Big Data Developer
Yesterday
Job Description
Be a part of Stefanini!
At Stefanini, we are more than 30,000 geniuses connected across more than 40 countries, co-creating a better future.
Apply as a Big Data Engineer!
Requirements:
- 3 years of big data development experience.
- Experience designing, developing, and operating large-scale data systems running at petabyte scale.
- Experience building real-time data pipelines, enabling streaming analytics, supporting distributed big data, and maintaining machine learning infrastructure.
- Able to interact with engineers, product managers, BI developers, and architects, providing scalable and robust technical solutions.
- Intermediate English
Essential Duties and Responsibilities:
- Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built.
- Experience with Java and Python for writing data pipelines and data processing layers.
- Experience with Airflow and GitHub (see the DAG sketch after this list).
- Experience writing MapReduce jobs.
- Demonstrated expertise in writing complex, highly optimized queries across large data sets.
- Proven working expertise with big data technologies: Hadoop, Hive, Kafka, Presto, Spark, HBase.
- Highly proficient in SQL.
- Experience with cloud technologies (GCP, Azure).
- Experience with relational and in-memory data stores desirable (Oracle, Cassandra, Druid).
- Provide and support the implementation and operation of data pipelines and analytical solutions.
- Performance-tuning experience with systems working on large data sets.
- Experience with REST API data services (data consumption).
- Retail experience is a huge plus.
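As a hedged illustration of the Airflow experience called for above, this minimal Python sketch defines a two-task DAG: an extract step followed by a transform. The DAG id, schedule, and task logic are invented placeholders, not details from the posting.

```python
# Hypothetical sketch: a minimal two-task Airflow DAG (all names are invented).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw records from a source system.
    print("extracting raw records")


def transform():
    # Placeholder: clean and reshape the extracted records.
    print("transforming records")


with DAG(
    dag_id="orders_pipeline_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # run transform only after extract succeeds
```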
What’s in it for you?
- Fully remote
- Training Path
- Life insurance
- Punctuality bonus
- Grocery vouchers
- Restaurant vouchers
- Legal benefits + Profit sharing (PTU)
- Learning and Mentoring platforms
- Discounts at language schools
- Gym discount
Big Data Intern Bilingual
Today
Job Description
Big Data Intern, Bilingual - Part-Time
*Must have Seguro Facultativo (student social security) coverage*
Reports to: Product Center Manager
Location: Polanco, near Plaza Carso
Job Type: Part-time Internship
Requirements:
1. Student of Data Science, Data Engineering, Data Analysis, Finance or related field.
2. Basic knowledge of Data Analysis or related.
3. Communication and teamwork skills.
What We Offer:
1. Opportunity to gain experience in a creative and dynamic environment.
2. Chance to develop Data Analysis skills and knowledge.
3. Teamwork with experienced professionals in the field.
How to Apply:
If you are interested in this position, please send your updated resume/CV in English.
*Must have Seguro Facultativo (student social security) coverage*
Big Data Solutions Engineer
Today
Job Description
As a big data professional, you will play a vital role in co-creating a better future by leveraging technology and innovation.
Our team comprises 30,000+ professionals from over 40 countries, working together to design, develop, and operate large-scale data systems running at petabyte scale.
- Mastery of big data development, with 3 years of experience, is required.
- Proven expertise in designing, developing, and operating large-scale data systems.
- Experience building real-time data pipelines, enabling streaming analytics, supporting distributed big data, and maintaining machine learning infrastructure (see the streaming sketch at the end of this posting).
- Able to collaborate with engineers, product managers, BI developers, and architects, providing scalable and robust technical solutions.
Key Responsibilities:
- Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built.
- Proficiency in Java and Python for writing data pipelines and data processing layers.
- Experience with Airflow and GitHub.
- Expertise in writing MapReduce jobs.
- Demonstrated expertise in writing complex, highly optimized queries across large data sets.
- Proven working expertise with big data technologies: Hadoop, Hive, Kafka, Presto, Spark, HBase.
- Highly proficient in SQL.
- Experience with cloud technologies (GCP, Azure).
- Experience with relational and in-memory data stores desirable (Oracle, Cassandra, Druid).
- Provide and support the implementation and operation of data pipelines and analytical solutions.
- Performance-tuning experience with systems working on large data sets.
- Experience with REST API data services (data consumption).
- Retail experience is a huge plus.
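As a hedged illustration of the real-time pipeline work described above, the sketch below consumes events from a Kafka topic with the kafka-python client and applies a trivial filter as a stand-in for a processing step. The broker address, topic name, and message format are assumptions, not details from the posting.

```python
# Hypothetical sketch: a tiny streaming consumer using kafka-python.
import json

from kafka import KafkaConsumer

# Broker address and topic name are invented placeholders.
consumer = KafkaConsumer(
    "orders-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Placeholder transform: flag large orders for a downstream alerting step.
    if event.get("amount", 0) > 10_000:
        print(f"large order detected: {event}")
```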