453 Data Analyst Job Openings in Mexico

Big Data Engineer

Ciudad de México, Distrito Federal · Visa

Today


Job Description

Company Description

Visa is a world leader in digital payments, facilitating more than 215 billion payment transactions between consumers, merchants, financial institutions and government entities across more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable and secure payments network, enabling individuals, businesses and economies to thrive.

When you join Visa, you join a culture of purpose and belonging - where your growth is priority, your identity is embraced, and the work you do matters. We believe that economies that include everyone everywhere, uplift everyone everywhere. Your work will have a direct impact on billions of people around the world - helping unlock financial access to enable the future of money movement.

**Join Visa: A Network Working for Everyone.**

**Job Description**:
Visa Consulting and Analytics (VCA), the consulting arm of Visa, is a global team of industry experts in strategy, marketing, operations, risk and economics consulting, with decades of experience in the payments industry.

Our VCA team offers:

- Consulting services customized to Visa clients' business objectives and strategy
- Business and economic insights and perspectives that impact business and investment decisions
- Self-service digital solutions Visa clients can leverage to improve performance in product, marketing and operations
- Proven data-driven marketing strategies to increase clients' ROI

Candidates must have experience using a variety of data mining and data analysis methods across a variety of distributed data platforms, leveraging the latest open-source technologies, and a proven ability to drive business results with data-based insights. They should be adept at creative and critical thinking, able to deconstruct problems and transform insights into large-scale, state-of-the-art solutions.

**Responsibilities**
- Automate and standardize data processes developed by team members.
- Leverage DevOps to create streamlined, end-to-end CI/CD data and ML pipelines.
- Review and manage data pipelines, branching, and deployment process.
- Work with partners on requirements and implementation designs of data solutions.
- Implement a data quality framework at scale using open-source technologies (see the sketch after this list).
- Create data monitoring dashboards with real-time notifications.
- Unify data engineering and machine learning engineering pipelines.
- Document process, designs, test results, and analysis.
- Articulate complex architectures to non-technical audiences, management, and leadership.
- Continuously research industry best practices and technologies.
- Evangelize end-to-end automation and standardization across the organization.
- Partner with functional areas, and regional and global teams to leverage the breadth and depth of Visa’s resources.
- This is a hybrid position. Hybrid employees can alternate time between remote and office. Employees in hybrid roles are expected to work from the office 2-3 set days a week (determined by leadership/site), with a general guidepost of being in the office 50% or more of the time based on business needs.
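As an illustration of the data-quality bullet above, here is a minimal sketch of a threshold-based null-rate check in PySpark, assuming an open-source Spark stack; the table name, column list, and threshold are hypothetical, not details from the posting:

```python
# Minimal sketch of an open-source data-quality check using PySpark.
# Table name, columns, and threshold are hypothetical placeholders.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()

df = spark.table("payments.transactions")           # hypothetical table
checked_cols = ["txn_id", "amount", "merchant_id"]  # hypothetical columns
max_null_rate = 0.01                                # fail above 1% nulls

total = df.count()
for col in checked_cols:
    nulls = df.filter(F.col(col).isNull()).count()
    rate = nulls / total if total else 0.0
    if rate > max_null_rate:
        raise ValueError(f"DQ check failed: {col} null rate is {rate:.2%}")
print("All data-quality checks passed")
```

A check like this can run as one stage of a CI/CD pipeline and feed the monitoring dashboards mentioned above.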

**Qualifications**:
Basic Qualifications
- BA/BS required, MBA or other relevant Master’s degree preferred (e.g. engineering, computer science, computer engineering, applied mathematics, or other related fields)

Preferred Qualifications
- At least 5 years of experience as a data engineer or data scientist working with open-source tools.
- Experience in retail banking, payments, financial services, and/or technology industries is a plus. Strong interest in the future of payments is a must.
- Strong technical competency and experience with shell-scripting and Linux systems.
- Experience with CI/CD pipelines using Azure DevOps, GitHub Actions, Jenkins, or Airflow (see the Airflow sketch after this list).
- Strong coding skills in Spark, Python, and SQL to manipulate big data on distributed platforms.
- Experience navigating Linux/Unix and container-based environments such as Docker, Kubernetes, or microservices is good to have.
- Knowledge of how to leverage AI assistance tools like ChatGPT for creating and debugging code.
- Ability to interact with big data clusters using Jupyter notebooks, a terminal, or a GUI.
- Demonstrated experience leveraging open-source tools, libraries, and platforms.
- Experience with data visualization and business intelligence tools like Tableau, Power BI, MicroStrategy, or Excel.
- Problem-solving ability and a process-creation mindset, with a strategic focus on replicability, scalability, innovation, and governance.
- Proficient with git for version control and code collaboration using branches and pull requests.
- Must be passionate about automation and data and able to deliver high-quality work.
- Experience developing as part of an Agile/Scrum team.
- Fluency in English (spoken/written). Portuguese or Spanish is a plus.
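Because the qualifications above call out CI/CD and orchestration tools such as Airflow, here is a minimal sketch of an Airflow DAG; the DAG id, schedule, and task bodies are hypothetical:

```python
# Minimal Airflow DAG sketch; dag_id, schedule, and task bodies are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source")  # placeholder step

def transform():
    print("clean and aggregate the data")   # placeholder step

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # 'schedule_interval' on older Airflow 2.x releases
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```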

Additional Information

Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to legally protected characteristics.

Big Data Lead

Guadalajara, Jalisco · RH CONSULTORA

Today


Job Description

A MAJOR INFORMATION TECHNOLOGY COMPANY IS LOOKING FOR A:
**BIG DATA LEAD**

**Requirements**:
Bachelor's or engineering degree in Computer Science, Information Systems, or a related field

**ADVANCED** conversational and written English

**Experience**:
8+ years of experience in Information Technology; 6+ years in Data Warehouse and ETL development; and 4+ years of solid experience designing and implementing a fully operational Snowflake Data Warehouse solution. Deep knowledge of data warehousing, ETL concepts, and modeling principles.

Excellent understanding of Snowflake internals and of integrating Snowflake with other data processing and reporting technologies. Hands-on experience with Snowflake utilities such as SnowSQL and Snowpipe; experience administering Snowflake; experience loading data from the cloud (Azure) and from APIs. Knowledge of Snowflake architecture. SQL experience is a must. Experience working with semi-structured data.

Experience building dbt (data build tool) models for Snowflake. Experience with engineering platform components such as data pipelines, data orchestration, data quality, data governance, and analytics. Hands-on experience implementing large-scale data intelligence solutions around the Snowflake DW. Experience with scripting languages such as Python or Scala. Understanding of RESTful API design. Passion for industry best practices and computer programming.
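To make the Snowflake requirements concrete, here is a minimal sketch using the snowflake-connector-python package; the account, credentials, and the semi-structured query are placeholders, not details from the posting:

```python
# Minimal Snowflake access sketch; all identifiers are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder account locator
    user="my_user",            # placeholder user
    password="***",            # use a secrets manager in practice
    warehouse="ANALYTICS_WH",  # placeholder warehouse
    database="RAW",            # placeholder database
)
try:
    cur = conn.cursor()
    # Query semi-structured JSON stored in a VARIANT column
    cur.execute("SELECT payload:customer_id::string FROM events LIMIT 10")
    for (customer_id,) in cur:
        print(customer_id)
finally:
    conn.close()
```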

**We offer**:
Direct hire

100% payroll scheme

Statutory benefits and above

Position: HYBRID

If you are interested, send your CV in English through this channel

Job type: Full-time

Salary: $70,000.00 - $75,000.00 per month

Schedule:

- Day shift
- 8-hour shift

Benefits:

- Referral program
- Major medical expenses insurance
- Life insurance
- Grocery vouchers

Language:

- English (Required)

Workplace: On-site

Big Data Lead

Monterrey, Nuevo León · RH CONSULTORA

Today

Job Description

Same posting as the Guadalajara Big Data Lead role above: identical requirements, experience profile, salary, and benefits.

Big Data Architect

Guadalajara, Jalisco · Caylent

Today


Job Description

We are a fully remote global company with employees in Canada, the United States and Latin America. We celebrate the culture of each of our team members and foster a community of technological curiosity. Come talk to us to learn more about what it means to be a Caylien!

**The Mission**:
At Caylent, a Big Data Architect works as an integral part of a cross-functional delivery team to design and implement data management solutions on the AWS cloud for our customers. You will design and document the big data and NoSQL solutions, and provide guidance to the engineers performing the hands-on implementation of your design. You will participate in daily standup meetings with your team and bi-weekly agile ceremonies with the customer. Your manager will have a weekly 1:1 with you to help guide you in your career and make the most of your time at Caylent.

**Your Assignment**:

- Work with a team to deliver top-quality data solutions on AWS for customers
- Participate in daily standup meetings and address technical issues
- Design, optimize, and migrate web-scale data processing operations
- Lead and help engineers without direct supervision

**Your Qualifications**:

- Design and implementation of at least two of these:
  - ETL, orchestration, and CI/CD pipelines
  - Data lakes, data warehouses
  - Analytics and visualization
- Design and implementation of at least two of these:
  - Data processing: e.g., Hadoop, Spark, EMR
  - Streaming/messaging: e.g., Kafka, RabbitMQ, Kinesis
  - NoSQL databases: key-value stores, document databases, graph databases
  - Caching: e.g., Redis, Memcached
  - Search: e.g., Elasticsearch, Solr
- Design and implementation of at least one of these:
  - Security, access controls, and governance on cloud
- Experience with IaC tools such as CloudFormation, CDK, Terraform, and CI/CD tools
- Experience with AWS Glue, Lambda, and the SDK (see the sketch after this list)
- Excellent written and verbal communication skills
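As one concrete reading of the AWS Glue/Lambda/SDK bullet, here is a minimal sketch of a Lambda handler that starts a Glue job through boto3; the job name and argument are hypothetical:

```python
# Minimal sketch: a Lambda handler that starts an AWS Glue job via boto3.
# The Glue job name and its arguments are hypothetical placeholders.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    response = glue.start_job_run(
        JobName="nightly-etl",  # placeholder Glue job name
        Arguments={"--run_date": event.get("run_date", "latest")},
    )
    return {"JobRunId": response["JobRunId"]}
```

Whether the trigger is Lambda, Step Functions, or a scheduler is a design choice; the SDK call is the same.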

**Benefits**:

- 100% remote work
- Medical Insurance for you and eligible dependents
- Generous holidays and flexible PTO
- Competitive phantom equity
- Paid exams and certifications
- Peer bonus awards
- State-of-the-art laptop and tools
- Equipment & Office Stipend
- Individual professional development plan
- Annual stipend for Learning and Development
- Work with an amazing worldwide team and in an incredible corporate culture

Caylent is a place where everyone belongs. We celebrate diversity and are committed to creating an inclusive environment for all employees. Our approach helps us to build a winning team that represents a variety of backgrounds, perspectives, and abilities. So, regardless of how your diversity expresses itself, you can find a home here at Caylent.

Big Data Developer

Tlalpan · TH TEC Talento Humano en TI

Today


Job Description

A leading Mexican ICT consulting and software development company is looking for a **Big Data Developer** who fits the following profile.

**Activities**:
- Create stored procedures and views
- Optimize queries and improve performance
- Validate developments in the production environment
- Maintain and update developed modules

**Must-haves**:
- **3+ years** of experience
- **Completed** bachelor's or engineering degree
- **Scala**
- Control-M integration
- **Cloudera**
- Java or Python development
- Process automation
- **Apache Spark** (see the sketch after this list)
- Knowledge of SQL databases
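Given the Spark, Cloudera, and SQL requirements above, here is a minimal PySpark sketch that registers a view over a Hive-metastore table and runs an aggregate that benefits from partition pruning; the table and column names are hypothetical:

```python
# Minimal PySpark sketch against a Cloudera-style Hive metastore.
# Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("view-example")
    .enableHiveSupport()  # use the cluster's Hive metastore
    .getOrCreate()
)

# Register a reusable view over a (hypothetical) partitioned table
spark.sql("""
    CREATE OR REPLACE TEMPORARY VIEW daily_sales AS
    SELECT sale_date, store_id, SUM(amount) AS total
    FROM warehouse.sales             -- hypothetical table
    WHERE sale_date >= '2024-01-01'  -- enables partition pruning on sale_date
    GROUP BY sale_date, store_id
""")

spark.sql("SELECT * FROM daily_sales ORDER BY total DESC LIMIT 10").show()
```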

**Things to consider: 100% on-site**

Job type: Full-time

Salary: $38,000.00 - $40,000.00 per month

Schedule:

- 8-hour shift

Ability to commute/relocate:

- Tlalpan, CDMX: Reliably commute to work or plan to relocate before starting (Required)

Application question(s):

- This position is fully on-site in Tlalpan, CDMX, with no remote or hybrid option. Do you agree to this requirement?

Education:

- Completed bachelor's degree (Required)

Experience:

- Apache Spark: 3 years (Required)
- Scala: 3 years (Required)
- Cloudera: 3 years (Required)

Workplace: On-site

Big Data Developer

Guadalajara, Jalisco · Stefanini LATAM

Yesterday


Job Description

Be a part of Stefanini!

At Stefanini we are more than 30,000 geniuses, connected from more than 40 countries, and we co-create a better future.

Apply as a Big Data Engineer!


Requirements:

  • 3 years of big data development experience.
  • Experience designing, developing, and operating large-scale data systems running at petabyte scale.
  • Experience building real-time data pipelines, enabling streaming analytics, supporting distributed big data, and maintaining machine learning infrastructure.
  • Able to interact with engineers, product managers, BI developers, and architects, providing scalable and robust technical solutions.
  • Intermediate English


Essential Duties and Responsibilities:


  • Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built.
  • Experience with Java and Python to write data pipelines and data processing layers.
  • Experience with Airflow & GitHub.
  • Experience writing map-reduce jobs (see the sketch after this list).
  • Demonstrated expertise in writing complex, highly optimized queries across large data sets.
  • Proven working expertise with big data technologies: Hadoop, Hive, Kafka, Presto, Spark, HBase.
  • Highly proficient in SQL.
  • Experience with cloud technologies (GCP, Azure).
  • Experience with relational models; in-memory data stores desirable (Oracle, Cassandra, Druid).
  • Provides and supports the implementation and operations of data pipelines and analytical solutions.
  • Performance tuning experience on systems working with large data sets.
  • Experience in REST API data services – data consumption.
  • Retail experience is a huge plus.
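To illustrate the map-reduce bullet above, here is the classic word-count example sketched with the PySpark RDD API; the input path is a placeholder:

```python
# Classic map-reduce word count, sketched with the PySpark RDD API.
# The input path is a hypothetical placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").getOrCreate()
lines = spark.sparkContext.textFile("hdfs:///data/sample.txt")  # placeholder path

counts = (
    lines.flatMap(lambda line: line.split())  # map: emit one record per word
         .map(lambda word: (word, 1))         # map: (word, 1) pairs
         .reduceByKey(lambda a, b: a + b)     # reduce: sum counts per word
)
for word, n in counts.take(10):
    print(word, n)
```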


What's in it for you?

  • Fully remote
  • Training Path
  • Life insurance
  • Punctuality bonus
  • Grocery vouchers
  • Restaurant vouchers
  • Legal benefits + Profit sharing (PTU)
  • Learning and Mentoring platforms
  • Discounts at language schools
  • Gym discount


Big Data Intern Bilingual

HONOR

Today


Job Description

Big Data Intern Bilingual - Part-Time

*Seguro Facultativo (student insurance) required*

Reports to: Product Center Manager

Location: Polanco, near Plaza Carso

Job Type: Part-time Internship

Requirements:

1. Student of Data Science, Data Engineering, Data Analysis, Finance, or a related field.

2. Basic knowledge of data analysis or a related area.

3. Communication and teamwork skills.

What We Offer:

1. Opportunity to gain experience in a creative and dynamic environment.

2. Chance to develop Data Analysis skills and knowledge.

3. Teamwork with experienced professionals in the field.

How to Apply:

If you are interested in this position, please send your updated resume in English.

*Seguro Facultativo (student insurance) required*


Big Data Solutions Engineer

beBeeDataProfessional

Today


Job Description

As a big data professional, you will play a vital role in co-creating a better future by leveraging technology and innovation.

Our team comprises 30,000+ professionals from over 40 countries, working together to design, develop, and operate large-scale data systems running at petabyte scale.

  • Mastery of big data development, with 3 years of experience, is required.
  • Proven expertise in designing, developing, and operating large-scale data systems.
  • Experience building real-time data pipelines, enabling streaming analytics, supporting distributed big data, and maintaining machine learning infrastructure.
  • Able to collaborate with engineers, product managers, BI developers, and architects, providing scalable and robust technical solutions.

Key Responsibilities:

  • Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built.
  • Proficiency in Java and Python for writing data pipelines and data processing layers.
  • Experience with Airflow & GitHub.
  • Expertise in writing map-reduce jobs.
  • Demonstrated expertise in writing complex, highly optimized queries across large data sets.
  • Proven working expertise with big data technologies: Hadoop, Hive, Kafka, Presto, Spark, HBase.
  • Highly proficient in SQL.
  • Experience with cloud technologies (GCP, Azure).
  • Experience with relational models; in-memory data stores desirable (Oracle, Cassandra, Druid).
  • Provides and supports the implementation and operations of data pipelines and analytical solutions.
  • Performance tuning experience on systems working with large data sets.
  • Experience in REST API data services – data consumption (see the sketch after this list).
  • Retail experience is a huge plus.
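As an illustration of REST-based data consumption, here is a minimal sketch using the requests library; the endpoint and its pagination scheme are hypothetical:

```python
# Minimal sketch of paginated REST data consumption.
# The endpoint, pagination parameters, and response schema are hypothetical.
import requests

BASE_URL = "https://api.example.com/v1/orders"  # placeholder endpoint

def fetch_all(page_size=100):
    records, page = [], 1
    while True:
        resp = requests.get(
            BASE_URL,
            params={"page": page, "size": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:  # an empty page signals the end (assumed convention)
            break
        records.extend(batch)
        page += 1
    return records

if __name__ == "__main__":
    print(f"fetched {len(fetch_all())} records")
```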
 
