
Big Data Architect

 

Company Information

TE Connectivity Ltd. is a $14 billion global technology and manufacturing leader creating a safer, sustainable, productive, and connected future. For more than 75 years, our connectivity and sensor solutions, proven in the harshest environments, have enabled advancements in transportation, industrial applications, medical technology, energy, data communications, and the home. With 80,000 employees, including more than 8,000 engineers, working alongside customers in approximately 140 countries, TE ensures that EVERY CONNECTION COUNTS. Learn more at www.te.com and on LinkedIn, Facebook, WeChat, and Twitter.

Job Overview

The Data Analytics team is looking for a self-motivated, creative, and enthusiastic individual to join our team as a Solution Data Architect specializing in Big Data technologies. The ideal candidate has a proven track record of working with complex, interrelated systems and bringing that data together on Big Data platforms. The individual will drive the architecture and design of Enterprise Data Warehouse and Big Data solutions. The Solution Data Architect understands how a logical design translates into a physical database design for relational stores, how data flows across the enterprise, and how the most value can be extracted from the data in the most effective way.

 

This position is responsible for the architecture and design of the company's Data Platforms, which include relational data architectures (Oracle, HANA, etc.) and non-relational data stores within the Hadoop ecosystem.

Primary Responsibilities

  • Understand current state architecture, including pain points.
  • Understand and, where appropriate, contribute to future state guidelines and vision.
  • Create and document future state architectural options to address specific issues or initiatives.
  • Maintain in-depth, current knowledge of the Cloudera Hadoop ecosystem and its main components (Hive, Impala, Cloudera Data Science Workbench, Talend, Spark, Pig, Sentry, etc.), including the tradeoffs, pros, and cons of each, when to use each, and how they can be optimally combined to create data patterns and frameworks.
  • Work with both technical and business stakeholders to understand data needs for projects. Attend requirement sessions when necessary, communicating any concerns to project leadership.
  • Perform any of the following as needed: requirement analysis, data profiling, effort estimation for database and data warehouse design and development.
  • Provide thought leadership around the use of industry standard tools and models (including commercially available models and tools) by leveraging experience and current industry trends.
  • Provide guidance on the creation and best practices around Data Governance, Data Stewardship and overall Data Quality initiatives and processes as part of the overall data effort.
  • Develop and maintain conceptual, logical, and physical Hadoop ecosystem, database and data warehouse architectures.
  • Collaborate with the Enterprise Architect, consulting partners and client IT team as warranted to establish and implement strategic initiatives.
  • Make recommendations and assess proposals for database optimization.
  • Identify operational issues and recommend and implement strategies to resolve problems.
  • Provide consulting to the support team when needed.

Competencies & Experience Required/Desired

 

Competencies: 

  • 8+ years implementing enterprise Data & Analytical solutions in a complex environment including structured and unstructured data.
  • 3+ years working as a Solution Architect with depth in data integration and data architecture for Enterprise Data Warehouse implementations (conceptual, logical, physical & dimensional models).
  • Multiple engagements implementing Big Data solutions using open-source technologies within the Hadoop ecosystem, such as Impala, Hive, Spark, and Pig.
  • Experience designing and implementing ETL/ELT strategies (Talend, BODS).
  • Experience with Cloudera Data Science Workbench or the AWS/Azure platforms, with strong curiosity to learn and apply data science techniques; experience building data science models is a plus.
  • Sound knowledge of shell scripting. Experience in Python is a plus.
  • Ability to collaborate with management, users, contract resources, vendors, and other IT teams
  • Strong problem-solving capabilities. Results-oriented. Relies on fact-based logic for decision-making.
  • Ability to work on multiple projects and work streams at one time. Must be able to deliver results against project deadlines.
  • Willing to flex daily work schedule to allow for time-zone differences for global team communications
  • Ability to travel both within and outside of the country for specific projects, as needed
  • Prior working experience with SAP HANA, SAP ERP, SAP BW is a plus
  • Experience with AWS data lakes and related services such as S3, Athena, RDS, EC2, and Redshift.
  • Functional knowledge in one of the following areas is a plus: Sales, Inventory, or Purchasing.
  • Strong interpersonal and communication skills

 

Education Required/Desired:

  • Degree in Management Information Systems, Computer Science OR equivalent work experience in an IT organization

Location: 

Berwyn, PA, US, 19312

Alternative Locations:  Middletown
Travel:  10% to 25%
Requisition ID:  46238


Nearest Major Market: Philadelphia

Job Segment: Architecture, Database, Warehouse, Developer, ERP, Technology, Engineering, Manufacturing
