Data Engineer – Fortune 100 Multinational Technology Conglomerate

Contract | Blockgram
San Jose, CA, US


Blockgram’s client is hiring 4 Data Engineers to join their growing local team in San Jose, California. These are 12-month minimum contract roles with the potential for long-term growth at one of the world’s largest Fortune 100 companies. We are looking for tenacious, passionate software developers with a specific focus on data engineering to join the client’s data engineering and business analytics team.

These Data Engineers should have exceptional SQL skills; 4+ years of experience is more than sufficient. Candidates must have hands-on experience in a Linux environment with Hadoop/Apache Spark and strong SQL skills, including HiveQL. They should also have strong knowledge of Google Cloud, direct experience with automated data quality pipelines, and strong data analysis and troubleshooting skills with an eye toward producing quality data products.

In this role, you’ll be responsible for bringing data into the platform, transforming it into a well-defined, consistent model, moving it to the data stores best suited to API and analytics use cases, and making it easy for applications and consumers to access the data. If you’re eager to work with the latest data-driven technology and want the opportunity to use powerful business intelligence (BI) and data visualization tools, this role is definitely for you!

Responsibilities
  • Support data applications, ad-hoc analysis, and BI systems used for in-depth analytics to identify actionable insights that will influence the direction of the business
  • Analyze requirements to design, develop and test interactive data solutions
  • Develop large-scale systems with high-speed and low-latency data solutions
  • Be well-versed in ETL of large-scale data, low-latency queries, real-time reporting, and APIs for data access
  • Develop APIs and write code to transform and enrich data in both batch and streaming scenarios
  • Build end-to-end reporting solutions from multiple data structures and sources

Requirements
  • BS in Computer Science, Information Systems or related discipline
  • 2+ years of experience in data engineering and building large-scale data platforms
  • 2+ years of experience in SQL to discover, aggregate and extract data
  • 2+ years of experience with Hadoop, Pig, Hive, Spark, Storm, and other big data technologies
  • 2+ years of experience with data visualization tools, such as Tableau and Kibana
  • Experience with databases such as MS SQL Server and MySQL
  • Experience coding in Java, Python, JavaScript (Node.js), or other object-oriented languages
  • Solid Linux and Windows administration skills, and understanding of system performance
  • Strong interpersonal and communication skills, flexibility, commitment to the team, and a positive attitude
  • Experience with data in the SaaS/subscription space is a plus
  • Experience with Apache Beam is a plus
  • Knowledge of Cassandra or other distributed data stores (Redis, MongoDB, Memcached, etc.) is a plus
  • Experience building CI/CD and server/deployment automation solutions is a plus