Sabiha Redmond

Employed, Big Data Engineer, DevsData LLC

Auckland, New Zealand

Career history

Professional experience of Sabiha Redmond

  • To date 5 years and 9 months, since Oct. 2018

    Big Data Engineer

    DevsData LLC

    • Analyzed large, business-critical datasets using Cloudera, HDFS, HBase, MapReduce, Hive, Hive UDFs, Pig, Sqoop, ZooKeeper, and Spark.
    • Developed Spark applications in Scala and Java and implemented an Apache Spark data processing project to handle data from various RDBMS and streaming sources (see the sketch below).
    • Improved the performance of existing Hadoop algorithms by optimizing them with Spark, using Spark Context, Spark SQL, Spark MLlib, DataFrames, pair RDDs, and Spark on YARN.
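
    A minimal sketch, in Java, of the kind of Spark job described above: a batch application that reads a table from an RDBMS over JDBC into a DataFrame and aggregates it with Spark SQL. The connection URL, table, column names, and output path are hypothetical placeholders, not the project's actual configuration.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.*;

    public class RdbmsBatchJob {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("rdbms-batch-job")
                    .getOrCreate();

            // Read a source table over JDBC (URL, table, and credentials are placeholders).
            Dataset<Row> orders = spark.read()
                    .format("jdbc")
                    .option("url", "jdbc:mysql://db-host:3306/sales")
                    .option("dbtable", "orders")
                    .option("user", "etl_user")
                    .option("password", System.getenv("DB_PASSWORD"))
                    .load();

            // Aggregate with the DataFrame API rather than hand-written MapReduce.
            Dataset<Row> dailyRevenue = orders
                    .groupBy(to_date(col("created_at")).alias("day"))
                    .agg(sum(col("amount")).alias("revenue"));

            // Persist the result to HDFS as Parquet, partitioned by day.
            dailyRevenue.write()
                    .mode("overwrite")
                    .partitionBy("day")
                    .parquet("hdfs:///warehouse/daily_revenue");

            spark.stop();
        }
    }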

  • 2 years and 2 months, June 2016 - July 2018

    Big Data Specialist

    DevSecOps Academy

    • Primary responsibilities included building scalable distributed data solutions using the Hadoop ecosystem.
    • Experienced in designing and deploying Hadoop clusters and various big data analytics tools, including Pig, Hive, Flume, HBase, and Sqoop.
    • Imported weblogs and unstructured data using Apache Flume and stored them in a Flume channel.
    • Loaded CDRs into the Hadoop cluster from a relational database using Sqoop, and from other sources using Flume.
    • Developed business logic in a Flume interceptor in Java (see the sketch below).
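
    A minimal sketch of a custom Flume interceptor, using the standard org.apache.flume.interceptor.Interceptor API. The header name and the logic itself are illustrative placeholders standing in for the business logic mentioned above.

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.flume.Context;
    import org.apache.flume.Event;
    import org.apache.flume.interceptor.Interceptor;

    // Stamps each event with an ingest-time header before it reaches the channel.
    public class IngestTimeInterceptor implements Interceptor {

        @Override
        public void initialize() {
            // No state to set up in this sketch.
        }

        @Override
        public Event intercept(Event event) {
            // The business logic goes here; this example only adds a header.
            event.getHeaders().put("ingest_ts", Long.toString(System.currentTimeMillis()));
            return event;
        }

        @Override
        public List<Event> intercept(List<Event> events) {
            List<Event> out = new ArrayList<>(events.size());
            for (Event e : events) {
                out.add(intercept(e));
            }
            return out;
        }

        @Override
        public void close() {
            // Nothing to release.
        }

        // Flume instantiates interceptors through a nested Builder named in the agent config.
        public static class Builder implements Interceptor.Builder {
            @Override
            public Interceptor build() {
                return new IngestTimeInterceptor();
            }

            @Override
            public void configure(Context context) {
                // Read agent-configuration properties here if needed.
            }
        }
    }

    The interceptor is attached to a source in the agent configuration via its fully qualified Builder class name, e.g. agent.sources.r1.interceptors.i1.type = IngestTimeInterceptor$Builder.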

  • 1 year and 5 months, Jan. 2015 - May 2016

    Hadoop Developer

    IFC

    • Analyzed the Hadoop cluster and various big data analytics tools, including Pig, HBase, and Sqoop.
    • Worked with Linux systems and an RDBMS on a regular basis to ingest data using Sqoop.

  • 2 years and 9 months, Apr. 2013 - Dec. 2015

    Big Data Developer

    Geodis

    • Implemented Kafka consumers for HDFS and Spark Streaming.
    • Used Sqoop, Kafka, Flume, and the Hadoop File System APIs to implement data ingestion pipelines from heterogeneous data sources.
    • Created storage with Amazon S3 and worked on transferring data from a Kafka topic into AWS S3 (see the sketch below).
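
    A minimal sketch of a Kafka-to-S3 transfer, assuming the standard Java Kafka consumer client and the AWS SDK for Java v1. The broker address, topic, group ID, and bucket name are hypothetical placeholders, not the project's actual setup.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Polls a Kafka topic and writes each non-empty batch to S3 as one object.
    public class KafkaToS3 {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka-broker:9092"); // placeholder broker
            props.put("group.id", "s3-sink");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("events")); // placeholder topic

                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                    if (records.isEmpty()) {
                        continue;
                    }
                    // Concatenate the batch into one newline-delimited object body.
                    StringBuilder batch = new StringBuilder();
                    for (ConsumerRecord<String, String> record : records) {
                        batch.append(record.value()).append('\n');
                    }
                    String key = "events/" + System.currentTimeMillis() + ".ndjson";
                    s3.putObject("my-data-lake", key, batch.toString()); // placeholder bucket
                }
            }
        }
    }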
