3-4 years of experience building and operationalizing large-scale enterprise data solutions, data lakes, and applications using one or more AWS data and analytics services in combination with 3rd-party tools
* EC2, EMR, S3, Kinesis, DynamoDB, Redshift, RDS, Lambda, Glue, Spark, Snowflake, etc.
* Integration of 3rd-party KMS and HSM with AWS data services for building secure data solutions
* Hadoop stack (HDFS, Pig, Hive, Spark, Ambari, Sqoop, MapReduce, Tez, Ranger, etc.)
* Experience with data lake management tools such as Podium Data, Diyotta, or Informatica BDE is a plus
* Minimum 3 years of hands-on experience analyzing, re-architecting, and re-platforming on-premises data lakes to data platforms on AWS using AWS and 3rd-party services
* Minimum 3 years of designing and building production data pipelines from ingestion to consumption within a hybrid big data architecture, using Podium, Java, Python, Scala, C++ etc.
* Minimum 3 years of architecting and implementing next-generation data and analytics platforms on AWS serving analytics and BI application integrations
* Hands-on AWS experience, with a minimum of 3 years of solution design, build, and implementation at production scale
* 5-8 years of demonstrated knowledge and application of ETL and data warehousing best practices
* 5-8 years of experience with SQL against relational databases, preferably SQL Server and Oracle (10g and above) on Linux/Unix
* 1-2 years of exposure to Logi, SAS, Tableau, R, or other dashboarding/reporting tools is a plus