Big Data Cloud Engineer
Employment Type: Full-Time
Industry: Information Technology
Job ID: T4337 - Big Data Cloud Engineer

Based in Hartford, CT once the WFH mandate is lifted - no other locations.

We are beginning our cloud journey, with first deployments into our production public cloud environment by the end of 2020. We have very limited public cloud (AWS) experience on the existing BIA team and need to seed the team with an experienced professional who would complement deep knowledge of our existing data platform systems (Teradata, Big Data). The person hired will provide administration and platform support to our development teams as new cloud applications are deployed, and will ensure that code and access to data are optimized, minimizing our monthly AWS chargeback costs.

This position requires a high level of knowledge and extensive hands-on experience designing, structuring, optimizing, implementing, and maintaining public cloud (AWS) infrastructures and Big Data environments. The position is on our PI Technology Business Intelligence Analytics Data Platforms support team, which maintains both on-prem (Teradata and Big Data) and off-prem (AWS) environments.

The person hired will administer and provide technical guidance for our data platforms, setting environment standards and development best practices as we migrate workloads from our on-prem environments to the AWS public cloud. This includes implementing and maintaining AWS solutions using EC2, S3, RDS, Redshift, Snowflake, Application Load Balancer, CloudWatch, and/or other supporting cloud technologies, and building environments using automation and configuration management tools (e.g., CloudFormation/Terraform and Ansible/Puppet).

Proficiency with AWS and general cloud infrastructure principles is required, as is experience deploying software applications to production using tools such as Git, Docker, Kubernetes, CI/CD servers, and serverless functions (AWS Lambda, GCP Cloud Functions, etc.).
Proficiency with a programming language such as Python/PySpark and SQL, along with solid general software engineering principles, is required. Experience with application performance management and observability tools (e.g., Splunk) is preferred.