Job Description for Technical Analyst
Required Skill Set:
1. Highly technical and analytical, possessing 3+ years of hands-on experience in designing real-world big data applications.
2. Strong proficiency in programming with Python and PySpark.
3. 3+ years of hands-on experience with big data processing frameworks such as Apache Spark.
4. Thorough understanding of Spark's RDD and DataFrame APIs, and expertise in monitoring, troubleshooting, and tuning the performance of Spark jobs.
5. In-depth knowledge of the Amazon Web Services (AWS) cloud and its services, such as EMR, EC2, RDS, Redshift, EBS, and S3.
6. Knowledge of monitoring, logging, and cost-management tools that integrate with AWS.
7. Strong verbal and written communication skills.
8. Healthcare domain experience, or experience working with longitudinal transaction data, is preferred.
Roles & Responsibilities:
1. Design and implement high-performance ETL pipelines using Apache Spark on AWS EMR.
2. Write reusable, efficient code, and test and tune the performance of Spark scripts.
3. Work closely with the team to understand and incorporate requirements in the application architecture.
4. Ensure overall quality and timely delivery of the work.