SAS Data Quality Consultant:
Position: Support Engineer
Must Have Technical Skills: Azure Data Factory
Good to have Technical Skills: Knowledge of SQL or Hive, Python, PySpark, HDFS or ADLS
Preferred Industry Experience: Manufacturing
Role:
Monitor batch pipelines in the Data Analytical Platform and provide workarounds for problems.
Troubleshoot issues upon failures, provide workarounds, and determine root causes.
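The monitor-and-triage workflow above can be sketched in plain Python. This is an illustrative sketch only: the run records and field names are hypothetical stand-ins for what a pipeline monitoring API (such as ADF's) would actually return.

```python
# Hypothetical run records, shaped like what a monitoring API might return;
# in practice these would come from the orchestration tool (e.g. ADF).
runs = [
    {"pipeline": "daily_load", "status": "Succeeded"},
    {"pipeline": "hourly_sync", "status": "Failed",
     "error": "Timeout reading source"},
]

def triage(runs):
    """Separate failed runs so a workaround and root-cause
    analysis can be started for each one."""
    failed = [r for r in runs if r["status"] == "Failed"]
    for r in failed:
        # In a real setup this would raise an alert/ticket, not print.
        print(f"ALERT {r['pipeline']}: {r.get('error', 'unknown error')}")
    return failed

failures = triage(runs)
```

The same loop generalizes to any scheduler that exposes run status: poll, filter on a failed state, and hand each failure to the on-call engineer.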
Desired Qualification:
· 0-3 years of data engineering (DE) or batch support experience.
· Must be willing to work entirely on the night shift.
· Ability to work on tasks independently or with minimal supervision.
· Knowledge of a data orchestration tool, preferably Azure Data Factory (ADF).
· Knowledge of SQL or Hive, Python, PySpark, HDFS or ADLS.
No. of resources required: 1 to 2
Work location: Remote
Qualification: BE, BTech, MBA, or MCA; Computer Science background preferred
Experience: 0-3 Years
Mobilization Period in weeks: 1 week
Duration: 6 to 12 months
Looking for a visual representation of our analytics for the past 3 years to understand customer behaviour. We will provide complete access to our analytics account. Do let me know if you need any specific details to present this report.
Data Scientist:
• Resources with 3-5 years of experience in data science building predictive models
• Knowledge of optimization and simulation
• Good proficiency in R and Python
• Knowledge of Trade Promotion Optimization would be a plus
Experience:
Job location:
Duration:
Joining:
Position: Azure PySpark
PySpark is the primary skill; data warehousing and Azure experience will be an add-on.
· Must have low-level design and development skills; should be able to design a solution for a given use case.
· Agile delivery: must be able to show design and code on a daily basis.
· Must be an experienced PySpark developer with Scala coding skills; the primary skill is PySpark.
· Must have experience in designing job orchestration, sequencing, metadata design, audit trails, dynamic parameter passing, and error/exception handling.
· Good experience with unit testing, integration testing, and UAT support.
· Able to design and code reusable components and functions.
· Should be able to review design and code, and provide review comments with justification.
· Zeal to learn and adopt new tools/technologies.
· Good to have: experience with DevOps and CI/CD.
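The orchestration skills listed above (job sequencing, audit trail, dynamic parameter passing, error/exception handling) can be illustrated with a minimal stdlib Python sketch. All job names, parameters, and the audit-record shape are hypothetical, not part of any specific framework.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orchestrator")

# Hypothetical audit trail: one record appended per job run.
audit_trail = []

def run_job(name, func, **params):
    """Run one job with dynamically passed parameters, recording an
    audit entry and handling errors without aborting the sequence."""
    entry = {"job": name, "params": params,
             "started": datetime.now(timezone.utc).isoformat()}
    try:
        entry["result"] = func(**params)
        entry["status"] = "SUCCESS"
    except Exception as exc:  # error/exception handling per job
        entry["status"] = "FAILED"
        entry["error"] = str(exc)
        log.error("job %s failed: %s", name, exc)
    finally:
        entry["finished"] = datetime.now(timezone.utc).isoformat()
        audit_trail.append(entry)
    return entry

# Jobs run in sequence; parameters are supplied dynamically per run.
run_job("extract", lambda path: f"read {path}", path="/data/in.parquet")
run_job("transform", lambda divisor: 100 / divisor, divisor=0)  # fails, is audited
```

In a production stack the same pattern maps onto an orchestrator's activities and run logs (e.g. ADF or Airflow), with the audit trail persisted to a table rather than an in-memory list.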
No. of resources required: 1 to 2
Work location: Bangalore
Experience: 8 yrs – 9 yrs
Mobilization Period in weeks: 2 weeks
Position: PySpark Developer
Must Have Technical Skills: PySpark
Good to have Technical Skills: Spark, DWH
· Hands-on experience with PySpark, NumPy, lists, pandas, and graph plotting.
· Excellent knowledge of Python frameworks and Airflow/Luigi/Dagster/Apache Beam.
· Python skills with a range of data and visualization libraries.
· Good SQL knowledge.
· Ability to perform multiple tasks in a continually changing environment.
· Exposure to Hive/HBase/Redis.
· Good knowledge of data warehousing.
· Reading from Parquet and other formats.
· Worked on Hive and HBase.
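The "good SQL knowledge" and data-warehousing items above amount to writing warehouse-style aggregations over fact tables. A minimal sketch using Python's built-in sqlite3 module (the `sales` table and its columns are hypothetical, chosen only for illustration):

```python
import sqlite3

# In-memory database with a hypothetical sales fact table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("North", 100.0), ("North", 50.0), ("South", 75.0)])

# A typical warehouse-style aggregation: total sales per region.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM sales GROUP BY region ORDER BY total DESC"
).fetchall()
conn.close()
```

The same GROUP BY/aggregate pattern carries over directly to Hive or Spark SQL queries against Parquet-backed tables.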
No. of resources required: 3 to 4
Work location: Remote
Qualification: BTech
Experience: 4 yrs – 5 yrs
Mobilization Period in weeks: 1 week
Duration: 3 to 6 months