SAS Data Integration Consultant:
We have two plants at which we manufacture over 500 items. Operations at both plants run in two shifts. Both plants are capable of manufacturing all 500 items, but the manufacturing cost differs between the plants.
Our senior management wants a dashboard that gives visibility into:
1. What is made at which plant
2. Item-wise manufacturing costs
3. The best production plan, ensuring products are made at the lowest cost
4. The ability to change the plan and see the impact
Experts who can help, please get in touch.
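Points 3 and 4 above amount to a cost-minimizing assignment problem. A minimal sketch in Python, assuming per-item, per-plant unit costs and no capacity or shift constraints (all item names, plant names, and figures below are illustrative, not from the actual data):

```python
def plan_production(costs, overrides=None):
    """Assign each item to its cheapest plant.

    costs: {item: {plant: unit_cost}}
    overrides: {item: plant} -- forced assignments for what-if analysis
    Returns (plan, total_cost).
    """
    overrides = overrides or {}
    plan, total = {}, 0.0
    for item, by_plant in costs.items():
        # Honor a manual override if present, else pick the cheaper plant.
        plant = overrides.get(item) or min(by_plant, key=by_plant.get)
        plan[item] = plant
        total += by_plant[plant]
    return plan, total

# Illustrative cost data for two items across two plants.
costs = {
    "item_a": {"plant_1": 10.0, "plant_2": 12.0},
    "item_b": {"plant_1": 8.0, "plant_2": 6.0},
}

baseline_plan, baseline_cost = plan_production(costs)
# What-if: force item_a to plant_2 and compare total cost.
whatif_plan, whatif_cost = plan_production(costs, overrides={"item_a": "plant_2"})
```

Note that the greedy per-item choice is only optimal when plants are uncapacitated; with shift or capacity limits, the problem becomes a linear program and needs an LP solver.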
Data Scientist:
• Resources with 3-5 years of experience in Data Science, building predictive models
• Knowledge of Optimization and Simulation
• Good proficiency in R and Python
• Knowledge of Trade Promotion Optimization would be a plus
Experience:
Job location:
Duration:
Joining:
We need help with our inventory planning. More details to be shared. Interested parties, please contact us.
Position: Support Engineer
Must Have Technical Skills: Azure Data Factory
Good to have Technical Skills: Knowledge of SQL or Hive, Python, PySpark, HDFS or ADLS
Preferred Industry Experience: Manufacturing
Role:
Monitor batch pipelines in the Data Analytical Platform and provide workarounds for problems.
Troubleshoot failures, provide workarounds, and determine root causes.
Desired Qualification:
· 0-3 years of data engineering (DE) or batch support experience.
· Must be willing to work entirely on the night shift.
· Ability to work on a task independently or with minimal supervision.
· Knowledge of a data orchestration tool, preferably ADF.
· Knowledge of SQL or Hive, Python, PySpark, HDFS or ADLS.
No. of Resources required: 1 to 2
Work location: Remote
Qualification: BE, BTech, MBA, or MCA; Computer Science background preferred
Experience: 0-3 Years
Mobilization Period in weeks: 1 week
Duration: 6 to 12 months
Position: Hadoop Admin
Must Have Technical Skills: Hadoop Admin
Good to have Technical Skills: Linux Admin, ETL
· Extensive experience with RedHat Linux and Cloudera is mandatory.
· Experience in installing, configuring, upgrading, and managing Hadoop environments.
· Responsible for deployments and for monitoring capacity, performance, and troubleshooting issues.
· Work closely with data scientists and data engineers to ensure the smooth operation of the platform.
· End-to-end performance tuning of the clusters.
· Maintain and administer computing environments, including computer hardware, systems software, applications software, and all configurations.
· Define procedures for monitoring; evaluate, diagnose, and establish work plans to resolve system issues.
· Working knowledge of the entire Hadoop ecosystem: HDFS, Hive, YARN, Oozie, Kafka, Impala, Kudu, HBase, Spark, and Spark Streaming.
· Knowledge of private and public cloud computing and virtualization platforms.
No. of Resources required: 2 to 3
Work location: Remote
Qualification: BTech
Experience: 4-5 Years
Mobilization Period in weeks: 1 week
Duration: 3 to 6 months