Fixed Price - Est. Time: 12 months

Position: Support Engineer

Must Have Technical Skills: Azure Data Factory

Good to have Technical Skills: Knowledge of SQL or Hive, Python, PySpark, and HDFS or ADLS

Preferred Industry Experience: Manufacturing

Role:

Monitor batch pipelines in the Data Analytical Platform and provide workarounds for problems.

Troubleshoot failures, provide workarounds, and determine root cause.

Desired Qualification:

·         0–3 years of data engineering or batch support experience.

·         Must be willing to work entirely on the night shift.

·         Ability to work on a task independently or with minimal supervision.

·         Knowledge of a data orchestration tool, preferably Azure Data Factory (ADF).

·         Knowledge of SQL or Hive, Python, PySpark, and HDFS or ADLS.

No. of resources required: 1 to 2

Work location: Remote

Qualification: BE, BTech, MBA, or MCA; computer science background preferred

Experience: 0–3 years

Mobilization Period in weeks: 1 week

Duration: 6 to 12 months  

Fixed Price - Est. Time: 12 months

SAS Analytics Consultant: 

  • Identifies trends, patterns, and critical business insights using SAS
  • Analytical model development, e.g. predictive modelling, logistic regression, linear regression, forecasting, and time series analysis
  • Manage all data for specific modules of analytical projects
  • Contribute to building the analytical construct directed towards solving the business problem
  • Work with the business team to specify analysis tasks to generate insights
  • Design and execute the analysis tasks
  • No. of positions: 2
  • Role & responsibilities: development and testing skills
  • Experience: 3–6 years
  • Location – Delhi NCR
Fixed Price - Est. Time: 12 months

Position: Azure PySpark Developer

PySpark is the core requirement; data warehousing and Azure experience is an add-on.

·         Must have low-level design and development skills; should be able to design a solution for a given use case.

·         Agile delivery: must be able to demonstrate design and code on a daily basis.

·         Must be an experienced PySpark developer with Scala coding skills; the primary skill is PySpark.

·         Must have experience in designing job orchestration, sequencing, metadata design, audit trails, dynamic parameter passing, and error/exception handling.

·         Good experience with unit testing, integration testing, and UAT support.

·         Able to design and code reusable components and functions

·         Should be able to review design and code, and provide review comments with justification.

·         Eagerness to learn and adopt new tools and technologies.

·         Good to have: experience with DevOps and CI/CD.

No. of resources required: 1 to 2

Work location: Bangalore

Experience: 8–9 years

Mobilization Period in weeks: 2 weeks

Fixed Price - Est. Time: 3 months

Data Scientist:

• 3–5 years of experience in data science building predictive models
• Knowledge of optimization and simulation
• Good proficiency in R and Python
• Knowledge of Trade Promotion Optimization would be a plus

Experience:

  • 3 to 5 years

Job location:

  • Bangalore/Chennai/ Hyderabad (WFH at the moment)

Duration:

  • 3 months; may be extended.

Joining:

  • Immediate
Fixed Price - Est. Time: 6 months

Position: Hadoop Admin

Must Have Technical Skills: Hadoop Admin

Good to have Technical Skills: Linux Admin, ETL

·         Extensive experience with RedHat Linux and Cloudera is mandatory.

·         Experience in installing, configuring, upgrading and managing Hadoop environment.

·         Responsible for deployments, and for monitoring capacity, performance, and troubleshooting issues.

·         Work closely with data scientists and data engineers to ensure the smooth operation of the platform.

·         End-to-end performance tuning of the clusters.

·         Maintains and administers computing environments including computer hardware, systems software, applications software, and all configurations.

·         Defines procedures for monitoring; evaluates, diagnoses, and establishes work plans to resolve system issues.

·         Working knowledge of the entire Hadoop ecosystem: HDFS, Hive, YARN, Oozie, Kafka, Impala, Kudu, HBase, Spark, and Spark Streaming.

·         Knowledge of private and public cloud computing and virtualization platforms.

No. of resources required: 2 to 3

Work location: Remote

Qualification: BTech

Experience: 4–5 years

Mobilization Period in weeks: 1 week

Duration: 3 to 6 months