Hire an ideal data scientist for your project
Related Work
Fixed Price - Est. Time: 2 months

We want to increase conversions among the unique visitors who come to our website each day. We capture basic prospecting data about these visitors and would like to leverage it to increase our leads and sales. We are looking for an agency that can deliver a tangible improvement through a short-term project; subsequently, we could convert this into a long-term, retainer-based contract.

Fixed Price - Est. Time: 3 months

Web Scraping Programmers:

·        As a Python developer, your role is to fetch data from multiple online sources, cleanse it, and build APIs on top of it

·        Develop a deep understanding of our vast data sources on the web and know exactly how, when, and which data to scrape, parse, and store

·        Work closely with Database Administrators to store data in SQL and NoSQL databases

·        Develop frameworks for automating and maintaining the constant flow of data from multiple sources

·        Work independently with little supervision to research and test innovative solutions


Skills and Qualifications:

·        Strong coding experience in Python (knowledge of Java or JavaScript is a plus)

·        Experience with SQL databases

·        Experience with multi-processing, multi-threading and AWS/Azure

·        Strong knowledge of scraping libraries and frameworks such as Python (Requests, BeautifulSoup), Web-Harvest, and others (a minimal sketch follows this list)

·        Previous experience with web crawling is a must
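
Since the role revolves around the fetch, parse, and store flow described above, here is a minimal sketch of that loop, assuming the requests and beautifulsoup4 packages and a local SQLite table; the URL, CSS selectors, and schema are hypothetical placeholders, not a specific client stack.

```python
# Minimal scrape-parse-store sketch (hypothetical source and schema).
import sqlite3

import requests
from bs4 import BeautifulSoup

SOURCE_URL = "https://example.com/listings"  # hypothetical source


def fetch_listings(url: str) -> list[dict]:
    """Fetch a page and parse listing rows into dictionaries."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    rows = []
    for item in soup.select("div.listing"):  # hypothetical CSS selector
        title = item.select_one("h2")
        price = item.select_one("span.price")
        rows.append({
            "title": title.get_text(strip=True) if title else None,
            "price": price.get_text(strip=True) if price else None,
        })
    return rows


def store_listings(rows: list[dict], db_path: str = "listings.db") -> None:
    """Persist parsed rows into a local SQL table."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS listings (title TEXT, price TEXT)"
        )
        conn.executemany(
            "INSERT INTO listings (title, price) VALUES (:title, :price)", rows
        )


if __name__ == "__main__":
    store_listings(fetch_listings(SOURCE_URL))
```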

Experience:

  • 1 to 2 years

Job location:

  • Bangalore / Chennai / Hyderabad

Duration:

  • 3 months. May get extended.

Joining:

  • immediate
Fixed Price - Est. Time: 3 months

Power BI and Reporting:

1) Proficient in building Power BI dashboards, SSAS cubes, and data warehouse solutions using Microsoft BI tools and industry best practices

2) Understand and deliver business scenarios for improvement on the BI platform from the client's perspective

3) Manage client interactions and expectations, considering all aspects of building a long-term, sustainable solution

4) Create dashboard designs, SSAS models, and data models on the database; suggest ETL data flows; and collectively support the team in development

5) Manage onsite and offshore assignments

6) Good knowledge of data warehouse modeling (a minimal star-schema sketch follows this list)

7) Report to and support management on day-to-day activities and deliveries

8) Follow a maker-checker approach in development
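
As a small illustration of the data warehouse modeling mentioned in item 6, here is a minimal star-schema sketch; the sales-mart tables are hypothetical, and SQLite (Python standard library) stands in for whatever warehouse platform is actually used.

```python
# Minimal star-schema sketch: one fact table joined to two dimensions (hypothetical sales mart).
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS dim_date (
    date_key     INTEGER PRIMARY KEY,   -- e.g. 20240131
    full_date    TEXT NOT NULL,
    month        INTEGER NOT NULL,
    year         INTEGER NOT NULL
);
CREATE TABLE IF NOT EXISTS dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT NOT NULL,
    category     TEXT
);
CREATE TABLE IF NOT EXISTS fact_sales (
    sales_key    INTEGER PRIMARY KEY,
    date_key     INTEGER NOT NULL REFERENCES dim_date(date_key),
    product_key  INTEGER NOT NULL REFERENCES dim_product(product_key),
    quantity     INTEGER NOT NULL,
    amount       REAL NOT NULL
);
"""

if __name__ == "__main__":
    with sqlite3.connect("sales_mart.db") as conn:
        conn.executescript(DDL)
        print("Star schema created: fact_sales, dim_date, dim_product")
```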

Experience:

  • 2 to 3 years

Job location:

  • Bangalore / Chennai / Hyderabad (WFH at the moment)

Duration:

  • 3 months. May get extended.

Joining:

  • immediate
Fixed Price - Est. Time: 12 months

Position: Azure PySpark

PySpark is required; data warehousing and Azure experience will be an add-on.

·         Must have low-level design and development skills. Should be able to design a solution for given use cases.

·         Agile delivery: must be able to show design and code on a daily basis

·         Must be an experienced PySpark developer with Scala coding skills; the primary skill is PySpark

·         Must have experience in designing job orchestration and sequencing, metadata design, audit trails, dynamic parameter passing, and error/exception handling (see the sketch after this list)

·         Good experience with unit testing, integration testing, and UAT support

·         Able to design and code reusable components and functions

·         Should be able to review designs and code, and provide review comments with justification

·         Zeal to learn and adopt new tools and technologies

·         Good to have experience with DevOps and CI/CD
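
To illustrate the dynamic parameter passing and error/exception handling expected above, here is a minimal PySpark job sketch; the column name, paths, and logging setup are hypothetical placeholders, not the client's design.

```python
# Minimal reusable PySpark job sketch: dynamic parameters plus error handling.
import argparse
import logging
import sys

from pyspark.sql import DataFrame, SparkSession

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("sample_job")


def transform(df: DataFrame, filter_value: str) -> DataFrame:
    """Reusable transformation: keep rows matching a parameterised filter."""
    return df.filter(df["status"] == filter_value)  # hypothetical column


def main() -> int:
    parser = argparse.ArgumentParser()
    parser.add_argument("--input-path", required=True)   # e.g. an ADLS/HDFS path
    parser.add_argument("--output-path", required=True)
    parser.add_argument("--status", default="ACTIVE")    # dynamic parameter
    args = parser.parse_args()

    spark = SparkSession.builder.appName("sample_job").getOrCreate()
    try:
        df = spark.read.parquet(args.input_path)
        transform(df, args.status).write.mode("overwrite").parquet(args.output_path)
        log.info("Job finished: %s -> %s", args.input_path, args.output_path)
        return 0
    except Exception:
        # Audit trail: record the failure so the orchestrator can react.
        log.exception("Job failed for input %s", args.input_path)
        return 1
    finally:
        spark.stop()


if __name__ == "__main__":
    sys.exit(main())
```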

No. of resources required: 1 to 2

Work location: Bangalore

Experience: 8 yrs – 9 yrs

Mobilization Period in weeks: 2 weeks

Fixed Price - Est. Time: 6 months

Position: Hadoop Admin

Must Have Technical Skills: Hadoop Admin

Good to have Technical Skills: Linux Admin, ETL

·         Extensive experience with RedHat Linux and Cloudera is mandatory.

·         Experience in installing, configuring, upgrading and managing Hadoop environment.

·         Responsible for deployments and for monitoring capacity, performance, and troubleshooting issues (a monitoring sketch follows this list).

·         Work closely with data scientists and data engineers to ensure the smooth operation of the platform.

·         End-to-end performance tuning of the clusters.

·         Maintain and administer computing environments including computer hardware, systems software, applications software, and all configurations.

·         Define procedures for monitoring; evaluate and diagnose system issues and establish work plans to resolve them.

·         Working knowledge of the entire Hadoop ecosystem, including HDFS, Hive, YARN, Oozie, Kafka, Impala, Kudu, HBase, Spark, and Spark Streaming.

·         Knowledge of private and public cloud computing and virtualization platforms.
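
As one small example of the routine capacity monitoring implied above, the sketch below shells out to the standard hdfs dfsadmin -report command; it assumes the Hadoop client binaries are on the PATH, and the alert threshold is a hypothetical value rather than a Cloudera-specific setting.

```python
# Minimal cluster capacity check: parse `hdfs dfsadmin -report` output.
import re
import subprocess


def dfs_used_percent() -> float:
    """Return the 'DFS Used%' figure reported by hdfs dfsadmin."""
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"DFS Used%:\s*([\d.]+)%", report)
    if match is None:
        raise RuntimeError("Could not find 'DFS Used%' in dfsadmin report")
    return float(match.group(1))


if __name__ == "__main__":
    used = dfs_used_percent()
    print(f"DFS used: {used:.1f}%")
    if used > 80.0:  # hypothetical alert threshold
        print("WARNING: cluster is above the capacity threshold")
```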

No. of resources required: 2 to 3

Work location: Remote

Qualification: BTech

Experience: 4 yrs – 5 yrs

Mobilization Period in weeks: 1 week

Duration: 3 to 6 months  

Fixed Price - Est. Time: 12 months

Position: Support Engineer

Must Have Technical Skills: Azure Data Factory

Good to have Technical Skills: Knowledge of SQL or Hive, Python, PySpark, HDFS or ADLS

Preferred Industry Experience: Manufacturing

Role:

Monitor batch pipelines in the data analytics platform and provide workarounds for problems.

Troubleshoot issues upon failure, provide workarounds, and determine root causes.
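
A minimal sketch of this monitoring task, assuming the azure-identity and azure-mgmt-datafactory Python SDKs; the subscription, resource group, and factory names are placeholders.

```python
# Minimal ADF batch monitoring sketch: list recent failed pipeline runs.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
FACTORY_NAME = "<data-factory-name>"    # placeholder


def failed_runs_last_24h() -> list:
    """Query pipeline runs from the last 24 hours and keep the failures."""
    client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
    now = datetime.now(timezone.utc)
    filters = RunFilterParameters(
        last_updated_after=now - timedelta(hours=24),
        last_updated_before=now,
    )
    response = client.pipeline_runs.query_by_factory(
        RESOURCE_GROUP, FACTORY_NAME, filters
    )
    return [run for run in response.value if run.status == "Failed"]


if __name__ == "__main__":
    for run in failed_runs_last_24h():
        print(run.pipeline_name, run.run_id, run.status)
```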

Desired Qualification:

·         0-3 years of data engineering (DE) or batch support experience.

·         Must be willing to work completely in the night shift.

·         Ability to work on a task independently or with minimal supervision.

·         Knowledge of a data orchestration tool, preferably ADF.

·         Knowledge of SQL or Hive, Python, PySpark, HDFS or ADLS.

·         Work location: Remote.

No. of resources required: 1 to 2

Work location: Remote

Qualification: BE, BTech, MBA, MCA; computer science background preferred

Experience: 0-3 Years

Mobilization Period in weeks: 1 week

Duration: 6 to 12 months