Hire an ideal data scientist for your project
Related Work
Fixed Price - Est. Time: 6 months

Our client, a leading finance company, is looking to augment their team with candidates who have the following skills.

Job Description

  1. 3-5 yrs of experience in Cognos 11.x, 10.x
  2. Good knowledge of SQL
  3. Knowledge of creating reports (crosstab, list) in Report Studio
  4. Hands-on experience with Cognos Framework Manager
  5. Hands-on experience with Cognos administration activities
  6. Sound knowledge of data warehousing concepts
  7. Budget is not a constraint.

Location: Delhi NCR region (onsite)

Contract period: minimum 6 months

We are available over chat to address any queries.

Fixed Price - Est. Time: 6 months

Position: PySpark Developer

Must Have Technical Skills: PySpark

Good to have Technical Skills: Spark, DWH

·         Hands-on experience with PySpark, NumPy, Python lists, Pandas, and graph plotting.

·         Excellent knowledge of Python frameworks and Airflow/Luigi/Dagster/Apache Beam.

·         Python skills across data and visualization libraries.

·         Good SQL knowledge.

·         Ability to handle multiple tasks in a continually changing environment.

·         Exposure to Hive/HBase/Redis.

·         Good knowledge of data warehousing.

·         Experience reading from Parquet and other formats.

·         Prior work with Hive and HBase.
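As a rough illustration of the data-manipulation skills listed above (this snippet is not part of the job post; the data and column names are invented, and the grouped aggregation is shown with Pandas/NumPy, two of the listed libraries, rather than a full Spark session — in PySpark the equivalent would be `df.groupBy("region").sum("amount")`):

```python
import pandas as pd

# Hypothetical sales data; the column names are illustrative only.
df = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "amount": [100.0, 250.0, 175.0, 25.0],
})

# Group and aggregate, the same shape of task as a PySpark groupBy/agg.
totals = df.groupby("region")["amount"].sum()
print(totals.to_dict())  # {'north': 275.0, 'south': 275.0}
```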


Number of resources required: 3 to 4

Work location: Remote

Qualification: BTech

Experience: 4 yrs – 5 yrs

Mobilization Period in weeks: 1 week

Duration: 3 to 6 months  

Fixed Price - Est. Time: 6 months

Position: Hadoop Admin

Must Have Technical Skills: Hadoop Admin

Good to have Technical Skills: Linux Admin, ETL

·         Extensive experience with Red Hat Linux and Cloudera is mandatory.

·         Experience in installing, configuring, upgrading, and managing Hadoop environments.

·         Responsible for deployments and for monitoring capacity, performance, and troubleshooting issues.

·         Work closely with data scientists and data engineers to ensure the smooth operation of the platform.

·         End-to-end performance tuning of the clusters.

·         Maintain and administer computing environments, including computer hardware, systems software, applications software, and all configurations.

·         Define procedures for monitoring; evaluate and diagnose system issues and establish work plans to resolve them.

·         Working knowledge of the entire Hadoop ecosystem: HDFS, Hive, YARN, Oozie, Kafka, Impala, Kudu, HBase, Spark, and Spark Streaming.

·         Knowledge of private and public cloud computing and virtualization platforms.

Number of resources required: 2 to 3

Work location: Remote

Qualification: BTech

Experience: 4 yrs – 5 yrs

Mobilization Period in weeks: 1 week

Duration: 3 to 6 months  

Fixed Price - Est. Time: 12

We need a proficient PySpark resource with good experience.

Fixed Price - Est. Time: 1
React web app.