We want to increase conversions among the unique visitors who come to our website each day. We capture basic prospecting data about our visitors and would like to use that data to increase our leads and sales. We are looking for an agency that can deliver a tangible improvement through a short-term project, which we could subsequently convert into a long-term retainer-based contract.
Our client, a leading finance company, is looking to augment their team with suitable candidates who have the following skills.
Job Description
Location: Delhi NCR region. (Onsite)
Contract period: Min 6 months
We are available over chat to address any queries.
Position: PySpark Developer
Must Have Technical Skills: PySpark
Good to have Technical Skills: Spark, DWH
· Hands-on experience with PySpark and with NumPy, Pandas, Python lists, and graph-plotting libraries.
· Excellent knowledge of Python frameworks and of Airflow/Luigi/Dagster/Apache Beam.
· Python skills across a range of data and visualization libraries.
· Good SQL knowledge.
· Ability to perform multiple tasks in a continually changing environment.
· Exposure to Hive/HBase/Redis.
· Good knowledge of Data warehousing.
· Experience reading from Parquet and other file formats.
· Experience working with Hive and HBase.
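As a minimal illustration of the DAG concept behind the orchestrators listed above (Airflow/Luigi/Dagster), here is a plain-Python sketch using the standard library's graphlib; the task names and the run_pipeline helper are hypothetical, not any library's actual API:

```python
# Illustrative-only sketch of the DAG idea behind workflow orchestrators:
# tasks declare upstream dependencies and run in topological order.
from graphlib import TopologicalSorter

# task name -> set of upstream tasks it depends on (hypothetical pipeline)
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

def run_pipeline(dag):
    # static_order() yields each task only after all of its dependencies
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        print(f"running {task}")  # a real orchestrator would execute work here
    return order

order = run_pipeline(dag)
```

static_order() guarantees every task appears after all of its upstream dependencies, which is the core ordering guarantee real orchestrators provide on top of scheduling, retries, and monitoring.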
No. of resources required: 3 to 4
Work location: Remote
Qualification: BTech
Experience: 4 yrs – 5 yrs
Mobilization Period in weeks: 1 week
Duration: 3 to 6 months
Visualization expert:
· Resources with 3-5 years of experience creating user interfaces using Power BI and R-Shiny.
· Understanding of basic analytics concepts such as predictive modelling, optimization, and simulation.
· Knowledge of Python would be desirable.
· Knowledge of Trade Promotion Optimization would be a plus.
· Excellent written and verbal communication.
· Good organizational skills.
· Ability to work as part of a team.
Experience:
Job location:
Duration:
Joining:
Position: Azure PySpark
PySpark is required; data warehousing and Azure skills will be an add-on.
· Must have low-level design and development skills; should be able to design a solution for given use cases.
· Agile delivery: must be able to demonstrate design and code on a daily basis.
· Must be an experienced PySpark developer with Scala coding skills; the primary skill is PySpark.
· Must have experience designing job orchestration and sequencing, metadata design, audit trails, dynamic parameter passing, and error/exception handling.
· Good experience with unit testing, integration testing, and UAT support.
· Able to design and code reusable components and functions.
· Should be able to review design and code, and provide review comments with justification.
· Zeal to learn and adopt new tools/technologies.
· Good to have: experience with DevOps and CI/CD.
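The orchestration requirements above (job sequencing, audit trail, dynamic parameter passing, error/exception handling) can be sketched in plain Python; this is a minimal illustration with hypothetical names, not a prescribed implementation:

```python
# Illustrative-only sketch of job orchestration with dynamic parameter
# passing, an audit trail, and error/exception handling, in plain Python.
# All names here are hypothetical and not tied to any specific framework.
import datetime

audit_log = []  # audit trail: one record per job run

def run_job(job_fn, **params):
    """Run one job with dynamically passed parameters, recording an audit entry."""
    record = {
        "job": job_fn.__name__,
        "params": params,
        "started": datetime.datetime.now().isoformat(),
    }
    try:
        record["result"] = job_fn(**params)
        record["status"] = "SUCCESS"
    except Exception as exc:
        # Capture the failure in the audit trail instead of crashing the run.
        record["status"] = "FAILED"
        record["error"] = f"{type(exc).__name__}: {exc}"
    finally:
        audit_log.append(record)
    return record

def extract(source):
    # Stand-in for a real job step; fails if misconfigured.
    if not source:
        raise ValueError("no source configured")
    return f"rows from {source}"

# Orchestrate a simple sequence; the second call demonstrates error handling.
ok = run_job(extract, source="sales.parquet")
bad = run_job(extract, source="")
```

In a production design the audit trail would typically go to a database table rather than an in-memory list, and parameters would come from job metadata rather than hard-coded calls.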
No. of resources required: 1 to 2
Work location: Bangalore
Experience: 8 yrs – 9 yrs
Mobilization Period in weeks: 2 weeks
Position: Hadoop Admin
Must Have Technical Skills: Hadoop Admin
Good to have Technical Skills: Linux Admin, ETL
· Extensive experience with RedHat Linux and Cloudera is mandatory.
· Experience in installing, configuring, upgrading and managing Hadoop environment.
· Responsible for deployments, capacity and performance monitoring, and troubleshooting.
· Work closely with data scientists and data engineers to ensure the smooth operation of the platform.
· End-to-end performance tuning of the clusters.
· Maintains and administers computing environments including computer hardware, systems software, applications software, and all configurations.
· Defines monitoring procedures; evaluates, diagnoses, and establishes work plans to resolve system issues.
· Working knowledge of the entire Hadoop ecosystem, including HDFS, Hive, YARN, Oozie, Kafka, Impala, Kudu, HBase, Spark, and Spark Streaming.
· Knowledge of private and public cloud computing and virtualization platforms.
No. of resources required: 2 to 3
Work location: Remote
Qualification: BTech
Experience: 4 yrs – 5 yrs
Mobilization Period in weeks: 1 week
Duration: 3 to 6 months