TRANSCRIPT
Tech Mahindra Sydney role
Send your responses to [email protected]
Hiring Type: Permanent
Location: Sydney
Requirement Details
Client: Telco
Position Title: Big Data Administrator
Number of Positions: 1
Work Location in Australia: Sydney
SubCon/Perm: SubCon/Perm
Budget: Open for right profiles
Start Date Expected: Immediate
Customer Interview: Yes

Detailed Job Description

Job Responsibilities:
Deploying and maintaining a Hadoop cluster, adding and removing nodes using cluster monitoring tools like Ganglia, Nagios or Cloudera Manager, configuring NameNode high availability, and keeping track of all running Hadoop jobs.
Implementing, managing and administering the overall Hadoop infrastructure across various distributions (Cloudera, Hortonworks or IBM BigInsights).
Take care of the day-to-day running of Hadoop clusters.
Participate in architectural discussions and perform system analysis, including a review of existing systems and operating methodologies.
Evaluate the latest technologies and suggest optimal solutions that satisfy current requirements and simplify future modifications.
Design and build the necessary infrastructure for optimal ETL from a variety of data sources.
Collaborate with the business to scope requirements.
Mandatory skills/Experience
5+ years of data engineering experience working with distributed architectures, ETL, EDW and Big Data technologies
Extensive experience working with SQL across a variety of databases
Experience working with both structured and unstructured data sources.
Experience with NoSQL databases, such as HBase, Cassandra, MongoDB or similar
Experience with Big Data tools such as Pig, Hive, Impala, Sqoop, Kafka, Flume, Jupyter
Experience with Hadoop, HDFS, Spark, Informatica BDM
Experience in Linux administration and Unix shell scripting

The person will be responsible for:
Performing Hadoop administration on Production/DR Hadoop clusters.
Performing tuning and increasing operational efficiency on a continuous basis.
Monitoring the health of the platforms, generating performance reports, and providing continuous improvements.
Working closely with development, engineering and operations teams on key deliverables, ensuring production scalability and stability.
Developing and enhancing platform best practices.
Ensuring the Hadoop platform can effectively meet performance and SLA requirements.
Supporting the Hadoop production environment, which includes Hive, Ranger, Kerberos, YARN, Spark, SAS, Kafka, HBase, etc.
Performing optimization and capacity planning of a large multi-tenant cluster.
Identifying, implementing and continuously enhancing the data automation process.
Please include a synopsis for each profile and the candidate's self-rating as per the table below (at the top of their resume); send only Word-document profiles.
Please send the below details when you respond with profiles, for a quick process.
Candidate Name | Position/Role | Agency name | Total Exp | Relevant Exp | Current Company | Current Salary/rate | Expected Sal/rate | Current Location | Lead time to join | Citizen/PR
Name First Name <response>
Surname <response>
Gender <response>
Your mobile number <response>
Your email <response>
Date of birth <response>
Home address <response>
Availability <response>
Rate/hour (excluding GST, including superannuation) $ <response>
Citizenship Status: Australian / Permanent Resident / Work Visa
Referee 1 Name: <response>
Title:
Organisation:
Professional relationship to you:
Phone:
Mobile:
Email:
Referee 2 Name: <response>
Title:
Organisation:
Professional relationship to you:
Phone:
Mobile:
Email:
Other Requirements
Send your responses to [email protected]