Atkins has recently defined a “One Atkins 2020” organisational strategy. To support this strategy, Atkins Group IS (GrIS) has defined an operating model for 2020 to revolutionise the way information services are provided to the organisation, moving from a business-as-usual (BAU) focus to a business-value-driven organisation. Key concepts include moving to digital products, adopting agile across the organisation, leveraging the cloud, and focusing on supporting bid and delivery.
Atkins Digital Services (ADS) operates within GrIS with a mission to acquire, design, develop and deploy digital products that will add value to the business by enabling us to: win work at higher margins, deliver work more efficiently and at lower cost, develop new revenue streams, collaborate and share knowledge more effectively, and manage our business and support our people. Supporting ADS is a global development team that delivers applications to both the business and clients.
Atkins Digital Services (ADS) is looking to expand its capability by hiring a Big Data Platform Engineer. The new role will be part of a multi-disciplinary team responsible for new development, feature additions, maintenance and support of a number of key components of Big Data products and platforms. The data platform will be cloud-based and multi-tenanted, and will support several analytical applications ranging from descriptive to predictive and cognitive.
The Big Data Platform Engineer will be responsible for the management and administration of all Big Data and distributed computing platforms across Atkins Global.
Working collaboratively with ADS colleagues and the wider Group IS team, you will bring creativity, technical knowledge and platform engineering experience to support the development of new data products.
• Own and be accountable for the management and administration of the Big Data platform.
• Own and manage system upgrades and platform security on the Big Data platform.
• Plan, manage and perform platform and connected ecosystem product upgrades in line with the roadmap, ensuring alignment with new versions made available by open-source and technology partner communities.
• Own and be accountable for the design and implementation of proactive capacity management.
• Own the build, configuration, administration and management of Big Data platforms and technologies.
• Ensure that all components of the Big Data service as a whole (and Hadoop with all its ecosystem components in particular) are running normally and to SLAs.
• Troubleshoot cluster and ecosystem product issues as a priority, bringing expertise and leading them through to resolution within the SLAs defined for the operational environment.
• Troubleshoot network and infrastructure issues as a priority, bringing expertise and leading them through to resolution within the SLAs defined for the operational environment.
• Assess the impact of changes to the platform arising from key projects.
• Define and build lightweight (low-overhead) tooling that monitors system characteristics in real time, together with tools for correlating and analysing those statistics, to ensure the good health of the Big Data infrastructure (see the monitoring sketch after this list).
• Produce and manage advanced workload characterisation, benchmarks and metrics.
• Instil a service mindset and apply a strong focus on business continuity and troubleshooting to ensure a good service.
• Act as the first-level go-to person for technical guidance and expert support on consumer questions about operating services on the platform.
• Serve as second-level support to Big Data operations for infrastructure and component services.
• Work with engineering support on cluster upgrades, participating in the installation, configuration and administration of multi-node Big Data platforms.
• Implement new services to agreed SLAs and ensure their transition to support, standardising services into Global Services.
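To illustrate the kind of lightweight monitoring described above, the sketch below polls an HDFS NameNode's built-in JMX REST endpoint for basic capacity and DataNode health figures. It is a minimal, hypothetical Python example: the host name, port (the Hadoop 3.x default) and alert threshold are illustrative assumptions, not part of this role description.

    # Minimal sketch of a low-overhead HDFS health probe (hypothetical values).
    import json
    import urllib.request

    # Assumed NameNode address; 9870 is the Hadoop 3.x default web UI/JMX port.
    NAMENODE_JMX = "http://namenode.example.com:9870/jmx"
    QUERY = "?qry=Hadoop:service=NameNode,name=FSNamesystemState"

    def fetch_fsnamesystem_state():
        # The JMX servlet returns JSON; the queried bean is the first entry.
        with urllib.request.urlopen(NAMENODE_JMX + QUERY, timeout=5) as resp:
            return json.load(resp)["beans"][0]

    def check_cluster_health(used_pct_threshold=80.0):
        state = fetch_fsnamesystem_state()
        used_pct = 100.0 * state["CapacityUsed"] / state["CapacityTotal"]
        alerts = []
        if state["NumDeadDataNodes"] > 0:
            alerts.append(f'{state["NumDeadDataNodes"]} dead DataNode(s)')
        if used_pct > used_pct_threshold:
            alerts.append(f"HDFS capacity {used_pct:.1f}% used")
        return alerts

    if __name__ == "__main__":
        for alert in check_cluster_health():
            print("ALERT:", alert)

Statistics gathered this way could then be fed into whatever correlation and alerting tooling the platform standardises on.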
• 3-4 years' experience building and managing complex end-to-end products/solutions, with a strong background in cloud-native IT infrastructure, large-scale enterprise business systems and distributed systems.
• Experience developing solutions in the cloud (Azure/AWS).
• 3+ years' expert hands-on experience working in Linux/Unix environments.
• 1 year's experience with containerisation technologies such as Docker and Kubernetes.
• 1-2 years' experience managing Hadoop or other distributed computing platforms/clusters with hundreds of servers.
• Experience running components of the Hadoop ecosystem (Hive, Pig, Ambari, Oozie, Sqoop, ZooKeeper, Mahout, HDFS, YARN and MapReduce framework internals) and hands-on experience with Java or Python.
• Experience building and managing NoSQL databases such as HBase.
• Experience managing security models for Big Data platforms (Kerberos, Knox, etc.); see the sketch at the end of this description.
• Demonstrable experience working in a cloud environment.
• In-depth knowledge of, and hands-on experience with, the majority of the following:
- Distributed file systems, and cluster and parallel computing architectures
- Hadoop distributions (preferably Hortonworks), MapReduce and YARN
- Databases (MS SQL, MySQL, etc.)
- Server operating system internals, benchmarking and performance tuning (Linux)
- Multiple technologies, including Java, C/C++ and Unix platforms, with proven results in large-scale implementations of programming theories/concepts
• Ability to work occasional weekends and a varied schedule (e.g. during go-live).
• Contribution to industry/open-source communities.
• Experience of working within a multi-cultural, global environment.
• Experience of working in a fast-moving and changing large enterprise IT environment.
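As a flavour of the Kerberos-secured administration mentioned in the requirements above, the sketch below acquires a ticket from a keytab and then pulls a DataNode report. It is hypothetical: the principal, the keytab path and the choice of the stock kinit and hdfs command-line tools are assumptions for illustration, not a prescribed toolchain.

    # Hypothetical sketch: non-interactive Kerberos login, then a cluster report.
    import subprocess

    PRINCIPAL = "hdfs-admin@EXAMPLE.COM"                # assumed admin principal
    KEYTAB = "/etc/security/keytabs/hdfs-admin.keytab"  # assumed keytab location

    def kinit():
        # Acquire a Kerberos TGT from the keytab without prompting for a password.
        subprocess.run(["kinit", "-kt", KEYTAB, PRINCIPAL], check=True)

    def datanode_report():
        # Standard Hadoop admin command; lists live/dead DataNodes and capacity.
        result = subprocess.run(["hdfs", "dfsadmin", "-report"],
                                check=True, capture_output=True, text=True)
        return result.stdout

    if __name__ == "__main__":
        kinit()
        print(datanode_report())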