  • AWS Architect

    JOB DESCRIPTION:

    DATALENZ is growing! We are expanding our team and actively recruiting for our AWS practice area. Here are a few reasons to consider us for your next career move:

    • You want to be on the leading edge of a monumental change in IT
    • You enjoy venturing into new territory and think of yourself as a builder
    • You appreciate the balance of a dynamic and entrepreneurial culture led by an experienced management team
    • You want to contribute to the success of a growing company
    • You are committed to the success of clients and your colleagues

    We are the cloud application and infrastructure experts behind some of the world's most advanced cloud computing initiatives. We're not just learning a new way of doing things - we're defining the best way to do them. We are innovative, disciplined, passionate and creative individuals who stay ahead of the technology curve and love what we do. We are building a great company by doing work that matters: delivering best practices, solutions and methodologies to accelerate our clients' cloud transformations.

    As part of our professional service offerings to streamline enterprise adoption of Amazon Web Services (AWS), our AWS-specific services help companies migrate, develop or modernize applications on AWS and manage AWS production and dev/test/prod-ops environments.

    Desired Skills and Experience

    • Bachelor's degree in Computer Science or another technical field
    • Well versed in building product-quality software on AWS including experience in designing for high availability, building multi-zone and multi-region architectures, and designing across appropriate SQL and NoSQL data layer technologies
    • Familiar with various application stacks such as Java, C#, .Net, etc.
    • Configuration and deployment experience in two or more of the following:
      • AWS application technologies such as RDS, Elastic Beanstalk, DynamoDB, Redshift
      • AWS IaaS technologies such as EC2, S3, EBS, ELB, VPC, Route 53
    • Deployed applications with Web UI frontends
    • Deployed applications with RESTful/SOAP service interfaces
    • Experience building private AMIs in VPCs
    • AWS certification in any of the following - Solutions Architect, Developer or SysOps Administrator - a HUGE plus!
    • Firm grasp of cloud security; comfortable working with Linux and Windows operating systems and with the AWS console and CLI (command-line interface)
    • Experience as a hands-on technical practitioner/specialist in client-facing roles at large enterprises, with demonstrated consulting skills, including building strong client relationships
    • Excellent verbal, presentation and written communication skills
    • Strong team skills, including the ability both to lead and to be a team player
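    Since the posting calls out hands-on use of the AWS CLI and cloud security, one detail worth knowing is how AWS authenticates CLI and API calls: every request is signed with a key derived from the account's secret key via Signature Version 4. A minimal, stdlib-only sketch of that key-derivation step (the credential values below are illustrative placeholders, not real keys, and this is not a full request signer):

```python
import hashlib
import hmac


def derive_signing_key(secret_key: str, date_stamp: str,
                       region: str, service: str) -> bytes:
    """Derive an AWS Signature Version 4 signing key: a chain of
    HMAC-SHA256 operations over the date, region, and service name."""
    k_date = hmac.new(("AWS4" + secret_key).encode(),
                      date_stamp.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()


# Placeholder secret key, for illustration only
key = derive_signing_key("EXAMPLE-SECRET-KEY", "20240101", "us-east-1", "s3")
print(key.hex())  # 64 hex chars: the scoped signing key for that day/region/service
```

    The scoping by date, region, and service is what limits the blast radius of a leaked signing key - a point that tends to come up in cloud-security interviews.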

  • Hadoop Administrator

    JOB DESCRIPTION:

    Responsibilities:

    • Projects an excellent command of business knowledge and is able to solicit input from internal and external sources.
    • Responsible for small-to-medium scale projects and delivers presentations with minimal supervision.
    • Is recognized as a positive leader and frequently provides feedback and strategic recommendations to management.
    • Effects change within sphere of influence and leads development of innovative improvements.
    • Implements complex medium- to large-scale projects and has expertise in project management software/process.
    • May be responsible for daily operations during shift by setting priorities and assigning or adjusting workloads.


    The role should include as its primary duty one of the following:

    • The exercise of discretion and independent judgment with respect to matters of significance.
    • Design work ("design" means a tailored design to meet a specific client request).
    • Programming ("programming" means writing original programs, using computer code intended for use on a wide-scale basis).

    This position should typically be used for an advanced or lead-level resource. Duties also include:
    • Manage scalable Hadoop cluster environments.
    • Manage the backup and disaster recovery for Hadoop data.
    • Optimize and tune the Hadoop environments to meet performance requirements.
    • Install and configure monitoring tools.
    • Work with big data developers in designing scalable, supportable infrastructure.
    • Work with the Linux server admin team in administering the server hardware and operating system.
    • Assist with developing and maintaining the system runbooks.
    • Create and publish various production metrics including system performance and reliability information to systems owners and management.
    • Perform ongoing capacity management forecasts including timing and budget considerations.
    • Coordinate root cause analysis (RCA) efforts to minimize future system issues.
    • Mentor, develop and train junior staff members as needed.
    • Provide off-hours support on a rotational basis.
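    The capacity-management duty above often reduces to simple compound-growth arithmetic: given current usage and a month-over-month growth rate, when does the cluster run out of space? A toy sketch (the figures are made up for illustration, not taken from the posting):

```python
def months_until_full(used_tb: float, capacity_tb: float,
                      monthly_growth_rate: float) -> int:
    """Return the number of whole months until usage exceeds capacity,
    assuming compound month-over-month growth."""
    months = 0
    while used_tb <= capacity_tb:
        used_tb *= 1 + monthly_growth_rate
        months += 1
    return months


# Hypothetical cluster: 400 TB used, 1000 TB capacity, 5% monthly growth
print(months_until_full(400, 1000, 0.05))  # -> 19
```

    A forecast like this is the starting point for the "timing and budget considerations" the posting mentions - it tells you roughly when hardware purchases need to land.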