9 remote jobs for you

Full Stack Software Engineer

DEVEXI is looking for a Senior Full Stack Software Engineer to join our team.

DEVEXI is an early-stage healthcare analytics startup building a powerful, sophisticated, yet intuitive longitudinal research data platform that links medical and dental data, enabling researchers to connect the dots between diagnoses, treatments, prescribed drugs, exposures, and short- and long-term health outcomes – for groundbreaking longitudinal studies never before possible.
DEVEXI will enable health and medical researchers, universities, teaching hospitals, insurance payers, government health agencies, and pharmaceutical companies to improve the quality of health care delivery, identify best practices, and increase successful, cost-effective outcomes.

Culture Fit

  • Passionate about Java and big data SQL databases
  • Able to work effectively as part of a remote team
  • Friendly
  • A self-starter
  • Smart
  • Able to prioritize and context switch when necessary to achieve the bigger vision
  • Able to convey development concepts to both technical and non-technical audiences
  • Strong foundation in computer science: data structures, algorithms, and software design patterns

Skills Required

● Back End

     ○ Java
          ■ Experience building RESTful web services with Jersey
          ■ Experience with Guice & IoC
          ■ Bonus points for experience with Dropwizard
          ■ Bonus points for experience with Flyway or Liquibase  
           ■ Understand the functional and streaming enhancements in Java 8
          ■ Expertise with Unit Testing and mocking frameworks (JUnit / Mockito)

     ○ SQL & Data Warehousing

         ■ Mastery of SQL with complex joins and aggregation clauses
         ■ Efficient ETL
         ■ Understand SQL schema versioning and migration
          ■ Bonus points for experience manipulating Snowflake schemas
         ■ Bonus points for experience with AWS Redshift

● Front End

     ○ HTML / CSS / SASS / JavaScript
     ○ AngularJS
     ○ Bower / Gulp
     ○ Build well-architected AngularJS implementations of wireframes
     ○ Bonus points for PhantomJS & experience with front-end test frameworks

● Experience with:

    ○ Agile/Scrum
    ○ Git (Bitbucket)
    ○ Amazon Web Services including EC2 and S3
    ○ Continuous Integration
    ○ Docker

● Bachelor’s Degree in Computer Science or equivalent industry experience

Benefits

● Involvement in big data health analytics to enable groundbreaking longitudinal research
● Full-time remote work (1099 contract)
● Excellent work/life balance
● No travel required
● Dog-friendly workplace
● Work with a quality team of professionals

Please send resume, hourly rate, and availability to jobs+developer@devexi.com. Must be a U.S. Citizen.
  • 1 week ago
  • Devexi

Data Engineer

Kombucha, cold brew coffee, foosball? We've got it.
Talented, creative, hard-working? We're looking for you. 

Full-Time Data Engineer Role (with a side of Kubernetes Ops)
U.S. Based - Remote Available

Dev Team Overview
Scientist.com is a growing services marketplace that helps scientists the world over find, initiate, and track service requests. We're used at many of the largest pharmaceutical and biotech institutions in the world, enabling the outsourced workflows that increase efficiency and facilitate compliance. And we're growing.
The core of the Scientist system is a mature and monolithic Ruby on Rails application. We have successfully migrated to Kubernetes on AWS and we run over 400 pods! Everyone on the dev team is empowered to deliver new software daily.

Job Description & Responsibilities
This is where you come in: the web application is in great shape, but we need to answer more business questions. We want to provide data and tools to our finance and business analysts based on events from the application. The first big project would be maintaining an ETL pipeline for loading data into AWS S3 and supporting our internal customers with Tableau.
You would be collaborating with experienced application owners who know the data's ins and outs, so you would not be learning the system in a vacuum. That won't be the end of it: we'd also like to support more developer-friendly tools like Jupyter.
You'd also be supporting the Kubernetes cluster, helping to track system health, improving our CI/CD system, internal metrics and logging, performing cluster upgrades, and doing a variety of security ops. If you like Rust, this is also a great job for you.
We're ready to train the right applicant for any of the missing skills. If you've got a passion for any part of this job and you're receptive to training on other parts, apply!
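
For a flavor of that first project, here is a minimal sketch (in Python, which the requirements list alongside Ruby) of an export step that pulls a day of application events and lands them in S3 for Tableau to pick up. The table, columns, bucket, and credential handling are hypothetical placeholders, not Scientist.com's actual pipeline.

```python
# Minimal sketch: export yesterday's application events to S3 as CSV.
# Table, column, and bucket names are hypothetical placeholders.
import csv
import io
from datetime import date, timedelta

import boto3
import psycopg2

def export_events(pg_dsn: str, bucket: str) -> str:
    day = date.today() - timedelta(days=1)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["event_id", "event_type", "occurred_at"])

    # Pull one day of events from the application database.
    with psycopg2.connect(pg_dsn) as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, event_type, occurred_at FROM events "
            "WHERE occurred_at::date = %s",
            (day,),
        )
        writer.writerows(cur.fetchall())

    # Land the extract in S3, keyed by date, for downstream BI tools.
    key = f"events/{day.isoformat()}.csv"
    boto3.client("s3").put_object(
        Bucket=bucket, Key=key, Body=buf.getvalue().encode("utf-8")
    )
    return key
```

In practice a step like this would be scheduled and monitored rather than run by hand, but it shows the shape of the extract-and-load work described above.
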
Requirements
  • Code school or BA/BS degree or equivalent work experience
  • Working knowledge of Unix processes, networking, bash, Ruby, or Python
  • Comfortable with SQL and data modeling
  • Excellent communication and presentation skills
  • Self-starter capable of working independently
Nice to haves
  • Experience with AWS Athena/Glue/EMR
  • Experience with Tableau and its server administration
Even if you aren't super confident, that's OK. We encourage anyone interested in this position to apply. Please include any relevant code samples, blog posts, Stack Overflow questions or answers. We want to see what you've written and get a feel for your communication style.

Benefits
  • Competitive salary
  • Medical/Dental benefits
  • 401K
  • Stock Options at rapidly growing start-up company (#9 on Inc. Magazine’s fastest growing private companies)
  • Remote friendly 
  • Daily standups with your team
  • Company laptop of your choice
  • All expenses paid travel to the yearly all-hands, held in beautiful Solana Beach, CA
If this sounds interesting to you, please apply online and include your resume, cover letter and relevant code samples, blog posts, and stack overflow Q&A.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 
Due to the number of applications we receive, we ask serious applicants to: 
  • upload a resume
  • write a cover letter (how do you fit this role, and tell us something interesting about you!)
  • provide relevant code samples - github and/or ____
  • blog posts
  • stack overflow Q&A
Applicants providing only a Resume or LinkedIn profile will not be considered. 
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 

  • 1 week ago
  • Scientist.com

Senior backend engineer (Java/Python/Postgres/AWS)

Buildings are responsible for 40% of the world’s energy footprint. A typical building contains thousands of pieces of equipment, sensors, and interconnections. Gridium makes software that helps people run their buildings better, at lower cost and with less energy.

We’re looking for a US-based engineer to design, develop, and scale our backend services. At Gridium we manage quite a bit of data, pouring in daily from hundreds of thousands of electric and gas meters. We need help gathering data, running analytics, and making the results available to our web applications. That’s where you come in.

Our stack lives on AWS and includes Docker, Postgres, Java, Python, and Ember.js. For this role, we’re looking for someone with strong Java, Python, and relational database experience. You’ll take ownership of a large, complex Java code base supporting mission-critical production workloads. At the same time, you’ll participate in evolving our system for better resiliency, scalability, and transparency.

You should be comfortable with consuming 3rd party APIs, ETL processes, data validation, and debugging across multiple systems. You should be able to make good decisions (and explain them!) about when to build something custom vs taking advantage of AWS and/or open source options.
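
To make that concrete, here is a minimal sketch of consuming a hypothetical third-party meter API and validating records before they go anywhere near the database. The endpoint, field names, and checks are illustrative assumptions, not Gridium's actual integrations.

```python
# Minimal sketch: fetch readings from a hypothetical third-party meter API
# and keep only records that pass basic validation. The endpoint and field
# names are illustrative placeholders.
from datetime import datetime

import requests

def fetch_valid_readings(base_url: str, meter_id: str, api_key: str) -> list[dict]:
    resp = requests.get(
        f"{base_url}/meters/{meter_id}/readings",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()

    valid = []
    for row in resp.json().get("readings", []):
        try:
            kwh = float(row["kwh"])
            ts = datetime.fromisoformat(row["timestamp"])
        except (KeyError, TypeError, ValueError):
            continue  # malformed record: a real pipeline would log and quarantine it
        if kwh < 0:
            continue  # implausible value: flag for investigation instead of loading
        valid.append({"meter_id": meter_id, "timestamp": ts, "kwh": kwh})
    return valid
```
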

We are a small team, and you should expect to work closely with both engineers and non-technical staff. We need someone who is self-directed and a great problem-solver, but also able to ask good questions and collaborate effectively with teammates. For example, you might trace a data issue from a 3rd party API to a Java parsing task to a relational database, then explain what’s wrong and how to fix it.

  • Do you enjoy a fast-moving startup environment?
  • Are you a wizard at debugging services with lots of moving parts?
  • Are you excited about what you can do with AWS products and services?
  • Are you obsessed with data, and experienced with data modeling?
  • Do you want to truly own the systems you work on?
  • Are you comfortable working in a remote environment?

If so, Gridium is the place for you.

Requirements
You must have strong experience with Java, Python, and relational databases.

You must currently live in, and have the legal right to work in, the United States. You must be available to travel for four days each quarter.

  • 1 week ago
  • Gridium

Data Engineer (Remote)

Wirecutter is seeking a Data Engineer to help build the infrastructure, data architecture, and pipelines that power our business.  In this role, you would report to the Engineering Manager for Data. This is a new position created as we continue to invest in the talent and support needed for our data.  

Data Engineers operate within a distributed, agile, cross-functional squad that includes a Product Manager, Engineering Manager, Project Manager, and other Data Engineers. The data squad has an organization-wide impact by providing the data to inform the user experience, product, editorial, growth, and financial decisions at Wirecutter.  The squad is responsible for the ETL processes, architecture, storage, reliability, accuracy, monitoring, and infrastructure surrounding our internal data and analytics.

Our data engineering tech stack consists of:
  • Shell & Python scripts on Linux hosted on AWS
  • Apache Airflow hosted on AWS
  • PostgreSQL Database hosted on AWS RDS
  • Google Analytics exports hosted on GCS
  • Looker BI tool
  • GitHub
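
To make that stack concrete, here is a minimal sketch of the kind of daily Airflow job the squad might own: a DAG that loads a day's Google Analytics export into Postgres. The DAG id, schedule, and load logic are hypothetical placeholders (and it assumes Airflow 2.x), not Wirecutter's actual pipelines.

```python
# Minimal Airflow 2.x sketch: a daily DAG with a single Python task that
# would load that day's Google Analytics export into Postgres. The DAG id
# and the load logic are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_ga_export(ds: str, **_) -> None:
    # `ds` is the logical date Airflow passes to the task; a real task
    # would copy that day's export from GCS into Postgres here.
    print(f"Loading Google Analytics export for {ds}")

with DAG(
    dag_id="ga_export_to_postgres",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="load_ga_export",
        python_callable=load_ga_export,
    )
```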

You will:
  • Collaborate with your squad leaders and stakeholders on the scoping, planning, prioritization, successful execution, and rollout of complex technical projects to generate insights and address reporting needs.
  • Create new data models that are appropriately scalable, standardized, and reliable.
  • Evolve our current data models from production services into readily consumable formats for all downstream data consumption.
  • Help drive the optimization, testing, and tooling to improve data quality.
  • Write, debug, and test complex ETL processes for new or existing data pipelines.
  • Write and maintain database design and architecture documentation.
  • Support and maintain the integrity and security of our internal data.
  • Provide insight into changing database storage and utilization requirements.
  • Recommend solutions that best align with our product and business goals, as well as the quality, reliability, and secure storage and replication of our data.
  • Improve our development workflow and infrastructure.
  • Mentor and coach other members of your squad and the engineering team.
  • Contribute to engineering initiatives as a member of Wirecutter’s engineering team.

About You
  • You have 3+ years in software or data engineering and scaling large data sets.
  • You can design & optimize queries, data sets, and data pipelines to organize, collect and standardize data that helps generate insights and addresses reporting needs.
  • You understand the challenges of reliable data replication, optimizing for a data warehouse, and maintaining the integrity of a data lake.
  • You have experience reliably integrating and handling data from multiple APIs.
  • You have experience building applications at scale on any major cloud provider (AWS, GCP, etc.)
  • You are thoughtful, clear, and persuasive in writing and in person.
  • You have strong problem-solving skills and critical thinking abilities.
  • You have experience listening to business users, and can translate their needs into actionable tasks
  • You are excited to play a pivotal role in Wirecutter’s mission, innovation, and growth.
  • You are passionate and enthusiastic about what you do.
  • You have experience with version control, shell scripting, the Unix filesystem, and automating deployments.
  • Ideally, you have production experience with Python and Apache Airflow.
  • Ideally, you have experience with BI tools and managing data sets for BI tools.
  • Ideally, you have a basic understanding of statistics and sampling.
  • Ideally, you’ve worked as a member of a distributed team.

About Wirecutter
Founded five years ago by journalists fed up with the time and energy it takes to shop, Wirecutter developed a simpler approach to giving buying advice: Just tell people exactly what to get in one single guide. The company’s purpose: to help people find great things, quickly and easily. Through rigorous testing, research, reporting, and whatever means necessary, we create straightforward recommendations that save readers unnecessary stress, time, and effort.  We then monetize these guides by enabling our readers to easily purchase the products they are interested in.

Wirecutter was recently acquired by, and is now a subsidiary of, The New York Times Company.  As part of the acquisition, The Times is investing in Wirecutter to accelerate its business through editorial category expansion, development of more robust product features, and unlocking of new revenue streams.

Locations:
Even with offices in New York City and Los Angeles, Wirecutter maintains a highly remote culture with employees across the United States. Right now, we are eligible to hire in the following locations:

CA, CO, CT, DC, FL, GA, HI, IL, MA, ME, MI, MN, MO, NC, NH, NV, NY, OR, PA, TX, VA, or WA.

The New York Times is committed to a diverse and inclusive workforce, one that reflects the varied global community we serve. Our journalism and the products we build in the service of that journalism greatly benefit from a range of perspectives, which can only come from diversity of all types, across our ranks, at all levels of the organization. Achieving true diversity and inclusion is the right thing to do. It is also the smart thing for our business. So we strongly encourage women, veterans, people with disabilities, people of color and gender nonconforming candidates to apply.

The New York Times Company is an Equal Opportunity Employer and does not discriminate on the basis of an individual's sex, age, race, color, creed, national origin, alienage, religion, marital status, pregnancy, sexual orientation or affectional preference, gender identity and expression, disability, genetic trait or predisposition, carrier status, citizenship, veteran or military status and other personal characteristics protected by law. All applications will receive consideration for employment without regard to legally protected characteristics.

  • 2 weeks ago
  • Wirecutter

Senior Go (Golang) Data Engineer - Remote, US-based

Our homes are our most valuable asset and also the most difficult to buy and sell. Knock is on a mission to make home buying and selling simple and certain. Started by founding team members of Trulia.com (NYSE: TRLA, acquired by Zillow for $3.5B), Knock is an online home trade-in platform that uses data science to price homes accurately, technology to sell them quickly and a dedicated team of professionals to guide you every step of the way. We share the same top-tier investors as iconic brands like Netflix, Tivo, Match, HomeAway, and Houzz.

We are seeking a passionate Senior Data Engineer to help us design and build our data infrastructure and our data aggregation and ingestion platform. This platform powers our proprietary pricing algorithms, data analytics, and our internal and customer-facing applications such as the Knock.com website. You will integrate data from various sources (MLSes, assessor/tax and parcel data) and manage the full data lifecycle (ETL).

Our data stack consists of Go, Python, and Scala. We use ElasticSearch, Postgres, and Spark heavily. We are ownership-driven, and you will own your projects from design and implementation through operation. We are looking for someone who is passionate about creating great products to help millions of home buyers and sellers buy or sell a home without risk, stress, and uncertainty.

Responsibilities:

  • Design, architect, build and maintain big data infrastructure and tools.
  • Write reliable and efficient programs to handle a broad set of big data use cases.
  • Perform data qualification, verification, and validation.
  • Commit to good engineering practices for testing, logging, alerting, and deployment.

Requirements:

  • BS or MS in Computer Science, Statistics, Mathematics or equivalent.
  • Minimum of 5 years of full lifecycle software development experience in data engineering, including coding, testing, troubleshooting, and deployment.
  • Strong hands-on expertise with building resilient and reliable ETL pipelines.
  • Programming proficiency in Go, and at least one of Scala or Python.
  • Strong SQL knowledge (MySQL or Postgres) and familiarity with techniques for identifying and debugging slow queries.
  • Experience working in the AWS data ecosystem (S3, RDS, EMR, Lambda, Redshift, MQs, Kinesis).
  • Understanding of containerized workloads (Docker, Kubernetes, ECS)
  • Strong desire to contribute to a rapidly growing startup and comfort with learning new tools and technologies.

Bonus points for knowledge of:

  • Real estate markets, MLS assessor/tax and parcel data
  • RETS/RESO APIs for extracting real estate data
  • GIS datasets (shapefiles, GeoJSON, etc)
  • Open source mapping data (OpenStreetMap (OSM), OpenAddresses)
  • Apache Spark
  • ElasticSearch

What we can offer you:

  • An amazing opportunity to be an integral part of building the next multi-billion dollar consumer brand around the single largest purchase of our lives.
  • Talented, passionate and mission-driven peers disrupting the status quo.
  • Competitive cash, full medical, dental, vision benefits, 401k, flexible work schedule, unlimited vacation (2 weeks mandatory) and sick time.
  • Flexibility to where you live and work within the United States.

We have offices in New York, San Francisco, Atlanta, Raleigh, Charlotte, and Dallas with more on the way, but we are also a distributed company with employees in 17 different states so we are open to any U.S. location for this role.

Knock is an Equal Opportunity Employer. Individuals seeking employment at Knock are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, or sexual orientation.

Please no recruitment firm or agency inquiries, you will not receive a reply from us.



  • 2 months ago
  • Knock.com

Technical Services Manager

Interfolio is on a mission to build smart, inspired and useful products for faculty and academic communities. By building an engine for faculty activity, decisions, and data, Interfolio has become the first mover in defining and owning the category of faculty-focused technology that cultivates goal-oriented collaboration around academic decision-making.

Interfolio operates the first holistic faculty information system to support the full lifecycle of faculty work, from job seeking to review, tenure, sabbatical, committee work, research, and beyond. Offering colleges and universities increased clarity and insight into faculty data to help achieve their strategic initiatives, Interfolio believes that advancing the faculty will advance the institution.

What’s even better than that?

We’ve crafted a fun, collegial, dynamic culture that celebrates team and individual success almost daily. We’ve got a lean team of super-smart, super-hard-working, local and remote colleagues who collaborate closely to produce a valuable service for an industry we’re passionate about. And, we genuinely like working with each other and with our clients.

Like what you’ve heard so far?

Then consider joining our Professional Services team. The position of Technical Services Manager is open to remote applicants as well as locals in commuting distance to our Washington, D.C. office. Applicants must be currently authorized to work in and reside in the United States on a full-time basis.

Interfolio is committed to diversity and the principle of equal employment opportunity for all employees. You will receive consideration for employment without regard to race, color, religion, sex (including pregnancy), national, social or ethnic origin, age, gender identity and/or expression, sexual orientation, family or parental status, or any status protected by the laws or regulations in locations where we operate.

About the Position

The Technical Services Manager is a member of the Interfolio Services team and is ultimately responsible for all technical services delivered to our clients. Initially, the key focus of this position will be to define, configure, and be the initial consumer of an ETL framework. This framework will support our implementations by providing a seamless experience of data transfer, transformation, and integration for the client. This person will be the main client-facing technical lead supporting our client implementations. In addition to integration management, this person will also support other technical services, including SSO setup/maintenance and product API support and management. This person will also be responsible for building a technical services practice, including updating existing processes, managing client deadlines, and defining best practices so that the services team can implement clients reliably and at increasing scale. Key attributes are a customer-first mentality, an agile and adaptable mindset, a self-motivated visionary spirit, the capacity to give and receive constructive criticism, and a willingness to equally teach and learn.

Responsibilities

    • Work with the business team to clarify requirements and provide effective technical designs aligning with industry best practices
    • Drive the design, requirements and implementation of scalable, high performing and robust ETL applications
    • Provide technical mentorship and guidance to the services team data analysts and project management team
    • Conceptualize, develop and deploy data extraction procedures working collaboratively with new product clients
    • Manage client engagements with the support of the services data analyst team including project management
    • Ensure that barriers to accurate completion of Data Integration Services projects are anticipated and addressed
    • Support internal and external client project managers to ensure timely delivery of implementation where required
    • Build and foster strong relationships with all levels of technical and non-technical staff at client institutions
    • Identify, recognize, and act on opportunities for improvement in order to advance business goals
    • Confront problems, change, and/or challenges quickly and enthusiastically
    • Support and lead calls for prospective client partners around Interfolio’s data integration methodology
    • Develop validation master plans, process flow diagrams, test cases, and standard operating procedures
    • Analyze validation test data to determine whether systems or processes have met validation criteria
    • Perform root cause analysis
    • Coordinate the implementation or scheduling of validation testing with affected client departments and personnel
    • Craft, populate, or maintain databases for tracking validation activities, test results, or validated systems
    • Support technical services including SSO setup/maintenance and product API support/management

Qualifications

    • Bachelor's degree
    • Experience building requirements, configuring and consuming ETL services and analytics applications that use AWS Glue, Redshift, Athena or Aurora
    • 5+ years of professional software development experience
    • Delivery experience with data integration projects
    • API management including ability to translate and explain technical concepts to client teams
    • Excellent customer communication and project management skills
    • Programming in any high level language, SQL scripts, and Java scripting
    • Understanding of relational databases including intermediate level SQL
    • Basic understanding of internet protocols, networks, and related technologies, including HTTP, XML, REST, and SOAP web services
    • Applicants must be currently authorized to work in and reside in the United States on a full-time basis
In addition to a competitive salary, Interfolio offers a robust benefits package that includes medical insurance, unlimited PTO, a wellness benefit, 401k, and professional development opportunities. Our culture sets us apart—we look forward to sharing more about our company and our team!
  • 3 months ago
  • Interfolio, Inc

Senior Data Engineer

Our millions of rides create an incredibly rich dataset that needs to be transformed, exposed, and analyzed in order to improve our multiple products. By joining the Data Engineering team, you will be part of an early-stage team that builds the data transport, collection, and storage at Heetch. The team is quite new, and you will have the opportunity to shape its direction while having a large impact. You will own Heetch's data platform by architecting, building, and launching highly scalable and reliable data pipelines that will support our growing data processing and analytics needs. Your efforts will give data analysts, data scientists, operations managers, product managers, and many others access to incredibly rich insights.


OUR ENGINEERING VALUES

• Move smart: we are data-driven, and employ tools and best practices to ship code quickly and safely (continuous integration, code review, automated testing, etc).
• Distribute knowledge: we want to scale our engineering team to a point where our contributions do not stop at the company code base. We believe in the Open Source culture and communication with the outside world.
• Leave code better than you found it: because we constantly raise the bar.
• Unity makes strength: moving people from A to B is not as easy as it sounds, but we always keep calm and support each other.
• Always improve: we value personal progress and want you to look back proudly on what you’ve done.

WHAT YOU WILL DO

You will:
• Build large-scale batch data pipelines.
• Build large-scale real-time data pipelines.
• Be responsible for scaling up data processing flow to meet the rapid data growth at Heetch.
• Consistently improve and evolve the data model and data schema based on business and engineering needs.
• Implement systems tracking data quality and consistency.
• Develop tools supporting self-service data pipeline management (ETL).
• Tune jobs to improve data processing performance.
• Implement data and machine learning algorithms (A/B testing, sessionization).
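
For a concrete flavor of the batch side of this work, here is a minimal PySpark sketch (Spark is listed in the requirements below) of a daily job that aggregates ride events into per-city counts. The input path, schema, and output location are hypothetical placeholders, not Heetch's actual pipelines.

```python
# Minimal PySpark sketch: a daily batch job that aggregates completed rides
# into per-city counts. Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

def run(day: str) -> None:
    spark = SparkSession.builder.appName("daily_ride_counts").getOrCreate()

    # Read one day of ride events from the data lake.
    rides = spark.read.parquet(f"s3a://data-lake/rides/date={day}/")

    # Keep completed rides and count them per city.
    counts = (
        rides
        .filter(F.col("status") == "completed")
        .groupBy("city")
        .agg(F.count("*").alias("completed_rides"))
    )

    # Write the aggregate back out for analysts and downstream jobs.
    counts.write.mode("overwrite").parquet(
        f"s3a://data-warehouse/daily_ride_counts/date={day}/"
    )
    spark.stop()
```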

REQUIREMENTS
• 4+ years in Software Engineering.
• Extensive experience with Hadoop.
• Proficiency with Spark or other cluster-computing framework.
• Advanced SQL query competencies (queries, SQL Engine, advanced performance tuning).
• Strong skills in scripting language (Python, Go, Java, Scala, etc.).
• Familiar with NoSQL technologies such as Cassandra or others.
• Experience with workflow management tools (Airflow, Oozie, Azkaban, Luigi).
• Comfortable working directly with data analysts to bridge business requirements with data engineering.
• Strong mathematical background.
• Inventive and self-starting.

Bonus points
• Experience with Kafka.
• MPP database experience (Redshift, Vertica…).
• Experience building data models for normalizing/standardizing varied datasets for machine learning/deep learning.

PERKS
• Stocks.
• Paid conference attendance/travel.
• Heetch credits.
• A Spotify subscription.
• Code retreats and company retreats.
• Travel budget (visit your remote co-workers and our offices).

  • 5 months ago
  • Heetch

Integrations (Data) Engineer

Integrations (Data) Engineer

Interfolio is on a mission to build smart, inspired and useful products for faculty and academic communities. By building an engine for faculty activity, decisions, and data, Interfolio has become the first mover in defining and owning the category of faculty-focused technology that cultivates goal-oriented collaboration around academic decision-making.

Interfolio operates the first holistic faculty information system to support the full lifecycle of faculty work, from job seeking to review, tenure, sabbatical, committee work, research, and beyond. Offering colleges and universities increased clarity and insight into faculty data to help achieve their strategic initiatives, Interfolio believes that advancing the faculty will advance the institution.

What’s even better than that?

We’ve crafted a fun, collegial, dynamic culture that celebrates team and individual success almost daily. We’ve got a lean team of super-smart, super-hard working, local and remote colleagues who collaborate closely to produce a valuable service for an industry we’re passionate about. And, we genuinely like working with each other and with our clients.

Like what you’ve heard so far?

Then consider joining our Services team. The position of Integrations (Data) Engineer is open to remote employees as well as locals in commuting distance to our Washington, D.C. office.

Interfolio is committed to diversity and the principle of equal employment opportunity for all employees. You will receive consideration for employment without regard to race, color, religion, sex (including pregnancy), national, social or ethnic origin, age, gender identity and/or expression, sexual orientation, family or parental status, or any status protected by the laws or regulations in locations where we operate.

About the Position

The Integrations (Data) Engineer is a member of the Interfolio Services team and is responsible for the data-level integrations between client systems and their Interfolio products. You will be responsible for building a team and defining best practices so that Interfolio can implement clients reliably and at increasing scale. Key attributes are a customer-first mentality, an agile and adaptable mindset, a self-motivated visionary spirit, the capacity to give and receive constructive criticism, and a willingness to equally teach and learn.

Primary Responsibilities:

  • Build the next generation of [Interfolio] ETL for Faculty180 using existing APIs, as well as additional API structures
  • Work with the business team to clarify requirements and provide effective technical designs aligning with industry best practices
  • Drive the design and implementation of scalable, high performing and robust ETL applications
  • Ensure a high level of quality through design and implementation of unit, system integration, and performance testing
  • Provide technical mentorship and guidance to more junior engineers
  • Conceptualize, develop and deploy data extraction procedures working collaboratively with new product clients
  • Develop recommendations for internal leadership around preferred data integration methodologies
  • Ensure barriers to accurate completion of Data Integration Services projects are anticipated and overcome
  • Work effectively with internal and external client project managers to ensure timely delivery of implementation
  • Build and foster strong relationships with all levels of technical and non-technical staff at client institutions
  • Identify, recognize, and act on opportunities for improvement in order to advance business goals
  • Confront problems, change, and/or challenges quickly and enthusiastically
  • Support and lead calls for prospective client partners around Interfolio’s data integration methodology
  • Develop validation master plans, process flow diagrams, test cases, and standard operating procedures
  • Analyze validation test data to determine whether systems or processes have met validation criteria
  • Perform root cause analysis
  • Coordinate the implementation or scheduling of validation testing with affected client departments and personnel
  • Craft, populate, or maintain databases for tracking validation activities, test results, or validated systems

Qualifications

  • Bachelor's degree
  • Experience architecting and delivering ETL services and analytics applications that use AWS Glue, Redshift, Athena or Aurora
  • Rest API development experience
  • 5+ years of professional software development experience
  • 4+ years of SQL knowledge, ideally on Postgres
  • Programming in any high level language, SQL scripts, and Java scripting
  • Understanding of fundamental software engineering, including data structures and programming constructs that would be found in non-trivial programs
  • Understanding of relational databases including intermediate level SQL
  • Basic understanding of internet protocols, networks, and related technologies, including HTTP, XML, REST, and SOAP web services
  • Excellent customer communication and project management skills
  • Business Intelligence experience (Tableau, Sisense, Microstrategy, Qlik)
  • Big Data experience (AWS EMR)
  • Experience working in an Agile environment
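
As one concrete example of the AWS analytics work named above, here is a minimal sketch of kicking off an Athena query from Python with boto3 and waiting for it to finish. The database, query, and output bucket are hypothetical placeholders, not Interfolio's actual setup.

```python
# Minimal sketch: run an Athena query and wait for it to complete.
# Database name, SQL, and output location are hypothetical placeholders.
import time

import boto3

def run_athena_query(sql: str, database: str, output_location: str) -> str:
    athena = boto3.client("athena")
    query_id = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_location},
    )["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return query_id
```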

In addition to a competitive salary, Interfolio offers a robust benefits package that includes medical insurance, unlimited PTO, a wellness benefit, 401k, and professional development opportunities. Our culture sets us apart—we look forward to sharing more about our company and our team!

  • 6 months ago
  • Interfolio, Inc

Data Engineer

By joining Kraken, you’ll work on the bleeding edge of bitcoin and other digital currencies, and play an important role in helping shape the future of how the world sees and uses money. At Kraken, we constantly push ourselves to think differently and forge new paths in a rapidly growing industry fraught with unexplored territory, which is why Kraken has grown to be among the largest and most successful bitcoin exchanges in the world. If you’re truly interested in pushing the envelope by disrupting an industry that some say cannot be disrupted, then we just might have the job meant for you. Kraken is a place for dreamers and doers - to succeed here, we firmly believe you must possess each in spades. Check out all of our job postings here https://jobs.lever.co/kraken.

Responsibilities

    • Build scalable and reliable data pipelines that collect, transform, load, and curate data from internal systems
    • Augment data platform with data pipelines from select external systems
    • Ensure high data quality for pipelines you build and make them auditable
    • Drive data systems to be as near real-time as possible
    • Support the design and deployment of a distributed data store that will be the central source of truth across the organization
    • Build data connections to the company's internal IT systems
    • Develop, customize, and configure self-service tools that help our data consumers extract and analyze data from our massive internal data store
    • Evaluate new technologies and build prototypes for continuous improvements in data engineering

Requirements

    • 5+ years of work experience in relevant field (Data Engineer, DW Engineer, Software Engineer, etc)
    • Experience with data warehouse technologies and relevant data modeling best practices
    • Experience building data pipelines/ETL and familiarity with design principles
    • Excellent SQL skills
    • Proficiency in a major programming language (e.g. Java, C++, etc.) and/or a scripting language (JavaScript, Python, etc.)
    • Experience with business requirements gathering for data sourcing
  • 6 months ago
  • Kraken Bitcoin Exchange