Our homes are our most valuable asset and also the most difficult to buy and sell. Knock is on a mission to make home buying and selling simple and certain. Started by founding team members of Trulia.com (NYSE: TRLA, acquired by Zillow for $3.5B), Knock is an online home trade-in platform that uses data science to price homes accurately, technology to sell them quickly and a dedicated team of professionals to guide you every step of the way. We share the same top-tier investors as iconic brands like Netflix, Tivo, Match, HomeAway, and Houzz.
We are seeking a passionate Senior Data Engineer to help us design and build our data infrastructure and our data aggregation and ingestion platform. This platform powers our proprietary pricing algorithms, data analytics, and our internal and customer-facing applications, such as the Knock.com website. You will integrate data from various sources (MLSes, assessor/tax records, and parcel data) and manage the full data lifecycle (ETL).
Our data stack consists of Go, Python, and Scala. We use Elasticsearch, Postgres, and Spark heavily. We are ownership-driven: you will own your projects from design and implementation through operation. We are looking for someone who is passionate about creating great products that help millions of home buyers and sellers buy or sell a home without risk, stress, or uncertainty.
Bonus points for knowledge of:
What we can offer you:
We have offices in New York, San Francisco, Atlanta, Raleigh, Charlotte, and Dallas with more on the way, but we are also a distributed company with employees in 17 different states, so we are open to any U.S. location for this role.
Knock is an Equal Opportunity Employer. Individuals seeking employment at Knock are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, or sexual orientation.
Please, no recruitment firm or agency inquiries; you will not receive a reply from us.
Our millions of rides create an incredibly rich dataset that needs to be transformed, exposed, and analyzed in order to improve our products. By joining the Data Engineering team, you will be part of an early-stage team that builds data transport, collection, and storage at Heetch. Because the team is quite new, you will have the opportunity to shape its direction while having a large impact. You will own Heetch's data platform by architecting, building, and launching highly scalable and reliable data pipelines that support our growing data processing and analytics needs. Your efforts will make incredibly rich insights accessible to Data Analysts, Data Scientists, Operations Managers, Product Managers, and many others.
OUR ENGINEERING VALUES
• Move smart: we are data-driven, and employ tools and best practices to ship code quickly and safely (continuous integration, code review, automated testing, etc.).
• Distribute knowledge: we want to scale our engineering team to a point where our contributions do not stop at the company code base. We believe in the Open Source culture and communication with the outside world.
• Leave code better than you found it: because we constantly raise the bar.
• Unity makes strength: moving people from A to B is not as easy as it sounds, but we always keep calm and support each other.
• Always improve: we value personal progress and want you to look back proudly on what you’ve done.
WHAT YOU WILL DO
• Build large-scale batch data pipelines.
• Build large-scale real-time data pipelines.
• Be responsible for scaling up data processing flow to meet the rapid data growth at Heetch.
• Continuously improve and evolve the data model and data schema based on business and engineering needs.
• Implement systems tracking data quality and consistency.
• Develop tools supporting self-service data pipeline management (ETL).
• Tune jobs to improve data processing performance.
• Implement data and machine learning algorithms (A/B testing, sessionization, etc.).
WHAT WE ARE LOOKING FOR
• At least 4 years of experience in Software Engineering.
• Extensive experience with Hadoop.
• Proficiency with Spark or other cluster-computing framework.
• Advanced SQL query competencies (queries, SQL Engine, advanced performance tuning).
• Strong programming skills in a language such as Python, Go, Java, or Scala.
• Familiarity with NoSQL technologies such as Cassandra.
• Experience with workflow management tools (Airflow, Oozie, Azkaban, Luigi).
• Comfortable working directly with data analysts to bridge business requirements with data engineering.
• Strong mathematical background.
• Inventive and self-starting.
• Experience with Kafka.
• MPP database experience (Redshift, Vertica, etc.).
• Experience building data models for normalizing/standardizing varied datasets for machine learning/deep learning.
PERKS
• Paid conference attendance/travel.
• Heetch credits.
• A Spotify subscription.
• Code retreats and company retreats.
• Travel budget (visit your remote co-workers and our offices).
Integrations (Data) Engineer
Interfolio is on a mission to build smart, inspired and useful products for faculty and academic communities. By building an engine for faculty activity, decisions, and data, Interfolio has become the first mover in defining and owning the category of faculty-focused technology that cultivates goal-oriented collaboration around academic decision-making.
Interfolio operates the first holistic faculty information system to support the full lifecycle of faculty work, from job seeking to review, tenure, sabbatical, committee work, research, and beyond. Offering colleges and universities increased clarity and insight into faculty data to help achieve their strategic initiatives, Interfolio believes that advancing the faculty will advance the institution.
What’s even better than that?
We’ve crafted a fun, collegial, dynamic culture that celebrates team and individual success almost daily. We’ve got a lean team of super-smart, super-hard working, local and remote colleagues who collaborate closely to produce a valuable service for an industry we’re passionate about. And, we genuinely like working with each other and with our clients.
Like what you’ve heard so far?
Then consider joining our Services team. The position of Integrations (Data) Engineer is open to remote employees as well as locals in commuting distance to our Washington, D.C. office.
Interfolio is committed to diversity and the principle of equal employment opportunity for all employees. You will receive consideration for employment without regard to race, color, religion, sex (including pregnancy), national, social or ethnic origin, age, gender identity and/or expression, sexual orientation, family or parental status, or any status protected by the laws or regulations in locations where we operate.
About the Position
The Integrations (Data) Engineer is a member of the Interfolio Services team and is responsible for the data-level integrations between client systems and their Interfolio products. You will be responsible for building a team and defining best practices so that Interfolio can implement clients reliably and at increasing scale. Key attributes are a customer-first mentality, an agile and adaptable mindset, a self-motivated visionary spirit, the capacity to give and receive constructive criticism, and a willingness to both teach and learn.
In addition to a competitive salary, Interfolio offers a robust benefits package that includes medical insurance, unlimited PTO, a wellness benefit, 401k, and professional development opportunities. Our culture sets us apart—we look forward to sharing more about our company and our team!
By joining Kraken, you’ll work on the bleeding edge of bitcoin and other digital currencies, and play an important role in helping shape the future of how the world sees and uses money. At Kraken, we constantly push ourselves to think differently and forge new paths in a rapidly growing industry fraught with unexplored territory, which is why Kraken has grown to be among the largest and most successful bitcoin exchanges in the world. If you’re truly interested in pushing the envelope by disrupting an industry that some say cannot be disrupted, then we just might have the job meant for you. Kraken is a place for dreamers and doers; to succeed here, we firmly believe you must possess each in spades. Check out all of our job postings here: https://jobs.lever.co/kraken.