Data Science Bootcamp

offered by Metis




This accredited, full-time, 12-week data science experience hones, expands, and contextualizes the skills brought in by our competitive student cohorts. Combining traditional in-class instruction in theory and technique with hands-on work, students use real data to build a five-project portfolio to present to potential employers, and they have access to full career support throughout and after the bootcamp. Upon graduating from the Data Science bootcamp, a student will be prepared to take an entry-level position on a Data Science team as a data scientist or data analyst.


The bootcamp experience is intense, but we aim to maximize learning while preventing burnout. Each weekday, Monday through Friday, consists of, on average, three hours of group classroom instruction and five hours of practical skill development and project work.

Online Pre-work

We’ll provide a Command Line Crash Course, tutorials to become familiar with Python, and a number of package installation tutorials (e.g., NumPy, SciPy, pandas, scikit-learn), as well as some preliminary statistics work. Test-out/check-out modules will let students know when they are “prepared enough” for class.

After the at-home pre-work phase, we will convene in class and spend our first nine weeks together doing iterative, project-centered skill acquisition. Over the course of four data science projects we’ll “train up” different key aspects of data science, and results from each project will be added to the students’ portfolios. In the last three weeks, students build out and complete their individual Passion Projects, culminating in a Career Day reveal of their work to representatives from our Metis Hiring Partners.


Week 1 : Jumping right in

Students will complete an entire bite-sized data science project from start to finish. We’ll work in the IPython environment with Git for version control, perform exploratory statistical analyses with the pandas package, and publish the results with matplotlib.
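To give a taste of that start-to-finish flow, here is a minimal sketch of the pandas/matplotlib workflow using a small made-up dataset (the column names and numbers are illustrative, not course materials):

```python
import matplotlib
matplotlib.use("Agg")  # render charts without a display
import matplotlib.pyplot as plt
import pandas as pd

# A tiny made-up dataset standing in for real project data
df = pd.DataFrame({
    "borough": ["Manhattan", "Brooklyn", "Manhattan", "Queens", "Brooklyn"],
    "riders":  [5200, 3100, 4800, 2600, 3400],
})

# Exploratory statistics with pandas
summary = df.groupby("borough")["riders"].mean()
print(summary)

# Publish the result as a chart with matplotlib
summary.plot(kind="bar", title="Mean riders by borough")
plt.tight_layout()
plt.savefig("riders.png")
```

Even a toy example like this exercises the full loop students repeat all bootcamp: load, explore, summarize, visualize, share.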


Week 2: Design Process, Web Scraping

For Project 2, students start to learn one of the most important tools a data scientist uses: the iterative design process. We’ll learn tools for web scraping and start fitting simple models to data. Also, we introduce cloud computing: students will work on remote servers.
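As a sketch of what scraping looks like in practice, the snippet below parses a hardcoded HTML fragment with BeautifulSoup; in a real project the page would be downloaded first (e.g., with the requests library), and the tag and class names here are invented for illustration:

```python
from bs4 import BeautifulSoup

# A hardcoded page standing in for a fetched one; in practice you'd
# download it first, e.g. html = requests.get(url).text
html = """
<html><body>
  <div class="movie"><span class="title">Alien</span><span class="gross">$104M</span></div>
  <div class="movie"><span class="title">Jaws</span><span class="gross">$260M</span></div>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
movies = [
    (div.find("span", class_="title").text, div.find("span", class_="gross").text)
    for div in soup.find_all("div", class_="movie")
]
print(movies)  # [('Alien', '$104M'), ('Jaws', '$260M')]
```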

Week 3: Regression, Communicating Results

We’ll go in-depth on regression using modules from scikit-learn and matplotlib. Choosing among the analysis methods and approaches to reporting their results, students will finish the second project and present their findings.
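A minimal sketch of fitting a regression with scikit-learn, on synthetic data with a known relationship (the data here is generated, not from any course project):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data with a known relationship: y ≈ 3x + 2, plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X.ravel() + 2 + rng.normal(0, 0.5, size=100)

# Fit a linear model and recover the slope and intercept
model = LinearRegression().fit(X, y)
print(model.coef_[0], model.intercept_)  # close to 3 and 2
```

Because the true relationship is known, students can check how well the fitted coefficients recover it, which is a useful habit before moving to real data.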


Week 4: Databases, Machine Learning Concepts, Intro to Supervised Learning

We cover relational databases and SQL, along with more ways of obtaining, cleaning and maintaining data. We survey the concepts of machine learning and introduce classification and supervised learning with a few examples such as logistic regression and k-nearest neighbors (KNN). We will also discuss different types of feasibility related to data science questions and projects.
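A small sketch of the SQL side, using Python's built-in sqlite3 module and an in-memory table with made-up rows (real projects would connect to an actual database):

```python
import sqlite3

# In-memory database standing in for a real data source
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips (city TEXT, duration INTEGER)")
conn.executemany(
    "INSERT INTO trips VALUES (?, ?)",
    [("NYC", 12), ("NYC", 18), ("SF", 25), ("SF", 35)],
)

# Aggregate in SQL before pulling results into Python
rows = conn.execute(
    "SELECT city, AVG(duration) FROM trips GROUP BY city ORDER BY city"
).fetchall()
print(rows)  # [('NYC', 15.0), ('SF', 30.0)]
```

Pushing aggregation into the database like this, rather than loading every row into Python first, is one of the data-maintenance habits this unit emphasizes.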

Week 5: Supervised Learning In-Depth

More detail and more algorithms for supervised learning including SVM, decision trees and random forests; techniques for feature selection and feature extraction; concepts and applications for deep learning. Students will choose to apply one or more of these algorithms as part of this unit’s project.
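As one illustration of the algorithms named above, here is a sketch of training a random forest in scikit-learn on synthetic classification data (the dataset is generated for the example, not a course dataset):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for a real project dataset
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit an ensemble of decision trees and evaluate on held-out data
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))   # held-out accuracy
print(clf.feature_importances_)    # one importance score per feature
```

The `feature_importances_` attribute ties directly into the unit's feature-selection discussion: it ranks which inputs the forest actually relied on.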

Week 6: Data Visualization, JavaScript and D3

We will visualize our projects using D3.js, a favorite tool for flexible and attractive presentations of data and relationships. Since D3 is a JavaScript library, we’ll also cover some JavaScript essentials, and talk about incorporating other JavaScript libraries (jQuery, crossfilter, Bootstrap, etc.) that can make the job much easier.


Week 7: APIs and other Data Acquisition Methods, NoSQL Databases

The project for the fourth unit will involve text data. We’ll round out data acquisition methods with APIs and online database servers. The students will also learn about NoSQL databases and start using MongoDB.
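Most APIs return JSON, so a core skill this week is turning a response payload into Python structures. Here is a sketch using a hardcoded sample payload (the field names are invented; a real call would fetch the data over HTTP, e.g. with the requests library):

```python
import json

# A hardcoded payload standing in for an API response; in practice you'd
# fetch it with something like requests.get(url).json()
payload = '{"articles": [{"title": "A", "words": 800}, {"title": "B", "words": 1200}]}'

# Parse the JSON string and pull out the fields of interest
data = json.loads(payload)
titles = [a["title"] for a in data["articles"]]
total_words = sum(a["words"] for a in data["articles"])
print(titles, total_words)  # ['A', 'B'] 2000
```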

Week 8: NLP, Naive Bayes, Special “Big Data” Topics

We’ll analyze the text data collected in the previous week and learn about naive Bayes and NLP algorithms. We’ll also discuss how large amounts of data are handled, introducing parallel computing and Hadoop MapReduce.
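A minimal sketch of naive Bayes text classification with scikit-learn, on a tiny made-up corpus (in the actual project, students would use the text collected in Week 7):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A tiny made-up corpus with sentiment labels
docs = [
    "great fun loved it",
    "wonderful great film",
    "terrible boring mess",
    "awful boring waste",
]
labels = ["pos", "pos", "neg", "neg"]

# Count word occurrences, then fit a multinomial naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(docs, labels)
print(model.predict(["great film", "boring waste"]))
```

The pipeline pattern, vectorize then classify, is the same shape students use at project scale, just with far more documents and a richer vocabulary.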

Week 9: Unsupervised Learning

Greater depth on unsupervised learning and more algorithms, covering K-means, hierarchical clustering, mixture models and topic models. Project 4 results will be presented as lightning talks.
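As a taste of clustering, here is a sketch of K-means recovering two well-separated groups in synthetic two-dimensional data (the blobs are generated for the example):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated synthetic blobs standing in for real unlabeled data
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),
])

# Fit K-means with the known number of clusters
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)  # one center near (0, 0), the other near (5, 5)
```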


Weeks 10-12: Passion Projects

Students switch gears and work full time on their Passion Projects (which they’ve been designing in bits and pieces through the first nine weeks). They will also learn more about cloud computing, system architectures and feasibility evaluations.

Career Day:

One of the most exciting days of the Metis bootcamp! Graduates present their final Passion Projects to employers and meet hiring companies. Prior to Career Day, students also have mock interviews, conducted by a mix of data scientists from Moat, Kaplan, and Metis, including Aaron Schumacher, who just joined Metis as an instructor, having most recently worked as a data scientist at Booz Allen Hamilton.


Upon graduating from the Data Science bootcamp, a student will be prepared to take an entry-level position on a Data Science team as a data scientist or data analyst. This means a student will:

– Have a fluid understanding of and practical experience with the process of designing, implementing, and communicating the results of a data science project.
– Be a capable coder in Python and at the command line, including the related packages and toolsets most commonly used in data science.
– Understand the landscape of data science tools and their applications, and be prepared to identify and dig into new technologies and algorithms needed for the job at hand.
– Know the fundamentals of data visualization and have experience creating static and dynamic data visuals using JavaScript and D3.js.
– Have introductory exposure to modern big data tools and architecture such as the Hadoop stack, know when these tools are necessary, and be poised to quickly train up and utilize them in a big data project.