CBMM faculty offer graduate and undergraduate courses that integrate computational and empirical approaches to the study of problems related to intelligence. These courses introduce some of the mathematical frameworks used to formulate computational models, as well as the experimental methods used in neuroscience and cognitive science to study the neural implementation of intelligent processes and the manifestation of these computations in human cognitive behavior. Examples of the integration of these perspectives are drawn from current research on intelligence. Materials for many of these courses are available online. Most graduate courses are open to advanced undergraduates with appropriate background. Enrollment for courses is handled through the respective institutions.

Spring 2025

Harvard University

This seminar-style course provides a foundational overview of key ideas in computational neuroscience and the study of biological intelligence. At the same time, the course connects the study of brains to the rapid development of ideas in artificial intelligence. Topics covered include the biophysics of computation, neural networks, machine learning, Bayesian models, theory of learning, deep convolutional networks, generative adversarial networks, neural coding, control and dynamics of neural activity, applications to brain-machine interfaces, and connectomics, among others. Lectures will be taught by leading Harvard experts in the field. Faculty include: Andrei Barbu, Richard Born, Boaz Barak, Jan Drugowitsch, Sam Gershman, Gabriel Kreiman, Susan Murph, Cengiz Pehlevan, Haim Sompolinsky, Tomer Ullman
Fall 2024

Massachusetts Institute of Technology (MIT)

Statistical Learning Theory and Applications
Michael Lee, Pierfrancesco Beneventano, Andrea Pinto, Eran Malach
Provides students with the knowledge needed to use and develop advanced machine learning solutions to challenging problems. Covers foundations and recent advances of machine learning in the framework of statistical learning theory. Focuses on regularization techniques key to high-dimensional supervised learning. Starting from classical methods such as regularization networks and support vector machines, addresses state-of-the-art techniques based on principles such as geometry or sparsity, and discusses a variety of algorithms for supervised learning, feature selection, structured prediction, and multitask learning. Also focuses on unsupervised learning of data representations, with an emphasis on hierarchical (deep) architectures.
Provides instruction on the mechanistic basis of intelligence - how the brain produces intelligent behavior and how we may be able to replicate intelligence in machines. Examines how human intelligence emerges from computations in neural circuits, and how similar intelligent behavior might be reproduced in machines. Working in teams, students complete computational projects and exercises that reinforce the theme of collaboration between (computer science + math) and (neuroscience + cognitive science). Culminates with student presentations of their projects. Instruction and practice in oral and written communication provided.

Harvard University

Jesse Snedeker, Elika Bergelson, Elizabeth Bonawitz
A proseminar in cognitive development: an interdisciplinary seminar, surveying what infants know and what and how children learn, based on research using behavioral methods, methods from systems and cognitive neuroscience, and methods from computational cognitive science.
Spring 2024

Harvard University

This course explores contemporary brain theory, spanning local neuronal circuits as well as deep neural networks; examines the relationship between network structure, dynamics, and computation; and introduces analytical and numerical tools from information theory, dynamical systems, statistics, statistical physics, AI, and machine learning for the study of neural computation. Key topics include computational principles in early sensory systems; methods in unsupervised and supervised learning; attractors, memory, and spatial functions in cortical circuits; noise, chaos, and neural coding; and learning, representations, and cognitive functions in deep neural networks in brains and machines.
Fall 2023

Harvard University

Visual recognition is essential for most everyday tasks including navigation, reading and socialization. Visual pattern recognition is also important for many engineering applications such as automatic analysis of clinical images, face recognition by computers, security tasks and automatic navigation. In spite of the enormous increase in computational power over the last decade, humans still outperform the most sophisticated engineering algorithms in visual recognition tasks. In this course, we will examine how circuits of neurons in visual cortex represent and transform visual information. The course will cover the following topics: functional architecture of visual cortex, lesion studies, physiological experiments in humans and animals, visual consciousness, computational models of visual object recognition, computer vision algorithms.
This course introduces students to applications of computational neuroscience in a workshop format. It is designed for students who are primarily experimentalists and would like to use computational models to understand their data. Each student should come with an experimental research project. By the end of the semester, the goal is to have a fully functioning model of the experimental data that have been collected. The course is structured as follows: (i) the first few classes will be lectures on the basics of modeling; (ii) each student will then present their project and their preliminary modeling ideas; (iii) another series of lectures will focus on teaching material specialized for the student projects; (iv) students will give a final presentation of their modeling results at the end of the semester. Grading is based on class participation, final presentations, and a written report due at the end of the semester.
Spring 2023

Harvard University

The theory of random magnetic systems, spin-glasses, has transformed our understanding of the impact of disorder and complexity in many areas such as physics, biology, computer science, statistics, neuroscience, and AI. The purpose of the course is to survey advanced spin glass theoretical approaches, including Replica Theory, Dynamic Mean Field Theory, the cavity method, and belief propagation. Applications include the physics of spin glasses, combinatorial optimization, random matrices, chaos in random recurrent networks, associative memory, and learning in deep neural networks.

The City University of New York: Hunter College

This course is intended for first-year students and has no prerequisites. It has been approved by The City University of New York as a Scientific World course, which means that it is accepted to fulfill a science distribution requirement. It is taught as a small lecture course that meets for 75 minutes twice a week for 14 weeks. There is a midterm, a final, and 8 assignments that require extensive reading and writing. The course is designed around three central questions: What is in your head? How does that make you behave? How could we build a machine like that?
Fall 2022

Harvard University

This course examines recent work applying computational models to mental disorders. These models formalize psychopathology in terms of breakdown in fundamental neurocognitive processes, linking normal and abnormal brain function within a common framework. Computational modeling has already begun to yield insights, and even possible treatments, for a wide range of disorders, including schizophrenia, autism, Parkinson’s, depression, obsessive-compulsive disorder, and attention-deficit hyperactivity disorder. The course will consist of weekly readings from the primary literature, with one student leading the discussion of each paper.
IAP 2022

Massachusetts Institute of Technology (MIT)

Course will be held January 10-14, M-F, from 2-5pm, as a virtual class. Provides instruction and dialogue on practical ethical issues relating to the responsible conduct of human and animal research in the brain and cognitive sciences. Specific emphasis on topics relevant to young researchers, including data handling, animal and human subjects, misconduct, mentoring, intellectual property, and publication. Preliminary assigned readings and an initial faculty lecture are followed by discussion groups of four to five students each. A short written summary of the discussions is submitted at the end of each class. See IAP Guide for registration information.

Fall 2021

Massachusetts Institute of Technology (MIT)

Computational Cognitive Science
Introduction to computational theories of human cognition. Focuses on principles of inductive learning and inference, and the representation of knowledge. Computational frameworks include Bayesian and hierarchical Bayesian models, probabilistic graphical models, nonparametric statistical models and the Bayesian Occam's razor, sampling algorithms for approximate learning and inference, and probabilistic models defined over structured representations such as first-order logic, grammars, or relational schemas. Applications to understanding core aspects of cognition, such as concept learning and categorization, causal reasoning, theory formation, language acquisition, and social inference.
Introduction to cognitive development focusing on children's understanding of objects, agents, and causality. Develops a critical understanding of experimental design. Discusses how developmental research might address philosophical questions about the origins of knowledge, appearance and reality, and the problem of other minds. Provides instruction and practice in written communication as necessary to research in cognitive science (including critical reviews of journal papers, a literature review, and an original research proposal), as well as instruction and practice in oral communication in the form of a poster presentation of a journal paper.

Harvard University

This course provides a tour of foundational topics in learning from a theoretical perspective. It covers a diversity of learning processes, aiming for breadth over depth (although it inevitably neglects several important forms of learning). Each meeting will consist of student-led presentations of two papers. Experience with computational modeling is not required, but students should have some familiarity with basic math (algebra and probability).
Despite recent advances in computer science and machine learning, human infants remain the most prodigious learners on the planet. This seminar considers the origins and nature of human cognitive development in four broad domains: knowledge of objects and their physical relationships, knowledge of people and social relationships, knowledge of geometry and the larger spatial layout, and knowledge of numbers and mathematics. We will discuss how these foundational cognitive building blocks support humans' ability to explain, understand, and generalize, skills that are critical for successfully navigating our surroundings. Understanding these core psychological competencies has become essential to progress in many areas of society, including efforts to improve education, to create digital "cognitive assistants" who help us navigate, plan, and remember things, and to develop human-like artificial intelligence. Building on findings from basic research, we will consider how each of these efforts can be advanced.