SML 543
Machine Learning: A practical introduction for humanists and social scientists
Syllabus - Spring 2026
Sarah-Jane Leslie
sjleslie@princeton.edu
sarahjaneleslie.org
Machine learning – and in particular, deep learning – is rapidly opening new horizons for research in the humanities and social sciences. However, scholars in these fields can encounter barriers to learning such techniques: machine learning courses, especially at the graduate level, often require multivariate calculus, linear algebra, and prior coding experience, which students in the humanities and less quantitative social sciences may lack. This course offers a practical introduction to deep learning for graduate students, without assuming knowledge of calculus or other college-level math, or any prior coding experience. By the end of the course, students will:
- be able to code and train a variety of basic deep learning models
- develop an appreciation of the range of humanities/social science research questions to which deep learning can be applied
- be fluent collaborators on research projects that involve machine learning experts
- gain an understanding that will inform theorizing about machine learning (e.g., for research in AI ethics, technology policy, etc.)
Here is a Princeton homepage story on the spring 2023 version of the class, which features some helpful student perspectives.
Readings: The primary text for the course is François Chollet & Matthew Watson, 2026, Deep Learning with Python, 3rd edition, Manning. We will work our way through the book in detail. At various points in the course, I will post supplemental reading materials to Canvas.
Enrollment eligibility: Open only to Princeton graduate students, except with special permission of the instructor. Princeton undergraduates should take the undergraduate version of this course, SML 354.
Evaluation and assignments: There are simple weekly coding assignments to cement knowledge and build skills. Students taking the course for credit are required to complete these, though they are only lightly graded. (A reasonable effort will typically result in full credit.) In addition, there are three properly graded projects spaced throughout the semester. For these projects, students will be supplied with a data set and asked to build a model that addresses a question appropriate to the data set. Evaluation will be based on the appropriateness of the model for the question and the success of the code used to build/train the model, as well as answers to any other questions posed along the way. The three projects will cover the following areas respectively: classification/regression, computer vision, natural language processing. (For philosophy students only: successful completion of the above assignments will constitute a logic unit. Philosophy students wishing to receive a unit other than logic should contact me directly.)
Final grades are determined in the following way: 20% homework, 20% classification/regression assignment, 20% computer vision assignment, 30% natural language processing assignment, and 10% class participation.
Auditing: Graduate students are welcome to audit this course; however, I ask that even auditors engage with the weekly coding assignments, since the value of this class lies largely in developing practical skills. As with any kind of coding, these skills can only be learned by doing. Post-docs and other researchers/faculty are also welcome to audit the class informally.
Important: Getting started in Python
Coding prerequisites: Prior coding experience is not required for this class, but students without Python experience will need to complete a crash course in the language. In particular, I have made video lectures covering Python basics, which are available via Canvas for anyone with a Princeton netid. To view the videos, please self-enroll here.
The first several homeworks will assume that you have watched certain portions of the crash course, so you can work through it concurrently with the start of classes. However, if you are new to coding, I strongly recommend getting started before the semester begins so you can work at your own pace and do additional practice as needed; if you fall behind on this once the semester starts, it may be prohibitively difficult to catch up. The videos have accompanying code notebooks with exercises. If you can complete the exercises in the notebooks, you are on a good track. Students with prior experience in Python should test their understanding by making sure they can easily complete those exercises. A short illustrative example of the expected level appears below.
Please note that the video tutorial does not cover all aspects of Python; rather, the focus is on the key parts of the language required for the course.
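To give a concrete sense of what "Python basics" means here, the following is a purely illustrative sketch of roughly the level the first homeworks assume (the names and the exercise below are my own invention, not taken from the crash course notebooks):

    # Illustrative only: a small exercise at roughly the expected level
    # (lists, loops, functions, and dictionaries).

    def count_long_words(words, min_length=6):
        """Return how many words have at least min_length characters."""
        count = 0
        for word in words:
            if len(word) >= min_length:
                count += 1
        return count

    fields = ["history", "art", "sociology", "music", "linguistics"]
    print(count_long_words(fields))     # prints 3
    print({f: len(f) for f in fields})  # maps each word to its length

If code like this looks unfamiliar, plan to spend extra time with the crash course videos and notebooks before the semester begins.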
Additional resources: A repository of good tutorials, including the W3 Schools Python tutorial, can be found here. You might also take a look at Princeton Research Computing's offerings, as they feature mini-courses on Python at various times throughout the year.
Schedule of Topics
Week 1: Overview of machine learning
Reading: Chollet & Watson, chapter 1
Week 2: Fundamentals of neural networks I: The forward pass
Reading: Chollet & Watson, sections 2.1-2.3
Week 3: Fundamentals of neural networks II: Backpropagation
Reading: Chollet & Watson, 2.4-end of chapter
Week 4: Classification and regression
Reading: "Softmax Tutorial" and "Tasks, Activations, and Losses," both posted to Canvas. An optional deeper dive on softmax/sigmoid is also available on Canvas.
Week 5: Introduction to Keras; Training, validation, and test sets
Reading: Chollet & Watson, sections 3.1, 3.2, 3.6 (skipping 3.6.1) & 4.1-4.2; section 7.2.2
Week 6: Finish any outstanding topics; Review; Begin computer vision, time permitting
Reading: Chollet & Watson, sections 5.2-5.4; chapter 6 (optional)
Classification/regression project assignment distributed
Spring break
Week 7: Computer vision I: Introduction to convolutional neural networks
Reading: Chollet & Watson, sections 8.1-8.2; Tutorial on RGB images posted to Canvas
Classification/regression project due
Week 8: Computer vision II: classification; dropout regularization; visualizing convnets
Reading: Chollet & Watson, 8.3; Reading/Tutorial on leveraging pretrained convnets and on structuring folders in Python posted to Canvas.
Computer vision project assignment distributed
Week 9: Introduction to natural language processing: Representing words as numbers
Reading: Reading/tutorial posted to Canvas; Jay Alammar, The Illustrated Word2Vec; Chollet & Watson, sections 14.5.2 and 14.5.4-end
Optional: rest of Chollet & Watson chapter 14. Optional social science application of word embeddings posted to Canvas.
Week 10: Transformers I: Introduction; Masked language models
Reading: Jay Alammar, The Illustrated Transformer; Chollet & Watson, sections 15.3-15.3.2, 15.3.5, 15.5
Computer vision project due
Week 11: Transformers II: Generative models
Reading: Chollet & Watson, section 15.3.3
NLP project distributed
Week 12: Transformers III: Generative models cont.; Putting it all together: Review and loose ends
Optional Reading: Chollet & Watson, chapter 19
NLP project due; exact date TBD
Below is an eclectic collection of supplemental readings that may be of interest as we go along, particularly in the second half of the semester.
Kapoor, S. & Narayanan, A. (2022)
Melvin Wevers, Thomas Smits, The visual digital turn: Using neural networks to study historical images, Digital Scholarship in the Humanities, Volume 35, Issue 1, April 2020, Pages 194–207.
Leszek M. Pawlowicz, Christian E. Downum, Applications of deep learning to decorated ceramic typology and classification: A case study using Tusayan White Ware from Northeast Arizona, Journal of Archaeological Science, Volume 130, 2021, 105375.
Charlesworth, T.E.S., Caliskan, A., & Banaji, M.R., (2022) Historical representations of social groups across 200 years of word embeddings from Google Books. Proceedings of the National Academy of Sciences, 119(28).
Grand, G., Blank, I.A., Pereira, F. et al. Semantic projection recovers rich human knowledge of multiple object features from word embeddings. Nat Hum Behav 6, 975–987 (2022).
Assael, Y., Sommerschield, T., Shillingford, B. et al. Restoring and attributing ancient texts using deep neural networks. Nature 603, 280–283 (2022).
Manning, C., Clark, K., Hewitt, J., & Levy, O. (2020). Emergent linguistic structure in artificial neural networks trained by self-supervision. PNAS 117(48).
Osmanovic Thunström, A. (2022). We Asked GPT-3 to Write an Academic Paper about Itself—Then We Tried to Get It Published. Scientific American, June 30 2022.
GPT Generative Pretrained Transformer, Osmanovic Thunström, A., & Steingrimsson, S. (2022). Can GPT-3 write an academic paper on itself, with minimal human input? hal-03701250.
Hutson, M. (2021). Robo-writers: the rise and risks of language-generating AI. Nature 591, 22-25.
Stokel-Walker & Van Noorden (2023). What ChatGPT and generative AI mean for science. Nature.
Tregoning, J. (2023). AI writing tools could give scientists the gift of time. Nature.
van Dis, E. et al. (2023). ChatGPT: Five priorities for research. Nature.
Benn, C., & Lazar, S. (2022). What’s wrong with Automated Influence. Canadian Journal of Philosophy, 1-24.