Applied Probability Models for CS

(2016-2017)

 

Administration

 

Lecturer: Prof. Ido Dagan Email: dagan@cs.biu.ac.il

Class Hours: Thursday 10:00-12:00

Exercise Grader: Vered Shwartz Email: vered1986@gmail.com

 

Messages

13.02.17

Final exercise grades are available here.

12.02.17

Exercise 3 grades are available (see below). Detailed feedback will be sent to your emails soon.

09.02.17

Exercise 2 grades are available (see below). Detailed feedback will be sent to your emails soon. If you got 0 in the automatic grading, please contact me.

05.02.17

The solution for last year's exam is available here.

26.01.17

Exercise 3 deadline extended to Sunday, 05.02.17. See update about the exercise below.

26.01.17

Exams from previous years are available here. This year we did not cover backoff and interpolation, so please ignore questions about these topics.

12.01.17

Exercise 2 deadline extended to Sunday, 15.01.17.

12.01.17

Exercise 3 is available (see below).

29.12.16

Sanity check for exercise 2: the best perplexity should be in the range 1000-3000.

23.12.16

Exercise 2 is available (see below).
For now, you can start with Lidstone smoothing; held-out estimation will be covered next week.
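For those starting early, here is a minimal sketch of Lidstone smoothing for a unigram model. The toy corpus and vocabulary size below are hypothetical illustrations, not the exercise's data; the estimate is P(w) = (c(w) + λ) / (N + λ|V|), which gives unseen words nonzero probability.

```python
from collections import Counter

def lidstone_prob(word, counts, total, vocab_size, lam=0.5):
    """Lidstone-smoothed unigram probability:
    P(w) = (c(w) + lambda) / (N + lambda * |V|)."""
    return (counts[word] + lam) / (total + lam * vocab_size)

# Toy corpus and vocabulary size (hypothetical, for illustration only)
tokens = ["the", "cat", "sat", "on", "the", "mat"]
counts = Counter(tokens)
total = len(tokens)   # N = 6
vocab_size = 10       # |V|

p_seen = lidstone_prob("the", counts, total, vocab_size)
p_unseen = lidstone_prob("dog", counts, total, vocab_size)  # unseen but nonzero
```

Note that the smoothed probabilities still sum to 1 over the vocabulary, since the total mass is (N + λ|V|) / (N + λ|V|).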

22.12.16

Exercise 1 solution and grades are available (see below).

29.11.16

Exercise 1 deadline extended to 08.12.16.

22.11.16

Exercise 1 is available (see below).

22.11.16

See basic probability equations summary page.

 

Exercises

Make sure to submit your exercises on time! Points will be taken off for late submissions.

Programming exercises should be submitted via the Submit web interface. Please make sure in advance that you can log in to this system.

Ex1 is to be submitted individually by each student. You are encouraged to do the rest of the exercises in pairs.

 

Ex3

Ex3 | Data | Underflow and smoothing in EM | Ex3 Grades

Due at the end of the semester, 05.02.17 (extended from 02.02.17). This exercise should be done in pairs.
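Since the exercise concerns underflow in EM, here is a sketch (my own illustration, not the required implementation) of the standard log-sum-exp trick: E-step posteriors are computed from per-cluster log-probabilities, shifted by their maximum so that exponentiation never underflows to zero.

```python
import math

def log_sum_exp(log_vals):
    """Numerically stable log(sum_i exp(x_i)): subtract the max so the
    largest exponent is exactly 0 and nothing underflows."""
    m = max(log_vals)
    return m + math.log(sum(math.exp(v - m) for v in log_vals))

# E-step style normalization from log joint probabilities
# (the values are hypothetical; note exp(-1000.0) underflows to 0.0 directly)
log_p = [-1000.0, -1001.0, -1005.0]
z = log_sum_exp(log_p)
posteriors = [math.exp(v - z) for v in log_p]  # valid distribution, sums to 1
```

Normalizing naively with `math.exp(log_p[i]) / sum(...)` would divide zero by zero here, which is exactly the underflow problem the log-domain computation avoids.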


A note about the exercise: computing the mean perplexity (question (2) in the report) is now optional. For those who still want to compute it: in the previous exercise you used perplexity to compare language models, but it is not restricted to language models. Perplexity is a measure of how well a probabilistic model predicts a sample, and it can be used to compare probabilistic models in general. In this exercise, use it to measure how well your model predicts the articles' topics. Compute it from the model's log-likelihood, normalized by the size of the dataset (the number of words): 2^(-(1/N) * log-likelihood).
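The formula above can be sketched in code. The example model is hypothetical: a model that assigns every word probability 1/1024 has a base-2 log-likelihood of -10 per word, so its perplexity should come out to exactly 1024.

```python
import math

def perplexity(log2_likelihood, n_words):
    """Perplexity from a base-2 log-likelihood over a dataset of
    n_words words: 2^(-(1/N) * log2-likelihood)."""
    return 2 ** (-log2_likelihood / n_words)

# Hypothetical model: uniform probability 1/1024 for each of n words
n = 50
log2_ll = n * math.log2(1 / 1024)  # -10 per word
pp = perplexity(log2_ll, n)        # -> 1024.0
```

If your log-likelihood is in natural log, convert it first (divide by ln 2) or equivalently compute e^(-(1/N) * log-likelihood).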


Ex2

Ex2 Download | Ex2 Solution | Ex2 Grades

Due on 15.01.17 (extended from 12.01.17); submit via Submit. This exercise should be done in pairs.


Ex1

Ex1 Download | Ex1 Solution | Ex1 Grades

Due on 08.12.16 (extended from 01.12.16), in class (hard copies only, NOT via Submit). This exercise should be done individually by each student.