Bayesian Knowledge Tracing

Bayesian Knowledge Tracing (BKT) is a probabilistic model of a learner's skill mastery, updated after each observed answer. The four standard parameters, plus further factors the model could account for:

  1. pL – latent: probability the skill is already learned (mastery)

  2. pT – transition: probability of learning the skill at each practice opportunity

  3. pG – guess: probability of answering correctly without knowing the skill

  4. pS – slip: probability of answering incorrectly despite knowing the skill

  5. Learning order (K)

  6. Problem difficulty

  7. Prior knowledge (initial assessment + sequential)

  8. *Learning rate/speed (derivatives / ODE or PDE)

Equation (a): $p(L_{1})_{u}^{k} = p(L_{0})^{k}$
#@title Initialize Parameters

import numpy as np

def initialize_parameters():

  np.random.seed(1)

  # Draw each parameter uniformly from [0, 1) so all four values are
  # valid probabilities (randn would not guarantee this).
  pL = np.random.rand(1)  # p(L0): prior probability of mastery
  pT = np.random.rand(1)  # p(T): probability of learning per opportunity
  pS = np.random.rand(1)  # p(S): slip probability
  pG = np.random.rand(1)  # p(G): guess probability

  parameters = {
      'pL': pL,
      'pT': pT,
      'pS': pS,
      'pG': pG
  }

  return parameters
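A minimal usage sketch of the cell above:

parameters = initialize_parameters()
pL, pT, pS, pG = (parameters[k] for k in ('pL', 'pT', 'pS', 'pG'))
print(pL, pT, pS, pG)  # four values in [0, 1), reproducible via the fixed seed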
Equation (b): $p(L_{t} \mid obs = correct)_{u}^{k} = \frac{p(L_{t})_{u}^{k} \cdot (1 - p(S)^{k})}{p(L_{t})_{u}^{k} \cdot (1 - p(S)^{k}) + (1 - p(L_{t})_{u}^{k}) \cdot p(G)^{k}}$
#@title Eq. b

# Latent state is binary: not learned (0) or learned (1);
# pL is the probability of the learned state.

def correct_latent(pL, pS, pG):

  # Posterior probability of mastery after observing a correct answer.
  numerator = pL * (1 - pS)
  return numerator / (numerator + (1 - pL) * pG)
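With illustrative numbers (assumed here, not from the source): pL = 0.5, pS = 0.1, pG = 0.2 gives correct_latent(0.5, 0.1, 0.2) = 0.45 / (0.45 + 0.10) ≈ 0.818, so a single correct answer raises the mastery estimate from 0.5 to about 0.82.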
Equation (c): $p(L_{t} \mid obs = wrong)_{u}^{k} = \frac{p(L_{t})_{u}^{k} \cdot p(S)^{k}}{p(L_{t})_{u}^{k} \cdot p(S)^{k} + (1 - p(L_{t})_{u}^{k}) \cdot (1 - p(G)^{k})}$
#@title Eq. c

def wrong_latent(pL, pS, pG):

  # Posterior probability of mastery after observing a wrong answer.
  numerator = pL * pS
  return numerator / (numerator + (1 - pL) * (1 - pG))
Equation (d): $p(L_{t+1})_{u}^{k} = p(L_{t} \mid obs)_{u}^{k} + (1 - p(L_{t} \mid obs)_{u}^{k}) \cdot p(T)^{k}$
#@title Eq. d

def update_latent(pL_obs, pT):

  # pL_obs is the posterior from Eq. b (correct) or Eq. c (wrong);
  # the learner may also transition to the learned state with
  # probability pT during this step.
  return pL_obs + (1 - pL_obs) * pT
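Continuing the illustrative numbers from Eq. b with an assumed pT = 0.3: update_latent(0.818, 0.3) ≈ 0.818 + 0.182 · 0.3 ≈ 0.873.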
Equation (e): $p(C_{t+1})_{u}^{k} = p(L_{t+1})_{u}^{k} \cdot (1 - p(S)^{k}) + (1 - p(L_{t+1})_{u}^{k}) \cdot p(G)^{k}$
#@title Eq. e

def observation(pL, pS, pG):

  # Predicted probability of a correct answer at the next step:
  # correct despite a slip if learned, or a lucky guess if not.
  return pL * (1 - pS) + (1 - pL) * pG
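A minimal end-to-end sketch tying Eqs. b–e together, assuming a hypothetical sequence of graded answers (1 = correct, 0 = wrong) and hand-picked parameter values; the helper names are the ones defined in the cells above:

#@title BKT update loop (sketch)

def trace_knowledge(observations, pL, pT, pS, pG):

  # Fold each graded answer into the mastery estimate (Eqs. b/c),
  # apply the learning transition (Eq. d), and predict the next
  # answer (Eq. e).
  history = []
  for obs in observations:
    if obs == 1:
      pL_obs = correct_latent(pL, pS, pG)   # Eq. b
    else:
      pL_obs = wrong_latent(pL, pS, pG)     # Eq. c
    pL = update_latent(pL_obs, pT)          # Eq. d
    history.append((pL, observation(pL, pS, pG)))  # Eq. e
  return history

# Hypothetical run with assumed parameters (not from the source):
for pL_t, pC_next in trace_knowledge([1, 1, 0, 1], pL=0.4, pT=0.2, pS=0.1, pG=0.2):
  print(f'p(L) = {pL_t:.3f}, p(C next) = {pC_next:.3f}')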
