snamazova/two_armed_bandit_task

🏆 2-Armed Bandit Task

This script defines a 2-Armed Bandit Task for decision-making experiments. Participants must choose between two "bandits" (options), each with a different probability of giving rewards. The goal is to study learning and decision-making strategies.


✨ How It Works

  • There are 100 trials; on each trial the participant chooses Bandit 1 (Orange) or Bandit 2 (Blue).
  • Each bandit has an 80% chance of giving a reward (reward = 1) and a 20% chance of failure (reward = 0).
  • After a set number of trials, a reversal learning paradigm is introduced: the bandits' reward probabilities are switched.
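The trial loop described above can be sketched as a minimal simulation. The trial count (100) and the 80%/20% outcome split come from this README; the reversal trial (50) is a hypothetical placeholder, and the complementary probabilities assigned to the two bandits are an assumption made so that the reversal actually changes which option is better.

```python
import random

N_TRIALS = 100        # from the README: 100 trials
P_SUCCESS = 0.80      # 80% reward chance, 20% failure
REVERSAL_TRIAL = 50   # hypothetical: the README does not say when the reversal occurs

def run_task(choose, seed=None):
    """Simulate the task. `choose` maps the trial history to 0 (Orange) or 1 (Blue)."""
    rng = random.Random(seed)
    # Assumed complementary probabilities (80% vs. 20%) so the reversal matters.
    probs = [P_SUCCESS, 1 - P_SUCCESS]
    history = []
    for t in range(N_TRIALS):
        if t == REVERSAL_TRIAL:
            probs.reverse()  # reversal: swap the two bandits' reward probabilities
        choice = choose(history)                       # participant's decision
        reward = 1 if rng.random() < probs[choice] else 0
        history.append((choice, reward))
    return history
```

A strategy can then be plugged in as the `choose` callback, e.g. `run_task(lambda hist: 0)` for a participant who always picks the Orange bandit.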

Reward Structure

| Bandit | Success Probability | Possible Rewards |
| --- | --- | --- |
| Bandit 1 (Orange) | 80% | 1 |
| Bandit 2 (Blue) | 80% | 1 |
| Failure (for both) | 20% | 0 (no reward) |
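Per the table, each trial outcome is a Bernoulli draw: reward 1 with the bandit's success probability, otherwise 0. A one-line sketch of that sampling step (the function name is illustrative, not from the repository):

```python
import random

def sample_reward(p_success=0.80):
    """One trial outcome per the table: reward 1 with probability p_success, else 0."""
    return 1 if random.random() < p_success else 0

# Over many draws, the mean reward should approach p_success (about 0.80 here).
```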
