16 changes: 15 additions & 1 deletion .gitignore
@@ -1 +1,15 @@
.venv
closed_loop_autoscidis/researcher_hub/firebase-service-account.json
__pycache__
*.pyc
*.pyo
*.pyd
.Python
*.so
*.egg
*.egg-info
dist
build
node_modules
.DS_Store
*.log
155 changes: 155 additions & 0 deletions QUICKSTART.md
@@ -0,0 +1,155 @@
# Quick Start Guide: Digit Memory Experiment

## What Has Been Implemented

A complete digit memory experiment is now integrated into your closed-loop AutoRA workflow. Participants see random digit sequences, memorize them, and type them back. The system automatically collects data and uses it to build theoretical models.

## Experiment Details

- **What it measures**: Memory span for digit sequences
- **Independent Variable**: `n_digits` (3-9) - how many digits are shown
- **Dependent Variable**: `accuracy` (0 or 1) - whether the participant was correct
- **Duration**: ~30-60 seconds per participant (depends on number of trials)
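The core trial logic is easy to sketch. The following is an illustrative Python sketch (the actual experiment runs as jsPsych code in the browser); `make_stimulus` and `score` are hypothetical helper names, not part of the codebase:

```python
import random

def make_stimulus(n_digits):
    # Random digit sequence of the requested length (the IV).
    return "".join(random.choice("0123456789") for _ in range(n_digits))

def score(shown, response):
    # The DV: 1 if the sequence was recalled exactly, else 0.
    return 1 if response == shown else 0

stim = make_stimulus(5)
print(stim, score(stim, stim), score(stim, ""))
```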

## How to Deploy

### 1. Prerequisites
```bash
# In researcher_hub directory
cd closed_loop_autoscidis/researcher_hub
pip install -r requirements.txt

# In testing_zone directory
cd closed_loop_autoscidis/testing_zone
npm install
```

### 2. Add Firebase Credentials

Place your Firebase service account JSON file here:
```
closed_loop_autoscidis/researcher_hub/firebase-service-account.json
```

To obtain this file:
1. Go to Firebase Console β†’ Project Settings β†’ Service Accounts
2. Click "Generate New Private Key"
3. Save as `firebase-service-account.json`

### 3. Deploy Web Experiment

```bash
cd closed_loop_autoscidis/testing_zone
npm run build
firebase deploy
```

Your experiment will be available at the URL shown after deployment.

### 4. Run the Workflow

```bash
cd closed_loop_autoscidis/researcher_hub
python autora_workflow.py
```

The workflow will:
1. Generate initial experiment conditions (random n_digits values)
2. Upload experiments to Firebase
3. Wait for participants to complete them
4. Download and preprocess the data
5. Fit three theoretical models (Nuts, BMS, Logistic Regression)
6. Use model disagreement to select new conditions
7. Repeat for the configured number of cycles
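The condition-selection logic in steps 1 and 6 can be sketched as follows. All names here are illustrative stand-ins, not the actual AutoRA experimentalist API:

```python
import random

N_DIGITS_LEVELS = list(range(3, 10))  # candidate n_digits values, 3-9

def propose_conditions(models, k=1):
    # Hypothetical sketch: with no fitted models yet (cycle 1), sample
    # conditions at random; afterwards, pick the condition where the
    # models' predicted accuracies disagree the most.
    if not models:
        return random.sample(N_DIGITS_LEVELS, k)

    def disagreement(n):
        preds = [m(n) for m in models]
        return max(preds) - min(preds)

    return sorted(N_DIGITS_LEVELS, key=disagreement, reverse=True)[:k]

# Two toy "models" whose predictions diverge as n_digits grows:
models = [lambda n: 1.0 - 0.05 * n, lambda n: 1.0 - 0.12 * n]
print(propose_conditions(models))  # the most informative condition
```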

## Configuration

Edit these variables in `autora_workflow.py`:

```python
num_cycles = 2 # How many closed-loop iterations
num_trials = 4 # Trials per experiment session
num_conditions_per_cycle = 1 # Conditions to test per cycle
N_DIGITS_LEVELS = list(range(3, 10)) # Range of n_digits (3-9)
```

## Output

After running, you'll get:

1. **experiment_data.csv** - All collected data:
```
n_digits,accuracy
3,1.0
5,0.0
7,1.0
```

2. **model_comparison.png** - Visual comparison of the three models showing:
- How accuracy changes with n_digits
- Model predictions vs. actual data
- Three subplots (Logistic, BMS, Nuts)

## Expected Results

Typical pattern: As `n_digits` increases, `accuracy` decreases (it's harder to remember more digits).

Example:
- 3 digits: ~90% accuracy
- 5 digits: ~70% accuracy
- 7 digits: ~40% accuracy
- 9 digits: ~20% accuracy

The theoretical models will capture this relationship and predict where to collect more data.
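The falling pattern is roughly logistic in shape. A minimal sketch with illustrative parameters (not values fitted to real data):

```python
import math

def predicted_accuracy(n_digits, midpoint=6.5, slope=1.0):
    # Logistic curve: accuracy falls from near 1 toward 0 as n_digits
    # grows. midpoint is the span at which accuracy crosses 50%; both
    # parameters are illustrative, not estimates from collected data.
    return 1.0 / (1.0 + math.exp(slope * (n_digits - midpoint)))

for n in (3, 5, 7, 9):
    print(n, round(predicted_accuracy(n), 2))
```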

## Troubleshooting

### "ModuleNotFoundError: No module named 'autora'"
```bash
pip install -r requirements.txt
```

### "Firebase credentials not found"
Make sure `firebase-service-account.json` is in the `researcher_hub/` directory.

### "No data collected"
- Check that your experiment is deployed: `firebase deploy`
- Verify Firebase Firestore is enabled in your project
- Check Firestore rules allow read/write access

### Testing without Firebase
You can test the experiment components locally:
```bash
cd researcher_hub
python -c "from experiment_digit_memory import trial_sequence, stimulus_sequence; print('OK')"
```

## Need Help?

See the detailed documentation in:
- `DIGIT_MEMORY_README.md` - Full technical documentation
- `autora_workflow.py` - Commented workflow code
- `experiment_digit_memory.py` - Experiment implementation

## Key Features

βœ… **Closed-Loop**: Automatically generates new conditions based on model disagreement
βœ… **Multi-Model**: Compares Nuts, BMS, and Logistic Regression
βœ… **Firebase-Ready**: Deploys to web for online data collection
βœ… **Real-Time Feedback**: Participants see if they're correct immediately
βœ… **Robust**: Handles early termination and edge cases
βœ… **Documented**: Comprehensive README and inline comments

## What's Different from the Dots Experiment?

The digit memory experiment replaces the previous dots comparison experiment:

| Aspect | Dots Experiment | Digit Memory |
|--------|----------------|--------------|
| IVs | dots_left, dots_right (2D) | n_digits (1D) |
| DV | accuracy (equal/unequal) | accuracy (correct recall) |
| Display | Visual dots | Text digits |
| Task | Comparison | Memory recall |
| Duration | 2 seconds | 5 seconds |

All workflow components work the same way - only the experiment changed!
107 changes: 107 additions & 0 deletions closed_loop_autoscidis/researcher_hub/DIGIT_MEMORY_README.md
@@ -0,0 +1,107 @@
# Digit Memory Experiment

This directory contains the implementation of a digit memory experiment integrated into the closed-loop AutoRA workflow.

## Overview

The digit memory experiment measures participants' ability to remember sequences of digits. The experiment is parameterized by the number of digits (`n_digits`), which serves as the independent variable.

## Experiment Flow

1. **Display Phase** (5 seconds): A random sequence of digits is shown to the participant
2. **Recall Phase**: The participant types in the digits they remember
3. **Feedback Phase** (1 second): Immediate feedback is shown ("Richtig!" or "Falsch.", German for "Correct!" or "Wrong.")

## Files

### `experiment_digit_memory.py`

Contains two main functions:

- **`trial_sequence(number_of_trials, n_levels)`**: Generates a counterbalanced sequence of trials with varying `n_digits` values. Uses SweetPea if available, otherwise falls back to a simple balanced randomization.

- **`stimulus_sequence(trials)`**: Converts the trial sequence into JavaScript code that runs in the browser using jsPsych plugins.
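The balanced-randomization fallback used when SweetPea is unavailable can be sketched like this (an illustrative reimplementation, not the actual code):

```python
import random

def balanced_trial_sequence(number_of_trials, n_levels):
    # Cycle through the levels so each n_digits value appears about
    # equally often, then shuffle to randomize presentation order.
    levels = list(n_levels)
    trials = [levels[i % len(levels)] for i in range(number_of_trials)]
    random.shuffle(trials)
    return [{"n_digits": n} for n in trials]

print(balanced_trial_sequence(6, range(3, 6)))
```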

### `preprocessing.py`

Contains the `digit_memory_to_experiment_data()` function that converts raw trial data into a pandas DataFrame with:
- **Independent variable**: `n_digits` (number of digits shown)
- **Dependent variable**: `accuracy` (0 or 1, whether the response was correct)
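A sketch of what that conversion looks like (illustrative; the actual implementation is `digit_memory_to_experiment_data()` in `preprocessing.py`):

```python
import pandas as pd

# Toy raw observation in the format the experiment uploads to Firestore.
raw = {"trials": [
    {"n_digits": 5, "shown": "12345", "response": "12345", "correct": True},
    {"n_digits": 7, "shown": "1234567", "response": "1234568", "correct": False},
]}

# Keep only the IV and DV, coercing the boolean into a 0/1 accuracy value.
df = pd.DataFrame(
    {"n_digits": t["n_digits"], "accuracy": float(t["correct"])}
    for t in raw["trials"]
)
print(df)
```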

### `autora_workflow.py`

The main closed-loop workflow that:
1. Generates experiment conditions using experimentalists (random sampling, model disagreement)
2. Deploys experiments to Firebase
3. Collects participant data
4. Fits theoretical models (BMS, Nuts, Logistic Regression)
5. Uses models to generate new experiment conditions

## Integration with Testing Zone

The experiment requires the following jsPsych plugins (already added to `package.json`):
- `jspsych` (^7.3.0)
- `@jspsych/plugin-html-keyboard-response` (^1.1.0)
- `@jspsych/plugin-survey-html-form` (^1.0.0)

These are imported and exposed as globals in `testing_zone/src/design/main.js`:
- `initJsPsych`
- `jsPsychHtmlKeyboardResponse`
- `jsPsychSurveyHtmlForm`

## Data Collection

The experiment collects observations in the following format:

```json
{
"trials": [
{
"n_digits": 5,
"shown": "12345",
"response": "12345",
"correct": true
},
...
]
}
```

This data is:
1. Sent to Firebase Firestore (`autora_out` collection)
2. Downloaded by the AutoRA workflow
3. Preprocessed into experimental data
4. Used by theorists to fit models
5. Used by experimentalists to select new conditions

## Running the Workflow

1. Ensure Firebase credentials are in `firebase-service-account.json`
2. Run the workflow:
```bash
cd researcher_hub
python autora_workflow.py
```

The workflow will:
- Generate initial conditions using random sampling
- Upload experiments to Firebase
- Wait for participants to complete the experiment
- Collect and preprocess data
- Fit three types of models (Nuts, BMS, Logistic Regression)
- Use model disagreement to select new conditions
- Repeat for the specified number of cycles

## Configuration

Key parameters in `autora_workflow.py`:
- `num_cycles`: Number of closed-loop iterations (default: 2)
- `num_trials`: Trials per experiment run (default: 4)
- `num_conditions_per_cycle`: Distinct n_digits conditions per cycle (default: 1)
- `N_DIGITS_LEVELS`: Range of n_digits values (default: 3-9)

## Output

The workflow generates:
- `experiment_data.csv`: All collected experimental data
- `model_comparison.png`: Visualization comparing the three models