
Methodological Question #9

Open
arsisabelle opened this issue Jun 6, 2020 · 0 comments

@arsisabelle

This sounds super interesting, and it is a really neat and useful idea!

I was just wondering: which variable exactly will you categorize with the ML classifier? If I understood correctly, you will compute the confusion index (decoding accuracy) and use this ratio as a dependent measure? Such that you will be able to test whether the same dataset, run through the same ML procedures, differs significantly or not? Or such that you would test whether the decoding accuracy correlates more strongly with the (known) independent variable it is trying to predict? Or is it the preprocessing itself that you are trying to categorize, to see whether decoding accuracy would be better with one pipeline than another?

Just making sure I understand the theory behind your comparisons and what your IVs and DVs are. Whichever the angle, the research question behind this is super useful! :)
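
For concreteness, here is a minimal sketch of the reading I had in mind (decoding accuracy as the DV, preprocessing pipeline as the IV). The pipeline names and the random data are just placeholders I made up, not your actual setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-ins for the same dataset under two preprocessing choices.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 50))   # e.g. trials x features
y = rng.integers(0, 2, size=200)     # known labels (the independent variable being decoded)

preprocessed = {
    "pipeline_A": X_raw,                                   # e.g. minimal preprocessing
    "pipeline_B": (X_raw - X_raw.mean(0)) / X_raw.std(0),  # e.g. z-scored features
}

# Cross-validated decoding accuracy as the dependent measure, one value per pipeline,
# which could then be compared statistically across pipelines.
for name, X in preprocessed.items():
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                             cv=5, scoring="accuracy")
    print(f"{name}: mean decoding accuracy = {scores.mean():.2f}")
```

Is that roughly the comparison you have in mind, or is the classification target something else entirely?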
