We might need some basic checks to determine whether a correspondence pattern analysis is useful, since I detected one pattern that causes huge problems:
```python
{'ID': [365, 371, 370, 367, 364, 369, 368, 366, 362],
 'taxa': ['Hachijo',
          'Hachijo',
          'Kagoshima',
          'Kochi',
          'Kyoto',
          'Oki',
          'Sado',
          'Shuri',
          'Tokyo'],
 'seqs': [['k', 'iː', '-', '-', '-', '-'],
          ['k', 'e', 'b', 'u', 'ɕ', 'o'],
          ['k', 'e', '-', '-', '-', 'i'],
          ['k', 'e', '-', '-', '-', '-'],
          ['k', 'eː', '-', '-', '-', '-'],
          ['k', 'e', '-', '-', '-', '-'],
          ['k', 'e', '-', '-', '-', '-'],
          ['k', 'iː', '-', '-', '-', '-'],
          ['k', 'e', '-', '-', '-', '-']],
 'alignment': [['k', 'iː', '-', '-', '-', '-'],
               ['k', 'e', 'b', 'u', 'ɕ', 'o'],
               ['k', 'e', '-', '-', '-', 'i'],
               ['k', 'e', '-', '-', '-', '-'],
               ['k', 'eː', '-', '-', '-', '-'],
               ['k', 'e', '-', '-', '-', '-'],
               ['k', 'e', '-', '-', '-', '-'],
               ['k', 'iː', '-', '-', '-', '-'],
               ['k', 'e', '-', '-', '-', '-']],
 'dataset': 'japonic',
 'seq_id': '449 ("hair")'}
```
Here, we have two words from Hachijo in the same cognate set, but they differ (!). One can argue that, for correspondence patterns, it is impossible for strictly cognate words to differ, so a preprocessing step can in fact arbitrarily decide in favor of one of them.
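A minimal sketch of such a preprocessing check could look like the following (the function names `duplicate_taxa` and `deduplicate_taxa` are my own, not part of any existing library, and the "keep the first occurrence" rule is just one arbitrary choice):

```python
from collections import Counter

def duplicate_taxa(pattern):
    """Return the taxa that occur more than once in a cognate set."""
    counts = Counter(pattern['taxa'])
    return [taxon for taxon, n in counts.items() if n > 1]

def deduplicate_taxa(pattern):
    """Keep only one alignment row per taxon.

    When a taxon occurs more than once (as Hachijo does above), we
    arbitrarily keep its first row, since strictly cognate words from
    the same taxon cannot show different correspondences anyway.
    """
    seen, keep = set(), []
    for i, taxon in enumerate(pattern['taxa']):
        if taxon not in seen:
            seen.add(taxon)
            keep.append(i)
    return {
        'ID': [pattern['ID'][i] for i in keep],
        'taxa': [pattern['taxa'][i] for i in keep],
        'alignment': [pattern['alignment'][i] for i in keep],
    }
```

Running `duplicate_taxa` on the pattern above would flag `['Hachijo']`, and `deduplicate_taxa` would silently drop the second Hachijo row; whether to warn, drop, or pick the row most similar to the rest of the alignment is open for discussion.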