Conversation
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main    #3426      +/-   ##
==========================================
- Coverage   91.92%   91.79%   -0.13%
==========================================
  Files          37       37
  Lines       32153    32352     +199
  Branches     5143     5144       +1
==========================================
+ Hits        29556    29699     +143
- Misses       2264     2320      +56
  Partials      333      333
```
```c
    PyObject *summary_func;
    PyObject *norm_func;
} two_locus_general_stat_params;
```
I see that the CPython code coverage is split from the rest of the python tests. I can't test this directly without a small multiallelic test case in test_python_c.py.
Yep, it looks that way. Can you set one up? You could do that either by having a small tree sequence with a high enough mutation rate that there are lots of multiple hits (we do use msprime in that file), or by explicitly constructing the tables.
I was thinking of something like this:

```python
In [1]: import msprime

In [2]: ts = msprime.sim_ancestry(2, recombination_rate=0.1, sequence_length=100, random_seed=23)
   ...: ts = msprime.sim_mutations(ts, rate=0.1, discrete_genome=True, random_seed=23)
   ...: print(f"allele counts: {set(len(s.mutations) for s in ts.sites())}")
allele counts: {1, 2, 3}

In [3]: ts.num_samples, ts.num_trees, ts.num_sites, ts.num_edges
Out[3]: (4, 47, 46, 137)
```

Which gives us a small multiallelic test case.
```c
out:
    return ret;
}
```
This function now serves as an inner wrapper. The general stat accepts the summary function params so that the CPython code can pass them directly. All of the specialized stat functions call this function.
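The wrapper pattern described here can be sketched in Python. This is only an illustration of the dispatch structure (the real implementation is the C code in this PR); the function names and the $D^2$ formula below are illustrative, not the PR's actual code.

```python
def two_locus_general_stat(hap_counts, sample_set_sizes, summary_func):
    # Inner wrapper: the general routine accepts the summary function
    # (and, in the real C code, its params) so callers can pass them
    # straight through.
    return summary_func(hap_counts, sample_set_sizes)

def D2_stat(hap_counts, sample_set_sizes):
    # A specialized stat is just a call into the general routine with a
    # fixed summary function -- here, squared LD (D^2), assuming the
    # haplotype counts are ordered (AB, Ab, aB).
    def summary(counts, n):
        w_AB, w_Ab, w_aB = counts
        p_AB = w_AB / n
        p_A = (w_AB + w_Ab) / n
        p_B = (w_AB + w_aB) / n
        return (p_AB - p_A * p_B) ** 2
    return two_locus_general_stat(hap_counts, sample_set_sizes, summary)

print(D2_stat((2.0, 1.0, 1.0), 4.0))
```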
Thank you for opening this, @lkirk. I'm excited to see this implemented! Do we need to do any testing to demonstrate that there are no memory leaks or anything like that, which wouldn't be included in the test suite? I know it could be found in the tests, but it may be helpful to spell out here how the API would work for a two- or more-way stat? Would it be:
Yes, this needs leak checking and documentation, which I can add. I mostly wanted to make sure the user interface made sense first.
Hi, @lkirk! This looks pretty straightforward. To have a careful opinion about the API, I think I need to see a reasonably careful docstring? For instance, what exactly are the arguments to
```python
[
    (
        ts := [
            p for p in get_example_tree_sequences() if p.id == "n=100_m=32_rho=0.5"
```
Does this run the risk of not being run at all if there is no such example tree sequence?
I don't think so; I'm selecting index `[0]` out of this, which will fail when trying to select the 0th element of an empty list. It might be better to use `pytest_params=False`, now that I look at `get_example_tree_sequences` again; I could avoid the `.value[0]`.
```python
    ],
)
def test_general_one_way_two_locus_stat_multiallelic(stat):
    (ts,) = {t.id: t for t in get_example_tree_sequences()}["all_fields"].values
```
Insert some things like `assert ts.num_sites > 0`, and probably something asserting that this one is actually multiallelic, here.
Good call, I will add this. These test tree sequences are shared by many tests and the docstring for this tree doesn't explicitly state that it has multiallelic sites, so it's good to assert in case it changes.
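A minimal sketch of what such guard assertions could check. It uses a namedtuple stand-in for tskit's `Site` objects so it is self-contained; in the actual test the same checks would run on the shared example tree sequence `ts`.

```python
from collections import namedtuple

# Stand-in for tskit's Site object; only the `mutations` field matters here.
Site = namedtuple("Site", ["mutations"])

def is_multiallelic(sites):
    # A site with more than one mutation can carry more than two alleles.
    return any(len(s.mutations) > 1 for s in sites)

# Illustrative data: the second site has two mutations (multiallelic).
sites = [Site(mutations=["m0"]), Site(mutations=["m1", "m2"])]
assert len(sites) > 0          # analogous to: assert ts.num_sites > 0
assert is_multiallelic(sites)  # analogous to a multiallelic guard on ts
```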
```python
        (ts, "pi2_unbiased"),
    ],
)
def test_general_two_locus_site_stat(ts, stat):
```
I think we should insert some asserts here checking that `ts` has the required properties for being a good test.
Hi @petrelharp, thanks for taking a look. I didn't offer much of a description before, does this help?

Summary Function

I've been writing summary functions with a signature where X is a matrix whose rows correspond to sample sets and columns correspond to haplotype counts.

Why lay the data out this way? Because numpy is row-major, iterating over the arrays gives us rows, which makes it easy to select the haplotype counts in one line. Note: under the hood, the data is still laid out in an optimal way (at least as far as I can tell -- see this note).

In addition, the sample set sizes array is shaped so that we can use it to normalize the counts in one go. Finally, since we're no longer controlling the summary functions ourselves, the user has control over polarisation.

Normalisation

Perhaps the most clunky thing about this API is that the user will need a normalisation function for multiallelic data (the norm_f parameter). Its function signature is the same as the corresponding internal code, and the default is $1/(n_A n_B)$ (see the PR description). I would have loved to be able to pass
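The row-major layout and one-shot normalization can be illustrated with a small numpy sketch. The counts and sizes below are made-up illustrative values, not from the PR; the point is the shapes.

```python
import numpy as np

# X has one row per sample set and one column per haplotype count.
X = np.array([[2.0, 1.0, 1.0],
              [3.0, 0.0, 1.0]])

# Sample set sizes shaped as a column vector so broadcasting divides
# every row of counts by its own sample set size in one expression.
sample_set_sizes = np.array([[4.0], [4.0]])

counts_set0 = X[0]            # row-major: one line selects a sample set
freqs = X / sample_set_sizes  # normalize all counts in one go
print(freqs)
```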
```c
        sample_set_sizes, sample_sets, result_dim, f, f_params, norm_f, out_rows,
        row_positions, out_cols, col_positions, options, result);
}
out:
```
I've looked at the code pretty carefully, and things look good besides the comments. Nice work!

Description
This is the last of the required components for the LD matrix methods. I wanted feedback on the API before I add documentation, but this method is complete and tested. The final things to do are to leak check the CPython code and add some documentation.
This feature enables a user to implement their own two-locus count statistic in Python, similar to `ts.sample_count_stat`. User functions take two arguments: the first is a matrix of haplotype counts and the second is a vector of sample set sizes. For instance, this is how we would implement $D$ with this API:

Since this API supports multiallelic sites, the user can also pass a normalisation function to control how the data is normalised across multiple alleles. The normalisation function is only run when computing over multiallelic sites. I've set the default to be $1/(n_A n_B)$, which is simply the arithmetic mean of the alleles in a given pair of sites. This will suffice in the majority of cases (the only outlier is $r^2$, for which there is already a Python API). We also support computing statistics between sample sets.
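As a sketch only (not necessarily the PR's exact code), a $D$ summary function under this API might look like the following, assuming the haplotype-count columns are ordered (AB, Ab, aB):

```python
import numpy as np

def D_summary(hap_counts, sample_set_sizes):
    # hap_counts: one row per sample set, columns assumed to be counts of
    # the AB, Ab, and aB haplotypes; sample_set_sizes: n for each set.
    n = sample_set_sizes
    w_AB = hap_counts[:, 0]
    w_Ab = hap_counts[:, 1]
    w_aB = hap_counts[:, 2]
    p_AB = w_AB / n
    p_A = (w_AB + w_Ab) / n  # frequency of A at the left locus
    p_B = (w_AB + w_aB) / n  # frequency of B at the right locus
    return p_AB - p_A * p_B  # D = p_AB - p_A * p_B

# One sample set of 4 samples: D = 2/4 - (3/4)*(3/4) = -0.0625
print(D_summary(np.array([[2.0, 1.0, 1.0]]), np.array([4.0])))
```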
The user would use the above summary function like this:
Where `1` specifies the length of the output array; we always require 1 dimension -- same as the `ts.sample_count_stat` function.

PR Checklist: