Conversation
… solve no sources bug after screening
Based on our conversation last week, it would be helpful for the error messages to say which source model was the cause of the problem when running pyELQ with multiple source maps. I guess this would be done in ELQModel.initialise, since that's where the source models have corresponding keys. If the key was at the beginning of the error message, that would be helpful for filtering out source models that are causing problems in a run.
I've updated the error message in generate_sources to include the source map label. I think this will achieve what you are asking for if you have multiple source maps. I also cut the number of iterations of the while loop to 50, since you were concerned about runtime. I could maybe make that a user-defined variable?
Yes, that will work. I forgot that the source models have a label within them. I have tested the solution and it only took about 2 seconds to generate the sources 50 times, which is less than I expected. Since that's quite negligible, I think there's no real reason for the user to want to change it.
Description
Added a function to source_model that generates sources consistent with the coverage map, to avoid the issue where the initial random source gets screened out, leaving no sources and crashing the code.
The function puts the existing approach inside a while loop, repeatedly proposing sources up to 100 times until at least one source is inside the coverage. If 100 proposals are reached, the code will error (I think if this happens there is probably something fundamentally wrong with the input data).
The existing approach of generating sources is unaffected, so this is not a breaking change, just an improved approach if the user wants to use it.
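The retry loop described above, including the labelled error message discussed in the conversation, can be sketched roughly as follows. This is a minimal illustration, not the actual pyELQ implementation: the names `label`, `propose_sources`, `coverage_mask`, and `max_tries` are all hypothetical stand-ins.

```python
# Hypothetical sketch of the retry approach; `propose_sources` and
# `coverage_mask` are illustrative callables, not pyELQ API.
def generate_sources(label, propose_sources, coverage_mask, max_tries=50):
    """Repeatedly propose sources until at least one lies inside coverage."""
    for _ in range(max_tries):
        sources = propose_sources()
        inside = [s for s in sources if coverage_mask(s)]
        if inside:
            return inside
    # Prefix the error with the source map label so that runs with
    # multiple source maps can be filtered by the offending model.
    raise RuntimeError(
        f"[{label}] No sources inside coverage after {max_tries} proposals; "
        "there may be something fundamentally wrong with the input data."
    )
```

Putting the label at the start of the message means a simple prefix filter on the logs isolates the problematic source model, as requested in the conversation above.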
Fixes # (issue)
Added a function in source_model, generate_sources, that generates sources consistent with the coverage and is compatible with both Gaussian plume and Finite Volume.
Refactored initialise_dispersion_model to be compatible with the new function and made it more general to cover the FE case.
Updated example notebooks to call the new function where relevant instead of the old method.
Added a test function for the new function.
Jupyter Notebooks
Examples updated where sources were previously generated using random sampling unrelated to the coverage map. I've not changed the "true" source model generation.
How Has This Been Tested?