Hi Mr. Jiang,
Your method looks very interesting to me. I have a few whole-slide images (WSIs) that I stitched together from their fields of view (FOVs), so I also have access to the individual FOVs.
From what I understand, your model requires paired marked and clean images for training. Unfortunately, I don't have clean versions of my test data, so I was thinking of generating synthetic training pairs by drawing marker annotations on top of my clean FOVs and training the model on those.
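In case it helps to be concrete, here is roughly what I had in mind for the synthetic pairs — a rough NumPy sketch, where the stroke shape, marker color, and opacity are all made-up parameters on my side, not anything from your paper:

```python
import numpy as np

def add_synthetic_marker(clean, color=(20, 60, 160), alpha=0.6, thickness=6, rng=None):
    """Overlay a random translucent marker-like stroke onto a clean RGB FOV.

    `clean` is an HxWx3 uint8 array. The random-walk stroke, the pen color,
    and the blending opacity are illustrative choices only.
    Returns the marked image and the stroke mask (the training target pair).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = clean.shape
    mask = np.zeros((h, w), dtype=bool)
    # Random walk approximating a hand-drawn pen stroke.
    y, x = int(rng.integers(0, h)), int(rng.integers(0, w))
    for _ in range(200):
        dy, dx = rng.integers(-3, 4, size=2)
        y = int(np.clip(y + dy, 0, h - 1))
        x = int(np.clip(x + dx, 0, w - 1))
        y0, y1 = max(0, y - thickness), min(h, y + thickness)
        x0, x1 = max(0, x - thickness), min(w, x + thickness)
        mask[y0:y1, x0:x1] = True
    # Alpha-blend the marker color over the stroke region.
    marked = clean.astype(np.float32)
    marked[mask] = (1 - alpha) * marked[mask] + alpha * np.array(color, np.float32)
    return marked.astype(np.uint8), mask
```

I would then train on (marked, clean) pairs generated this way; does that match what you did for your own training data?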
Before I proceed, I wanted to ask if you already have a pretrained model available that I could use directly on my data. If not, I’ll go ahead with creating the training dataset myself as described above.
Also, I noticed that staining intensity varies across my WSIs, and I’m concerned this might affect performance, especially if the model is trained on lighter-stained images and tested on darker ones. Have you encountered this issue, and do you have any recommendations for handling it?
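One mitigation I was considering is augmenting the training data with random intensity jitter so the model sees a range of staining strengths — a crude sketch below, where the gamma and scale ranges are my own guesses, not tuned values:

```python
import numpy as np

def jitter_stain_intensity(img, rng=None, gamma_range=(0.7, 1.4), scale_range=(0.85, 1.15)):
    """Randomly vary the apparent staining intensity of an RGB patch.

    A simple augmentation: a per-channel gamma shift plus a global
    brightness scale, meant to cover both lighter- and darker-stained
    slides. The ranges are illustrative guesses.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = img.astype(np.float32) / 255.0
    gamma = rng.uniform(*gamma_range, size=3)  # per-channel tone shift
    scale = rng.uniform(*scale_range)          # overall brightness factor
    x = np.clip((x ** gamma) * scale, 0.0, 1.0)
    return (x * 255.0).astype(np.uint8)
```

Would something like this be enough in your experience, or would you recommend proper stain normalization instead?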
Thanks!