Conv padding: remove string values, move to TF-style values #62
stevenabreu7 wants to merge 2 commits into main
Conversation
I don't really see the issue here with the string paddings. Would you like the string padding values left out to reduce complexity?
Yeah, supporting string padding is not really necessary (it's a higher level of abstraction, and an IR is not the place for that), plus it adds more complexity. It also becomes a bit arbitrary which string values we support — there are many: `same`, `valid`, `full`, `wrap`, `reflect`, `constant`, `mirror`, `nearest`, etc. I also agree with TensorFlow-style padding. We can still allow for PyTorch-style padding, and just cast that to the TensorFlow style.
Is this still a feature we would want to have? I agree that it's a high-level feature, but it's pretty convenient, and I imagine that a lot of users rely on a 1:1 conversion from PyTorch. An alternative could be to use some kind of wrapper function that converts `"same"` to a specific (lower-level) set of parameters. Thoughts? @matjobst @stevenabreu7
As pointed out by Terry, there is no need to support string padding values such as `"same"` or `"valid"`: `"valid"` is equivalent to `padding=0`, and `"same"` cannot always be implemented correctly using the PyTorch-style symmetric padding that we're currently using.
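To illustrate why `"same"` doesn't always map onto symmetric padding, here is a minimal sketch (not part of any proposed API) of how TensorFlow computes `"same"` padding per spatial dimension — whenever the required total padding is odd (e.g. with even kernel sizes), the begin/end amounts differ:

```python
import math

def same_padding(in_size: int, kernel: int, stride: int) -> tuple:
    """TF-style 'same' padding for one spatial dimension:
    pick total padding so that out_size == ceil(in_size / stride)."""
    out_size = math.ceil(in_size / stride)
    total = max((out_size - 1) * stride + kernel - in_size, 0)
    return (total // 2, total - total // 2)  # (pad_begin, pad_end)

# Odd kernel: symmetric, representable with a single PyTorch-style value.
print(same_padding(5, 3, 1))  # (1, 1)
# Even kernel: asymmetric, NOT representable as a single (pad_x, pad_y) entry.
print(same_padding(5, 2, 1))  # (0, 1)
```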
Question: should we move to TensorFlow-style padding, where the padding at the start and end of each dimension may be different? I.e. instead of `(pad_x, pad_y)`, we specify `((pad_top, pad_bottom), (pad_left, pad_right))`?
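If we adopt that layout, PyTorch-style symmetric padding casts into it trivially. A hypothetical helper (the name and signature are illustrative, not an existing API) sketching the conversion discussed above:

```python
def torch_to_tf_padding(padding):
    """Cast PyTorch-style symmetric padding, e.g. (pad_x, pad_y), to
    TF-style per-dimension pairs ((pad_top, pad_bottom), (pad_left, pad_right)).
    Each single value p becomes the pair (p, p) for that dimension."""
    return tuple((p, p) for p in padding)

print(torch_to_tf_padding((1, 2)))  # ((1, 1), (2, 2))
```

The reverse direction is lossy: a TF-style pair can only be collapsed back to a single value when both entries are equal, which is exactly the asymmetry argument against keeping the current format.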