Dispose implicitly converted TorchSharp.Scalar and torch.Tensor #1496
Draft
hiyuh wants to merge 101 commits into dotnet:main from hacarus:dispose-implicitly-converted-scalar-tensor
Conversation
Otherwise, dotnet build will fail.
In the following case, at least 266 exceptions are observed.
* allowImplicitConversionOperator = false
* dotnet test /p:SkipCuda=true /p:SkipNetFxBuild=true --blame test\TorchSharpTest\TorchSharpTest.csproj -c Release
* Update src/TorchSharp/Scalar.cs.
  + Introduce ScalarLeakDetector.
  + Update Scalar.
    - Use ScalarLeakDetector.ThrowIfImplicitConversionNotAllowed.
In the following case, at least 45 exceptions are observed.
* allowImplicitConversionOperator = false
* dotnet test /p:SkipCuda=true /p:SkipNetFxBuild=true --blame test\TorchSharpTest\TorchSharpTest.csproj -c Release
* Update src/TorchSharp/Tensor/Tensor.cs.
  + Introduce TensorLeakDetector.
  + Update Tensor.
    - Use TensorLeakDetector.ThrowIfImplicitConversionNotAllowed.
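The leak-detector idea can be sketched as follows. This is a minimal illustration, assuming a process-wide `allowImplicitConversionOperator` switch; the field name comes from the test command above, but the surrounding shape is an assumption, not the PR's actual implementation.

```csharp
using System;

// Sketch: a guard that the implicit conversion operators call. When the
// switch is off, any implicit conversion to TorchSharp.Scalar throws,
// which is how the exception counts above were collected under dotnet test.
internal static class ScalarLeakDetector
{
    // When false, implicit conversions are treated as leaks in waiting:
    // the converted Scalar has no owner that will ever call Dispose on it.
    internal static bool allowImplicitConversionOperator = true;

    internal static void ThrowIfImplicitConversionNotAllowed()
    {
        if (!allowImplicitConversionOperator)
            throw new InvalidOperationException(
                "Implicit conversion to TorchSharp.Scalar is not allowed; " +
                "declare the Scalar explicitly and dispose it.");
    }
}
```

An implicit conversion operator would then call the guard before converting, e.g. `ScalarLeakDetector.ThrowIfImplicitConversionNotAllowed();` as its first statement, so every conversion site that was never rewritten shows up as a test failure.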
* Update src/TorchSharp/Tensor/Tensor.Operators.cs.
  + Declare TorchSharp.Scalar more explicitly.
    - Use prefix for left or right.
    - Call ToScalar explicitly.
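The operator rewrite pattern, as a hedged sketch: the point is that the primitive operand is converted via an explicit `ToScalar()` call into a variable named after its side (`left`/`right` prefix), and disposed with `using` once the call completes. The exact operator bodies in the PR may differ.

```csharp
// Before: the double is implicitly converted to a TorchSharp.Scalar
// that nothing ever disposes.
// public static Tensor operator +(Tensor left, double right) => left.add(right);

// After (sketch): the Scalar is declared explicitly, named with a prefix
// for the operand it came from, and disposed at the end of the operator.
public static Tensor operator +(Tensor left, double right)
{
    using Scalar right_scalar = right.ToScalar();
    return left.add(right_scalar);
}
```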
* Update src/TorchSharp/Tensor/Tensor.Math.cs.
  + Declare TorchSharp.Scalar explicitly.
  + Add FIXMEs.
* Update src/TorchSharp/Optimizers/Adadelta.cs.
  + Update Adadelta.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0 explicitly.
* Update src/TorchSharp/Optimizers/Adagrad.cs.
  + Update Adagrad.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused weight_decay_scalar.
    - Add FIXME for possible unused initial_accumulator_value.
    - Cache weight_decay != 0.
* Update src/TorchSharp/Optimizers/Adam.cs.
  + Update Adam.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
    - Add FIXME for possible no denom disposing.
* Update src/TorchSharp/Optimizers/Adamax.cs.
  + Update Adamax.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
    - Add FIXME for CA1806.
* Update src/TorchSharp/Optimizers/ASGD.cs.
  + Update ASGD.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
* Update src/TorchSharp/Optimizers/NAdam.cs.
  + Update NAdam.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
    - Add FIXME for possible no denom disposing.
* Update src/TorchSharp/Optimizers/RAdam.cs.
  + Update RAdam.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused weight_decay_scalar.
    - Cache weight_decay != 0.
    - Add FIXME for possible torch.Tensor.sub_ use.
    - Add FIXME for possible no dispose for torch.Tensor.
      - bias_corrected_exp_avg
      - t6
      - adaptive_lr and its intermediates and derives
    - Add FIXME for possible no dispose on param.add_ if rho_t > 5.
* Update src/TorchSharp/Optimizers/RMSprop.cs.
  + Update RMSProp.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible unused momentum_scalar.
    - Add FIXME for possible unused weight_decay_scalar.
    - Cache momentum > 0.
    - Cache weight_decay != 0.
    - Add FIXME for possible no avg dispose.
* Update src/TorchSharp/Optimizers/SGD.cs.
  + Update SGD.step.
    - Declare TorchSharp.Scalar explicitly.
    - Cache momentum != 0.
    - Cache dampening != 1.
    - Cache weight_decay != 0.
    - Omit unused TorchSharp.Scalar construction.
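The optimizer commits above all apply the same recurring pattern: hoist the Scalar construction out of the per-parameter loop, tie its lifetime to the step with `using`, and cache the `weight_decay != 0` test once. A sketch under those assumptions — the member names (`_parameters`, `weight_decay`) are illustrative, not copied from the PR:

```csharp
// Sketch of the recurring optimizer-step pattern; names are illustrative.
public void step()
{
    // Cache the condition once instead of re-testing per parameter.
    var has_weight_decay = weight_decay != 0;

    // One explicit Scalar for the whole step, disposed when step returns.
    using Scalar weight_decay_scalar = weight_decay.ToScalar();

    foreach (var param in _parameters) {
        var grad = param.grad;
        if (grad is null) continue;
        if (has_weight_decay) {
            // Passing weight_decay directly here would implicitly convert
            // it, leaking one Scalar per parameter per step.
            grad = grad.add(param, alpha: weight_decay_scalar);
        }
        // ... optimizer-specific update ...
    }
}
```

The cached boolean matters because when `weight_decay == 0` the Scalar may never be used at all, which is what the "possible unused weight_decay_scalar" FIXMEs flag.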
Force-pushed from 4a7cbd2 to 64520bc.
* Update src/TorchAudio/Functional.cs.
  + Update griffinlim.
    - Declare TorchSharp.Scalar explicitly.
    - Introduce eps_scalar.
* Update src/TorchSharp/Optimizers/AdamW.cs.
  + Update AdamW.step.
    - Declare TorchSharp.Scalar explicitly.
    - Dispose denom explicitly.
* Update src/TorchSharp/NN/Normalization/BatchNorm.cs.
  + Update BatchNorm.forward.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for cache over training.
* Update src/TorchSharp/Tensor/Factories/Tensor.Factories.cs.
  + Update torch.normal.
    - Declare TorchSharp.Scalar explicitly.
* Update src/TorchVision/Utils.cs.
  + Update torchvision.utils.save_image.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for possible torch.Tensor.round_ use.
    - Add FIXME for no torch.min_int_value.
* Update src/TorchSharp/Optimizers/Rprop.cs.
  + Update Rprop.step.
    - Declare TorchSharp.Scalar explicitly.
    - Add FIXME for unused lr.
    - Add FIXME for possible torch.Tensor.sign_ use.
    - Cache eta{minus,plus} and 1 as torch.Tensor.
* Update src/TorchVision/Ops/StochasticDepth.cs.
  + Update torchvision.ops.stochastic_depth.
    - Declare TorchSharp.Scalar explicitly.
* Update src/TorchVision/Utils.cs.
  + Update torchvision.utils.make_grid.
    - Declare TorchSharp.Scalar explicitly.
* Update src/TorchSharp/Distributions/Constraints.cs.
  + Update torch.distributions.constraints._PositiveSemiDefinite.check.
    - Declare TorchSharp.Scalar explicitly.
* Update src/TorchSharp/Distributions/Constraints.cs.
  + Update torch.distributions.constraints._CorrCholesky.check.
    - Declare TorchSharp.Scalar explicitly.
* Update src/Examples/SequenceToSequence.cs.
  + Update TransformerModel.GenerateSquareSubsequentMask.
    - Declare TorchSharp.Scalar explicitly.
* Update src/TorchAudio/Modules/Tacotron2.cs.
  + Update TorchSharp.Modules.Tacotron2.forward.
    - Declare TorchSharp.Scalar explicitly.
* Update src/TorchAudio/Modules/Tacotron2.cs.
  + Update TorchSharp.Modules.Tacotron2.Attention.forward.
    - Declare TorchSharp.Scalar explicitly.
* Update src/TorchSharp/Distributions/NegativeBinomial.cs.
  + Update TorchSharp.Modules.NegativeBinomial.log_prob.
    - Declare TorchSharp.Scalar explicitly.
This is an exceptional change. Since comparison operators are so widely used, I have to give up fixing their call sites one by one for now. Introducing operators taking {int,long,float,double} would cover most cases and prevent missing TorchSharp.Scalar.Dispose. However, they would leave the TorchSharp.Scalar construction cost as is.
* Update src/TorchSharp/Tensor/Tensor.cs.
  + Add more operator ==.
  + Add more operator !=.
  + Add more operator <.
  + Add more operator <=.
  + Add more operator >.
  + Add more operator >=.
Support the other types which are covered by implicit conversion to TorchSharp.Scalar: byte, sbyte, short, Half, bool, (float, float), System.Numerics.Complex.
* Update src/TorchSharp/Tensor/Tensor.cs.
  + Add more operator ==.
  + Add more operator !=.
  + Add more operator <.
  + Add more operator <=.
  + Add more operator >.
  + Add more operator >=.
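A sketch of what one pair of the added comparison overloads might look like. The operand is taken as a primitive, so no implicitly converted Scalar can escape undisposed, though (as noted above) a Scalar is still constructed per call. The bodies here are an assumption; only the overload set itself is from the PR.

```csharp
// Sketch: comparison overloads taking the primitive directly.
// C# requires < and > (and <= / >=, == / !=) to be declared in pairs,
// which is why each commit adds the full matched set.
public static Tensor operator <(Tensor left, int right)
{
    using Scalar right_scalar = right.ToScalar();
    return left.lt(right_scalar);
}

public static Tensor operator >(Tensor left, int right)
{
    using Scalar right_scalar = right.ToScalar();
    return left.gt(right_scalar);
}
```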
* Update src/TorchSharp/Tensor/Tensor.cs.
  + Call PrintValue w/ explicitly declared TorchSharp.Scalar.
* Update src/TorchSharp/Tensor/TensorExtensionMethods.cs.
  + Update TensorExtensionMethods.To*(this Tensor value).
    - Declare TorchSharp.Scalar explicitly.
* Update src/TorchSharp/Tensor/Tensor.cs.
  + Introduce torch.Tensor.fill_ overloads.
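The overload pattern used for fill_ (and the index_* methods below) can be sketched like this, assuming an existing Scalar-taking entry point; the new overload owns the Scalar for the duration of the call:

```csharp
// Sketch: a primitive overload forwarding to the Scalar-taking fill_,
// so callers writing t.fill_(0.0) no longer leave a Scalar undisposed.
public Tensor fill_(double value)
{
    using Scalar value_scalar = value.ToScalar();
    return fill_(value_scalar);
}
```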
* Update src/TorchSharp/Optimizers/Rprop.cs.
  + Update TorchSharp.Modules.Rprop.step.
    - Cosmetic.
    - Use torch.Tensor.masked_fill_.
  + Update TorchSharp.Modules.Rprop.State.Initialize.
    - Use a torch.Tensor.fill_ overload.
* Update src/TorchSharp/Tensor/Tensor.cs.
  + Introduce torch.Tensor.index_put_ overloads.
* Update src/TorchSharp/Tensor/Tensor.cs.
  + Introduce torch.Tensor.index_add{,_} overloads.
* Update src/TorchSharp/Tensor/Tensor.cs.
  + Introduce torch.Tensor.index_fill{,_} overloads.
* Update src/TorchSharp/Tensor/Tensor.cs.
  + Introduce torch.Tensor.threshold{,_} overloads.
* Update src/TorchSharp/NN/Activation/Threshold.cs.
  + Update torch.nn.functional.threshold.
    - Use torch.Tensor.threshold{,_} overloads.
* Update src/TorchSharp/Tensor/Tensor.cs.
  + Introduce torch.Tensor.softplus overloads.
* Update src/TorchSharp/Tensor/Tensor.cs.
  + Add more torch.Tensor.celu{,_} overloads.
* Update src/TorchSharp/NN/Activation/CELU.cs.
  + Update torch.nn.functional.celu.
    - Use torch.Tensor.celu{,_} overloads.
* Update src/TorchSharp/Tensor/Tensor.cs.
  + Add more torch.Tensor.elu{,_} overloads.
* Update src/TorchSharp/NN/Activation/Hardtanh.cs.
  + Introduce torch.Tensor.hardtanh{,_} overloads.
* Update src/TorchSharp/NN/Activation/Hardtanh.cs.
  + Update torch.nn.functional.hardtanh.
    - Use torch.Tensor.hardtanh{,_} overloads.
* Update src/TorchSharp/Tensor/Tensor.cs.
  + Introduce torch.Tensor.leaky_relu{,_} overloads.
* Update src/TorchSharp/NN/Activation/LeakyReLU.cs.
  + Update torch.nn.functional.leaky_relu.
    - Use torch.Tensor.leaky_relu{,_} overloads.
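The activation commits all follow the same shape, shown here for leaky_relu as a hedged sketch (threshold, softplus, celu, elu and hardtanh are analogous). The functional entry point then forwards to the new overload, so callers passing a double never hit the implicit conversion:

```csharp
// Sketch: a primitive overload wrapping an assumed Scalar-taking
// leaky_relu, with the Scalar disposed when the call returns.
public Tensor leaky_relu(double negative_slope = 0.01)
{
    using Scalar negative_slope_scalar = negative_slope.ToScalar();
    return leaky_relu(negative_slope_scalar);
}
```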
Scalars implicitly created in Tensor operators #1434.
TorchSharp.{Scalar,Tensor}LeakDetector can throw exceptions on implicit conversion.
Work on {TorchSharp.Scalar,torch.Tensor}.Dispose in TorchSharp is ongoing.