|
242 | 242 | <div class="pytorch-left-menu-search"> |
243 | 243 |
|
244 | 244 | <div class="version"> |
245 | | - <a href='https://pytorch.org/docs/versions.html'>main (2.3.0a0+gitf76e541 ) ▼</a> |
| 245 | + <a href='https://pytorch.org/docs/versions.html'>main (2.3.0a0+gitca96784 ) ▼</a> |
246 | 246 | </div> |
247 | 247 |
|
248 | 248 |
|
@@ -1364,7 +1364,7 @@ <h1>Source code for torch</h1><div class="highlight"><pre> |
1364 | 1364 | <span class="sd"> A handful of CUDA operations are nondeterministic if the CUDA version is</span> |
1365 | 1365 | <span class="sd"> 10.2 or greater, unless the environment variable ``CUBLAS_WORKSPACE_CONFIG=:4096:8``</span> |
1366 | 1366 | <span class="sd"> or ``CUBLAS_WORKSPACE_CONFIG=:16:8`` is set. See the CUDA documentation for more</span> |
1367 | | -<span class="sd"> details: `<https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility>`_</span> |
| 1367 | +<span class="sd"> details: `<https://docs.nvidia.com/cuda/cublas/index.html#results-reproducibility>`_</span> |
1368 | 1368 | <span class="sd"> If one of these environment variable configurations is not set, a :class:`RuntimeError`</span> |
1369 | 1369 | <span class="sd"> will be raised from these operations when called with CUDA tensors:</span> |
1370 | 1370 |
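The docstring in the hunk above explains that, on CUDA 10.2+, some cuBLAS-backed operations are only deterministic when `CUBLAS_WORKSPACE_CONFIG` is set to one of two documented values. A minimal sketch of how a script would apply that, assuming no GPU-specific setup beyond the environment variable (the variable must be set before cuBLAS is initialized):

```python
import os

# One of the two configurations the docstring names; ":16:8" is the
# lower-memory alternative. Must be set before any cuBLAS call.
os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")

# With this in place, torch.use_deterministic_algorithms(True) makes the
# affected CUDA ops deterministic instead of raising a RuntimeError.
print(os.environ["CUBLAS_WORKSPACE_CONFIG"])
```

The same effect is usually achieved by exporting the variable in the shell before launching Python, which avoids any ordering concerns inside the process.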
|
@@ -2247,6 +2247,10 @@ <h1>Source code for torch</h1><div class="highlight"><pre> |
2247 | 2247 | <span class="k">def</span> <span class="fm">__call__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">model_</span><span class="p">,</span> <span class="n">inputs_</span><span class="p">):</span> |
2248 | 2248 | <span class="k">return</span> <span class="bp">self</span><span class="o">.</span><span class="n">compiler_fn</span><span class="p">(</span><span class="n">model_</span><span class="p">,</span> <span class="n">inputs_</span><span class="p">,</span> <span class="o">**</span><span class="bp">self</span><span class="o">.</span><span class="n">kwargs</span><span class="p">)</span> |
2249 | 2249 |
|
| 2250 | + <span class="k">def</span> <span class="nf">reset</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span> |
| 2251 | + <span class="k">if</span> <span class="nb">hasattr</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">compiler_fn</span><span class="p">,</span> <span class="s2">"reset"</span><span class="p">):</span> |
| 2252 | + <span class="bp">self</span><span class="o">.</span><span class="n">compiler_fn</span><span class="o">.</span><span class="n">reset</span><span class="p">()</span> |
| 2253 | + |
2250 | 2254 |
|
2251 | 2255 | <div class="viewcode-block" id="compile"><a class="viewcode-back" href="../generated/torch.compile.html#torch.compile">[docs]</a><span class="k">def</span> <span class="nf">compile</span><span class="p">(</span><span class="n">model</span><span class="p">:</span> <span class="n">Optional</span><span class="p">[</span><span class="n">Callable</span><span class="p">]</span> <span class="o">=</span> <span class="kc">None</span><span class="p">,</span> <span class="o">*</span><span class="p">,</span> |
2252 | 2256 | <span class="n">fullgraph</span><span class="p">:</span> <span class="n">builtins</span><span class="o">.</span><span class="n">bool</span> <span class="o">=</span> <span class="kc">False</span><span class="p">,</span> |