"   ━━━━━━━━━━━━━━━━━━━━━━━━ 2.7/2.7 MB 45.0 MB/s eta 0:00:00\n",
"   ━━━━━━━━━━━━━━━━━━━━━━━━ 70.2/70.2 kB 4.8 MB/s eta 0:00:00\n"
]
}
],
"source": [
"!pip install -q monai torch-pruning\n"
]
},
{
"cell_type": "markdown",
"source": [
"# Structured Pruning of U-Net for Medical Image Segmentation\n",
"\n",
"This tutorial demonstrates how structured channel pruning can be applied to a MONAI U-Net model to reduce model size and computation while maintaining segmentation capability.\n"
"<frozen importlib._bootstrap_external>:1301: FutureWarning: The cuda.cudart module is deprecated and will be removed in a future release, please switch to use the cuda.bindings.runtime module instead.\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"torch.manual_seed(0)\n",
"np.random.seed(0)\n"
],
"metadata": {
"id": "QPj1Qficd8v4"
},
"execution_count": 3,
"outputs": []
},
{
"cell_type": "code",
"source": [
"images = torch.rand(4, 1, 128, 128)\n",
"labels = (images > 0.5).float()\n"
],
"metadata": {
"id": "APIhPidJd9zv"
},
"execution_count": 4,
"outputs": []
},
{
"cell_type": "code",
"source": [
"def count_params(model):\n",
"    return sum(p.numel() for p in model.parameters())\n"
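The `count_params` helper can be sanity-checked on small layers whose parameter counts are easy to work out by hand. A minimal sketch using plain PyTorch convolutions (the layer sizes are illustrative, not from the tutorial):

```python
import torch.nn as nn

def count_params(model):
    # total number of learnable parameters across all tensors
    return sum(p.numel() for p in model.parameters())

# Conv2d params = out_ch * in_ch * k * k + out_ch (bias)
wide = nn.Conv2d(16, 32, kernel_size=3)   # 32*16*9 + 32 = 4640
narrow = nn.Conv2d(8, 16, kernel_size=3)  # 16*8*9 + 16 = 1168

print(count_params(wide), count_params(narrow))  # 4640 1168
```

Halving both the input and output channel widths cuts this layer's parameter count by roughly 4x, which is why structured channel pruning compounds so quickly across a network.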
"Reducing the depth of a U-Net architecture leads to a true reduction in the number of learnable parameters, unlike masking-based pruning approaches that preserve tensor shapes.\n",
"\n",
"Depth reduction decreases representational capacity and receptive field size, which may affect segmentation accuracy. However, for many medical imaging applications, especially those targeting edge devices or real-time inference, this trade-off is acceptable and often desirable.\n",
"\n",
"This approach provides a simple, stable, and reproducible strategy for building lightweight medical imaging models.\n"