[40]:
import numpy as np
import matplotlib.pyplot as plt
import torch
from torchvision.transforms.functional import rotate
from pytomography.utils import rotate_detector_z
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

Last Time#

We finished implementing the operations \(g=Hf\) and \(\hat{f} = H^T g\) for the simple case:

  • \(g = Hf\) (where \(f\) is obj and \(g\) is image)

[41]:
x = torch.linspace(-1,1,128)
xv, yv, zv = torch.meshgrid(x,x,x, indexing='ij')
obj = (xv**2 + 0.9*zv**2 < 0.5) * (torch.abs(yv)<0.8)
obj = obj.to(torch.float).unsqueeze(dim=0)

angles = np.arange(0,360.,3)
image = torch.zeros((1,len(angles),128,128))
for i,angle in enumerate(angles):
    object_i = rotate_detector_z(obj,angle)
    image[:,i] = object_i.sum(axis=1)
  • \(\hat{f} = H^T g\) (where \(\hat{f}\) is obj_bp)

[12]:
obj_bp = torch.zeros([1, 128, 128, 128])
for i,angle in enumerate(angles):
    bp_single_angle = torch.ones([1, 128, 128, 128]) * image[:,i].unsqueeze(dim=1)
    bp_single_angle = rotate_detector_z(bp_single_angle, angle, negative=True)
    obj_bp += bp_single_angle

This Time#

Let’s start making \(H\) more representative of the true imaging system. In this tutorial, we’ll consider attenuation modeling; in the next tutorial, we’ll consider PSF modeling.


Suppose the green region is attenuating material with linear attenuation coefficient \(\mu(x,y,z)\) at the energy of the emissions. The probability of an emission reaching the detector is the probability that it is not attenuated along the way:

\[p(x,y,z,\theta) = e^{-\int_l \mu(\vec{l}) \cdot d\vec{l}}\]

where the path \(l\) is the line from voxel \((x,y,z)\) perpendicular to the detector at angle \(\theta\).
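On the discrete voxel grid used below, this line integral becomes a sum over the voxels lying between \((x,y,z)\) and the detector, each weighted by the voxel width \(\Delta x\):

\[p(x,y,z,\theta) \approx \exp\left(-\sum_{k} \mu_k \, \Delta x\right)\]

This is the quantity the rev_cumsum helper evaluates further down, with the emission voxel itself counted at half weight.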

  • Thought: If an emission at (2,3,1) were going to yield 5 counts per second at angle \(20^{\circ}\) under no attenuation, and \(p(2,3,1,20^{\circ}) = 0.2\), then it would instead yield 1 count per second under attenuation.

  • Thought: Our \(g=Hf\) above is implemented using \(H = \sum_{\theta} P(\theta) \otimes \hat{\theta}\). Maybe we can adjust for the probabilities before projecting to get \(H = \sum_{\theta} P(\theta) A(\theta) \otimes \hat{\theta}\), where \(A(\theta)\) is related to \(p(x,y,z,\theta)\). In other words, \(A(\theta)\) adjusts the value of each voxel in \(f\) before forward projecting.
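Concretely, since the adjustment multiplies each voxel of \(f\) by its survival probability, \(A(\theta)\) can be viewed as a diagonal operator built from \(p\):

\[A(\theta) = \mathrm{diag}\big(p(x,y,z,\theta)\big)\]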

Evaluating \(p(x,y,z,\theta)\)#

First, let’s make a CT object, which will be a small cylinder.

[42]:
x = torch.linspace(-1,1,128)
xv, yv, zv = torch.meshgrid(x,x,x, indexing='ij')
mu = (xv**2 + 0.9*zv**2 < 0.3) * (torch.abs(yv)<0.6)
mu = mu.to(torch.float).unsqueeze(dim=0) * 0.1 #cm^-1
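To see what we just built, we can plot a central slice of the attenuation map at fixed \(y\) (this plotting snippet is just an illustrative check and was not one of the original cells):

# Illustrative check: central slice (fixed y) of the attenuation map
plt.pcolormesh(mu[0, :, 64, :].T)
plt.colorbar(label='mu (1/cm)')
plt.show()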

Since the linear attenuation coefficient has units of inverse length, we need to specify the voxel dimension

[43]:
dx = 0.3 #cm

To compute \(p(x,y,z,\theta)\), we simply need to rotate the detector to angle \(\theta\) and then compute the integral

  • Example: \(10^{\circ}\)

[44]:
mu10 = rotate_detector_z(mu, angle=10)

def rev_cumsum(x: torch.Tensor):
    # Sum of x from each position along dim=1 up to the detector side (the end of the axis),
    # minus x/2 so that the emission voxel only contributes half of its own attenuation
    return torch.cumsum(x.flip(dims=(1,)), dim=1).flip(dims=(1,)) - x/2

p = torch.exp(-rev_cumsum(mu10 * dx))

The reverse cumulative sum is the opposite of the cumulative sum in that it starts at the RHS of the array as opposed to the LHS (required because the detector is at the RHS of the array).
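As a quick illustration of rev_cumsum on a tiny (arbitrary) tensor: the right-most entry only contributes half of itself, while each entry to the left accumulates everything between it and the right-hand side.

x_demo = torch.tensor([[1., 2., 3., 4.]])
print(rev_cumsum(x_demo))  # tensor([[9.5000, 8.0000, 5.5000, 2.0000]])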

Then we simply multiply the object by this tensor \(p\) and project to get our attenuation-corrected projection

[57]:
object_10 = rotate_detector_z(obj,10) * p
projection_10 = object_10.sum(axis=1)
[58]:
plt.pcolormesh(projection_10[0].T)
[58]:
<matplotlib.collections.QuadMesh at 0x7f24c35a0d00>
../_images/notebooks_dt3_20_1.png

Notice how the points in the center are now darker because they’re attenuated. To get the full image (collection of projections), we modify our \(g=Hf\) loop above

[59]:
x = torch.linspace(-1,1,128)
xv, yv, zv = torch.meshgrid(x,x,x, indexing='ij')
obj = (xv**2 + 0.9*zv**2 < 0.5) * (torch.abs(yv)<0.8)
obj = obj.to(torch.float).unsqueeze(dim=0)
mu = (xv**2 + 0.9*zv**2 < 0.3) * (torch.abs(yv)<0.6)
mu = mu.to(torch.float).unsqueeze(dim=0) * 0.1 #cm^-1

angles = np.arange(0,360.,3)
image = torch.zeros((1,len(angles),128,128))
for i,angle in enumerate(angles):
    mu_i = rotate_detector_z(mu, angle)
    p_i = torch.exp(-rev_cumsum(mu_i * dx))
    object_i = rotate_detector_z(obj,angle)
    image[:,i] = (object_i*p_i).sum(axis=1)
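Since this loop will come up again, it could be wrapped in a small helper. The sketch below is purely illustrative (the name forward_project_attenuated is not part of pytomography); it performs the same attenuated forward projection as the loop above.

# Illustrative helper (not part of pytomography): attenuated forward projection,
# equivalent to the loop above
def forward_project_attenuated(obj, mu, angles, dx):
    image = torch.zeros((1, len(angles), obj.shape[2], obj.shape[3]))
    for i, angle in enumerate(angles):
        mu_i = rotate_detector_z(mu, angle)        # attenuation map in the detector frame
        p_i = torch.exp(-rev_cumsum(mu_i * dx))    # survival probabilities p(x,y,z,theta)
        obj_i = rotate_detector_z(obj, angle)      # object in the detector frame
        image[:, i] = (obj_i * p_i).sum(axis=1)    # project towards the detector
    return image

image_check = forward_project_attenuated(obj, mu, angles, dx)  # same result as `image` above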

Now we can plot multiple projections:

[60]:
iis = [0,10,20,30,40,50,60]
fig, ax = plt.subplots(1,7,figsize=(10,2))
[a.pcolormesh(image[0,i].T) for (a, i) in zip(ax, iis)]
plt.show()
../_images/notebooks_dt3_24_0.png

Now we also need to implement attenuation modeling in the back projection. Since

\[H = \sum_{\theta} P(\theta) A(\theta) \otimes \hat{\theta}\]

it follows that

\[H^T = \sum_{\theta} A^T(\theta)P^T(\theta) \otimes \hat{\theta}^T\]

\(A\) is effectively a diagonal matrix, so \(A^T = A\). The important thing here is that we apply \(P^T\) first to go back to object space, and then apply the attenuation factors afterwards.

[61]:
obj_bp = torch.zeros([1, 128, 128, 128])
for i,angle in enumerate(angles):
    obj_bp_i = torch.ones([1, 128, 128, 128]) * image[:,i].unsqueeze(dim=1)
    mu_i = rotate_detector_z(mu, angle)
    p_i = torch.exp(-rev_cumsum(mu_i * dx))
    obj_bp_i = obj_bp_i * p_i
    obj_bp_i = rotate_detector_z(obj_bp_i, angle, negative=True)
    obj_bp += obj_bp_i
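As a rough sanity check that \(H^T\) behaves like the transpose of \(H\), the inner products \(\langle Hf, Hf \rangle\) and \(\langle f, H^T H f \rangle\) should approximately agree (only approximately, since rotation by interpolation is not exactly orthogonal). The snippet below is illustrative and not part of the original notebook:

# Approximate adjoint check: <Hf, Hf> vs <f, H^T(Hf)>; agreement is only approximate
# because image rotation via interpolation is not an exactly orthogonal operation
lhs = (image * image).sum()
rhs = (obj * obj_bp).sum()
print(lhs.item(), rhs.item())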