Biomedical Engineering - Methods for biomedical imaging and computer aided surgery
Complete notes of the course
COMPUTER AIDED SURGERY

X-RAY
Roentgen discovered X-rays by chance while performing electrical experiments. He called them "X" because he did not know what they were: a new form of energy. Medical applications of this discovery were found very soon. X-rays are located in the high-frequency, short-wavelength part of the electromagnetic spectrum, so they can easily penetrate the human body, being attenuated in different ways by the different living tissues (bone, soft tissue (similar to water), and air).

X-ray tube (Roentgen tube). We need high-energy electrons colliding against a target with a very high atomic number (usually tungsten). To obtain this collision, which in the end produces the X-rays exiting the tube, we apply a potential difference between the cathode and the anode. The cathode is a filament where a cloud of electrons is produced by thermionic excitation; the anode is the high atomic number element. We apply a potential difference of many kV; the anode is typically put in rotation, because the collision produces a lot of heat and rotating it distributes the heat over the whole target. Everything takes place inside a vacuum tube. The efficiency is very low and proportional to the applied potential difference and to the atomic number, so the whole machine must be cooled down, otherwise everything would burn. There is a relationship between the potential difference applied (e.g. 100 kV) and the energy of the X-rays we obtain once filtered, expressed in electronvolts: applying 100 kV we obtain an X-ray beam with a spectrum extending up to 100 keV, which is the order of energy required to penetrate the human body, since the photons have to cross the body and hit the detector in order to form the attenuation map.

Photoelectric effect
It happens mostly in the interaction between low-energy X-ray photons and the atoms of biological matter. The photon expels one of the electrons from the atomic orbitals; the expelled electron is a source of noise that we want to limit as much as possible. It will interact with other atoms nearby, and the resulting radiation is scattered around with no predefined direction, potentially reaching our detector along a very strange direction (not useful to create the attenuation map we want to form). This phenomenon can decrease the S/N ratio of the map we want to create.

Compton effect
The energy of the photon is a little higher, and the interaction with biological matter causes a double effect: a scattered photon and an electron. The scattered photon has lower energy but causes new collisions, producing scattered radiation. The higher the energy, the higher the secondary radiation produced.

Pair production
The interaction gives rise to the emission of one positron and one electron in the biological matter. They interact immediately, producing gamma photons. This combination and the production of high-energy photons cause a lot of interactions and secondary radiation; luckily this is not the main interaction. Which effect dominates depends on the photon energy (in mega-electronvolts): at the energy levels used in imaging, the Compton effect dominates, together with the photoelectric effect. We need to limit the secondary radiation as much as possible, also from an ethical point of view towards the patient, because it produces ions, and ionization can be dangerous for the human body, damaging the DNA.
The law that describes the phenomenon is the Lambert-Beer law: in the negative exponent there are the physical characteristics of the material, the physical density and the attenuation/absorption coefficient. So when the X-ray beam crosses water it is attenuated in a very specific way due to the physical properties of water; if it crosses bone it is attenuated much more, if air much less. About 50-70 kV of potential difference are applied in the X-ray tube. The absorption coefficient depends on the energy of the X-ray beam, while the physical density obviously remains the same. If the energy is too high the beam does not "feel" any difference between crossing bone or water; we do not want this, because we want a differential absorption coefficient, so we will never use 10 MeV. At lower energies there is a point where the two coefficients are very far apart, and that is what we want to use.

Schematic representation: tube; filter, to cut out low-energy photons (they would enter and stop inside the human body, without enough energy to reach the detector, being harmful and a source of secondary radiation); collimator, a geometrical filter made of lead (high atomic number) able to stop photons outside the desired shape; primary radiation, the useful source of information, with enough energy to pass through the human body and hit the detector; secondary radiation, a source of noise; a grid before the detector, with oriented channels into which the X-ray beam can be decomposed, so that mainly the primary radiation is able to get in and cross it. All of this is made to increase the S/N ratio and to reduce the secondary radiation, an enemy both for the body and for the attenuation map we want to form.

X-rays are still widely used in the intra-operative phase, both in 2D and 2D plus time. In C-arm systems the tube and the detector are at the two ends; the old detectors have been substituted by digital flat panels because they are faster and last longer. Digital flat panels are implemented in commercial systems: the impinging X-rays excite a matrix of scintillating crystals, converting the X-ray beam into visible light with an intensity dependent on the energy of the impinging X-rays. This visible light excites a matrix of photodiodes (in amorphous silicon) converting it into electrical signals, read out by a TFT array to form the digital image we see. The flat panel detector is about 40x30 cm and is responsible for detection; we obtain about 2000 x 1500 pixels. Very fast and very accurate in terms of spatial resolution, flat panels represent the high-level standard for X-ray imaging in intra-operative devices. Fluoroscopy keeps the tube on all the time: not just a single shot, but a continuous emission of the X-ray beam and reading through the detector, up to 20-25 Hz. It is used to track the motion of something in the human body.

COMPUTED TOMOGRAPHY (CT)
Also called computed axial tomography. The patient lies in the middle of a gantry, the structure of the CT where an X-ray tube and a matrix of detectors, with the tube always on, rotate around the patient. During the rotation they continuously collect a map of the attenuation of the beam, collimated in the shape of a fan. Every attenuation is collected in correspondence of a thin region of interest inside the patient, and the data are so redundant that this information can be converted into a three-dimensional matrix where the information is condensed.
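As a numeric illustration of this differential attenuation, here is a minimal Python sketch of the Lambert-Beer law for a beam crossing layered tissues; the coefficient values are order-of-magnitude assumptions for illustration, not clinical data.

```python
import numpy as np

# Lambert-Beer law: I = I0 * exp(-sum_i mu_i * x_i) for a beam crossing
# layered materials. The coefficients below are only order-of-magnitude
# illustrations (assumed values), not clinical data.
MU = {"air": 0.0001, "soft_tissue": 0.02, "bone": 0.05}  # [1/mm], assumed

def transmitted_fraction(layers):
    """layers: list of (material, thickness_mm) crossed by the beam."""
    total = sum(MU[m] * t for m, t in layers)
    return np.exp(-total)

# One 'detector reading': a beam crossing 170 mm soft tissue + 30 mm bone.
path = [("soft_tissue", 170.0), ("bone", 30.0)]
print(f"I/I0 = {transmitted_fraction(path):.4f}")
# Same geometry without the bone: the transmitted fraction is visibly
# higher, which is exactly the differential absorption the image encodes.
print(f"I/I0 = {transmitted_fraction([('soft_tissue', 200.0)]):.4f}")
```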
After one rotation the patient has to be moved in order to scan another region, and the process starts again until we have covered all the regions we are interested in along the axial axis of the patient. Traditionally the acquisition was made step-and-shoot: the couch was moved, one rotation was performed, then the couch moved again and another rotation started. This has been overcome because of image quality issues (couch motion and duration of the examination). Now all scanners are helical: the couch moves inside the scanner while the rotation of the X-ray tube and detector is kept going the whole time. This makes the acquisition faster and gets rid of the inertial effects of stopping and restarting the rotation of tube and detector. Modern units are multi-slice, so for every rotation I acquire not one slice but the information needed to reconstruct multiple slices. In the most modern units I can decide how many slices I want to obtain and reconstruct for every rotation, as a function of the spatial resolution I will obtain at the end of the examination. The higher the number of simultaneous slices acquired with one single rotation, the lower the spatial resolution, i.e. the larger the dimension of the voxels in the axial direction. A slice thickness of 5 mm means a voxel dimension of 5 mm in the axial direction; if I reduce the number of simultaneous slices I acquire fewer slices per rotation, but at a better spatial resolution. The number of slices is a parameter you can play with. If I need a CT only for diagnosis I will not care much about spatial resolution, while if I need to locate a lesion accurately, describe it and the structures I want to avoid, etc., I have to go down to 2-3 mm of slice thickness, gaining geometrical accuracy and a more accurate map. I need to decide a trade-off between the way I scan my patient and the radiation dose to which I expose him. Everything is defined at the level of the spatial resolution I decide: if I use 1 mm there will be an error of 1 mm, because I cannot perceive what is below that value.

When we look at a CT slice we look at it in 2D, bidimensional pixels of different grey levels according to the mapping of the software, but the information is intrinsically 3D even if it is displayed in 2D. The physical density of the tissue is the information converted into the colour, and it refers to the voxel corresponding to that specific pixel. If I take one pixel, its grey level is displayed as a function of 3D information (the physical density, or attenuation properties, of the biological matter). The amount of material in the voxel is condensed in its centre: the bigger the voxel dimension, the lower the spatial resolution. That is why CT and MRI are intrinsically 3D imaging (measurement) technologies: a CT slice is a measurement of the physical density inside the patient, expressed in 3D. Typical voxel dimensions are 0.98x0.98 mm (physical dimension of the pixel, fixed by the design of the scanner) x 3 mm (axial direction, which can go down to 1 mm). These dimensions fix the lowest threshold of accuracy I can reach: every time I contour a segmented structure in 3D, passing through the centres of the voxels, there is a level of uncertainty equal to the voxel dimension (a lower limit below which I cannot go).

Hounsfield units
The physical density of the tissue is converted into CT numbers, Hounsfield units.
They are obtained by normalizing the absorption coefficient of the material contained in each voxel of the CT; because of this normalization we always talk of Hounsfield units. In CT we can differentiate between different structures. We can play with the transfer function between the Hounsfield units and the range of Hounsfield units displayed for a specific slice, in order to obtain a more or less contrasted image representation. Restricting the window around the levels we want to represent, we lose everything inside the lungs (all black) but we gain in the representation of compact bone, spongy bone and soft tissue.

CT recap
Acquisition time is 0.5-1 s/slice. Multi-slice from 4 to 256; the number of slices depends on the slice thickness I need to pick for my examination. There exist some intra-operative units: the idea is to acquire the CT of the patient immediately before the irradiation. In radiation oncology the therapy plan is performed some days before, using a CT acquisition: the planning CT is mandatory and is the reference. By the time the first fraction is delivered to the patient there are a lot of uncertainties, variations of the anatomical configuration involved in the therapy that may occur in the patient. Being conscious of these variations, I use a CT unit before irradiation to acquire information, in order to have the reference I need, the same kind of data as the treatment planning, to be compared one against the other. In the example it is the gantry that slides along the patient, who lies in a fixed imaging position (so-called CT-on-rails). The approach is: I am able to acquire a complete CT immediately before the irradiation, the same kind of information on which I planned the therapy. The comparison of the two CTs gives the deviation of the pathology, which needs to be assessed at the time of irradiation in order to take a decision: if the size of the deviation is too high, adaptation strategies can be applied. All of this comes from comparing the planning CT with the one I have just acquired on the patient.

Cone Beam CT
It is a method for replacing a CT scanner in a surgical room, or inside a therapy bunker, with an easier way to acquire volumetric information similar to a CT: a CT acquired with a conic beam geometry. We can still recognize anatomical structures, and we can check the level of superimposition before giving the OK to the irradiation. There are limits in terms of image quality and field of view: the quality of the image is not really high and there are artifacts, but the image is 3D, like a CT, only of lower quality. The cone beam uses a conic geometry of the radiation beam, while the CT uses a thin blade of predefined thickness (which gives the thickness of the slice, the parameter we play with). With the cone beam I invest a much larger portion of the patient, because of the conic geometry, so I create much more secondary radiation, hence noise, and the result is a decrease in quality. But since it is much more handy (just a flat panel and an X-ray tube mounted on a linear accelerator, capable of rotating around the patient) and it can provide 3D information before or during the procedure, it is very interesting. Conventional CT has a thin blade passing through the patient (multi-slice a bit more, but still limited), while in the cone beam a large section crosses the patient, causing higher noise in the image reconstruction.
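Going back to Hounsfield units and windowing, here is a minimal Python sketch of the HU definition and of the window/level transfer function described above; all numerical values are illustrative assumptions.

```python
import numpy as np

def to_hounsfield(mu, mu_water):
    """CT number: HU = 1000 * (mu - mu_water) / mu_water (water -> 0 HU)."""
    return 1000.0 * (mu - mu_water) / mu_water

def window_level(hu_image, level, width):
    """Map a HU image to 8-bit grey levels using a window (level, width).
    Everything below level - width/2 goes to black, everything above
    level + width/2 to white; only the HU range inside the window keeps
    contrast, which is the trade-off described in the text."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(hu_image, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

hu = np.array([[-1000.0, -50.0], [40.0, 1000.0]])  # air, fat-ish, soft tissue, bone
print(window_level(hu, level=40, width=400))     # narrow soft-tissue window: air/lungs all black
print(window_level(hu, level=-600, width=1500))  # wide lung-type window
```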
A further problem of the cone beam is the field of view: the geometry is fixed, and the size of the field of view is the cylinder formed by the entry points into the patient, extended along him. The way to increase the extension of the field of view of the cone beam is the so-called Half-Fan geometry, as opposed to the Full-Fan geometry. In Full-Fan the beam is collimated so that it exits the patient and hits the detector covering the useful area; in this case we cannot go larger than the physical dimension of the radiation beam when crossing the patient. In Half-Fan the detector is physically displaced, in such a way that if I collimate away half of the beam I am still able to hit the entire 40x30 cm detector with the remaining half: displacing the detector increases the field of view (the portion of the patient that is hit) towards one direction, and then I have a 360-degree revolution of the tube-detector couple around the patient. For half of the rotation one portion of the patient is sampled by the radiation beam, and during the remaining half the other half of the patient is hit. So, exploiting the combination of the two 180-degree rotations with this displaced detector and half-collimated radiation beam, I am able to double, more or less, the field of view in the axial plane.

Motion artifacts in CT
CT is the reference technology, an intrinsically 3D imaging technique that builds up grey-level volumes: the smaller the slice thickness, the higher the spatial resolution, the better the sampling and the more accurate the representation of the anatomy of the patient. When I acquire a CT for surgical planning I have to restrict the slice thickness down to 1 mm or 0.7 mm, because I want to be really accurate in defining the surgical plan. Everything in this framework is based on the hypothesis that the image we acquire is representative of the anatomy of the patient at the time of acquisition, and that this representation remains valid at the time of treatment delivery or surgery. This is not strictly true: the patient varies between when we acquire the image and when we deliver the treatment on the planned model, there might be anatomical variations during the image acquisition itself, and there is uncertainty in the anatomy of the patient at the time of the surgical performance. We need to understand the size of these variations in order to reduce them, and to understand the sources of inaccuracy that will feed the safety margins. A CT lasts some minutes, and respiratory motion is the main source of deviation. If we ignore it, the result is as in the image (presence of artifacts): lungs and other organs become irregular, with geometrical inaccuracy. CT with respiratory gating is a way to reduce this, synchronizing the acquisition with the respiratory motion. Respiratory-correlated imaging (4D-CT) provides acquisition in time, i.e. along respiration: a reconstruction of one single breathing act (a combination of multiple breathing acts) showing the path of our structure of interest. CT is fundamental for radiation oncology; it is a complex process and there will never be a treatment without a planning CT acquisition. The decision on the technique used depends on, and must reflect, the treatment/surgery plan.

SLOW CT
It is a technique that completes the acquisition in free breathing, without any motion suppression, and allows visualizing the envelope of the positions of the structure during one respiratory cycle.
The CT can be acquired in two possible ways:
-slow motion of the couch;
-acquire each single slice multiple times, so as to sample it in correspondence of different respiratory phases, then average the images.
The resulting image is affected by motion artifacts (blur), and in this way the operator can understand the extent of the positions occupied along the respiratory phases of the patient. The idea of slow CT is to enhance the motion artifacts so that they are clearly visible on the resulting grey-level volume and can be measured: the blurred target is enough to contain all the positions that the lesion has occupied.

INHALE AND EXHALE BREATH-HOLD CT
The same technique used for motion management needs to be applied during surgery. With slow CT there is no need for any motion management technique: I use the information on the CT to adapt the surgical plan and apply it to the patient. This other method is more complex and requires the collaboration of the patient. The idea is to suppress respiratory motion: the patient takes a deep breath and holds it for as long as he is able to (10-12 seconds), and during the breath-hold phase we acquire the CT image. This works only if the patient, across different maneuvers, replicates at every breath-hold the same internal anatomical configuration; otherwise we will see artifacts (at the boundaries between acquisitions from different breath-hold maneuvers). Training the patient helps to improve the repeatability during the scanning.

4D-CT —> Respiratory-correlated CT imaging
There are two ways of doing it: prospective and retrospective 4D-CT. In both cases we need a system that measures the respiratory activity of the patient: a signal describing, with the highest possible reliability, the respiratory pattern of the patient. To acquire a surrogate of the respiratory activity:
-spirometer
-diaphragm motion
-tracking of external surrogates (markers placed conveniently), localized by an infrared tracking system

Prospective 4D-CT (respiratory-gated 4D-CT) —> uses the signal to trigger the image acquisition for a predefined period of time. The system looks at the signal and identifies the instant at which it enters a predefined phase (end of exhalation, i.e. the minimum of the quasi-sinusoidal amplitude); as soon as the signal enters this gate the system triggers the acquisition on the CT, and when the signal exits the gate the system stops the acquisition. So it is a triggered acquisition as a function of the specific respiratory phase in which the patient is. End of exhalation is used because the repeatability of the anatomical configuration there is higher, much more stable than in the other parts of the respiratory cycle. The problem is that replicability among different breathing acts is not ensured. Entering end of exhalation, the trigger turns the scanning on; in the first acquisition a set of slices is acquired (about 64) with the couch still, then the couch is moved and the acquisition continues.

Retrospective 4D-CT —> slices are continuously acquired during the entire respiratory cycle and reordered off-line as a function of the respiratory phase "label" assigned to each of them. In this case we do not focus our attention on one single respiratory phase, but we look continuously at the respiratory activity without taking one phase as reference. We divide the respiratory cycle into a predefined number of phases during inhalation and exhalation (respiratory gates).
Typically there are 12: one at end-exhalation, one at end-inhalation, and then 5 and 5 intermediate respiratory phases. I acquire a conventional CT, taking care to slow the couch down much more than in a conventional free-breathing CT. That is because every time one rotation of the X-ray tube is completed and I have all the data to reconstruct a single slice, I look at where the patient is and check the respiratory activity signal: I identify the gate during which that rotation of tube and detector was performed, i.e. during which that slice was generated, and I label each slice with a certain respiratory phase. If the couch moves very slowly, I have a continuous helical rotation and every time a rotation is complete I label the slice. At the end I obtain a very rich set of slices, each labeled with a specific respiratory phase. Now the system applies the retrospective part: it needs to sort the acquired slices. At the end I obtain as many complete CT volumes of my patient as the number of respiratory gates I have defined on the breathing act. So there is a trade-off on the number of gates I define: the higher the number of gates, the higher the radiation dose I need to deliver.
- Ethically speaking I should reduce it, but if I reduce it too much I will not sample properly and will have residual errors.
- The duration of a gate needs to be long enough to let the rotation of the tube complete (about 0.3 seconds).
At every respiratory phase, replicability is fundamental in 4D-CT: I pretend that at all intermediate phases the patient, among multiple breathing acts, replicates the internal anatomy. Artifacts are present because replicability never really happens: the monitoring system will be in trouble with irregular breathing acts, with a lot of variation in amplitude and period. The best way to minimize the big artifacts is to ensure the regularity of the breathing activity of the patient; this maximizes the replicability and allows identifying properly the gate with which each slice is labeled. Since one of the problems is the identification of the proper respiratory gate, we can increase the robustness of the system in identifying the gate and in detecting irregularities. A commercial system is, for example, the Varian RPM (Real-time Position Management respiratory system). It is based on a single infrared camera that localizes a block placed on the patient by the operator; the resulting signal used to monitor the respiratory activity is simply the elevation of that block, a mono-dimensional signal identified by the single-camera system. We are sampling at one single station, assuming that it is able to capture the whole kinematics during respiration, and that is not enough: for this reason we see a lot of artifacts. We can work on more robust surrogates and systems in order to identify properly the respiratory gates in the presence of regular breathing and some irregularities. I cannot use the CT itself to obtain an internal anatomical surrogate for respiratory gate identification; I need to use another system (an ultrasound system, for example). We can also choose not to work with one single surrogate, but to have multiple surrogates placed on the thoraco-abdominal surface and two cameras providing the localization in 3D of the surrogates, covering a larger area of the patient.
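Before looking at such more robust signals, here is a toy Python sketch of the retrospective sorting step itself, assuming a perfectly regular breathing period, which, as noted above, is exactly what real patients violate.

```python
import numpy as np

def phase_labels(slice_times, period_s, n_gates=12):
    """Toy phase labeling: assign each slice-acquisition time a respiratory
    gate index in [0, n_gates), assuming a perfectly periodic breathing act.
    Real systems derive the phase from the measured surrogate signal."""
    phase = (np.asarray(slice_times) % period_s) / period_s  # in [0, 1)
    return (phase * n_gates).astype(int)

def sort_into_volumes(slice_ids, gates, n_gates=12):
    """Retrospective sorting: one list of slices per respiratory gate."""
    volumes = {g: [] for g in range(n_gates)}
    for s, g in zip(slice_ids, gates):
        volumes[g].append(s)
    return volumes

times = np.arange(0.0, 20.0, 0.3)          # one slice per tube rotation (~0.3 s)
gates = phase_labels(times, period_s=4.0)  # 4 s breathing period, assumed regular
volumes = sort_into_volumes(range(len(times)), gates)
print({g: len(v) for g, v in volumes.items()})  # slices available per gate
```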
When you increase the complexity of the signal, which comes from more than one surrogate and is described in 3D and conveniently combined, you can perform more robust analyses and increase the robustness of the gating. How much artifact and inaccuracy is acceptable depends on what I need to do, on how much margin I have, on how much error I am prepared to accept and on the accuracy I want to reach. It requires a technical assessment of the inaccuracy: we do not want to treat the patient badly, and we have to avoid increasing the error to the point that a macroscopic error takes place.

MAGNETIC RESONANCE IMAGING (MRI)
In MR the patient is immersed in a static magnetic field and undergoes a variable magnetic field generated at radio frequency, which is absorbed by the hydrogen nuclei (H+) contained in the tissues. It is used because it allows differentiation of soft tissues, in particular in the central nervous system. The moment in which MRI is most used is the preoperative phase, before surgical planning. There are also ways in which MRI can provide functional information through dedicated sequences: we can have diffusion- and perfusion-weighted imaging. It provides, with a single machine, very detailed anatomical and functional information, which is exactly what the surgeon needs. Also in radiation oncology MRI is fundamental, not only for planning but also as an image guidance technique. There are also 4D MRI acquisitions, for when we deal with structures that move with respiration. We can have an external surrogate measured by an external system and used as a signal describing the respiratory activity; this can be, for example, belts placed in the diaphragmatic region. MRI is not constrained to acquire slice by slice in a specific direction as CT imaging is: acting on the gradients, you can switch very quickly from acquiring slices in a specific body area to acquiring other slices, even far away from the region of interest, along different planes. With a navigator slice we get rid of the external surrogate, and thus of an external signal: every now and then, at a certain time period, a specific fast 2D slice is acquired, centred in a position such that it clearly contains the bidimensional information of an anatomical surrogate that moves a lot during respiration. With the navigator slice we use an anatomical structure, which is very much influenced by respiratory motion, to extract directly from the image a signal describing the respiratory activity. Typically the navigator slice is placed across the diaphragm or the lower lung region; with this technique, every 10-20 milliseconds we acquire this navigator slice, coupled with the set of slices acquired immediately before it. We label the slices we have acquired, in the anatomical region we are interested in, with the position of the diaphragm, which represents the respiratory state the patient was in at the time of the acquisition of that specific set of slices. So, no external signal: we use an anatomical surrogate to obtain the information we need. In this way navigator-slice 4D MRI can be applied for long periods of time, because MRI is non-invasive, and it provides the dynamics of the respiratory activity along multiple breathing acts, over different cycles, capturing the variability of the patient. In this case the variability can be captured, and it is a relevant piece of information related to the regularity of the respiratory activity.
With 4D-CT, instead, multiple breathing acts are condensed into a single virtual one, so all the variability the patient can have over different breathing acts is lost, and this causes problems of proper reordering. With MRI we can extend the acquisition over minutes and provide this variability to the operators. There are also other techniques for re-sorting (reordering the slices off-line) that do not resort to a navigator slice but are based on similarities among the different slices that have been acquired; the commercial one, however, is based on the navigator slice.

MRI-LINAC
A machine that produces a high-energy photon beam coupled with an MRI scanner: a single machine that is able to irradiate the patient together with the MRI. In the overall image guidance world (surgery or radiation oncology) this is a really revolutionary image-guided radiation oncology device, because immediately before irradiating the patient you have the possibility to scan him, detecting every deviation. In this way you can also compare the preoperative MRI of the therapy planning with the intra-operative one, acquired during the irradiation, and apply adaptive strategies for the deviations. You can thus monitor the intra-fractional deviations, i.e. the intra-operative ones, including those due to respiration, that might occur in the patient.
•The first technique used Cobalt-60: the high-energy photon beam was produced by physical Cobalt-60 isotope sources inserted in the machine, with the MRI scanner providing the image. This is problematic from a radioprotection point of view, but isotope-based photon production does not suffer from the presence of the big static magnetic field of the MRI, so it was much easier to couple with the scanner than a linear accelerator, where the electrons that must collide with the target do feel the presence of the magnetic field, causing much harder technological problems.
•New machines using linear accelerators are now in use.
There exist intra-operative MRI scanner applications, and they influence how the surgery is organized (everything has to be MRI-compatible, from the surgical room to the anesthesia equipment). There are examples of application in neurosurgical interventions, but you can count them on one hand (intra-operative MRI).

PET
The reference scenario is the use of PET for in vivo dosimetry in radiation oncology, especially for treatments that do not use beams of high-energy X-rays but radiation beams of charged particles (protons or, typically, carbon ions). Particle therapy is spreading around the world very quickly because it has a lot of potential from a geometrical point of view: with a proton beam it is possible to concentrate the dose much better than with an X-ray beam. Carbon-ion therapy is finding its way around the world for treating particular forms of tumors, typically the radio-resistant ones (those which do not respond to high-energy X-rays), whose DNA can be damaged when hit with heavy carbon-ion particles. In particle therapy you "activate" the patient: the fragmentation of the carbon ions produces isotopes, and the fragmentation of the molecules of biological matter produces atomic species, some of which emit positrons.
Among these there is Oxygen-15, which has a half-life of some minutes, so if you surround the patient with a ring of detectors like the one of a PET gantry you can acquire during the irradiation a lot of 511 keV photons, properly attenuated by the patient, arriving at the detectors in coincidence; thus during the therapy you can form a map of the activity of Oxygen-15. And because Oxygen-15 is not naturally present in the body, you can understand in real time, in vivo, in-beam, where you are delivering the beams. If you are able to measure this surrogate of the dosimetry (Oxygen-15), you can compare the measured concentration map with a prediction of its distribution obtained from your treatment plan, where you have all the information you need. Obviously, the longer the irradiation lasts, the more data are available to the PET scanner and the more detailed the map of Oxygen-15 inside the patient will be.

ULTRASOUND IMAGING
It measures the reflection/refraction of an ultrasound beam crossing tissues with variable acoustic impedance. It is a valid technology to use intra-operatively in the surgical room, comparing its images to the ones obtained at treatment planning (target localization in radiotherapy; brain shift in neurosurgery), though it is never used for planning itself. This is because of the lower image quality and spatial resolution, low penetration (20 cm) and low S/N, but it is non-invasive, cheap and handy, even if it requires operator expertise. It is able to visualize soft tissues characterized by different acoustic impedance. There are 3D extensions of ultrasound that give anatomical structural information on soft tissue, always with low image quality. Time is the first variable that is measured: the time between the emission of the ultrasound pulse and the instants at which one or multiple echoes are received by the system. This time is converted into space knowing the speed of propagation of sound in biological matter, which is assimilated to water; this is an assumption, because the speed actually varies according to the different tissues crossed by the ultrasound pulses, and it is one of the main reasons for the poor S/N ratio. As we can see in the image, part of the ultrasound pulses is reflected and reaches the detector, so by measuring the time between the emission and the first reflection (first detection), and then the second one, coupled with the assumption on the speed, we can obtain spatial information. There exist different modes, like the A-mode on the right, where:
-X axis: the time at which the echoes are received after being reflected;
-Y axis: the intensity of these echoes.
This time is not an anatomical time; we are not measuring anything dynamic, we are just trying to build an anatomical map through a direct measurement of time, then converted into space. In the B-mode the intensity received by the system is converted into grey levels:
-X axis: still the time of reception;
-brightness: the intensity of the pixels.
In the M-mode you have a dynamic representation in absolute time; in fact on the Y axis there is absolute time, so at every line of the Y axis the different positions of the pixels, with different brightness, are represented dynamically in time. The characteristics of the scan image geometry depend on the way the matrix of transducers is designed and encapsulated in the ultrasound probe; they also depend on how it is used, whether with a scanning (activating one single transducer at a time) or using all the transducers simultaneously.
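A toy Python sketch of the time-of-flight conversion just described, assuming the conventional soft-tissue average speed of sound of 1540 m/s:

```python
SPEED_OF_SOUND = 1540.0  # [m/s], conventional soft-tissue average

def echo_depth_mm(echo_time_s):
    """Depth of the reflecting interface from the round-trip echo time.
    The pulse travels to the interface and back, hence the factor 2.
    Assuming one constant speed for all tissues is exactly the
    approximation that degrades accuracy when different tissues are
    crossed, as noted above."""
    return SPEED_OF_SOUND * echo_time_s / 2.0 * 1000.0

# An echo received 65 microseconds after emission:
print(f"{echo_depth_mm(65e-6):.1f} mm")  # interface ~50 mm deep
```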
The ultrasound image is typically 2D, but it can also be 3D. The 3D one is obtained by coupling the ultrasound probe with a system capable of spatial localization, an optical tracking localizer: if you are able to track position and orientation of the ultrasound probe by means of an infrared optical localizer, you are able to localize in space the position and orientation of the ultrasound slice you are currently imaging. This can take place in real time: real-time localization of the ultrasound slice. This localization must be referred to a unique, common reference (the problem). If I am able to build a common reference frame between the preoperative phase and the intra-operative phase, in which I can express the position of relevant anatomical points, contours, volumes and structures, and I am able to express in it the same points and contours that an expert operator can see on an ultrasound scan tracked in 3D in real time, then I am able to see the correct coordinates in space during my surgical procedure.

Application: Radiation oncology
The reference case is the irradiation of the prostate. There is a gap between the high-energy radiation beam (new technology) and the reality of applying this radiation to a target that is moving and deforming. As in the example in the slide, we can see movement both in the patient position and in the configuration of specific anatomical structures (the prostate); both the rectum and the bladder are very radiosensitive, so you cannot deliver a lot of radiation to that area. So when you irradiate the prostate you need to be very precise at every fraction: up to 4 mm of prostate displacement in lateral rotation and shift. There is a need to acquire information, at the time of treatment, regarding the geometry of the relevant anatomical structures involved (the prostate itself, the rectum and the bladder). The bladder can be seen in an echographic acquisition, which shows the bladder wall. The idea is to use a 3D ultrasound technique, so you will always have markers used to detect the position and orientation of the transducer, placed conveniently on the patient. In this way you can detect deviations in the position of the anatomical structures. Some thresholds are established, for example for when you need to stop the treatment or move the patient; these result, in terms of applicability, from a direct comparison between the preoperative CT information and the one acquired at the time of irradiation, in a registered way, i.e. expressing the information in the same reference system. Robot holders are really useful because there is a lot of secondary radiation, so no operator can be present; a robot keeping a proper pressure on the pelvis of the patient makes a continuous acquisition possible, to catch intra-fractional deviations. Pressing a lot on the pelvis can cause dislocation of the structures, so the solution is to apply the transducer with a certain pressure already during the preoperative CT, causing the same dislocation that the ultrasound probe will cause at the time of irradiation, when the transducer is replaced and the same pressure reapplied on the pelvis in order to have a good image.

Application: Cranial neurosurgery
This surgery suffers from a problem related to motion again. When you acquire the preoperative MRI the skull is closed (intact patient) and you see all the structures; but when the patient is in surgery the skull is open, and the surgeon also cuts the meningeal membranes. From these two phenomena, brain shift is caused.
Normally the brain is in an environment at a pressure higher than atmospheric, floating inside the skull in a liquid; when you open the skull you put it at atmospheric pressure, and when you cut the membranes the liquid flows away. The effect of this motion depends on the size of the deformation. You can use systems that describe the deviation of specific structures inside the brain with ultrasound imaging: identify them (more easily than on the preoperative MRI), express the information in the same reference system, and compare the two. In this way you are able to measure the deviation and provide the surgeon with indications and a level of attention, compared with an image coming only from the treatment plan, with the brain in a different situation. Very often you will see ultrasound probes with markers attached, tracked by the optical tracking system to identify position and orientation of the probe, in order to obtain the three-dimensionality of the ultrasound information. Then you have the chance to superimpose the grey-level volume slices acquired with the MRI; thanks to this you can clearly see the boundary of the tumor and understand whether it has been reached by the surgeon, who can stop there and consider the removal concluded, or whether the brain shift effect has changed the positions, in which case the surgeon needs this information.

DICOM standard
One of the major problems, as we saw, is to compare information coming from different imaging modalities: we need a common scenario. This refers not only to the geometrical scenario, but also to the language, the archiving, the storage (saving) and the communication of the data (bio-images). This is provided by the DICOM standard (Digital Imaging and COmmunications in Medicine), which defines the standard for data communication among ICT systems for image processing and storage. It tries to catch up with technological development, but the evolution is so fast that the DICOM standard has found hard times; in any case we will always find it. After the acquisition of the images in the preoperative phase, the next step is to pass from the image world (grey levels, pixels) to the physical world, the physical space (mm, cubic cm) in which we define the surgical plan. The key to doing this is how the images are stored in the memory of your computer according to the DICOM standard. It is an object-oriented architecture: entities from the real world (patient, images, name of the institution) are modeled as informative objects named Information Objects. The operations I can do are to save the information on my computer, retrieve it for display, or send it to another server. The specifications of the operations allowed on each object are called DIMSE (DICOM Message Service Elements), and they provide functionalities named Service Classes, like Storage, Print, Query/Retrieve etc. The DICOM standard is made to create communication among systems: one is the one that provides the service (SCP, Service Class Provider), and the other is the one that retrieves and uses it (SCU, Service Class User). There are two classes of DICOM objects:
-Composite objects, composed of different levels of information
-Normalized objects
The DICOM standard uses unique identifiers (UID, Unique IDentifiers) for the identification of information objects and related associated services. Every object (information) has a number, and this number is unique with respect to everywhere else in the world, in every DICOM archiving and communication system.
So when you program a communication system you need to stick to this standard: if you have to call a specific service on a specific piece of information, you do it through its specific UID. Typically the objects manipulated in a communication system based on the DICOM standard are the SOP (Service-Object Pair) classes, which are the fundamental units of the DICOM standard. An object is always related to a service: an image (information object) can be retrieved, saved and archived, or printed (these are the services). Very often object and service are considered as a unit within the DICOM standard. Every file that contains a DICOM object is structured as follows. The header is very simple: 128 null bytes, plus a 4-byte acronym (DICM) that tells everybody reading the file that it contains a DICOM object. Then there is a sequence of saved data elements. Every data element is structured in this way: it has a tag (label) which is unique for every specific kind of data (for example, patient age has a specific tag indicating that the data read next will be that one); then there is the type of the data (string, integer etc.); and then the multiplicity, or length, i.e. the number of values for the attribute. If you want to be DICOM compliant you need to obey these standards.

Examples of data — Headers —> Image DICOM file structure
Every slice of a CT or MRI is saved as a single separate file following the DICOM standard, and in it you can find a lot of relevant information accompanying that specific slice, among which the information needed to go from the image-based world to the physical one. An image file consists of: the header, which contains relevant tags such as patient data, examination data and other administrative information; and a matrix with the values of the pixels of the specific image, the numerical representation of the image. Each element of the matrix is the grey-level value of the specific pixel (e.g. the proton density in MRI). Examples of matrix dimensions: MRI 256x256, nuclear medicine (PET) 128x128, CT 512x512. A DICOM CT slice file is composed of:
-header (30 kB): acquisition data, patient data, image specs, reference system specs;
-image data (2 MB): 512 rows, 512 columns, 16-bit grey-level coding (Hounsfield units).
Opening the header we have a lot of information; every row has a specific tag for a specific component:
- Dimensions of the image (width, height, bits that can be used)
- Patient name, ID etc.
- Slice thickness (in mm) (z dimension), which varies between acquisitions: it is established by the operator, and it is already a piece of information to pass to the physical world.
- kVp: the kilovolt peak applied to the X-ray tube to obtain the image.
- Data collection diameter, related to the reconstruction diameter —> the FOV of the image; the opaque ring in the circle is the FOV of the scanner, in this case 50 cm.
- Pixel spacing (x, y dimensions): the physical dimension of the pixel in the axial plane, i.e. the physical planar area occupied by each pixel. I can obtain it by dividing the reconstruction diameter by 512: 500/512 gives 0.98, so, passing to the physical world, the dimension occupied by each pixel is 0.98 mm along the horizontal and vertical directions.
- Image position (patient) (x, y coordinates of pixel), made of two numbers: the coordinates in the physical world of the upper left corner of the image with respect to the center of it.
This is really important to pass to the physical world: the reference frame is centered in the image (the physical reference system is centered because the two numbers are equal), so it allows placing in the real world the reference system of the scanner.
- Slice location (z coordinate of pixel): the position in mm from the zero of the examination, the reference slice (set by the operator), which is the first or the last slice acquired; so the real origin of the scanner is 120 mm before, in the example. It is a volumetric piece of information that allows me to locate the slice in the real world.
In CT, each slice is the representation of the densities (absorption/attenuation coefficients) of 3D elements (voxels). Typical voxel dimensions are 0.98x0.98x3 mm.
Image position [x y z]: [X, Y] = coordinates [mm] of the upper left corner with respect to the axes origin (e.g. X, Y = -250.39 mm); [Z] = Slice Location.
Pixel spacing [delta_X delta_Y]: physical size of a single pixel in the slice [mm] in the X and Y directions (e.g. [0.98 0.98] mm).
Slice thickness gives the Z component.
Mapping from image to physical space: a pixel at row i and column j of the slice lies at x = X + j*delta_X, y = Y + i*delta_Y, z = Slice Location (a code sketch is given at the end of this section).

IMAGING WORKFLOW IN CAS/CIS
Starting from the grey levels, from the tomographic image available in MRI, we arrive, through a specific pathway and a specific set of stations, at a 3D graphic representation of the patient-specific model we have built, on which we define the surgical plan. So at the end we have the patient-specific model, built and represented by means of surface-based or volume-based (voxel-based) information. This object is provided to the surgeon with appropriate interaction, so that he is able to take measurements and explore the model, defining the relevant information that will compose the surgical plan. We can do this through two different pathways, after a pre-processing step:
1) Bidimensional pathway: the one in which we work on the segmentation of the anatomical structures considering every single slice separately, working in 2D. We obtain a set of contours that we then connect in order to come up with a surface-based model. Here we lose the volumetric information that is initially provided by the grey-level volume; for the sake of computational cost we can throw it away. For example, in neurosurgery the surgeon wants to know the boundaries and where the tumor is (the geometrical configuration); he is not interested in the proton density, and the model can even be hollow in some cases.
2) Three-dimensional pathway: we segment the structures in 3D and keep the volumetric information inside and outside the anatomical structures we are interested in. For example, in treatment planning for radiation oncology the separation of structures alone is not enough: we need the volumetric information according to specific parameters.
Preprocessing of the images for a proper segmentation —> the aim is to identify structures on the images for building the model. The segmentation of anatomical structures is still nowadays a process that introduces a lot of uncertainty (errors in the contouring); it is still the major source of error along the preoperative surgical planning phase. As a first step we need all the expert information in preparing the images, allowing the work to be done as well as possible so that the geometrical inaccuracies are minimized. These inaccuracies are of millimetric size, but if structures are not correctly segmented the errors are carried along for the rest of the journey.
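Putting the geometry tags together, here is a minimal Python sketch of the image-to-physical mapping described above. It is simplified to axial slices (the full DICOM mapping would also use the Image Orientation direction cosines); the default values are the ones from the example header.

```python
def pixel_to_physical(row, col, slice_location,
                      image_position=(-250.39, -250.39),
                      pixel_spacing=(0.98, 0.98)):
    """Map a pixel index (row, col) of one CT slice to scanner coordinates
    [mm], using the tags discussed above: Image Position (upper-left corner),
    Pixel Spacing (physical pixel size) and Slice Location (z).
    Note: DICOM actually orders Pixel Spacing as [row, column] spacing;
    here the two values are equal, so the distinction is harmless."""
    x = image_position[0] + col * pixel_spacing[0]
    y = image_position[1] + row * pixel_spacing[1]
    z = slice_location
    return (x, y, z)

# Center of a 512x512 slice located at z = -120 mm:
print(pixel_to_physical(256, 256, -120.0))  # ~ (0.5, 0.5, -120.0) mm
```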
Pre-processing 2D:
-geometric transformations
-selection of the region of interest (ROI)
-resampling (resolution, quantization)
Filtering 2D:
-SNR enhancement: reduce noise and enhance the contrast of the structures of interest for segmentation —> the basic process is a grey-level reassignment to the pixels displayed on a specific image
-reassign to every pixel v0 a value obtained from the values of the surrounding pixels
-average and median filtering
-filters based on image statistics (Wiener) (2D)
What we do with smoothing filters is just to reweight and reassign the grey level of the pixel v0 according to the values of the pixels that surround the one we are working on, providing a new pixel value. The idea is to suppress Gaussian noise, increasing the SNR, taking the average, the median or a different combination of the 8 pixels surrounding v0. Many times this strategy on one hand reduces the noise (smoothing), but on the other confuses the contours, blurring them. It depends on the use we need to make of the image, i.e. whether or not we are mostly interested in the boundaries.

Segmentation means identifying an anatomical structure. It can be identified by labeling pixels/voxels as belonging or not to a structure, or by labeling (identifying) the pixels/voxels that belong to its boundaries. These are the classes of segmentation methods:
•Region-based: they label the pixels that belong to the structure. They are based on criteria of homogeneity: for example, all the pixels belonging to the lungs share a similar grey level, so the method looks for adjacent clusters of similar grey level, and a pixel classified as different will not belong to that cluster. The output is a structure described as an occupied region.
- thresholding
- region growing
- morphology filters (used for refinement)
•Edge-based: they label the pixels belonging to the contour of a specific structure, looking for discontinuities and rapid variations of the grey level.
- derivative operators
- dynamic programming
•Boundary-based (isosurfacing): a sort of extension of thresholding, but in 3D. Applied in space, they segment the structure and build the surface separating it from the rest of the anatomy within the same process. The output is a structure described by the contour/surface separating it from the rest of the image.
- cuberille model
- marching cubes

Thresholding: a grey level is selected as threshold (T), and pixels are classified as under/over the threshold, with binarization. It is usually the first thing we do to get rid of what we do not want to work on; it is a first rough operation. It is easy to select a threshold for a specific category we want to see, isolating specific structures according to a certain threshold; the most difficult operation is to define a proper threshold, depending on what we want to see. We can define more than one threshold, or define an interval. We can also choose the threshold as a function of the grey-level histogram, for example placing it exactly in a valley of the histogram. In a histogram, on the x axis we have the intensity of the pixels (the grey levels, according to the bits available), and on the y axis the number of pixels displaying that specific grey level.
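A minimal Python sketch of this histogram-guided threshold choice, anticipating the bimodal valley rule described next; the one-peak-per-half split is a crude toy assumption (real pipelines use smoothed histograms or criteria such as Otsu's).

```python
import numpy as np

def valley_threshold(image, bins=256):
    """Toy threshold choice: compute the grey-level histogram and return
    the level of the deepest valley between two peaks, crudely assuming
    one peak in each half of the grey-level range (bimodal case)."""
    hist, edges = np.histogram(image, bins=bins)
    half = bins // 2
    p1 = np.argmax(hist[:half])                 # peak of the darker mode
    p2 = half + np.argmax(hist[half:])          # peak of the brighter mode
    valley = p1 + np.argmin(hist[p1:p2 + 1])    # deepest point in between
    return edges[valley]

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)])
T = valley_threshold(img)
print(f"threshold = {T:.1f}")   # lands between the two modes
print((img > T).mean())         # ~0.5 of the pixels classified as 'bright'
```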
If we recognize a bimodal distribution in the histogram, we can fix the threshold exactly at the deepest point of the valley: all the pixels below that threshold will belong to one specific anatomical structure present in the slice, and those over it will belong to another one. If instead we have a multimodal histogram, the thresholds can be selected in correspondence of the minima between adjacent local maxima.

Region growing: classifies pixels according to different criteria of homogeneity (grey levels, colors/texture). At the beginning it requires the intervention of the operator, who needs to put a seed, i.e. to select manually a first pixel that for sure belongs to the structure; from that moment on the process is automatic. If the homogeneity criteria are met, pixels are included in the growing region. The criterion is really general, so it needs to be adapted to the segmentation of the specific structure: for example in bone we cannot assume that we will find only very bright pixels, as there are also darker ones. Typical inclusion criteria (on intensity only):
- the pixel intensity belongs to a specific interval
- at each evaluation, the average of the pixel intensities must belong to a specific interval
- at each evaluation, the variance of the pixel intensities must belong to a specific interval
...
Typically, in all segmentation methods there will be a refinement step: no one will completely trust a segmentation method. (A minimal sketch of region growing is given at the end of this subsection.)

Edge-based: the criterion is that we are looking for discontinuities, rapid variations of grey level across an edge, i.e. a group of pixels separating two different regions. These methods look for the portions of the image where this transition occurs. Ideally the intensity changes abruptly in correspondence of the edge, as a step; in practice noise is our worst enemy, and real edge profiles are ramps or noisy steps. If we want to detect rapid transitions we use derivative operators, searching for the maxima of the first derivative or the zeros of the second derivative of the image function: the idea is to convolve the image with an appropriate mask (kernel) containing a derivative operator plus some noise suppression, like the Sobel and Prewitt gradient estimators. After the application of the kernel, in the resulting image you will have very bright pixels where rapid transitions happen and dark ones elsewhere, so you are obliged to use a thresholding operator to identify and map the edge pixels. (Example on the horizontal (right) and vertical (left) directions.) There are many methods available, like the LoG (Laplacian of Gaussian) edge detector:
1. convolution of the image with the Laplacian (second derivative) of a Gaussian kernel
2. search for the zeros in the resulting matrix
In general all the methods work, but:
- noise sensitivity
- smoothing required
- thresholding required after the application of the derivative filter

Morphology filters make use of kernels appropriately defined for the requirements we have, performing convolutions between the image and specific masks. They are applied after image binarization and are useful for erosion, dilation, opening, closure, skeletonization.
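Here is the promised minimal Python sketch of region growing, with a plain intensity-interval inclusion criterion; the names and the 4-connectivity choice are illustrative assumptions.

```python
import numpy as np
from collections import deque

def region_growing(image, seed, lo, hi):
    """Minimal region growing sketch: start from an operator-chosen seed
    and include 4-connected neighbours whose intensity lies in [lo, hi].
    Real implementations add criteria on the running mean/variance and a
    manual refinement step afterwards."""
    grown = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if grown[r, c] or not (lo <= image[r, c] <= hi):
            continue
        grown[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1] and not grown[rr, cc]:
                queue.append((rr, cc))
    return grown

img = np.zeros((6, 6)); img[1:4, 1:5] = 200.0   # a bright rectangular 'structure'
mask = region_growing(img, seed=(2, 2), lo=150.0, hi=255.0)
print(mask.sum())  # 12 pixels: the rectangle, and nothing else
```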
Deformable contours/surfaces (snakes/balloons) work on the minimization of a cost function (expressed in terms of grey-level variation) interpreted as the internal (elastic) energy of the contour, and on the presence of attracting elements (discontinuities, edges):
1. the contour is manually initialized: an operator fixes some points manually;
2. the contour is attracted by the edges in the 2D image, areas of rapid transition;
3. the contour is deformed accounting for specific mechanical features (elasticity, viscosity) initially attributed to it. These mechanical properties are sensitive to the grey levels found in the proximity of the active contour.

We have been working in 2D (see the initial map), but if we do not take that decision, i.e. if we do not abandon the 3D information, there are many extensions of the pre-processing and filtering steps. Smoothing can be extended to 3D without problems, reassigning the grey level of the voxel we are working on by combining the intensity values of the 6 adjacent voxels with a specific operation (average, median, convolution with 3D kernels). The cost will be the same: blurred contours.

Interpolation is used both in 1D (linear) and in 3D. For example, to find the grey-level value of an intermediate point between 2 pixels, a linear interpolation connecting the two is required. The easiest method is the nearest neighbour: assign to the new pixel the grey level of the closest pixel; very simple but very inaccurate. Linear interpolation assigns the grey level depending on the distance between the two nearby points and on their grey levels. Trilinear or spline interpolation are more complex schemes for 3D interpolation: calculate the grey-level value of the red point with respect to the surrounding voxels, where the corners of the cube are the centers of the 8 surrounding voxels, with proper coordinates (normalized to 1). This is fundamental for complex anatomical models; in fact in some cases it is hardware-implemented in graphical workstations. We can also extend the edge-enhancement operators to 3D: here we do not work with a convolution, but we look for an approximation of the gradient in the 3D grey-level volume. Around a voxel v0 we can calculate an approximation of the gradient considering the grey levels of the 6 surrounding voxels. This is useful when we want to calculate the grey level of contours.

Boundary-based methods (isosurfacing): they are characterized by an equality: they look for the elements that possess a predefined grey-level value established by the operator. The algorithms (intrinsically 3D) are directly applied to the grey-level volume, and look for the elements of the surface characterized by that specific grey level. They combine segmentation with boundary surface generation and representation (isosurfacing).
- cuberille model: requires an initial grey-level volume binarization (a specific threshold T is defined), or a criterion for classifying voxels as belonging or not to a specific structure;
- marching cubes: requires the initial definition of a threshold T for voxel classification, and exploits the intensity data for the boundary surface generation.

Cuberille model
Let's consider a cluster of voxels. We need to establish a threshold considered relevant for the structure of interest. The algorithm can come to a situation where V1 is above the threshold, i.e. has a grey level above the one we set, while V2 is below it.
By definition we are looking for a surface separating these two voxels; the algorithm needs to make a decision. In this case it is easy, but if there is ambiguity we need to apply heuristic criteria: one of the most basic is to ensure the continuity of the orientation of the elements included in the surface. It is not a fully automatic method, but one we can launch and let proceed automatically, guaranteeing a final refinement. The problem of the cuberille model is the spatial resolution of the surface we are creating: if we implement the method in such a way that the surface elements can only be the faces between two adjacent voxels, the dimension of the elements will be the dimension of the voxel. So if the voxel is 1 mm we will have 1 mm squares as separating surface elements, and this can in some cases be too coarse to properly describe the separating surface.

Marching cubes
It makes extensive use of interpolation. Imagine having 8 voxels forming a cluster connected as a cube, each one with an intensity level assessed as a function of the predefined threshold T (we need to define a threshold grey level here too, to classify the voxels as above or below it). Let's suppose only v1 is above the threshold, i.e. belongs to the object, and all the others are below. In this case we need to define an element of separation: we look along the line v1-v2, because there will be for sure a virtual point possessing exactly the grey level of the threshold we set at the beginning. I find the position of this point by interpolation, and I repeat this with all the points connected to v1, i.e. v2, v3, v5. The triangle that connects the three resulting points is a geometrical element connecting three virtual points with the same grey-level value, equal to the threshold: an element of the separating surface (look-up table). So I need to store the voxel elements, their order of connection (who is connected to whom) and the normal versor (direction cosines) of the plane; every time, we need to store this information. Usually it is not so simple, with just one voxel under the threshold: there are 15 possible configurations for a cluster of 8 voxels, and there can also be ambiguities that make things more complicated. With the same distribution of above- and below-threshold grey-level values we can obtain ambiguous surfaces, like the red ones in the image. Ambiguity requires manual refinement or heuristic criteria; the best is to ensure the continuity of the surface, as we do not want to see holes in the surface we are generating.

At this point it is time to decide whether the 3D information is relevant for the isosurface-based segmentation or not, i.e. whether the volumetric information is important for our approach. The possibilities are:
-2D contour connection: we take adjacent slices and worry about how to link them, connecting the bidimensional contours.
-Isosurfacing methods: they lead to a reduction of the computational cost and to the use of well-known computer graphics methods for rendering, but there is a drop of the volumetric information coming from the imaging studies (intensity levels in a CT volume): we lose the volumetric information.

Contour connection
We apply the so-called "tiling" method. Let's suppose we have two contours: P, made up of a specific sequence of pixels labeled as belonging to the contour of the specific anatomical structure, lying on one slice.
The contour Q, of the same anatomical structure, is made up of a similar sequence of pixels classified as belonging to the separating contour, and lies on a slice adjacent to P's. How can we connect in a clever way the pixels belonging to P and Q, in order to give three-dimensionality to the structure? We need to establish some rules:
1. All vertices lie on P and Q, filling up the border.
2. If QjPi is a side of a triangle, then QjPiPi+1 or QjPiQj+1 is the adjacent triangle in the tiling process. In this way continuity is established.
3. Each triangular element has at least one vertex on P and one on Q.
According to these rules, if we report in a table the 6 pixels constituting the contour P and, likewise, the 7 of Q, and we move in this table connecting from the top left to the bottom right, this connection identifies pathways that respect the 3 rules above: all the constraints are satisfied by the table paths going from one corner to the other. We also need a heuristic criterion to ensure that the tiling is performed accurately, i.e. that the surface layer is formed in a clever way: for example, connecting pixel number 5 of contour Q to all the pixels of contour P is legal according to the rules, but it is not clever, because it gives rise to connections that do not represent the surface well. This criterion can
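A toy Python sketch of the tiling walk under the three rules above; the greedy shortest-new-edge choice used here is one common heuristic for a well-shaped surface, an assumption rather than the course's prescribed criterion.

```python
import numpy as np

def tile_contours(P, Q):
    """Toy tiling sketch: connect contour P (on one slice) to contour Q
    (on the adjacent slice) by walking a table from (0, 0) to
    (len(P)-1, len(Q)-1). Each step advances on P or on Q, i.e. emits
    triangle (Pi, Qj, Pi+1) or (Pi, Qj, Qj+1), so rules 1-3 hold by
    construction; greedily picking the shorter new span is an assumed
    heuristic for a well-shaped surface."""
    i, j, triangles = 0, 0, []
    while i < len(P) - 1 or j < len(Q) - 1:
        advance_p = j == len(Q) - 1 or (
            i < len(P) - 1
            and np.linalg.norm(np.subtract(P[i + 1], Q[j]))
            <= np.linalg.norm(np.subtract(P[i], Q[j + 1])))
        if advance_p:
            triangles.append((("P", i), ("Q", j), ("P", i + 1))); i += 1
        else:
            triangles.append((("P", i), ("Q", j), ("Q", j + 1))); j += 1
    return triangles

# P has 6 vertices on one slice (z = 0), Q has 7 on the next (z = 3 mm):
P = [(np.cos(t), np.sin(t), 0.0) for t in np.linspace(0, 2 * np.pi, 6, endpoint=False)]
Q = [(np.cos(t), np.sin(t), 3.0) for t in np.linspace(0, 2 * np.pi, 7, endpoint=False)]
tris = tile_contours(P, Q)
print(len(tris))  # 6 + 7 - 2 = 11 triangles, each with vertices on both P and Q
```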