981 results for Digital computer simulation


Relevance: 90.00%

Abstract:

Three distinct versions of TUNPOP, an age-structured computer simulation model of the eastern Pacific yellowfin tuna (Thunnus albacares) stock and surface tuna fishery, are used to reveal mechanisms which appear to have a significant effect on the fishery dynamics. Real data on this fishery are used to make deductions about the distribution of the fish and to show how that distribution might influence events in the fishery. The most important result of the paper is that the concept of the eastern Pacific yellowfin tuna stock as a homogeneous unit is inadequate to represent the recent history of the fishery. Inferences are made about the size and distribution of the underlying stock, as well as its potential yield to the surface fishery under alterations in the level and distribution of effort.

Relevance: 90.00%

Abstract:

The imaging technique of stimulated emission depletion (STED) exploits the nonlinear relationship between fluorescence saturation and stimulated depletion of the excited state. It achieves three-dimensional (3D) imaging and breaks the diffraction barrier of far-field light microscopy by confining fluorescence emission to a sub-diffraction spot. To improve the resolution attained by this technique, a computer simulation of the temporal behavior of the population probabilities of the sample was carried out in this paper, and optimized parameters such as the intensity, duration and delay time of the STED pulse were derived.
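As a rough illustration of this kind of calculation, the sketch below integrates a two-level rate equation for the excited-state population under an excitation pulse followed by a delayed STED (depletion) pulse. All rates, durations and delays are invented for the example and are not the paper's optimized values.

```python
import numpy as np

# Two-level rate-equation sketch (illustrative parameters, not from the paper):
#   dN1/dt = k_exc(t) * (1 - N1) - (k_fl + k_sted(t)) * N1
# where N1 is the excited-state population probability.

tau_fl = 3e-9            # fluorescence lifetime, ~3 ns (typical dye, assumed)
k_fl = 1.0 / tau_fl

def pulse(t, t0, dur, rate):
    """Rectangular pulse: pumping/depletion rate active between t0 and t0+dur."""
    return rate if t0 <= t < t0 + dur else 0.0

def simulate(delay, sted_rate=5e9, exc_rate=2e9, exc_dur=0.2e-9,
             sted_dur=1.0e-9, t_end=10e-9, dt=1e-12):
    """Integrate the excited-state population N1 with forward Euler."""
    n_steps = int(t_end / dt)
    N1 = 0.0
    trace = np.empty(n_steps)
    for i in range(n_steps):
        t = i * dt
        k_exc = pulse(t, 0.0, exc_dur, exc_rate)
        k_dep = pulse(t, delay, sted_dur, sted_rate)
        N1 += dt * (k_exc * (1.0 - N1) - (k_fl + k_dep) * N1)
        trace[i] = N1
    return trace

no_sted = simulate(delay=1.0)        # delay far beyond t_end: no STED pulse
with_sted = simulate(delay=0.3e-9)   # STED pulse shortly after excitation
idx = int(2e-9 / 1e-12)              # sample the populations at t = 2 ns
print(no_sted[idx], with_sted[idx])  # STED strongly depletes the excited state
```

Sweeping `delay`, `sted_rate` and `sted_dur` in such a model is the kind of parameter optimization the abstract describes, albeit with a far more detailed level scheme.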

Relevance: 90.00%

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects of arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify it at the start of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which are explained in detail later in the thesis.
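A heavily simplified sketch of the photon-splitting idea (not the thesis's simulator) is shown below: photon packets random-walk through a scattering half-space, carry a weight to account for absorption, and packets that turn back toward the detector are split into lower-weight clones so that more of them contribute to the detected signal. The optical coefficients and splitting factor are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy photon-packet Monte Carlo in a semi-infinite scattering medium with
# "photon splitting": packets headed back toward the surface are cloned
# with divided weight, so more arrive at the detector per launched photon.

mu_s, mu_a = 10.0, 0.1          # scattering/absorption coefficients (1/mm, assumed)
mu_t = mu_s + mu_a
albedo = mu_s / mu_t

def run_photon(split=4, max_events=200):
    """Trace one launched packet (plus its clones); return the weights of
    all packets that re-exit through the surface z = 0."""
    detected = []
    stack = [(0.0, 1.0, 1.0)]   # (depth z, direction cosine, weight)
    while stack:
        z, mu, w = stack.pop()
        for _ in range(max_events):
            z += mu * (-np.log(rng.random()) / mu_t)   # sample a free path
            if z <= 0.0:
                detected.append(w)                     # escaped: "detected"
                break
            w *= albedo                                # absorption via weight
            if w < 1e-4:
                break                                  # terminate tiny packets
            mu = rng.uniform(-1.0, 1.0)                # isotropic rescatter (toy)
            if mu < 0.0 and split > 1:
                # splitting: clone the upward-headed packet, dividing weight
                # so the total expected signal stays unbiased
                for _ in range(split - 1):
                    stack.append((z, mu, w / split))
                w /= split
    return detected

weights = [w for _ in range(200) for w in run_photon()]
print(len(weights), sum(weights))   # detected packet count and total weight
```

Because each split conserves total weight, the estimate stays unbiased while the variance of the detected signal drops, which is the same reason the thesis's importance-sampled simulator gains its speed.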

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure, which predicts the thickness of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
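The two-stage "classify the structure, then regress within that structure" pipeline can be sketched with a toy stand-in for the simulator's labelled data. Here the scans, layer counts and the nearest-neighbour learner are all invented for illustration and are far simpler than the thesis's committee-of-experts models.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_scan(widths, n_px=64):
    """Toy 1D 'A-scan': piecewise-constant layers plus noise, a stand-in
    for a simulated OCT column with known ground truth."""
    scan = np.zeros(n_px)
    edge = 0
    for k, w in enumerate(widths):
        scan[edge:edge + w] = k + 1 + 0.1 * rng.standard_normal(w)
        edge += w
    return scan

def sample(n):
    X, n_layers, widths = [], [], []
    for _ in range(n):
        c = int(rng.integers(2, 4))                  # 2 or 3 layers
        w = rng.integers(8, 20, size=c)
        X.append(make_scan(w))
        n_layers.append(c)
        widths.append(np.pad(w, (0, 3 - c)))         # pad truth to fixed length
    return np.array(X), np.array(n_layers), np.array(widths)

X, y_cls, y_reg = sample(500)                        # "unlimited" simulated data

def nn_index(x, Xref):
    return int(np.argmin(((Xref - x) ** 2).sum(axis=1)))

def predict(x):
    # Stage 1: classify the structure (here, the number of layers)
    c = y_cls[nn_index(x, X)]
    # Stage 2: regress widths using only training scans of that structure
    mask = y_cls == c
    return c, y_reg[mask][nn_index(x, X[mask])]

Xt, ct, wt = sample(50)
acc = float(np.mean([predict(x)[0] == c for x, c in zip(Xt, ct)]))
print(acc)
```

Swapping the nearest-neighbour stages for a random forest or a neural network, and the scalar class for a richer structure label, recovers the hierarchical design the abstract describes.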

It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals making up the lower half of the image are weak, messy, and uninterpretable to the human eye, they still carry enough information that a well-trained machine learning model, when fed with them, outputs precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting this kind of task requires fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.

Relevance: 90.00%

Abstract:

Based on the Coulomb friction model, a model of the frictional motion of the workpiece relative to the polishing pad in annular polishing was presented. The model was simulated and analysed with dynamic-analysis software. The results show that the workpiece does not rotate steadily. When the angular velocity and direction of the ring were the same as those of the polishing pad, the angular velocity of the workpiece jumped at the beginning and, in the later stage before contact with the ring, equalled that of the polishing pad. The angular velocity of the workpiece oscillated at the moment of contact with the ring; afterwards it increased gradually and fluctuated about a given value, while the angular velocity of the ring decreased gradually and also fluctuated about a given value. Since the contact between the workpiece and the ring is linear, their linear velocities and directions should be the same, but the angular velocity of the workpiece was larger than that of the polishing pad when the radius of the workpiece was less than that of the ring. This disagrees with the pure-translation principle, and the workpiece surface then cannot be flat. Consequently, in addition to friction, the angular velocity of the ring and the radii of the ring and workpiece need to be controlled to make the angular velocity of the workpiece equal to that of the polishing pad and so obtain fine surface flatness of the workpiece. Copyright © 2007 Inderscience Enterprises Ltd.
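The velocity-matching argument can be made concrete with a few lines of arithmetic. Assuming illustrative speeds and radii (not values from the paper), matched surface speeds at the ring-workpiece contact imply omega_work = omega_ring * r_ring / r_work, so a smaller workpiece spins faster than the pad unless the ring speed is adjusted:

```python
# Velocity-matching sketch for annular polishing (illustrative numbers only).
# At the linear contact between ring and workpiece their surface speeds match:
#   omega_work * r_work = omega_ring * r_ring
# Flatness, however, requires omega_work == omega_pad (pure translation of
# the workpiece over the pad).

def workpiece_omega(omega_ring, r_ring, r_work):
    """Workpiece angular velocity implied by matched contact speeds."""
    return omega_ring * r_ring / r_work

omega_pad = 60.0              # rpm, assumed
omega_ring = 60.0             # ring initially driven at the pad speed
r_ring, r_work = 0.15, 0.10   # metres, assumed

ow = workpiece_omega(omega_ring, r_ring, r_work)
print(ow)                     # 90 rpm > the pad's 60 rpm: not pure translation

# To restore omega_work == omega_pad, drive the ring slower in proportion
# to the radii:
omega_ring_adjusted = omega_pad * r_work / r_ring
print(omega_ring_adjusted)    # 40 rpm brings the workpiece back to 60 rpm
```

This is the same control conclusion the abstract reaches: the ring speed and the two radii, not friction alone, set the workpiece's angular velocity.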

Relevance: 90.00%

Abstract:

The Lobo reservoir, located in the state of São Paulo, is a dynamic system in which a diurnal cycle of stratification and mixing develops, similar to what has been observed in other tropical lakes. Three-dimensional computer simulation was performed with the ELCOM (Estuary and Lake Computer Model) software coupled to CAEDYM (Computational Aquatic Ecosystem Dynamics Model), both developed by the CWR (Centre for Water Research) at the University of Western Australia. Five simulations were carried out: Piloto Primavera, based on real spring-season data for the reservoir in 2007; Primavera-P, in which the concentrations of total phosphorus, inorganic phosphate and total dissolved phosphate were increased by 100% in the reservoir (water column and sediment) and in the tributary rivers; Primavera-V, in which wind intensity was increased by 50%; Primavera-T, in which the water temperature (reservoir and tributaries) and the air temperature were increased by 1 °C; and Primavera-X, in which the water (reservoir and tributaries) and air temperatures were increased by 1 °C, the concentrations of total phosphorus, inorganic phosphate and total dissolved phosphate were increased by 100%, and the wind speed was increased by 50%. Chlorophyll-a concentration was represented by the cyanobacteria and chlorophyte groups. Each simulation spanned 90 days. Chlorophytes showed greater population growth than cyanobacteria in all simulations. In the reservoir, vertical mixing is driven daily by wind or by convective processes caused by heat loss from the water body. Oxygenation of the reservoir is greater in the presence of wind and of photosynthetic groups. Total phosphorus and nitrogen concentrations increased in all simulations.

Relevance: 90.00%

Abstract:

We propose the analog-digital quantum simulation of the quantum Rabi and Dicke models using circuit quantum electrodynamics (QED). We find that all physical regimes, in particular those which are impossible to realize in typical cavity QED setups, can be simulated via unitary decomposition into digital steps. Furthermore, we show the emergence of the Dirac equation dynamics from the quantum Rabi model when the mode frequency vanishes. Finally, we analyze the feasibility of this proposal under realistic superconducting circuit scenarios.
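To illustrate what a digital (Trotterized) simulation of the quantum Rabi model involves, the sketch below splits the Hamiltonian into a free part and a coupling part in a truncated Fock space and compares the digital evolution with exact evolution. The couplings and truncation are illustrative choices, not the paper's circuit-QED parameters.

```python
import numpy as np

# Quantum Rabi model, H = w a^dag a + (wq/2) sz + g sx (a + a^dag),
# simulated digitally by Trotterizing H into free and coupling parts.

N = 12                                       # Fock-space truncation (assumed)
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)   # annihilation operator
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2, IN = np.eye(2), np.eye(N)

w, wq, g = 1.0, 1.0, 0.8                     # illustrative couplings
H1 = np.kron(I2, w * (a.T @ a)) + np.kron(0.5 * wq * sz, IN)  # free part
H2 = g * np.kron(sx, a + a.T)                                 # coupling part

def expmh(Hm, t):
    """exp(-i Hm t) for a Hermitian matrix via eigendecomposition."""
    E, V = np.linalg.eigh(Hm)
    return (V * np.exp(-1j * E * t)) @ V.conj().T

t = 2.0
psi0 = np.zeros(2 * N, complex)
psi0[0] = 1.0                                # qubit up, field vacuum

exact = expmh(H1 + H2, t) @ psi0
fids = []
for n in (1, 4, 16):
    step = expmh(H1, t / n) @ expmh(H2, t / n)   # one digital (Trotter) step
    psi = np.linalg.matrix_power(step, n) @ psi0
    fids.append(abs(np.vdot(exact, psi)) ** 2)
print(fids)                                  # fidelity grows with digital steps
```

Each `step` is the "unitary decomposition into digital steps" of the abstract; in a circuit-QED implementation the two exponentials would be realized by native gate sequences rather than matrix exponentials.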

Relevance: 90.00%

Abstract:

Mathematical and computational modeling is a tool that has been widely used in Biology and the Biomedical Sciences. Nowadays, a significant amount of experimental data in this area can be found in the literature, making it possible to develop models that combine experimentation with theoretical hypotheses. The goal of the present project is to implement a mathematical model of synaptic transmission connecting neurons in a repetitive-discharge, or reverberating, circuit, in order to investigate its behavior under parametric variation. Through computer simulations, using a program developed in C++, we intend to use it to simulate an immediate-memory circuit. Despite the considerable advances of Neurophysiology and computational Neuroscience toward understanding the physiological and behavioral characteristics of the abilities of the Central Nervous System, many neuronal mechanisms still remain completely obscure. The mechanism by which the brain acquires, stores and recalls information is not yet definitively known. However, Hebb's postulate concerning reverberating networks, namely the idea that reverberation would facilitate the association of coincident data among temporally divergent sensory inputs, has been accepted as an explanation for the formation of immediate memory (Johnson et al., 2009). Thus, based on Hebb's postulate, the results observed in the adopted neuro-mathematical computational model display the characteristics of an immediate-memory circuit.
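A minimal stand-in for such a reverberating circuit (in Python rather than the project's C++, with invented parameters) is a ring of leaky integrate-and-fire neurons in which a single external kick keeps circulating, holding a short-term activity trace:

```python
import numpy as np

# Ring of leaky integrate-and-fire neurons, each exciting the next through a
# delayed synapse. One external kick at t=0 then circulates indefinitely,
# a toy "immediate memory" held by reverberation. All parameters assumed.

n, T, dt = 5, 400, 0.5            # neurons, steps, ms per step
tau, v_th, w_syn = 10.0, 1.0, 1.6 # membrane time constant, threshold, weight
delay = 8                         # synaptic delay in steps

v = np.zeros(n)                   # membrane potentials
spikes = []                       # recorded (step, neuron) events
buffer = np.zeros((T + delay, n)) # delayed synaptic input per neuron

buffer[0, 0] = 2.0                # external kick to neuron 0 at t = 0

for t in range(T):
    v += dt / tau * (-v) + buffer[t]          # leak + arriving input
    fired = v >= v_th
    for i in np.where(fired)[0]:
        spikes.append((t, i))
        buffer[t + delay, (i + 1) % n] += w_syn  # excite next neuron in ring
    v[fired] = 0.0                             # reset after spiking

print(len(spikes), spikes[:6])    # activity keeps circulating around the ring
```

Because the synaptic weight exceeds the threshold, the initial event re-excites the loop every `delay` steps; lowering `w_syn` below threshold makes the trace die out, which is the kind of parametric variation the project sets out to study.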

Relevance: 90.00%

Abstract:

Through digital image processing, specifically the segmentation and classification stages, it was possible to analyze the human-occupation process of the Bonfim river watershed, located in the municipality of Petrópolis, in the state of Rio de Janeiro. This process enabled the generation of land-use and vegetation-cover maps and constituted an important step in an environmental assessment capable of supporting environmental management and monitoring activities and the historical analysis of forest remnants over recent years. In this research, thematic classes were adopted to allow classification of the digital images at the 1:40,000 scale. The classes adopted were: rocky outcrop and rupestrian vegetation; built structures; agricultural areas; and vegetation. Studies were carried out to identify the best classification method. First, classification was performed in the SPRING system, testing the best similarity and area parameters for fragment detection, for the vegetation class only. An attempt was made to classify the remaining land-use classes directly in SPRING, but this proved unfeasible due to conflicts among the classes; therefore, only the vegetation class was classified and quantified in that system. To continue the research, a visual interpretation was then performed in the ArcGis system for all land-use classes, enabling the mapping of the dynamics of human expansion into the Atlantic Forest in the study area and a historical analysis of its remnants for the years 1965, 1975, 1994 and 2006.

Relevance: 90.00%

Abstract:

In 1828, a phenomenon was observed under the microscope: tiny pollen grains immersed in a liquid at rest moved about randomly, tracing a disordered motion. The question was how to understand this movement. About 80 years later, Einstein (1905) developed a mathematical formulation to explain this phenomenon, known as Brownian motion, a theory that has since been developed in many areas of knowledge, including, recently, computational modeling. The aim of this work is to set out the basic assumptions underlying the simple random walk, considering experiments with and without a boundary-value problem, for a better understanding of the algorithms applied to computational problems. The tools needed to apply simulation models of the simple random walk in the first three spatial dimensions are presented. Attention is directed both to the simple random walk itself and to possible applications to the gambler's ruin problem and to the spread of viruses in computer networks. Algorithms for the one-dimensional simple random walk, with and without the boundary-value problem, were developed on the R platform. They were similarly implemented for two- and three-dimensional spaces, enabling future applications to the problem of virus spreading in computer networks and serving as motivation for the study of the Heat Equation, although a deeper grounding in Physics and Probability is needed to pursue that application.
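The boundary-value variant of the simple random walk described above is exactly the gambler's ruin problem. A short Python sketch (the work itself used R) estimates the ruin probability by simulation and can be checked against the classical fair-game answer 1 - k/N:

```python
import numpy as np

# 1D simple random walk with absorbing boundaries (gambler's ruin).
# A gambler starts with capital k and plays fair +1/-1 rounds until
# reaching 0 (ruin) or N (fortune). For a fair game the classical
# ruin probability is 1 - k/N.

rng = np.random.default_rng(42)

def ruin(k, N, p=0.5):
    """Walk from k until hitting 0 or N; True if ruined (absorbed at 0)."""
    x = k
    while 0 < x < N:
        x += 1 if rng.random() < p else -1
    return x == 0

k, N, trials = 3, 10, 5000
est = sum(ruin(k, N) for _ in range(trials)) / trials
print(est)   # should be close to 1 - 3/10 = 0.7
```

Dropping the `0 < x < N` condition gives the unbounded walk, and replacing the scalar `x` with a 2D or 3D integer vector gives the higher-dimensional walks the work implements.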

Relevance: 90.00%

Abstract:

We show how machine learning techniques based on Bayesian inference can be used to reach new levels of realism in the computer simulation of molecular materials, focusing here on water. We train our machine-learning algorithm using accurate, correlated quantum chemistry, and predict energies and forces in molecular aggregates ranging from clusters to solid and liquid phases. The widely used electronic-structure methods based on density-functional theory (DFT) give poor accuracy for molecular materials like water, and we show how our techniques can be used to generate systematically improvable corrections to DFT. The resulting corrected DFT scheme gives remarkably accurate predictions for the relative energies of small water clusters and of different ice structures, and greatly improves the description of the structure and dynamics of liquid water.
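The core idea, learning a correction on top of a cheap electronic-structure method, can be sketched with Gaussian-process regression on a synthetic one-dimensional "energy surface". The two energy functions below are stand-ins, not real DFT or quantum-chemistry data.

```python
import numpy as np

# Toy "learn the correction" scheme: fit the difference between an accurate
# reference energy and a cheap model energy with Gaussian-process regression,
# then add the predicted correction to the cheap model.

def e_cheap(x):                      # "DFT-like" energy surface (synthetic)
    return np.sin(x)

def e_ref(x):                        # "correlated reference" (synthetic)
    return np.sin(x) + 0.3 * np.cos(2.0 * x)

def rbf(a, b, ell=0.7):
    """Squared-exponential kernel between two 1D descriptor sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

xt = np.linspace(0.0, 6.0, 15)       # reference calculations (training set)
yt = e_ref(xt) - e_cheap(xt)         # correction to be learned

K = rbf(xt, xt) + 1e-6 * np.eye(len(xt))   # kernel matrix plus jitter
alpha = np.linalg.solve(K, yt)

def corrected(x):
    """Cheap energy plus the GP posterior-mean correction."""
    return e_cheap(x) + rbf(x, xt) @ alpha

xs = np.linspace(0.5, 5.5, 50)
err_raw = np.max(np.abs(e_cheap(xs) - e_ref(xs)))
err_cor = np.max(np.abs(corrected(xs) - e_ref(xs)))
print(err_raw, err_cor)              # the learned correction shrinks the error
```

In the real scheme the 1D descriptor is replaced by a many-body representation of atomic environments and the training data by correlated quantum-chemistry energies and forces, but the "systematically improvable correction" structure is the same.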

Relevance: 90.00%

Abstract:

The AMPS simulator, developed at Pennsylvania State University, has been used to simulate the photovoltaic performance of nc-Si:H/c-Si solar cells. It is shown that interface states are essential factors prominently influencing the open-circuit voltage (V_OC) and fill factor (FF) of these structured solar cells. The short-circuit current density (J_SC), or spectral response, appears more sensitive to the thickness of the intrinsic a-Si:H buffer layer inserted between the n+-nc-Si:H layer and the p-c-Si substrate. The impact of the band-gap offset on solar cell performance has also been analyzed. As ΔE_C increases, the degradation of V_OC and FF owing to interface states is dramatically recovered. This implies that interface states cannot merely be regarded as carrier recombination centres, and the impact of the interfacial layer on devices needs further investigation. A theoretical maximum efficiency of up to 31.17% (AM1.5, 100 mW/cm^2, 0.40-1.1 μm) has been obtained with a BSF structure, an idealized light-trapping effect (R_F = 0, R_B = 1) and no interface states.

Relevance: 90.00%

Abstract:

The crystal formation process of charged colloidal particles is investigated using Brownian dynamics (BD) simulations. The particles are assumed to interact through the pair-additive repulsive Yukawa potential. The time evolution of the crystallization process and the crystal structure during the simulation are characterized by means of radial distribution functions (RDF) and the mean square displacement (MSD). The simulations show that when the interaction is long-ranged, particles can spontaneously assemble into body-centered-cubic (BCC) arrays at relatively low particle number density. When the interaction is short-ranged, particles become trapped in a stagnant disordered configuration as the number density increases, before crystallization can take place. The simulations further show that as long as the trapped configurations are bypassed, face-centered-cubic (FCC) structures can be achieved and are actually more stable than the BCC structures. (C) 2010 Elsevier Inc. All rights reserved.
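A minimal Brownian-dynamics loop of this kind, with a repulsive Yukawa pair force and the MSD as diagnostic, can be sketched as follows. The particle numbers, couplings and time step are illustrative and far smaller than a real crystallization study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Overdamped Brownian dynamics with a repulsive Yukawa pair potential
#   U(r) = A exp(-kappa r) / r
# and the mean square displacement (MSD) as diagnostic (kT = 1 units).

N, L = 27, 6.0                    # particles in a periodic cubic box (assumed)
A, kappa = 5.0, 1.0               # Yukawa amplitude and screening (assumed)
dt, D, steps = 1e-3, 1.0, 400     # time step, diffusion coefficient

g = np.linspace(0.5, L - 0.5, 3)  # start from a 3x3x3 lattice
pos = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T.copy()
unwrapped = pos.copy()            # unwrapped coordinates for the MSD
start = pos.copy()

def forces(p):
    d = p[:, None, :] - p[None, :, :]
    d -= L * np.round(d / L)                      # minimum-image convention
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)
    # f(r) = -dU/dr = A e^{-kr} (1/r^2 + k/r), directed along d
    mag = A * np.exp(-kappa * r) * (1.0 / r**2 + kappa / r)
    return ((mag / r)[..., None] * d).sum(axis=1)

msd = []
for _ in range(steps):
    disp = D * forces(pos) * dt \
         + np.sqrt(2.0 * D * dt) * rng.standard_normal(pos.shape)
    pos = (pos + disp) % L
    unwrapped += disp
    msd.append(float(np.mean(np.sum((unwrapped - start) ** 2, axis=1))))

print(msd[-1])   # MSD grows roughly linearly in time (diffusive regime)
```

In a crystallizing run the MSD instead saturates as particles localize on lattice sites, and an RDF computed from the same `d`, `r` arrays distinguishes BCC from FCC ordering.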

Relevance: 90.00%

Abstract:

The effect of the hydrophobic properties of blocks B and C on the aggregate morphologies formed by ABC linear triblock copolymers in a selective solvent was studied through self-consistent field theory. Five typical micelle morphologies, including core-shell-corona, hamburger-like, and segmented-wormlike structures, were obtained by changing the hydrophobic properties of blocks B and C. The simulation results indicate that the shape and size of a micelle are basically controlled by the degree of hydrophobicity of the middle block B, whereas the type of micelle is mainly determined by the degree of hydrophobicity of the end block C.

Relevance: 90.00%

Abstract:

BACKGROUND: The ability to write clearly and effectively is of central importance to the scientific enterprise. Encouraged by the success of simulation environments in other biomedical sciences, we developed WriteSim TCExam, an open-source, Web-based, textual simulation environment for teaching effective writing techniques to novice researchers. We shortlisted and modified an existing open-source application, TCExam, to serve as a textual simulation environment. After testing usability internally in our team, we conducted formal field usability studies with novice researchers. These were followed by formal surveys with researchers fitting the roles of administrators and users (novice researchers). RESULTS: The development process was guided by feedback from usability tests within our research team. Online surveys and formal studies, involving members of the Research on Research group and selected novice researchers, show that the application is user-friendly. Additionally, it has been used to train 25 novice researchers in scientific writing to date and has generated encouraging results. CONCLUSION: WriteSim TCExam is the first Web-based, open-source textual simulation environment designed to complement traditional scientific writing instruction. While initial reviews by students and educators have been positive, a formal study is needed to measure its benefits in comparison to standard instructional methods.

Relevance: 90.00%

Abstract:

On-board image guidance, such as cone-beam CT (CBCT) and kV/MV 2D imaging, is essential in many radiation therapy procedures, such as intensity modulated radiotherapy (IMRT) and stereotactic body radiation therapy (SBRT). These imaging techniques provide predominantly anatomical information for treatment planning and target localization. Recently, studies have shown that treatment planning based on functional and molecular information about the tumor and surrounding tissue could potentially improve the effectiveness of radiation therapy. However, current on-board imaging systems are limited in their functional and molecular imaging capability. Single Photon Emission Computed Tomography (SPECT) is a candidate to achieve on-board functional and molecular imaging. Traditional SPECT systems typically take 20 minutes or more for a scan, which is too long for on-board imaging. A robotic multi-pinhole SPECT system was proposed in this dissertation to provide shorter imaging time by using a robotic arm to maneuver the multi-pinhole SPECT system around the patient in position for radiation therapy.

A 49-pinhole collimated SPECT detector and its shielding were designed and simulated in this work using computer-aided design (CAD) software. The trajectories of the robotic arm about the patient, treatment table and gantry in the radiation therapy room were investigated, along with several detector assemblies, including parallel-hole, single-pinhole and 49-pinhole collimated detectors. The rail-mounted system was designed to enable a full range of detector positions and orientations for various crucial treatment sites, including the head and torso, while avoiding collision with the linear accelerator (LINAC), patient table and patient.

An alignment method was developed in this work to calibrate the on-board robotic SPECT to the LINAC coordinate frame and to the coordinate frames of other on-board imaging systems such as CBCT. This alignment method utilizes line sources and a single pinhole projection of those line sources. The model consists of multiple alignment parameters which map line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. Computer-simulation studies and experimental evaluations were performed as a function of the number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise and acquisition geometry. In the computer-simulation studies, when there was no error in determining the angles (α) and offsets (ρ) of the measured projections, the six alignment parameters (3 translational and 3 rotational) were estimated perfectly using three line sources. When the angles (α) and offsets (ρ) were provided by the Radon transform, the estimation accuracy was reduced. The estimation error was associated with rounding errors of the Radon transform, finite line-source width, Poisson noise, the number of line sources, intrinsic camera resolution and the detector acquisition geometry. The estimation accuracy was significantly improved by using 4 line sources rather than 3, and also by using thinner line-source projections (obtained with better intrinsic detector resolution). With 5 line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for the detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist.
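The forward model behind this calibration can be illustrated with a simplified pinhole geometry (pinhole at the origin, detector plane at z = -f, and an in-plane detector shift as the only unknown; the real method estimates six pose parameters). A 3D line source projects to a 2D line whose Radon-style angle and offset respond predictably to the detector pose:

```python
import numpy as np

# Simplified forward model: a 3D line source seen through one pinhole
# projects to a 2D line on the detector; the line's angle/offset (as a
# Radon transform would report) constrain the detector pose. Geometry and
# numbers below are illustrative, not the dissertation's calibration model.

f = 10.0                                   # pinhole focal length (cm, assumed)

def project(points, shift=np.zeros(2)):
    """Pinhole projection of 3D points onto the detector plane, followed by
    an in-plane detector translation 'shift' (the unknown pose here)."""
    p = np.asarray(points, float)
    uv = -f * p[:, :2] / p[:, 2:3]
    return uv + shift

def line_angle_offset(uv):
    """Angle alpha and offset rho of the best-fit 2D line through uv, in
    the Radon parameterization u*cos(a) + v*sin(a) = rho."""
    c = uv.mean(axis=0)
    _, _, Vt = np.linalg.svd(uv - c)
    n = Vt[1]                              # unit normal of the fitted line
    rho = c @ n
    if rho < 0:
        n, rho = -n, -rho
    return np.arctan2(n[1], n[0]), rho

# a line source in front of the pinhole, sampled at a few points
ts = np.linspace(-1.0, 1.0, 9)
line3d = np.array([[1.0 + 0.5 * t, 2.0 - 0.3 * t, 5.0 + 0.2 * t] for t in ts])

true_shift = np.array([0.7, -0.4])
a_meas, r_meas = line_angle_offset(project(line3d, true_shift))
a_model, r_model = line_angle_offset(project(line3d))

# A pure detector translation leaves the projected angle unchanged and moves
# the offset by the translation's component along the line normal:
n = np.array([np.cos(a_model), np.sin(a_model)])
print(a_meas - a_model, (r_meas - r_model) - true_shift @ n)
```

With several non-parallel line sources, each contributing one such (angle, offset) pair, the full set of translational and rotational parameters becomes identifiable, which is why the estimation accuracy in the dissertation improves with the number of line sources.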

Simulation studies were performed to investigate the improvement in imaging sensitivity and in the accuracy of hot-sphere localization for breast imaging of patients in the prone position. A 3D XCAT phantom was simulated in the prone position with nine hot spheres of 10 mm diameter added in the left breast. A no-treatment-table case and two commercial prone breast boards, 7 and 24 cm thick, were simulated. Different pinhole focal lengths were assessed for root-mean-square error (RMSE). The pinhole focal lengths resulting in the lowest RMSE values were 12 cm, 18 cm and 21 cm for no table, the thin board and the thick board, respectively. In both the no-table and thin-board cases, all 9 hot spheres were easily visualized above background with 4-minute scans using the 49-pinhole SPECT system, while seven of the nine hot spheres were visible with the thick board. In comparison with a parallel-hole system, our 49-pinhole system shows reduced noise and bias in these simulation cases. These results correspond to the smaller radii of rotation possible in the no-table case and with the thinner prone board. Similarly, localization accuracy with the 49-pinhole system was significantly better than with the parallel-hole system for both the thin and thick prone boards. Median localization errors for the 49-pinhole system with the thin board were less than 3 mm for 5 of the 9 hot spheres, and less than 6 mm for the other 4. Median localization errors for the 49-pinhole system with the thick board were less than 4 mm for 5 of the 9 hot spheres, and less than 8 mm for the other 4.

Besides prone breast imaging, respiratory-gated region-of-interest (ROI) imaging of lung tumors was also investigated. A simulation study was conducted on the potential of multi-pinhole ROI SPECT to alleviate the noise effects associated with respiratory-gated SPECT imaging of the thorax. Two 4D XCAT digital phantoms were constructed, with either a 10 mm or a 20 mm diameter tumor added in the right lung. The maximum diaphragm motion was 2 cm (for the 10 mm tumor) or 4 cm (for the 20 mm tumor) in the superior-inferior direction and 1.2 cm in the anterior-posterior direction. Projections were simulated with a 4-minute acquisition time (40 seconds per each of 6 gates) using either the ROI SPECT system (49-pinhole) or reference single and dual conventional broad-cross-section, parallel-hole collimated SPECT systems. The SPECT images were reconstructed using OSEM with up to 6 iterations. Images were evaluated as a function of gate by profiles, noise-versus-bias curves, and a numerical observer performing a forced-choice localization task. Even for the 20 mm tumor, the 49-pinhole imaging ROI was found sufficient to fully encompass the usual clinical range of diaphragm motion. Averaged over the 6 gates, noise at iteration 6 of 49-pinhole ROI imaging (10.9 µCi/ml) was approximately comparable to noise at iteration 2 of the dual and single parallel-hole, broad-cross-section systems (12.4 µCi/ml and 13.8 µCi/ml, respectively). The corresponding biases were much lower for the 49-pinhole ROI system (3.8 µCi/ml), versus 6.2 µCi/ml and 6.5 µCi/ml for the dual and single parallel-hole systems, respectively. Median localization errors averaged over the 6 gates, for the 10 mm and 20 mm tumors respectively, were 1.6 mm and 0.5 mm using the ROI imaging system and 6.6 mm and 2.3 mm using the dual parallel-hole, broad-cross-section system. The results demonstrate substantially improved imaging via ROI methods. One important application may be gated imaging of patients in position for radiation therapy.

A robotic SPECT imaging system was constructed utilizing a gamma camera detector (Digirad 2020tc) and a robot (KUKA KR150-L110). An imaging study was performed with a phantom (PET CT Phantom™) which includes 5 spheres of 10, 13, 17, 22 and 28 mm in diameter. The phantom was placed on a flat-top couch. SPECT projections were acquired with a parallel-hole collimator and a single-pinhole collimator, both without background in the phantom and with background at 1/10th of the sphere activity concentration. The imaging trajectories of the parallel-hole and pinhole collimated detectors spanned 180 degrees and 228 degrees, respectively. The pinhole detector viewed a 14.7-cm-diameter common volume which encompassed the 28 mm and 22 mm spheres. The common volume for the parallel-hole detector was a 20.8-cm-diameter cylinder which encompassed all five spheres in the phantom. The maneuverability of the robotic system was tested by navigating the detector to trace the flat-top table while avoiding collision with the table and maintaining the closest possible proximity to the common volume. For image reconstruction, detector trajectories were described by the radius of rotation and the detector rotation angle θ. These reconstruction parameters were obtained from the robot base and tool coordinates. The robotic SPECT system was able to maneuver the parallel-hole and pinhole collimated SPECT detectors in close proximity to the phantom, minimizing the impact of the flat-top couch on the detector-to-center-of-rotation (COR) distance. In the no-background case, all five spheres were visible in the reconstructed parallel-hole and pinhole images. In the with-background case, three spheres of 17, 22 and 28 mm diameter were readily observed with parallel-hole imaging, and the targeted spheres (22 and 28 mm diameter) were readily observed in pinhole ROI imaging.

In conclusion, the proposed on-board robotic SPECT can be aligned to the LINAC/CBCT coordinate frames using a single pinhole projection of the line-source phantom, from which the alignment parameters are estimated. This alignment method may be important for multi-pinhole SPECT, where the relative pinhole alignment may vary during rotation. For single-pinhole and multi-pinhole SPECT imaging on board radiation therapy machines, the method could provide alignment of the SPECT coordinates with those of CBCT and the LINAC. In simulation studies of prone breast imaging and respiratory-gated lung imaging, the 49-pinhole detector showed better tumor contrast recovery and localization in a 4-minute scan than the parallel-hole detector. On-board SPECT could be achieved by a robot maneuvering a SPECT detector about patients in position for radiation therapy on a flat-top couch. The robot's inherent coordinate frames could be an effective means of estimating detector pose for use in SPECT image reconstruction.