873 results for Beam complexity
Abstract:
PURPOSE Images from computed tomography (CT), combined with navigation systems, improve the outcomes of local thermal therapies that are dependent on accurate probe placement. Although the use of CT is desirable, its availability for time-consuming radiological interventions is limited. Alternatively, three-dimensional images from C-arm cone-beam CT (CBCT) can be used. The goal of this study was to evaluate the accuracy of navigated CBCT-guided needle punctures, controlled with CT scans. METHODS Five series of five navigated punctures were performed on a nonrigid phantom using a liver-specific navigation system and a CBCT volumetric dataset for planning and navigation. To mimic targets, five titanium screws were fixed to the phantom. Target positioning accuracy (TPE_CBCT) was computed from control CT scans and divided into lateral and longitudinal components. Additionally, CBCT-CT guidance accuracy was derived by performing CBCT-to-CT image coregistration and measuring TPE_CBCT-CT from the fused datasets. Image coregistration was evaluated using the fiducial registration error (FRE_CBCT-CT) and the target registration error (TRE_CBCT-CT). RESULTS Positioning accuracies in the lateral directions pertaining to CBCT (TPE_CBCT = 2.1 ± 1.0 mm) were found to be better than those achieved in a previous study using CT (TPE_CT = 2.3 ± 1.3 mm). The image coregistration error was 0.3 ± 0.1 mm, resulting in an average TRE of 2.1 ± 0.7 mm (N = 5 targets) and an average Euclidean TPE_CBCT-CT of 3.1 ± 1.3 mm. CONCLUSIONS Stereotactic needle punctures may be planned and performed on volumetric CBCT images and controlled with multidetector CT, with positioning accuracy similar to or higher than that of punctures performed using CT scanners.
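As a rough illustration of how a target positioning error can be split into the lateral and longitudinal components mentioned above, the sketch below decomposes the tip-to-target vector relative to the planned needle trajectory. The function name and coordinates are hypothetical and do not reproduce the study's actual analysis code.

```python
# Minimal sketch (not the study's code): decompose a needle-tip target
# positioning error (TPE) into components along and perpendicular to the
# planned trajectory.
import numpy as np

def tpe_components(planned_target, needle_tip, trajectory_direction):
    """Return (euclidean, longitudinal, lateral) error in mm."""
    d = np.asarray(trajectory_direction, dtype=float)
    d /= np.linalg.norm(d)                                # unit vector along planned path
    err = np.asarray(needle_tip, float) - np.asarray(planned_target, float)
    euclidean = np.linalg.norm(err)                       # total TPE
    longitudinal = abs(err @ d)                           # error along the trajectory
    lateral = np.linalg.norm(err - (err @ d) * d)         # error perpendicular to it
    return euclidean, longitudinal, lateral

# Hypothetical coordinates in mm (illustration only)
print(tpe_components([10.0, 20.0, 30.0], [11.2, 19.4, 31.5], [0.0, 0.0, 1.0]))
```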
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one-thousand-year-long, idealized 2× and 4× CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The surface air temperature response is the linear sum of the responses to the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
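The additivity statement above can be checked with a simple diagnostic: sum the temperature anomalies from the single-forcing runs and compare them to the all-forcing historical run. The sketch below is only a minimal illustration of that check, with hypothetical array names; it is not the analysis code used in the EMIC intercomparison.

```python
# Minimal sketch: test whether single-forcing temperature responses add
# linearly to the all-forcing response (hypothetical arrays of annual means).
import numpy as np

def additivity_residual(single_forcing_runs, all_forcing_run, control_run):
    """RMS difference between the sum of individual anomalies and the
    all-forcing anomaly, all relative to the control simulation."""
    anomalies = [run - control_run for run in single_forcing_runs]
    linear_sum = np.sum(anomalies, axis=0)
    full_response = all_forcing_run - control_run
    return np.sqrt(np.mean((linear_sum - full_response) ** 2))
```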
Abstract:
Myxobacteria are single-celled but social eubacterial predators. Upon starvation they build multicellular fruiting bodies using a developmental program that progressively changes the pattern of cell movement and the repertoire of genes expressed. Development terminates with spore differentiation and is coordinated by both diffusible and cell-bound signals. The growth and development of Myxococcus xanthus are regulated by the integration of multiple signals from outside the cells with physiological signals from within. A collection of M. xanthus cells behaves, in many respects, like a multicellular organism. For these reasons M. xanthus offers unparalleled access to a regulatory network that controls development and that organizes cell movement on surfaces. The genome of M. xanthus is large (9.14 Mb), considerably larger than those of the other sequenced delta-proteobacteria. We suggest that gene duplication and divergence were major contributors to genomic expansion from its progenitor. More than 1,500 duplications specific to the myxobacterial lineage were identified, representing >15% of the total genes. Genes were not duplicated at random; rather, genes for cell-cell signaling, small-molecule sensing, and integrative transcription control were amplified selectively. Families of genes encoding the production of secondary metabolites are overrepresented in the genome; they may have been acquired by horizontal gene transfer and are likely to be important for predation.
Abstract:
A first result of the search for νμ → νe oscillations in the OPERA experiment, located at the Gran Sasso Underground Laboratory, is presented. The experiment looked for the appearance of νe in the CNGS neutrino beam using the data collected in 2008 and 2009. The data are compatible with the non-oscillation hypothesis in the three-flavour mixing model. A further analysis of the same data constrains the non-standard oscillation parameters θnew and Δm²new suggested by the LSND and MiniBooNE experiments. For large Δm²new values (>0.1 eV²), the OPERA 90% C.L. upper limit on sin²(2θnew), based on a Bayesian statistical method, reaches the value 7.2 × 10⁻³.
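As a generic illustration of the kind of Bayesian counting-experiment limit mentioned above (a sketch with a flat prior and placeholder numbers, not OPERA's actual statistical analysis), a 90% C.L. upper limit on a Poisson signal over a known background can be computed as follows.

```python
# Generic sketch of a Bayesian 90% C.L. upper limit for a Poisson counting
# experiment with a known expected background and a flat prior on the signal.
# The counts used below are placeholders, not OPERA data.
import numpy as np
from scipy.stats import poisson

def bayesian_upper_limit(n_obs, background, cl=0.90, s_max=50.0, n_points=5001):
    """Smallest signal s_up such that P(signal <= s_up | n_obs) >= cl."""
    s_grid = np.linspace(0.0, s_max, n_points)
    posterior = poisson.pmf(n_obs, s_grid + background)  # flat prior: posterior ∝ likelihood
    cdf = np.cumsum(posterior)
    cdf /= cdf[-1]
    return s_grid[np.searchsorted(cdf, cl)]

print(bayesian_upper_limit(n_obs=5, background=4.2))  # placeholder counts
```

Translating such a limit on the number of signal events into the quoted limit on sin²(2θnew) additionally requires the expected νe yield as a function of the oscillation parameters, which is specific to the experiment's exposure and efficiencies.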
Abstract:
The T2K collaboration reports a precision measurement of muon neutrino disappearance with an off-axis neutrino beam with a peak energy of 0.6 GeV. Near detector measurements are used to constrain the neutrino flux and cross section parameters. The Super-Kamiokande far detector, which is 295 km downstream of the neutrino production target, collected data corresponding to 3.01 × 10²⁰ protons on target. In the absence of neutrino oscillations, 205 ± 17 (syst.) events are expected to be detected, and only 58 muon neutrino event candidates are observed. A fit to the neutrino rate and energy spectrum assuming three neutrino flavors, normal mass hierarchy and θ23 ≤ π/4 yields a best-fit mixing angle sin²(2θ23) = 1.000 and mass splitting |Δm²32| = 2.44 × 10⁻³ eV²/c⁴. If θ23 ≥ π/4 is assumed, the best-fit mixing angle changes to sin²(2θ23) = 0.999 and the mass splitting remains unchanged.
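For orientation, the standard two-flavour approximation shows how the fitted parameters enter the measured event deficit (the full T2K fit uses the complete three-flavour framework; this simplified form is given only for illustration):

\[ P(\nu_\mu \to \nu_\mu) \approx 1 - \sin^2(2\theta_{23})\,\sin^2\!\left(\frac{1.27\,\Delta m^2_{32}\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E_\nu\,[\mathrm{GeV}]}\right) \]

With L = 295 km, E ≈ 0.6 GeV and |Δm²32| ≈ 2.44 × 10⁻³ eV², the oscillation phase is about 1.5 rad, close to the first oscillation maximum, consistent with the strong suppression from roughly 205 expected to 58 observed events.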
Abstract:
The OPERA neutrino experiment is designed to perform the first observation of neutrino oscillations in direct appearance mode in the νμ→ντ channel, via the detection of the τ-leptons created in charged current ντ interactions. The detector, located in the underground Gran Sasso Laboratory, consists of an emulsion/lead target with an average mass of about 1.2 kt, complemented by electronic detectors. It is exposed to the CERN Neutrinos to Gran Sasso beam, with a baseline of 730 km and a mean energy of 17 GeV. The observation of the first ντ candidate event and the analysis of the 2008-2009 neutrino sample have been reported in previous publications. This work describes substantial improvements in the analysis and in the evaluation of the detection efficiencies and backgrounds using new simulation tools. The analysis is extended to a sub-sample of 2010 and 2011 data, resulting from an electronic detector-based pre-selection, in which an additional ντ candidate has been observed. The significance of the two events in terms of a νμ→ντ oscillation signal is 2.40 σ.
Abstract:
The T2K experiment has observed electron neutrino appearance in a muon neutrino beam produced 295 km from the Super-Kamiokande detector with a peak energy of 0.6 GeV. A total of 28 electron neutrino events were detected with an energy distribution consistent with an appearance signal, corresponding to a significance of 7.3σ when compared to 4.92 ± 0.55 expected background events. In the PMNS mixing model, the electron neutrino appearance signal depends on several parameters, including the three mixing angles θ12, θ23, θ13, a mass difference Δm²32, and a CP-violating phase δCP. In this neutrino oscillation scenario, assuming |Δm²32| = 2.4 × 10⁻³ eV², sin²θ23 = 0.5, δCP = 0, and Δm²32 > 0 (Δm²32 < 0), a best-fit value of sin²(2θ13) = 0.140 +0.038/−0.032 (0.170 +0.045/−0.037) is obtained.
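In the same spirit as the disappearance case, the leading-order appearance probability (neglecting matter effects and the δCP-dependent interference terms that the full analysis includes) indicates how sin²(2θ13) drives the expected νe signal:

\[ P(\nu_\mu \to \nu_e) \approx \sin^2\theta_{23}\,\sin^2(2\theta_{13})\,\sin^2\!\left(\frac{1.27\,\Delta m^2_{31}\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E_\nu\,[\mathrm{GeV}]}\right) \]

The subleading δCP and matter-effect terms are the reason the quoted best-fit sin²(2θ13) depends on the assumed δCP value and mass ordering.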
Abstract:
A compact adjustable focusing system for a 2 MeV H⁻ RFQ Linac is designed, constructed and tested based on four permanent magnet quadrupoles (PMQ). A PMQ model is realised using finite element simulations, providing an integrated field gradient of 2.35 T with a maximal field gradient of 57 T/m. A prototype is constructed and the magnetic field is measured, demonstrating good agreement with the simulation. Particle track simulations provide initial values for the quadrupole positions. Accordingly, four PMQs are constructed and assembled on the beam line, and their positions are then tuned to obtain a minimal beam spot size of (1.2 × 2.2) mm² on target. This paper describes an adjustable PMQ beam line for an external ion beam. The novel compact design based on commercially available NdFeB magnets allows high flexibility for ion beam applications.
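As a quick consistency check on the two quoted field figures (an inference from the abstract's numbers, not a value stated in it), the effective magnetic length of one quadrupole follows from the ratio of integrated to peak gradient:

\[ L_\mathrm{eff} \approx \frac{\int G\,\mathrm{d}l}{G_\mathrm{max}} = \frac{2.35\ \mathrm{T}}{57\ \mathrm{T/m}} \approx 41\ \mathrm{mm}, \]

a plausible size for a compact NdFeB PMQ.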
Abstract:
The MDAH pencil-beam algorithm developed by Hogstrom et al (1981) has been widely used in clinics for electron beam dose calculations in radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements have been incorporated into the pencil-beam algorithm: one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. Comparisons of the resulting calculated dose distributions with measured dose distributions for several test phantoms have been made. From these results it is concluded (1) that the fluence-based algorithm is more accurate for dose calculation in an inhomogeneous slab phantom, and (2) that the fluence-based calculation provides only a limited improvement to the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. The latter inaccuracy is believed to be primarily due to assumptions made in the pencil beam's modeling of the complex phantom or patient geometry. A pencil-beam redefinition model was developed for the calculation of electron beam dose distributions in three dimensions. The primary aim of this redefinition model was to solve the dosimetry problem presented by deep inhomogeneities, which was the major deficiency of the enhanced version of the MDAH pencil-beam algorithm. The pencil-beam redefinition model is based on the theory of electron transport and redefines the pencil beams at each layer of the medium. The unique approach of this model is that all the physical parameters of a given pencil beam are characterized for multiple energy bins. Comparisons of the calculated dose distributions with measured dose distributions for a homogeneous water phantom and for phantoms with deep inhomogeneities have been made. From these results it is concluded that the redefinition algorithm is superior to the conventional, fluence-based, pencil-beam algorithm, especially in predicting the dose distribution downstream of a local inhomogeneity. The accuracy of this algorithm appears sufficient for clinical use, and the algorithm is structured for future expansion of the physical model if required for site-specific treatment planning problems.
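To illustrate the pencil-beam idea underlying both algorithms discussed above, the toy sketch below superposes narrow pencils whose lateral Gaussian spread grows with depth. It is a minimal 2D illustration only; the kernel widths, weights, and depth-dose shape are made up, and it does not implement the MDAH or the redefinition algorithm.

```python
# Toy 2D pencil-beam superposition: broad-beam dose at one depth as a sum of
# Gaussian pencils whose lateral sigma grows with depth.
import numpy as np

def pencil_beam_dose(x_grid, depth, pencil_positions, pencil_weights,
                     central_axis_dose, sigma_of_depth):
    """Dose profile across x_grid at a given depth from a set of pencil beams."""
    sigma = sigma_of_depth(depth)
    dose = np.zeros_like(x_grid, dtype=float)
    for x0, w in zip(pencil_positions, pencil_weights):
        lateral = np.exp(-0.5 * ((x_grid - x0) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
        dose += w * central_axis_dose(depth) * lateral
    return dose

# Hypothetical usage: flat 6 cm field, sigma growing linearly with depth (cm)
x = np.linspace(-6.0, 6.0, 241)
pencils = np.linspace(-3.0, 3.0, 61)
profile = pencil_beam_dose(
    x, depth=3.0,
    pencil_positions=pencils,
    pencil_weights=np.full(pencils.size, 1.0 / pencils.size),
    central_axis_dose=lambda z: np.exp(-((z - 2.5) / 3.0) ** 2),
    sigma_of_depth=lambda z: 0.1 + 0.15 * z,
)
```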
Abstract:
A three-dimensional model has been proposed that uses Monte Carlo and fast Fourier transform (FFT) convolution techniques to calculate the dose distribution from a fast neutron beam. The method transports scattered neutrons and photons in the forward, lateral, and backward directions, and protons, electrons, and positrons in the forward and lateral directions, by convolving energy-spread kernels with the initial distributions of available energy from primary interactions. The primary neutron and photon spectra have been derived from narrow-beam attenuation measurements. The positions and strengths of the effective primary neutron, scattered neutron, and photon sources have been derived from dual ion chamber measurements. The size of the effective primary neutron source has been measured using a copper activation technique. Heterogeneous tissue calculations require a weighted sum of two convolutions for each component, since the kernels must be spatially invariant for FFT convolution. Comparisons between calculations and measurements were performed for several water and heterogeneous phantom geometries.
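The convolution step described above can be sketched in a few lines: the dose from one particle component is the FFT convolution of its spatially invariant energy-spread kernel with the distribution of available energy from primary interactions, and the heterogeneous case is handled as a weighted sum of two such convolutions. The arrays, kernels, and weights below are placeholders, not the measured beam data.

```python
# Minimal sketch of the kernel-convolution dose step (placeholder arrays).
import numpy as np
from scipy.signal import fftconvolve

def component_dose(available_energy, kernel):
    """Dose contribution of one particle component on a regular 3-D grid."""
    return fftconvolve(available_energy, kernel, mode="same")

def heterogeneous_dose(available_energy, kernel_a, kernel_b, weight_a, weight_b):
    """Weighted sum of two invariant-kernel convolutions, as required when a
    single spatially varying kernel cannot be used with the FFT."""
    return (weight_a * component_dose(available_energy, kernel_a)
            + weight_b * component_dose(available_energy, kernel_b))
```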
Abstract:
The aim of this study was to assess the potential of monoenergetic computed tomography (CT) images to reduce beam hardening artifacts from dental restorations on dental post-mortem CT (PMCT), in comparison to standard CT images. Thirty human decedents (15 male, 58 ± 22 years) with dental restorations were examined using standard single-energy CT (SECT) and dual-energy CT (DECT). DECT data were used to generate monoenergetic CT images reflecting the X-ray attenuation at energy levels of 64, 69, and 88 keV, and at an individually adjusted optimal energy level called OPTkeV. Artifact reduction and image quality of SECT and monoenergetic CT were assessed objectively and subjectively by two blinded readers. Subjectively, beam hardening artifacts decreased visibly in 28/30 cases after monoenergetic CT reconstruction. Inter- and intra-reader agreement was good (κ = 0.72 and κ = 0.73, respectively). Beam hardening artifacts decreased significantly with increasing monoenergetic energy levels (repeated-measures ANOVA, p < 0.001). Artifact reduction was greatest on monoenergetic CT images at OPTkeV. Mean OPTkeV was 108 ± 17 keV. OPTkeV yielded the lowest difference between the CT numbers of streak artifacts and reference tissues (−163 HU). Monoenergetic CT reconstructions significantly reduce beam hardening artifacts from dental restorations and improve the image quality of post-mortem dental CT.
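The objective artifact metric quoted above (−163 HU at OPTkeV) is simply the difference between mean CT numbers in a streak-artifact region and in a reference-tissue region. The sketch below shows that computation with placeholder ROI masks; it is not the study's evaluation code.

```python
# Minimal sketch of the quoted artifact metric: mean HU in a streak-artifact
# ROI minus mean HU in a reference-tissue ROI (placeholder image and masks).
import numpy as np

def streak_artifact_metric(image_hu, streak_mask, reference_mask):
    """Difference between mean CT numbers (HU) of streak and reference ROIs."""
    return float(image_hu[streak_mask].mean() - image_hu[reference_mask].mean())
```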
Abstract:
The relationship between time in dreams and real time has intrigued scientists for centuries. The question of whether actions in dreams take the same time as in wakefulness can be tested using lucid dreams, where the dreamer is able to mark time intervals with prearranged eye movements that can be objectively identified in EOG recordings. Previous research showed an equivalence of time for counting in lucid dreams and in wakefulness (LaBerge, 1985; Erlacher and Schredl, 2004), but Erlacher and Schredl (2004) found that performing squats required about 40% more time in lucid dreams than in the waking state. To find out whether the task modality, the task length, or the task complexity results in prolonged times in lucid dreams, an experiment with three different conditions was conducted. In the first condition, five proficient lucid dreamers spent one to three non-consecutive nights in the sleep laboratory. Participants counted to 10, 20, and 30 in wakefulness and in their lucid dreams. Lucidity and task intervals were time-stamped with left-right-left-right eye movements. The same procedure was used for the second condition, in which eight lucid dreamers had to walk 10, 20, or 30 steps. In the third condition, eight lucid dreamers performed a gymnastics routine, which in the waking state lasted the same time as walking 10 steps. Again, we found that performing a motor task in a lucid dream requires more time than in wakefulness. Longer durations in the dream state were present for all three tasks, but significant differences were found only for the tasks with motor activity (walking and gymnastics). However, no difference was found for relative times (no disproportional time effects), and a more complex motor task did not result in more prolonged times. Longer durations in lucid dreams might be related to the lack of muscular feedback or slower neural processing during REM sleep. Future studies should explore factors that might be associated with prolonged durations.