368 results for PELLETRON ACCELERATORS


Relevance:

10.00%

Publisher:

Abstract:

Reconfigurable computing devices can increase the performance of compute-intensive algorithms by implementing application-specific co-processor architectures. The power cost for this performance gain is often an order of magnitude less than that of modern CPUs and GPUs. Exploiting the potential of reconfigurable devices such as Field-Programmable Gate Arrays (FPGAs) is typically a complex and tedious hardware engineering task. Recently the major FPGA vendors (Altera and Xilinx) have released their own high-level design tools, which have great potential for rapid development of FPGA-based custom accelerators. In this paper, we evaluate Altera's OpenCL Software Development Kit and Xilinx's Vivado High Level Synthesis tool. These tools are compared for their performance, logic utilisation, and ease of development for the test case of a tridiagonal linear system solver.
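As context for the test case, a minimal serial reference implementation of the Thomas algorithm for tridiagonal systems is sketched below in Python/NumPy. This is the textbook algorithm such accelerators would implement, not the authors' OpenCL or Vivado HLS code, and all names and values are illustrative.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (a[0] and c[-1] are unused)."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check against a dense solve on a small diagonally dominant system
n = 8
a = np.random.rand(n); b = np.random.rand(n) + 2.0; c = np.random.rand(n)
d = np.random.rand(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas_solve(a, b, c, d), np.linalg.solve(A, d)))
```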

Relevance:

10.00%

Publisher:

Abstract:

Purpose: The goal of this work was to set out a methodology for measuring and reporting small field relative output and to assess the application of published correction factors across a population of linear accelerators. Methods and materials: Measurements were made at 6 MV on five Varian iX accelerators using two PTW T60017 unshielded diodes. Relative output readings and profile measurements were made for nominal square field sizes of side 0.5 to 1.0 cm. The actual in-plane (A) and cross-plane (B) field widths were taken to be the FWHM measured at the 50% isodose level. An effective field size, defined as $FS_{eff} = \sqrt{A \cdot B}$, was calculated and is presented as a field size metric. $FS_{eff}$ was used to linearly interpolate between published Monte Carlo (MC) calculated $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ values to correct for the diode over-response in small fields. Results: The relative output data reported as a function of the nominal field size differed across the accelerator population by up to nearly 10%. However, using the effective field size for reporting showed that the actual output ratios were consistent across the accelerator population to within the experimental uncertainty of ±1.0%. Correcting the measured relative output using $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ at both the nominal and effective field sizes produced output factors that were not identical but differed by much less than the reported experimental and/or MC statistical uncertainties. Conclusions: In general, the proposed methodology removes much of the ambiguity in reporting and interpreting small field dosimetric quantities and facilitates a clear dosimetric comparison across a population of linacs.
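The reporting and correction workflow described above can be illustrated with a short sketch: compute the effective field size from the measured FWHMs and interpolate a published correction factor at that field size. The tabulated correction values below are placeholders for illustration only, not the Monte Carlo data used in the study.

```python
import numpy as np

# Placeholder table of MC-calculated diode correction factors vs field size (cm);
# the actual published k values are not reproduced here.
fs_table = np.array([0.5, 0.6, 0.8, 1.0])
k_table  = np.array([0.94, 0.96, 0.98, 0.99])

def corrected_output(reading_ratio, fwhm_inplane, fwhm_crossplane):
    """Apply the field-size-specific correction at the effective field size."""
    fs_eff = np.sqrt(fwhm_inplane * fwhm_crossplane)   # FS_eff = sqrt(A*B)
    k = np.interp(fs_eff, fs_table, k_table)           # linear interpolation
    return reading_ratio * k, fs_eff

of, fs = corrected_output(0.70, 0.52, 0.55)            # illustrative diode ratio and FWHMs (cm)
print(f"FS_eff = {fs:.2f} cm, corrected output factor = {of:.3f}")
```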

Relevance:

10.00%

Publisher:

Abstract:

To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept and test whether these parameters affect dose perturbations in general, which is important for detector design and for calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 mm FWHM). The resultant dose perturbations were large. For example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each individual machine is not required.
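A dose perturbation of this kind is simply a ratio of scored doses with and without the introduced air gap. The sketch below shows how such a perturbation, and its variation across source parameters, could be tabulated; the numbers are made up for illustration and are not results from the EGSnrc simulations.

```python
import numpy as np

# Hypothetical scored doses (arbitrary units) without and with a 2 mm upstream air gap,
# on a small grid of source parameters (rows: electron energy, columns: spot size).
dose_no_air   = np.array([[1.000, 1.002], [0.998, 1.001]])
dose_with_air = np.array([[0.690, 0.693], [0.688, 0.692]])

perturbation  = dose_with_air / dose_no_air        # perturbation factor per source setting
reduction_pct = (1.0 - perturbation) * 100.0       # % dose reduction caused by the air

spread = reduction_pct.max() - reduction_pct.min() # sensitivity to the source parameters
print(f"mean dose reduction {reduction_pct.mean():.1f}%, spread {spread:.2f} percentage points")
```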

Relevance:

10.00%

Publisher:

Abstract:

Objective: Recently, Taylor et al. reported that use of the BrainLAB m3 microMLC for stereotactic radiosurgery results in a decreased out-of-field dose in the direction of leaf motion compared to the out-of-field dose measured in the direction orthogonal to leaf motion [1]. It was recommended that, where possible, patients should be treated with their superior-inferior axes aligned with the microMLC's leaf-motion direction, to minimise out-of-field doses [1]. This study aimed, therefore, to examine the causes of this asymmetry in out-of-field dose and, in particular, to establish that a similar recommendation need not be made for radiotherapy treatments delivered by linear accelerators without external micro-collimation systems. Methods: Monte Carlo simulations were used to study out-of-field dose from different linear accelerators (the Varian Clinac 21iX and 600C and the Elekta Precise) with and without internal MLCs and external microMLCs [2]. Results: Simulation results for the Varian Clinac 600C linear accelerator with the BrainLAB m3 microMLC confirm Taylor et al.'s [1] published experimental data. The out-of-field dose in the leaf-motion direction is deposited by lower energy (more obliquely scattered) photons than the out-of-field dose in the orthogonal direction. Linear accelerators without microMLCs produce no asymmetry in out-of-field dose. Conclusions: The asymmetry in out-of-field dose previously measured by Taylor et al. [1] results from the shielding characteristics of the BrainLAB m3 microMLC device and is not produced by the linear accelerator to which it is attached.

Relevance:

10.00%

Publisher:

Abstract:

Controlled interaction of high-power pulsed electromagnetic radiation with plasma-exposed solid surfaces is a major challenge in applications spanning from electron beam accelerators in microwave electronics to pulsed laser ablation-assisted synthesis of nanomaterials. It is shown that the efficiency of such interaction can be potentially improved via an additional channel of wave power dissipation due to nonlinear excitation of two counterpropagating surface waves, which are resonant excitations of the plasma-solid system.

Relevance:

10.00%

Publisher:

Abstract:

In recent times, blended polymers have shown a lot of promise in terms of easy processability in different shapes and forms. In the present work, polyaniline emeraldine base (PANi-EB) was doped with camphor sulfonic acid (CSA) and combined with the conducting polymer polyfluorene (PF) as well as the insulating polymer polyvinyl chloride (PVC) to synthesize CSA-doped PANi-PF and PANi-PVC blended polymers. It is well known that PANi becomes highly conducting when doped with CSA. However, its poor mechanical properties, such as low tensile, compressive, and flexural strength, render PANi a non-ideal material to be processed for its various practical applications, such as electromagnetic shielding, anti-corrosion coatings, photolithography and microelectronic devices. Thus the search continues for polymers which are easily processable and capable of showing high conductivity. A PANi-PVC blend was prepared, but it showed low conductivity, which is a limiting factor for certain applications. Therefore, another processable polymer, PF, was chosen as the conducting matrix. Conducting PF can be easily processed into various shapes and forms. A blend mixture was therefore prepared from PANi and PF using CSA as a counter ion, which forms a "bridge" between the two polymeric components of the inter-polymer complex. The two blended polymers were synthesized and investigated for their conductivity behaviour. It was observed that the blended film of CSA-doped PANi-PVC showed a room temperature electrical conductivity of 2.8 × 10⁻⁷ S/cm, whereas the blended film made from CSA-doped PANi with the conducting polymer PF showed a room temperature conductivity of 1.3 × 10⁻⁵ S/cm. The blended films were irradiated with 100 MeV silicon ions, at fluences ranging from 10¹¹ to 10¹³ ions/cm², from the 15 UD Pelletron accelerator at NSC, New Delhi, with a view to increasing their conductivity.

Relevance:

10.00%

Publisher:

Abstract:

Accurate patient positioning is vital for improved clinical outcomes for cancer treatments using radiotherapy. This project has developed Mega Voltage Cone Beam CT using a standard medical linear accelerator to allow 3D imaging of the patient position at treatment time with no additional hardware required. Providing 3D imaging functionality at no further cost allows enhanced patient position verification on older linear accelerators and in developing countries where access to new technology is limited.

Relevance:

10.00%

Publisher:

Abstract:

In technicolor theories the scalar sector of the Standard Model is replaced by a strongly interacting sector. Although the Standard Model has been exceptionally successful, its scalar sector causes theoretical problems that make these theories an attractive alternative. I begin my thesis by considering QCD, which is the best-known example of strong interactions. The theory exhibits two phenomena: confinement and chiral symmetry breaking. I find the low-energy dynamics to be similar to that of the sigma models. I then analyze the problems of the Standard Model Higgs sector, mainly unnaturalness and triviality. Motivated by the example of QCD, I introduce the minimal technicolor model to resolve these problems. I demonstrate that the minimal model is free of anomalies and then deduce the main elements of its low-energy particle spectrum. I find that the particle spectrum contains massless or very light technipions, and also technibaryons and techni-vector mesons with high masses of over 1 TeV. Standard Model fermions remain strictly massless at this stage. Thus I introduce the companion theory of flavor for technicolor, called extended technicolor. I show that the Standard Model fermions and technihadrons receive masses, but that the fermion masses remain too light. I also discuss flavor-changing neutral currents and precision electroweak measurements. I then show that walking technicolor models partly solve these problems. In these models, contrary to QCD, the coupling evolves slowly over a large energy range. This behavior increases the masses so that even the light technihadrons are too heavy to be detected at current particle accelerators. All observed masses of the Standard Model particles can also be generated, except for those of the bottom and top quarks. Thus it is shown in this thesis that, excluding the masses of the third-generation quarks, theories based on walking technicolor can in principle produce the observed particle spectrum.
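For reference, in the one-doublet (minimal) technicolor model the W boson acquires its mass from the technipion decay constant, in direct analogy with the Standard Model Higgs vacuum expectation value; this standard relation (quoted here as background, not as a result of the thesis) fixes the technicolor scale:

```latex
M_W = \frac{g\,F_T}{2} \quad\Longrightarrow\quad F_T = v \simeq 246\ \mathrm{GeV},
```

where g is the SU(2)_L gauge coupling and F_T is the technipion decay constant of the single technidoublet.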

Relevance:

10.00%

Publisher:

Abstract:

Stochastic volatility models are of fundamental importance to the pricing of derivatives. One of the most commonly used models of stochastic volatility is the Heston model, in which the price and volatility of an asset evolve as a pair of coupled stochastic differential equations. The computation of asset prices and volatilities involves the simulation of many sample trajectories with conditioning. The problem is treated using the method of particle filtering. While the simulation of a shower of particles is computationally expensive, each particle behaves independently, making such simulations ideal for massively parallel heterogeneous computing platforms. In this paper, we present our portable OpenCL implementation of the Heston model and discuss its performance and efficiency characteristics on a range of architectures including Intel CPUs, Nvidia GPUs, and Intel Many Integrated Core (MIC) accelerators.
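The coupled Heston dynamics referred to above can be sketched with a plain NumPy full-truncation Euler scheme; every path is independent, which is exactly the property that makes such simulations map well onto GPUs and MIC accelerators. Parameter values are illustrative, and this is not the paper's OpenCL or particle-filter code.

```python
import numpy as np

def heston_paths(s0=100.0, v0=0.04, mu=0.02, kappa=1.5, theta=0.04,
                 xi=0.3, rho=-0.7, T=1.0, steps=252, n_paths=10_000, seed=0):
    """Simulate Heston price (s) and variance (v) paths with full-truncation Euler."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    for _ in range(steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)                                  # keep variance non-negative
        s *= np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    return s, v

prices, variances = heston_paths()
print(prices.mean(), variances.mean())
```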

Relevance:

10.00%

Publisher:

Abstract:

Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly using megavoltage beams from linear accelerators. Tumor eradication and normal tissue complications correlate with the dose absorbed in tissue. This dependence is normally steep, and it is crucial that the actual dose within the patient corresponds accurately to the planned dose. All factors in an RT procedure involve uncertainties, requiring strict quality assurance. From the hospital physicist's point of view, technical quality control (QC), dose calculation and methods for verification of correct treatment location are the most important subjects. The most important factor in technical QC is verification that the radiation production of an accelerator, called the output, is within narrow acceptable limits. Output measurements are carried out according to a locally chosen dosimetric QC program defining the measurement time interval and action levels. Dose calculation algorithms need to be configured for the accelerators using measured beam data. The uncertainty of such data sets the limit for the best achievable calculation accuracy. All these dosimetric measurements require experience, are laborious, take up resources needed for treatments and are prone to several random and systematic sources of error. Appropriate verification of treatment location is more important in intensity modulated radiation therapy (IMRT) than in conventional RT. This is due to the steep dose gradients produced within or close to healthy tissues located only a few millimetres from the targeted volume. The thesis concentrated on investigating the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT. A method was developed for estimating the effect of using different dosimetric QC programs on the overall uncertainty of dose. Data were provided to facilitate the choice of a sufficient QC program. The method takes into account local output stability and the reproducibility of the dosimetric QC measurements. A method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, and a sufficient accuracy level was estimated for the beam data. A method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling lower action levels and prolongation of the measurement time interval from 1 month to as long as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions and criteria was found to facilitate the avoidance of maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
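As a rough illustration of the model-fitting idea mentioned above (estimating output stability and measurement reproducibility from routine QC results), the sketch below fits a linear drift to a series of output QC measurements and uses the residual scatter as a reproducibility estimate. The data and the linear model are assumptions made for illustration, not the thesis's method or measurements.

```python
import numpy as np

# Hypothetical monthly output QC results (measured/reference output ratio).
t = np.arange(12)                                   # months since calibration
output = np.array([1.000, 1.001, 1.003, 1.002, 1.004, 1.006,
                   1.005, 1.007, 1.008, 1.007, 1.009, 1.011])

# Fit a linear drift model; the residual scatter estimates measurement reproducibility.
drift_per_month, offset = np.polyfit(t, output, 1)
residuals = output - (offset + drift_per_month * t)
reproducibility_sd = residuals.std(ddof=2)

print(f"output drift {drift_per_month * 100:.2f}% per month, "
      f"reproducibility (1 SD) {reproducibility_sd * 100:.2f}%")
```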

Relevance:

10.00%

Publisher:

Abstract:

In this thesis, the acceleration of energetic particles at collisionless shock waves in space plasmas is studied using numerical simulations, with an emphasis on physical conditions applicable to the solar corona. The thesis consists of four research articles and an introductory part that summarises the main findings reached in the articles and discusses them with respect to the theory of diffusive shock acceleration and observations. The thesis gives a brief review of the observational properties of solar energetic particles and discusses a few open questions that are currently under active research. For example, in a few large gradual solar energetic particle events the heavy ion abundance ratios and average charge states show characteristics at high energies that are typically associated with flare-accelerated particles, i.e. impulsive events. The role of flare-accelerated particles in these and other gradual events has been widely discussed in the scientific community, and it has been questioned whether and how the observed features can be explained in terms of diffusive shock acceleration at shock waves driven by coronal mass ejections. The most extreme solar energetic particle events are the so-called ground level enhancements, in which particles receive such high energies that they can penetrate all the way through Earth's atmosphere and increase radiation levels at the surface. It is not known what conditions are required for acceleration to GeV/nuc energies, and the presence of both very fast coronal mass ejections and X-class solar flares makes it difficult to determine the role of each of these two accelerators in ground level enhancements. The theory of diffusive shock acceleration is reviewed and its predictions discussed with respect to the observed particle characteristics. We discuss how shock waves can be modeled and describe in detail the numerical model developed by the author. The main part of this thesis consists of the four scientific articles that are based on results of the numerical shock acceleration model developed by the author. The novel feature of this model is that it can handle the complex magnetic geometries which are found, for example, near active regions in the solar corona. We show that, according to our simulations, diffusive shock acceleration can explain the observed variations in abundance ratios and average charge states, provided that suitable seed particles and magnetic geometry are available for the acceleration process in the solar corona. We also derive an injection threshold for diffusive shock acceleration that agrees with our simulation results very well, and which is valid under weakly turbulent conditions. Finally, we show that diffusive shock acceleration can produce GeV/nuc energies under suitable coronal conditions, which include the presence of energetic seed particles, a favourable magnetic geometry, and an enhanced level of ambient turbulence.
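For orientation, the standard test-particle prediction of diffusive shock acceleration (quoted here as background, not as a result of the thesis) is a power-law momentum spectrum whose index depends only on the shock compression ratio:

```latex
f(p) \propto p^{-q}, \qquad q = \frac{3r}{r-1}, \qquad r = \frac{u_1}{u_2},
```

where u_1 and u_2 are the upstream and downstream flow speeds in the shock frame; a strong non-relativistic shock has r → 4 and hence q → 4.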

Relevance:

10.00%

Publisher:

Abstract:

A compact, high-brightness 13.56 MHz inductively coupled plasma ion source without any axial or radial multicusp magnetic fields is designed for the production of a focused ion beam. An argon ion current of density more than 30 mA/cm² at 4 kV potential is extracted from this ion source and is characterized by measuring the ion energy spread and brightness. The ion energy spread is measured by a variable-focusing retarding field energy analyzer that minimizes the errors due to divergence of the ion beam inside the analyzer. The brightness of the ion beam is determined from the emittance measured by a fully automated and locally developed electrostatic sweep scanner. By optimizing various ion source parameters such as RF power, gas pressure and the Faraday shield, ion beams with an energy spread of less than 5 eV and a brightness of 7100 A m⁻² sr⁻¹ eV⁻¹ have been produced. Here, we briefly report the details of the ion source and the measurement and optimization of the energy spread and brightness of the ion beam.

Relevance:

10.00%

Publisher:

Abstract:

MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. In order to ensure the required data movement for dependencies across basic blocks, we propose a data flow analysis and edge splitting strategy. Thus our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling and the insertion of required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
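To illustrate the distinction the compiler exploits, the sketch below contrasts a data-parallel, array-level computation (the kind of statements a MEGHA-style compiler would compose into a GPU kernel) with a control-flow-dominated scalar loop that is better left on the CPU. The example is written in Python/NumPy purely for illustration; MEGHA itself consumes MATLAB programs, and this is not its input or output.

```python
import numpy as np

# Data-parallel region: element-wise array operations with no cross-iteration
# dependence; each element could be computed by an independent GPU thread.
def data_parallel_region(a, b):
    return np.sqrt(a * a + b * b) + 0.5 * np.sin(a)

# Control-flow-dominated scalar region: each iteration branches on the previous
# value, so the loop is inherently sequential and suits the CPU.
def scalar_region(n, x0=0.5):
    x = x0
    for _ in range(n):
        x = x / 2.0 if x > 1.0 else 3.0 * x + 0.1
    return x

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)
print(data_parallel_region(a, b).mean(), scalar_region(1000))
```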
