985 results for multiplier of convolution
Abstract:
This thesis deals with the study of optimal control problems for the incompressible Magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as nuclear fission reactors with liquid metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve control of the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems, in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems present various challenges in the definition of an adequate solution approach, both from a theoretical and from a computational point of view. We propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of solutions to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions of the control problem can be obtained. To solve this system numerically, a finite element discretization is combined with an appropriate gradient-type algorithm. An object-oriented finite element library has been developed to provide a parallel, multigrid implementation of the optimality system based on a multiphysics approach. Numerical results of two- and three-dimensional computations show that a candidate minimum of the control problem can be computed in a robust and accurate manner.
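To illustrate the gradient-type loop mentioned above, here is a minimal Python sketch of reduced-gradient descent on a constrained cost functional. The state and adjoint solves are stood in for by a small linear surrogate: the matrix A and the functions solve_state and solve_adjoint below are illustrative placeholders, not the thesis's finite element MHD solver.

import numpy as np

A = np.diag([2.0, 3.0, 5.0])            # surrogate state operator
y_target = np.array([1.0, -1.0, 0.5])   # desired state
alpha = 1e-2                            # control regularization weight

def solve_state(u):
    # surrogate "state equation": A y = u
    return np.linalg.solve(A, u)

def solve_adjoint(y):
    # surrogate adjoint equation: A^T p = y - y_target
    return np.linalg.solve(A.T, y - y_target)

def cost(u):
    # J(u) = 1/2 ||y(u) - y_target||^2 + alpha/2 ||u||^2
    y = solve_state(u)
    return 0.5 * np.sum((y - y_target)**2) + 0.5 * alpha * np.sum(u**2)

u = np.zeros(3)
step = 0.5
for it in range(500):
    g = solve_adjoint(solve_state(u)) + alpha * u   # reduced gradient of J
    if np.linalg.norm(g) < 1e-10:
        break
    u -= step * g
print(it, cost(u))

In the thesis, each iteration would replace the two linear solves with finite element solves of the MHD state and adjoint systems derived from the optimality conditions.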
Abstract:
The aim of this work is to present various aspects of the numerical simulation of particle and radiation transport for industrial and environmental protection applications, enabling the analysis of complex physical processes in a fast, reliable, and efficient way. In the first part we deal with speeding up the numerical simulation of neutron transport for nuclear reactor core analysis. The convergence properties of the source iteration scheme of the Method of Characteristics applied to heterogeneous structured geometries have been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified with the C5G7 2D and 3D benchmarks, showing a considerable reduction in iterations and CPU time. The second part is devoted to the study of temperature-dependent elastic scattering of neutrons for heavy isotopes near the thermal zone. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the center-of-mass system. The range of integration has been optimized by employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of the convergence is presented. In the third part we focus our attention on remote sensing applications of radiative transfer employed to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers while varying the age of the snow or ice layer, its thickness, the presence or absence of underlying layers, and the amount of dust included in the snow, creating a framework able to decipher spectral signals collected by orbiting detectors.
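As an illustration of the quadrature step described at the end of the second part, the following Python sketch computes Legendre moments of a scattering kernel by direct Gauss-Legendre quadrature. The forward-peaked kernel used here is an arbitrary placeholder, not the Doppler-broadened kernel of the thesis.

import numpy as np

def legendre_moments(kernel, order, nquad=64):
    # Moments f_l = (2l+1)/2 * integral_{-1}^{1} kernel(mu) P_l(mu) dmu,
    # evaluated by Gauss-Legendre quadrature with nquad nodes.
    mu, w = np.polynomial.legendre.leggauss(nquad)
    moments = []
    for l in range(order + 1):
        # coefficient vector [0, ..., 0, 1] selects the polynomial P_l
        Pl = np.polynomial.legendre.legval(mu, [0] * l + [1])
        moments.append((2 * l + 1) / 2 * np.sum(w * kernel(mu) * Pl))
    return np.array(moments)

# illustrative forward-peaked kernel (placeholder for the Doppler-broadened
# elastic scattering kernel of the thesis)
kernel = lambda mu: np.exp(3.0 * (mu - 1.0))
print(legendre_moments(kernel, order=5))

Convergence in the number of quadrature nodes can be checked by increasing nquad until the moments stabilize, which mirrors the numerical analysis mentioned in the abstract.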
Abstract:
The Rankin convolution type Dirichlet series D_{F,G}(s) of Siegel modular forms F and G of degree two, which was introduced by Kohnen and the second author, is computed numerically for various F and G. In particular, we prove that the series D_{F,G}(s), which share the same functional equation and analytic behavior as the spinor L-functions of eigenforms of the same weight, are not linear combinations of those. In order to conduct these experiments, a numerical method to compute the Petersson scalar products of Jacobi forms is developed and discussed in detail.
Abstract:
The purpose of this work was to study and quantify the differences in dose distributions computed with some of the newest dose calculation algorithms available in commercial planning systems. The study was done for clinical cases originally calculated with pencil beam convolution (PBC) in which large density inhomogeneities were present. Three other dose calculation algorithms were used: a pencil-beam-like algorithm, the anisotropic analytical algorithm (AAA); a convolution-superposition algorithm, collapsed cone convolution (CCC); and a Monte Carlo program, voxel Monte Carlo (VMC++). The dose calculation algorithms were compared under static field irradiations at 6 MV and 15 MV using multileaf collimators and hard wedges where necessary. Five clinical cases were studied: three lung and two breast cases. We found that, in terms of accuracy, the CCC algorithm performed better overall than AAA when compared to VMC++, but AAA remains an attractive option for routine use in the clinic due to its short computation times. Dose differences between the different algorithms and VMC++ for the median value of the planning target volume (PTV) were typically 0.4% (range: 0.0 to 1.4%) in the lung and -1.3% (range: -2.1 to -0.6%) in the breast for the few cases we analysed. As expected, PTV coverage and dose homogeneity turned out to be more critical in the lung than in the breast cases with respect to the accuracy of the dose calculation. This was observed in the dose-volume histograms obtained from the Monte Carlo simulations.
Abstract:
Efficient image blurring techniques based on the pyramid algorithm can be implemented on modern graphics hardware; thus, image blurring with arbitrary blur width is possible in real time even for large images. However, pyramidal blurring methods do not achieve the image quality provided by convolution filters; in particular, the shape of the corresponding filter kernel varies locally, which potentially results in objectionable rendering artifacts. In this work, a new analysis filter is designed that significantly reduces this variation for a particular pyramidal blurring technique. Moreover, the pyramidal blur algorithm is generalized to allow for a continuous variation of the blur width. Furthermore, an efficient implementation for programmable graphics hardware is presented. The proposed method is named “quasi-convolution pyramidal blurring” since the resulting effect is very close to image blurring based on a convolution filter for many applications.
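A rough CPU-side sketch of the pyramidal blurring idea, assuming a separable 1-2-1 analysis filter and simple pixel-replication synthesis; the paper's GPU implementation and its improved quasi-convolution analysis filter are not reproduced here.

import numpy as np

def downsample(img):
    # blur with a [1, 2, 1]/4 analysis filter (applied separably),
    # then keep every second sample: one pyramid analysis level
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 0, img)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return img[::2, ::2]

def upsample(img, shape):
    # crude pixel-replication synthesis back to the finer resolution
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def pyramid_blur(img, levels):
    # analyse `levels` times, then expand back; more levels means a wider
    # effective blur (the paper interpolates between levels to obtain a
    # continuously varying blur width)
    stack = [img]
    for _ in range(levels):
        stack.append(downsample(stack[-1]))
    out = stack[-1]
    for finer in reversed(stack[:-1]):
        out = upsample(out, finer.shape)
    return out

img = np.random.rand(64, 64)
print(pyramid_blur(img, levels=3).shape)   # (64, 64)

The locally varying kernel shape criticized in the abstract comes from the interaction of the analysis filter with the decimation grid; the paper's contribution is a redesigned analysis filter that suppresses this variation.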
Abstract:
The Liquid Argon Time Projection Chamber (LArTPC) is a prime detector technology for future large-mass neutrino observatories and proton decay searches. In this paper we present the design and operation of, as well as experimental results from, ARGONTUBE, a LArTPC operated at the AEC-LHEP, University of Bern. The main goal of this detector is to prove the feasibility of charge drift over very long distances in liquid argon. Many other aspects of the LArTPC technology are also investigated, such as a voltage multiplier to generate high voltage in liquid argon (Greinacher circuit), a cryogenic purification system, and the application of multi-photon ionization of liquid argon by a UV laser. For the first time, tracks induced by cosmic muons and UV laser beam pulses have been observed and studied at drift distances of up to 5 m, the longest reached to date.
Abstract:
A three-dimensional model has been proposed that uses Monte Carlo and fast Fourier transform (FFT) convolution techniques to calculate the dose distribution from a fast neutron beam. This method transports scattered neutrons and photons in the forward, lateral, and backward directions, and protons, electrons, and positrons in the forward and lateral directions, by convolving energy spread kernels with initial interaction available energy distributions. The primary neutron and photon spectra have been derived from narrow-beam attenuation measurements. The positions and strengths of the effective primary neutron, scattered neutron, and photon sources have been derived from dual ion chamber measurements. The size of the effective primary neutron source has been measured using a copper activation technique. Heterogeneous tissue calculations require a weighted sum of two convolutions for each component, since the kernels must be spatially invariant for FFT convolution. Comparisons between calculations and measurements were performed for several water and heterogeneous phantom geometries.
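The following Python sketch illustrates the FFT convolution step, including the weighted sum of two invariant-kernel convolutions used for heterogeneous tissue. The Gaussian kernels and the weight w are illustrative stand-ins for the measured energy spread kernels and medium-dependent weights.

import numpy as np
from numpy.fft import fftn, ifftn

def fft_convolve(volume, kernel):
    # circular 3-D convolution via FFT; valid because the kernel is
    # spatially invariant, which is the requirement noted in the abstract
    return np.real(ifftn(fftn(volume) * fftn(np.fft.ifftshift(kernel))))

def gaussian_kernel(shape, sigma):
    grids = np.meshgrid(*[np.arange(n) - n // 2 for n in shape], indexing='ij')
    r2 = sum(g.astype(float)**2 for g in grids)
    k = np.exp(-r2 / (2.0 * sigma**2))
    return k / k.sum()   # normalized so the convolution conserves energy

shape = (32, 32, 32)
energy = np.zeros(shape)
energy[16, 16, 8] = 1.0   # toy released-energy map (a single interaction site)
k_a = gaussian_kernel(shape, 1.5)   # illustrative kernel for medium A
k_b = gaussian_kernel(shape, 3.0)   # illustrative kernel for medium B

# heterogeneous tissue: weighted sum of two invariant-kernel convolutions
w = 0.7   # illustrative weight between the two media
dose = w * fft_convolve(energy, k_a) + (1 - w) * fft_convolve(energy, k_b)
print(dose.sum())   # ~1.0: deposited energy is conserved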
Abstract:
PURPOSE Hodgkin lymphoma (HL) is a highly curable disease. Reducing late complications and second malignancies has become increasingly important. Radiotherapy target paradigms are currently changing and radiotherapy techniques are evolving rapidly. DESIGN This overview reports the extent to which target volume reduction in involved-node radiotherapy (INRT) and advanced radiotherapy techniques, such as intensity-modulated radiotherapy (IMRT) and proton therapy, can reduce high doses to organs at risk (OAR) compared with involved-field radiotherapy (IFRT) and 3D radiotherapy (3D-RT), and examines the issues that still remain open. RESULTS Although no comparison of all available techniques on identical patient datasets exists, clear patterns emerge. Advanced dose-calculation algorithms (e.g., convolution-superposition/Monte Carlo) should be used in mediastinal HL. INRT consistently reduces treated volumes compared with IFRT, with the exact amount depending on the INRT definition. With INRT, the number of patients that might significantly benefit from highly conformal techniques such as IMRT over 3D-RT in terms of high-dose exposure to OAR is smaller. The impact of the larger volumes treated with low doses in advanced techniques is unclear. The type of IMRT used (static/rotational) is of minor importance: all advanced photon techniques result in similar potential benefits and disadvantages, so only the degree of modulation should be chosen based on individual treatment goals. Treatment in deep-inspiration breath hold is being evaluated. Protons theoretically provide both excellent high-dose conformality and reduced integral dose. CONCLUSION Further reduction of treated volumes most effectively reduces OAR dose, most likely without disadvantages if the excellent control rates currently achieved are maintained. For both IFRT and INRT, the benefits of advanced radiotherapy techniques depend on the individual patient and target geometry. Their use should therefore be decided case by case with comparative treatment planning.
Abstract:
ARGONTUBE is a liquid argon time projection chamber (LAr TPC) with a drift field generated in-situ by a Greinacher voltage multiplier circuit. We present results on the measurement of the drift-field distribution inside ARGONTUBE using straight ionization tracks generated by an intense UV laser beam. Our analysis is based on a simplified model of the charging of a multi-stage Greinacher circuit to describe the voltages on the field cage rings.
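For orientation, a minimal Python sketch of the ideal steady-state output of an n-stage Greinacher (Cockcroft-Walton) cascade, using the textbook droop formula for a DC load current. The component values are illustrative, and this is not the paper's simplified charging model, which describes the transient voltages on the individual field cage rings.

def greinacher_output(n_stages, v_peak, i_load=0.0, f=50.0, c=100e-9):
    # ideal no-load DC output of an n-stage Greinacher cascade
    v_ideal = 2.0 * n_stages * v_peak
    # textbook steady-state voltage droop for a DC load current i_load,
    # equal stage capacitances c, and drive frequency f:
    # dV = I/(f*C) * (2n^3/3 + n^2/2 - n/6)
    droop = (i_load / (f * c)) * (2 * n_stages**3 / 3
                                  + n_stages**2 / 2
                                  - n_stages / 6)
    return v_ideal - droop

# e.g. a hypothetical 30-stage cascade driven at 1 kV peak
print(greinacher_output(30, 1e3))                      # ideal: 60 kV
print(greinacher_output(30, 1e3, i_load=1e-6))         # with a 1 uA load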
Abstract:
Background: The Swiss pig population enjoys a favourable health situation. To further promote this, the Pig Health Service (PHS) conducts a surveillance program in affiliated herds: closed multiplier herds with the highest PHS health and hygiene status have to be free from swine dysentery and progressive atrophic rhinitis and are clinically examined four times a year, including laboratory testing. In addition, four batches of pigs per year are fattened together with pigs from other herds and checked for typical symptoms (monitored fattening groups, MF). Although this surveillance is costly and laborious, little was known about its effectiveness in detecting an infection in a herd. Therefore, the sensitivity of the surveillance for progressive atrophic rhinitis and swine dysentery at herd level was assessed using scenario tree modelling, a method well established at the national level. Furthermore, its costs and the time until an infection would be detected were estimated, with the final aim of yielding suggestions on how to optimize surveillance. Results: For swine dysentery, the median annual surveillance sensitivity was 96.7%, the mean time to detection 4.4 months, and the total annual costs 1022.20 Euro/herd. The median component sensitivity of active sampling was between 62.5 and 77.0%, that of an MF between 7.2 and 12.7%. For progressive atrophic rhinitis, the median surveillance sensitivity was 99.4%, the mean time to detection 3.1 months, and the total annual costs 842.20 Euro. The median component sensitivity of active sampling was 81.7%, that of an MF between 19.4 and 38.6%. Conclusions: The results indicate that the total sensitivity for both diseases is high, while the time to detection could be a risk in herds with frequent pig trade. Of all components, active sampling contributed the most to the surveillance sensitivity, whereas the contribution of the MF was very low. To increase efficiency, active sampling should be intensified (more animals sampled) and the MF abandoned. This would significantly improve sensitivity and time to detection at comparable or lower costs. The method of scenario tree modelling proved useful for assessing the efficiency of surveillance at herd level. Its versatility allows adjustment to all kinds of surveillance scenarios to optimize sensitivity, time to detection, and/or costs.
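A minimal sketch of how component sensitivities combine into an overall surveillance sensitivity, assuming independent components. The numbers are rough midpoints of the swine dysentery component ranges quoted above; the clinical examinations that are also part of the program are omitted, so the result does not reproduce the reported 96.7%.

def system_sensitivity(component_sensitivities):
    # probability that at least one surveillance component detects an
    # infected herd, assuming the components act independently
    p_all_miss = 1.0
    for se in component_sensitivities:
        p_all_miss *= (1.0 - se)
    return 1.0 - p_all_miss

# rough midpoints for swine dysentery: one round of active sampling
# plus four monitored fattening groups per year
components = [0.70] + [0.10] * 4
print(round(system_sensitivity(components), 3))   # ~0.8 under these assumptions

The sketch makes the abstract's conclusion visible: the single active-sampling component dominates the combined sensitivity, while the four MF components add comparatively little.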
Abstract:
Using the directional distance function, we study a cross section of 110 countries to examine the efficiency of the management of the tradeoffs between pollution and income. The DEA model is reformulated to permit 'reverse disposability' of the bad output. Further, we interpret the optimal solution of the multiplier form of the DEA model as an iso-inefficiency line. This permits us to measure the shadow cost of the bad output for a country that is in the interior, rather than on the frontier, of the production possibilities set. We also compare the relative environmental performance of countries in terms of emission intensity adjusted for technical efficiency. Only 10% of the countries are found to be on the frontier, and there is considerable inter-country variation in the imputed opportunity cost of CO2 reduction. Furthermore, differences in technical efficiency contribute substantially to differences in the observed levels of CO2 intensity.
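A minimal sketch of a directional distance function DEA program with one input, one good output, and one bad output, solved as a linear program on toy data. The paper's 'reverse disposability' reformulation and the multiplier (dual) form used to derive shadow costs are not reproduced here.

import numpy as np
from scipy.optimize import linprog

# toy data: input x, good output y, bad output b for 4 "countries"
x = np.array([1.0, 2.0, 1.5, 2.5])
y = np.array([1.0, 2.2, 1.2, 2.0])
b = np.array([0.8, 1.5, 1.3, 2.4])
gy, gb = 1.0, 1.0   # direction: expand the good output, contract the bad
k = 3               # index of the country being evaluated

n = len(x)
# variables: [beta, lambda_1 .. lambda_n]; maximize beta -> minimize -beta
c = np.r_[-1.0, np.zeros(n)]
# sum(lam*y) >= y_k + beta*gy  rewritten as  beta*gy - sum(lam*y) <= -y_k
# sum(lam*x) <= x_k
A_ub = np.vstack([np.r_[gy, -y], np.r_[0.0, x]])
b_ub = np.array([-y[k], x[k]])
# bad output treated with an equality (weak-disposability style):
# sum(lam*b) = b_k - beta*gb
A_eq = np.r_[gb, b][None, :]
b_eq = np.array([b[k]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1))
print(res.x[0])   # inefficiency beta; 0 means the country is on the frontier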
Abstract:
The effectiveness of the Anisotropic Analytical Algorithm (AAA) implemented in the Eclipse treatment planning system (TPS) was evaluated using the Radiological Physics Center anthropomorphic lung phantom with both flattened and flattening-filter-free high-energy beams. Radiation treatment plans were developed following the Radiation Therapy Oncology Group and the Radiological Physics Center guidelines for lung treatment using Stereotactic Body Radiation Therapy (SBRT). The tumor was covered such that at least 95% of the Planning Target Volume (PTV) received 100% of the prescribed dose, while ensuring that normal tissue constraints were followed as well. Calculated doses were exported from the Eclipse TPS and compared with the experimental data as measured using thermoluminescent detectors (TLD) and radiochromic films placed inside the phantom. The results demonstrate that the AAA superposition-convolution algorithm is able to calculate SBRT treatment plans with all clinically used photon beams in the range from 6 MV to 18 MV. The measured dose distribution showed good agreement with the calculated distribution using clinically acceptable criteria of ±5% dose or 3 mm distance to agreement. These results show that in a heterogeneous environment a 3D pencil beam superposition-convolution algorithm with Monte Carlo pre-calculated scatter kernels, such as AAA, is able to reliably calculate dose, accounting for the increased lateral scattering due to the loss of electronic equilibrium in low-density media. The data for high-energy plans (15 MV and 18 MV) showed very good tumor coverage, in contrast to findings by other investigators for less sophisticated dose calculation algorithms, which demonstrated lower-than-expected tumor doses and generally worse tumor coverage for high-energy plans compared to 6 MV plans. This demonstrates that the modern superposition-convolution AAA algorithm is a significant improvement over previous algorithms and is able to calculate doses accurately for SBRT treatment plans in the highly heterogeneous environment of the thorax for both lower (≤12 MV) and higher (>12 MV) beam energies.
Abstract:
Accurate calculation of the absorbed dose to target tumors and normal tissues in the body is an important requirement for establishing fundamental dose-response relationships for radioimmunotherapy. Two major obstacles have been the difficulty of obtaining an accurate patient-specific 3-D activity map in vivo and of calculating the resulting absorbed dose. This study investigated a methodology for 3-D internal dosimetry which integrates the 3-D biodistribution of the radionuclide acquired from SPECT with a dose-point kernel convolution technique to provide the 3-D distribution of absorbed dose. Accurate SPECT images were reconstructed with appropriate methods for noise filtering, attenuation correction, and Compton scatter correction. The SPECT images were converted into activity maps using a calibration phantom. The activity map was convolved with a ¹³¹I dose-point kernel using a 3-D fast Fourier transform to yield a 3-D distribution of absorbed dose. The 3-D absorbed dose map was then processed to provide the absorbed dose distribution in regions of interest. This methodology can provide heterogeneous distributions of absorbed dose in volumes of any size and shape with nonuniform distributions of activity. Comparison of the activities quantitated by our SPECT methodology with the true activities in an Alderson abdominal phantom (with spleen, liver, and a spherical tumor) yielded errors of -16.3% to 4.4%. Volume quantitation errors ranged from -4.0% to 5.9% for volumes greater than 88 ml. The percentage differences between the average absorbed dose rates calculated by this methodology and those from the MIRD S-values were 9.1% for the liver, 13.7% for the spleen, and 0.9% for the tumor. Good agreement (percentage differences of less than 8%) was found between the absorbed dose due to penetrating radiation calculated with this methodology and TLD measurements. More accurate estimates of the 3-D distribution of absorbed dose can be used as a guide in specifying the minimum activity to be administered to patients to deliver a prescribed absorbed dose to the tumor without exceeding the toxicity limits of normal tissues.
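A minimal sketch of the kernel convolution step: a toy activity map convolved with a generic radially decaying kernel via an FFT-based 3-D convolution. The kernel shape is an illustrative stand-in for the tabulated ¹³¹I dose-point kernel, and the SPECT reconstruction and corrections are outside this sketch.

import numpy as np
from scipy.signal import fftconvolve

# toy cumulated-activity map (arbitrary units per voxel); in the paper this
# comes from quantitative SPECT with attenuation and scatter corrections
activity = np.zeros((32, 32, 32))
activity[10:16, 10:16, 10:16] = 1.0   # a uniform "tumor" region

# generic radially decaying dose-point kernel on the same voxel grid,
# standing in for the tabulated 131-I kernel (illustrative shape only)
half = 8
ax = np.arange(-half, half + 1)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
r = np.sqrt(X**2 + Y**2 + Z**2) + 0.5   # offset avoids the r=0 singularity
kernel = np.exp(-r / 2.0) / r**2        # ~ exp(-mu*r)/r^2 falloff
kernel /= kernel.sum()                  # normalized for this illustration

# FFT-based 3-D convolution yields the absorbed dose map (arbitrary units)
dose = fftconvolve(activity, kernel, mode='same')
print(dose.shape, dose.max())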
Abstract:
Structural decomposition techniques based on input-output tables have become a widely used tool for analyzing long-term economic growth. However, due to data limitations, such techniques had never been applied to China's regional economies. Fortunately, in 2003, China's Interregional Input-Output Table for 1987 and Multi-regional Input-Output Table for 1997 were published, making decomposition analysis of China's regional economies possible. This paper first estimates the interregional input-output table in constant prices by using an alternative approach, the Grid-Search method, and then applies the standard input-output decomposition technique to China's regional economies for 1987-97. Based on the decomposition results, the contributions of different factors to output growth are summarized at the regional and industrial levels. Furthermore, the interdependence between China's regional economies is measured and explained by aggregating the decomposition factors into the intraregional multiplier-related effect, the feedback-related effect, and the spillover-related effect. Finally, the performance of China's industrial and regional development policies implemented in the 1990s is briefly discussed based on the analytical results of the paper.
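A minimal sketch of the machinery behind such a decomposition: the Leontief inverse L = (I - A)^(-1) and the standard average-weighted (polar) two-factor split of output growth into a technology part (the change in L) and a final demand part, on toy two-sector data. The paper's multi-factor regional decomposition is more elaborate, but follows the same logic.

import numpy as np

def leontief_inverse(A):
    # L = (I - A)^(-1): the total (direct + indirect) requirements matrix
    return np.linalg.inv(np.eye(A.shape[0]) - A)

# toy 2-sector coefficient matrices and final demand for two years
A0 = np.array([[0.20, 0.10], [0.30, 0.25]])
A1 = np.array([[0.22, 0.12], [0.28, 0.30]])
f0 = np.array([100.0, 80.0])
f1 = np.array([120.0, 90.0])

L0, L1 = leontief_inverse(A0), leontief_inverse(A1)
x0, x1 = L0 @ f0, L1 @ f1   # gross output in each year: x = L f

# average-weighted (polar) structural decomposition of output growth:
# dx = technology effect (change in L) + final demand effect (change in f)
dL_part = 0.5 * ((L1 - L0) @ f0 + (L1 - L0) @ f1)
df_part = 0.5 * (L0 @ (f1 - f0) + L1 @ (f1 - f0))
print(x1 - x0)
print(dL_part + df_part)   # the two parts sum exactly to x1 - x0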
Abstract:
The aim of this work is to settle a question raised for average sampling in shift-invariant spaces by using the well-known matrix pencil theory. In many common situations in sampling theory, the available data are samples of some convolution operator acting on the function itself: this leads to the problem of average sampling, also known as generalized sampling. In this paper we deal with the existence of a sampling formula involving these samples and having reconstruction functions with compact support. Thus, low computational complexity is involved and truncation errors are avoided. In practice, this is accomplished by means of an FIR filter bank. An answer is given in the light of generalized sampling theory by using the oversampling technique: more samples than strictly necessary are used. The original problem reduces to finding a polynomial left inverse of a polynomial matrix intimately related to the sampling problem which, for a suitable choice of the sampling period, becomes a matrix pencil. This matrix pencil approach allows us to obtain a practical method for computing the compactly supported reconstruction functions in the important case where the oversampling rate is minimum. Moreover, the optimality of the obtained solution is established.
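A minimal numerical sketch of the left-inverse idea: when an oversampled analysis matrix has full column rank, a left inverse supplies the reconstruction coefficients. The constant matrix G below stands in for the polynomial matrix (matrix pencil) treated in the paper, and the Moore-Penrose pseudoinverse replaces the polynomial left inverse.

import numpy as np

# toy oversampled analysis matrix: 3 measurement channels for 2 unknowns,
# a constant stand-in for the polynomial matrix G(z) of the paper
G = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [1.0, 1.0]])

# Moore-Penrose left inverse: H @ G = I whenever G has full column rank
H = np.linalg.pinv(G)
print(np.allclose(H @ G, np.eye(2)))   # True

# reconstruction: from samples s = G c, the coefficients c are recovered
c = np.array([2.0, -1.0])
s = G @ c
print(H @ s)                           # [ 2. -1.]

In the paper, the entries of G and H are polynomials in z, so the rows of the left inverse correspond to FIR synthesis filters with compactly supported reconstruction functions; the pencil structure is what makes computing such a polynomial left inverse practical at minimum oversampling.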