979 results for "approximate calculation of sums"
Abstract:
The conventional way of calculating hard scattering processes in perturbation theory using Feynman diagrams is not efficient enough to compute all necessary processes, for example for the Large Hadron Collider, to sufficient precision. Two alternatives to order-by-order calculations are studied in this thesis.

In the first part we compare the numerical implementations of four different recursive methods for the efficient computation of Born gluon amplitudes: Berends-Giele recurrence relations and recursive calculations with scalar diagrams, with maximal helicity violating vertices, and with shifted momenta. Of the four methods considered, the Berends-Giele method performs best if the number of external partons is eight or larger. For fewer than eight external partons, the recursion relation with shifted momenta offers the best performance. When investigating numerical stability and accuracy, we found that all methods give satisfactory results.

In the second part of this thesis we present an implementation of a parton shower algorithm based on the dipole formalism. The formalism treats initial- and final-state partons on the same footing. The shower algorithm can be used for hadron colliders and electron-positron colliders. Massive partons in the final state are also included in the shower algorithm. Finally, we studied numerical results for an electron-positron collider, the Tevatron and the Large Hadron Collider.
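For reference, the Berends-Giele method builds the amplitude from off-shell currents computed recursively; a standard textbook form of the recurrence (conventions and normalization vary, so this is only a sketch, not necessarily the form used in the thesis) is

J^{\mu}(1,\dots,n) = \frac{-i}{P_{1,n}^{2}} \Bigg[ \sum_{j=1}^{n-1} V_{3}^{\mu\nu\rho}(P_{1,j}, P_{j+1,n}) \, J_{\nu}(1,\dots,j) \, J_{\rho}(j+1,\dots,n) + \sum_{j=1}^{n-2} \sum_{k=j+1}^{n-1} V_{4}^{\mu\nu\rho\sigma} \, J_{\nu}(1,\dots,j) \, J_{\rho}(j+1,\dots,k) \, J_{\sigma}(k+1,\dots,n) \Bigg],

where P_{i,j} = p_i + \dots + p_j, V_3 and V_4 are the color-ordered three- and four-gluon vertices, and the recursion starts from the polarization vectors, J^{\mu}(i) = \epsilon^{\mu}(p_i). The n-gluon amplitude follows by amputating the propagator of J(1,\dots,n-1) and contracting with \epsilon_{\mu}(p_n); caching the currents is what gives the method its polynomial scaling at large multiplicity.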
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and function of a product on a virtual prototype. One use case is checking the safety clearances of individual components, the so-called clearance analysis. For given components, engineers determine whether they maintain a prescribed safety clearance to the surrounding components, both at rest and during a motion. If components fall below the safety clearance, their shape or position must be changed. For this it is important to know exactly which regions of the components violate the safety clearance.

In this thesis we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety clearance. Each object is given as a set of primitives (e.g. triangles). For every point in time at which a transformation is applied to one of the objects, we compute the set of all primitives falling below the safety clearance and call this the set of all tolerance-violating primitives. We present a complete solution that can be divided into the following three major topics.

In the first part of this thesis we study algorithms that check for two triangles whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests perform considerably better than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach consistently proves to be the fastest.

The second part of this thesis deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure consisting of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is essential to account for the required safety clearance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. In addition, we develop strategies for identifying primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the uniform grids used before, which we call Shrubs. Previous approaches to memory optimization of uniform grids mostly rely on hashing methods; these, however, do not reduce the memory consumption of the cell contents. In our use case, neighboring cells often have similar contents. Our approach losslessly compresses the cell contents of a uniform grid, exploiting the redundant cell contents, to one fifth of the original size and decompresses them at run time.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we show applications to various path-planning problems.
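To make the second part concrete, the following minimal sketch shows the broad phase of such a clearance query in Python. All names and the single-grid layout are our illustrative assumptions; the thesis combines a flat hierarchy with several grids and adds early-accept strategies, and its dual-space narrow-phase test is not reproduced here but passed in as a callback.

from collections import defaultdict
import math

def aabb(tri):
    """Axis-aligned bounding box of a triangle given as three 3D points."""
    xs, ys, zs = zip(*tri)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

class UniformGrid:
    def __init__(self, cell_size):
        self.h = cell_size
        self.cells = defaultdict(list)  # cell index -> list of primitive ids

    def _range(self, lo, hi):
        return range(int(math.floor(lo / self.h)), int(math.floor(hi / self.h)) + 1)

    def insert(self, prim_id, tri):
        (x0, y0, z0), (x1, y1, z1) = aabb(tri)
        for i in self._range(x0, x1):
            for j in self._range(y0, y1):
                for k in self._range(z0, z1):
                    self.cells[(i, j, k)].append(prim_id)

    def candidates(self, tri, d):
        """All primitive ids in cells touched by tri's box inflated by the clearance d."""
        (x0, y0, z0), (x1, y1, z1) = aabb(tri)
        found = set()
        for i in self._range(x0 - d, x1 + d):
            for j in self._range(y0 - d, y1 + d):
                for k in self._range(z0 - d, z1 + d):
                    found.update(self.cells.get((i, j, k), ()))
        return found

def tolerance_violations(tris_a, grid_b, tris_b, d, tri_tri_tolerance_test):
    """Broad phase + narrow phase: all primitive pairs closer than the clearance d."""
    violating = []
    for ia, ta in enumerate(tris_a):
        for ib in grid_b.candidates(ta, d):
            if tri_tri_tolerance_test(ta, tris_b[ib], d):  # narrow-phase test
                violating.append((ia, ib))
    return violating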
Abstract:
The BLEVE, an acronym for Boiling Liquid Expanding Vapour Explosion, is one of the most dangerous accidents that can occur in pressure vessels. It can be defined as an explosion resulting from the failure of a vessel containing a pressure-liquefied gas stored at a temperature significantly above its boiling point at atmospheric pressure. This phenomenon frequently occurs when a vessel is engulfed by a fire: the heat causes the internal pressure to rise and the mechanical properties of the wall to degrade, with the consequent rupture of the tank and the instantaneous release of its entire contents. After the rupture, the vapour flows out and expands, and the liquid phase starts boiling due to the pressure drop. A destructive shock wave may form and propagate, together with the ejection of fragments, the generation of a fireball if the stored fluid is flammable and immediately ignited, or the atmospheric dispersion of a toxic cloud if the fluid contained in the vessel is toxic. Despite the many studies on the BLEVE mechanism, the exact causes and conditions of its occurrence remain elusive. To better understand this phenomenon, the present study first investigates the concept and definition of BLEVE. A historical analysis of the major events of the past 60 years is described, together with a survey of the principal causes of this event, including an analysis of the substances most frequently involved. A description of the main effects of BLEVEs follows, focusing especially on the overpressure. The major aim of the present thesis, however, is to contribute, through a comparative analysis, to the validation of the main models in the literature for the calculation and prediction of the overpressure caused by BLEVEs. In line with this purpose, after a short overview of the available approaches, their ability to reproduce the trend of the overpressure is investigated. The overpressure calculated with the different models is compared with values derived from past events and from ad-hoc experiments, focusing especially on medium- and large-scale phenomena. The ability of the models to account for different filling levels of the vessel and different substances is analysed as well. The results of these calculations are discussed extensively, and some concluding remarks are given.
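The abstract does not name the individual overpressure models; a common baseline in this literature is TNT equivalence built on Brode's expression for the expansion energy of the vapour space. The sketch below shows only this generic baseline under stated assumptions and is not necessarily one of the models evaluated in the thesis.

# Generic TNT-equivalence baseline for BLEVE overpressure (an assumption:
# the thesis may compare different or refined models).
def brode_energy(p_burst, p_atm, vapour_volume, gamma):
    """Brode's estimate of the energy released by expanding the vapour
    space from burst pressure to atmospheric pressure, in joules."""
    return (p_burst - p_atm) * vapour_volume / (gamma - 1.0)

def tnt_equivalent_mass(energy_j, tnt_energy_j_per_kg=4.68e6):
    """Equivalent TNT mass; 4.68 MJ/kg is a commonly used blast energy."""
    return energy_j / tnt_energy_j_per_kg

def scaled_distance(distance_m, tnt_mass_kg):
    """Hopkinson-Cranz scaled distance z = R / W^(1/3) in m/kg^(1/3);
    z is then read against a TNT blast chart to obtain the peak overpressure."""
    return distance_m / tnt_mass_kg ** (1.0 / 3.0)

# Example: 25 m3 of vapour at 15 bar(a) bursting into 1 atm, gamma = 1.1
E = brode_energy(15e5, 1.013e5, 25.0, 1.1)
W = tnt_equivalent_mass(E)
print(f"E = {E/1e6:.1f} MJ, W_TNT = {W:.1f} kg, z(100 m) = {scaled_distance(100, W):.1f}")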
Abstract:
The electron Monte Carlo (eMC) dose calculation algorithm available in the Eclipse treatment planning system (Varian Medical Systems) is based on the macro MC method and uses a beam model applicable to Varian linear accelerators. This leads to limitations in accuracy if eMC is applied to non-Varian machines. In this work, eMC is generalized to allow accurate dose calculations also for electron beams from Elekta and Siemens accelerators. First, the changes made in a previous study to adapt eMC to the low electron beam energies of Varian accelerators are applied. Then, a generalized beam model is developed, consisting of a main electron source and a main photon source representing electrons and photons from the scattering foil, respectively; an edge source of electrons; a transmission source of photons; and a line source of electrons and photons representing the particles from the scrapers or inserts and the head-scatter radiation. Regarding the macro MC dose calculation algorithm, the transport code for the secondary particles is improved. The macro MC dose calculations are validated against corresponding dose calculations using EGSnrc in homogeneous and inhomogeneous phantoms. The generalized eMC is validated by comparing calculated and measured dose distributions in water for Varian, Elekta and Siemens machines for a variety of beam energies, applicator sizes and SSDs. The comparisons are performed in units of cGy per MU. Overall, agreement between calculated and measured dose distributions for all machine types and all combinations of parameters investigated is within 2% or 2 mm. These results suggest that the generalized eMC is now suitable for calculating dose distributions for Varian, Elekta and Siemens linear accelerators with sufficient accuracy over the investigated range of beam energies, applicator sizes and SSDs.
Abstract:
Although the Monte Carlo (MC) method allows accurate dose calculation for proton radiotherapy, its usage is limited by long computing times. To gain efficiency, a new macro MC (MMC) technique for proton dose calculations has been developed. The basic principle of MMC transport is a local-to-global MC approach. The local simulations, performed with GEANT4, consist of mono-energetic proton pencil beams impinging perpendicularly on slabs of different thicknesses and different materials (water, air, lung, adipose, muscle, spongiosa, cortical bone). The local simulations take multiple scattering, ionization, and elastic and inelastic interactions into account, and physical characteristics such as lateral displacement, direction distributions and energy loss are scored for primary and secondary particles. The scored data from appropriate slabs are then used for the stepwise transport of the protons in the MMC simulation, while the energy loss along the path between entrance and exit position is calculated. Additionally, based on local simulations, the radiation transport of neutrons and of the generated ions is included in the MMC dose calculations. To validate the MMC transport, dose distributions calculated with MMC and with GEANT4 have been compared for different mono-energetic proton pencil beams impinging on different phantoms, including homogeneous and inhomogeneous situations, as well as on a patient CT scan. The agreement of the calculated integral depth dose curves is better than 1% or 1 mm for all pencil beams and phantoms considered. For the dose profiles, the agreement is within 1% or 1 mm in all phantoms for all energies and depths. The comparison of the dose distributions calculated with either GEANT4 or MMC in the patient also shows agreement within 1% or 1 mm. The efficiency of MMC is up to 200 times higher than that of GEANT4. The very good level of agreement in the dose comparisons demonstrates that the newly developed MMC transport provides very accurate and efficient dose calculations for proton beams.
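A minimal sketch of the local-to-global idea follows. The table layout, names and 1D stepping are our illustrative simplifications; the actual MMC transport also applies direction distributions and handles secondary particles as described above.

# Illustrative local-to-global macro MC step (hypothetical data layout):
# precomputed local GEANT4 results, binned by material and proton energy,
# provide samples of (energy loss, lateral displacement, deflection angle)
# for one macroscopic slab; the global transport simply replays them.
import random

class SlabTable:
    """Lookup of locally precomputed transport samples for one material."""
    def __init__(self, samples_by_energy_bin):
        # energy bin index -> list of (d_energy, lateral, theta) tuples
        self.samples = samples_by_energy_bin

    def draw(self, energy, bin_width):
        return random.choice(self.samples[int(energy / bin_width)])

def transport(proton_energy, geometry, tables, bin_width, cutoff=1.0):
    """Step a proton slab by slab until its energy falls below cutoff (MeV);
    returns deposited energy per slab index (a crude 1D dose proxy)."""
    dose = {}
    z = 0
    while proton_energy > cutoff:
        material = geometry(z)                      # material at current depth
        d_e, lateral, theta = tables[material].draw(proton_energy, bin_width)
        # this 1D proxy scores only the energy loss; the real MMC also
        # applies the lateral displacement and the deflection angle
        dose[z] = dose.get(z, 0.0) + d_e
        proton_energy -= d_e
        z += 1                                      # advance one slab
    return dose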
Abstract:
BACKGROUND: The ankle-brachial pressure index (ABI) is a simple, inexpensive and useful tool for detecting peripheral arterial occlusive disease (PAD). The current guidelines published by the American Heart Association define the ABI as the quotient of the higher of the systolic blood pressures (SBPs) of the two ankle arteries of a limb (either the anterior tibial artery or the posterior tibial artery) and the higher of the two brachial SBPs of the upper limbs. We hypothesized that using the lower of the two ankle arterial SBPs of a side as the numerator and the higher of the brachial SBPs as the denominator would increase its diagnostic yield. METHODS: The former method of calculating the ABI was termed high ankle pressure (HAP) and the latter low ankle pressure (LAP). The ABI was assessed in 216 subjects and calculated according to both the HAP and the LAP method. ABI findings were confirmed by arterial duplex ultrasonography. A significant arterial stenosis was assumed if the ABI was <0.9. RESULTS: LAP had a sensitivity of 0.89 and a specificity of 0.93. The HAP method had a sensitivity of 0.68 and a specificity of 0.99. McNemar's test comparing the results of both methods yielded a two-tailed P < .0001, indicating a highly significant difference between the two measurement methods. CONCLUSIONS: LAP is the superior method of calculating the ABI to identify PAD. This result is of great interest for epidemiologic studies that apply ABI measurements to detect PAD and to assess patients' cardiovascular risk.
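In code, the two definitions differ only in which ankle pressure enters the numerator; the following sketch uses our own illustrative names, not the study's software.

# Illustrative computation of the ABI under the two definitions compared above.
def abi(ankle_at_sbp, ankle_pt_sbp, brachial_left_sbp, brachial_right_sbp,
        method="HAP"):
    """ABI for one limb: anterior tibial (AT) and posterior tibial (PT)
    ankle SBPs over the higher of the two brachial SBPs, in mmHg."""
    denominator = max(brachial_left_sbp, brachial_right_sbp)
    if method == "HAP":      # guideline definition: higher ankle pressure
        numerator = max(ankle_at_sbp, ankle_pt_sbp)
    else:                    # LAP: lower ankle pressure, as hypothesized
        numerator = min(ankle_at_sbp, ankle_pt_sbp)
    return numerator / denominator

# A limb with AT = 105, PT = 140, brachial 135/130 mmHg:
print(abi(105, 140, 135, 130, "HAP"))  # about 1.04 -> classified normal
print(abi(105, 140, 135, 130, "LAP"))  # about 0.78 -> <0.9, flags possible PAD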
Abstract:
Approximate entropy (ApEn) of blood pressure (BP) can be measured easily with software analysing 24-h ambulatory BP monitoring (ABPM), but the clinical value of this measure is unknown. In a prospective study we investigated whether the ApEn of BP predicts, in addition to the average and variability of BP, the risk of hypertensive crisis. In 57 patients with known hypertension we measured the ApEn, average and variability of systolic and diastolic BP based on 24-h ABPM. Eight of these 57 patients developed a hypertensive crisis during follow-up (mean follow-up duration 726 days). In bivariate regression analysis, the ApEn of systolic BP (P<0.01), the average systolic BP (P=0.02) and the average diastolic BP (P=0.03) were significant predictors of hypertensive crisis. The incidence rate ratio of hypertensive crisis was 14.0 (95% confidence interval (CI) 1.8, 631.5; P<0.01) for high ApEn of systolic BP compared with low values. In multivariable regression analysis, the ApEn of systolic BP (P=0.01) and the average diastolic BP (P<0.01) were independent predictors of hypertensive crisis. A combination of these two measures had a positive predictive value of 75% and a negative predictive value of 91%. ApEn, combined with other measures of 24-h ABPM, is a potentially powerful predictor of hypertensive crisis. If confirmed in independent samples, these findings have major clinical implications, since measures predicting the risk of hypertensive crisis identify patients who require intensive follow-up and intensified therapy.
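ApEn itself quantifies the (ir)regularity of short subsequences of the series; a compact reference implementation of Pincus's ApEn(m, r) follows (our own sketch; ABPM software will differ in windowing and parameter choices).

import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1D series (Pincus's definition).
    r defaults to 0.2 * standard deviation, a common choice."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()

    def phi(m):
        # all overlapping templates of length m
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # fraction of templates within Chebyshev distance r of each template
        # (self-matches included, so the argument of log is never zero)
        counts = [np.mean(np.max(np.abs(templates - t), axis=1) <= r)
                  for t in templates]
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

# Example: a regular series scores lower ApEn than an irregular one.
t = np.arange(200)
print(apen(np.sin(0.3 * t)))                            # low: predictable
print(apen(np.random.default_rng(0).normal(size=200)))  # higher: irregular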
Abstract:
Different codes are used for Monte Carlo (MC) calculations in radiation therapy. In this work, the MCNP4C and GEANT3 codes are compared in calculations of the dosimetric characteristics of a Varian Clinac 2300C/D. The parameters responsible for the observed differences in the dosimetric features are discussed. This study shows that both the MCNP4C and GEANT3 MC codes can be used for radiation therapy computations and that their differences in the calculated photon spectra have a negligible effect on percentage depth dose computations.
Abstract:
The article describes the results of fatigue tests on sideflexing polymer chains conducted on a dynamic testing machine and in test conveyors. A new approach is suggested that allows a computational estimate of the fatigue life of these chains. Finally, calculation software is presented that was developed based on the test results and the new equations.
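The abstract does not give the form of the new equations; fatigue-life estimates of this kind typically start from a Basquin-type S-N relation, noted here only as the standard form (an assumption, not the article's model):

N = C \cdot (\Delta\sigma)^{-k},

where N is the number of load cycles to failure, \Delta\sigma is the stress range in the chain link, and C and k are parameters fitted to the test results.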
Abstract:
In this paper, a superelement formulation for geometrically nonlinear finite element analysis is proposed. The element formulation is based on matrices generated by the static condensation algorithm. After defining the element characteristics, a method for calculating the element forces in a large-displacement and large-rotation analysis is developed. To allow the element to be used in the solution of stability problems, the formulation of the geometric stiffness matrix is derived. An example demonstrates the benefits of the element for the calculation of lattice-boom cranes.
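Static condensation itself is standard: partitioning the element equations into boundary (b) and internal (i) degrees of freedom,

\begin{pmatrix} K_{bb} & K_{bi} \\ K_{ib} & K_{ii} \end{pmatrix} \begin{pmatrix} u_b \\ u_i \end{pmatrix} = \begin{pmatrix} f_b \\ f_i \end{pmatrix},

and eliminating u_i yields the condensed superelement stiffness matrix and load vector

\tilde{K} = K_{bb} - K_{bi} K_{ii}^{-1} K_{ib}, \qquad \tilde{f} = f_b - K_{bi} K_{ii}^{-1} f_i.

How these condensed matrices are carried through large displacements and rotations is the paper's contribution and is not reproduced here.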
Abstract:
The estimation of the average travel distance in a low-level picker-to-part order-picking system can in most cases be done by analytical methods. Often a uniform distribution of the access frequency over all bin locations in the storage system is assumed. This only applies if bin locations are assigned randomly. If the access frequency of the articles is taken into account in the bin location assignment in order to reduce the picker's average total travel distance, the access frequency over the bin locations of one aisle can be approximated by an exponential density function or a similar density function. All known calculation methods assume that the average number of order lines per order is greater than the number of aisles of the storage system. For small orders this assumption is often invalid. This paper presents a new approach for calculating the average total travel distance that takes into account that the average number of order lines per order may be lower than the total number of aisles in the storage system and that the access frequency over the bin locations of an aisle can be approximated by an arbitrary density function.
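Analytical formulas of this kind are typically checked against simulation; the following Monte Carlo baseline makes the setting concrete (the return routing policy, the single front cross aisle and all names are our simplifying assumptions, not the paper's model).

import random

def simulate_order(n_aisles, aisle_len, lines_per_order, access_rate):
    """Travel distance of one picking tour. Each order line is assigned to a
    uniformly random aisle; the pick depth in the aisle is drawn from an
    exponential access density truncated to the aisle length."""
    picks_by_aisle = {}
    for _ in range(lines_per_order):
        aisle = random.randrange(n_aisles)
        depth = min(random.expovariate(access_rate), aisle_len)
        picks_by_aisle.setdefault(aisle, []).append(depth)
    # return routing: enter each visited aisle to its deepest pick and back
    dist = sum(2 * max(depths) for depths in picks_by_aisle.values())
    # cross-aisle travel: depot at aisle 0 to the farthest visited aisle
    dist += 2 * max(picks_by_aisle) * 1.0   # 1.0 = aisle-to-aisle spacing
    return dist

def average_travel(n_orders=10_000, **kw):
    return sum(simulate_order(**kw) for _ in range(n_orders)) / n_orders

# Small orders: fewer order lines (3) than aisles (10), the case treated above.
print(average_travel(n_aisles=10, aisle_len=20.0,
                     lines_per_order=3, access_rate=0.25))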
Abstract:
Punctual delivery under high cost pressure can be ensured by optimizing the in-house tool supply. Within Transfer Project 13 of Collaborative Research Centre 489, using the forging industry as an example, a mathematical model was developed that determines the minimum inventory of forging tools required for production, taking the tool appropriation delay into account.