989 results for Massive Parallelization
Abstract:
We construct a consistent theory of a quantum massive Weyl field. We start by formulating the classical field theory approach to the description of massive Weyl fields. It is demonstrated that the standard Lagrangian formalism cannot be applied to the study of massive first-quantized Weyl spinors. Nevertheless, we show that the classical field theory description of massive Weyl fields can be implemented within the Hamiltonian formalism or using the extended Lagrangian formalism. We then carry out a canonical quantization of the system and discuss the independent ways of quantizing a massive Weyl field. We also compare our results with previous approaches to the treatment of massive Weyl spinors. Finally, a new interpretation of the Majorana condition is proposed.
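For orientation, the textbook two-component form of a massive Weyl (Majorana) field is sketched below in one standard set of conventions; the thesis' own construction, and its reinterpretation of the Majorana condition, may differ:

    % Conventions (assumed): \bar\sigma^\mu = (1, -\vec\sigma), \epsilon = i\sigma^2.
    % A single left-handed two-component spinor \eta with Majorana mass m
    % obeys the massive Weyl equation
    i\,\bar{\sigma}^\mu \partial_\mu \eta - m\,\epsilon\,\eta^* = 0 ,
    % and in four-component language the Majorana (self-conjugacy)
    % condition reads
    \psi^c \equiv C\,\bar{\psi}^{\,T} = \psi .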
Abstract:
Longest edge (nested) algorithms for triangulation refinement in two dimensions are able to produce hierarchies of quality and nested irregular triangulations, as needed both for adaptive finite element methods and for multigrid methods. They can be formulated in terms of the longest edge propagation path (Lepp) and terminal edge concepts, to refine the target triangles and some related neighbors. We discuss a parallel multithread algorithm, where every thread is in charge of refining a triangle t and its associated Lepp neighbors. The thread manages a changing Lepp(t) (an ordered set of triangles of increasing size) both to find a last longest (terminal) edge and to refine the pair of triangles sharing this edge...
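For orientation, here is a minimal serial sketch of the Lepp traversal described above, in Python; the mesh representation (points, tris, edge_map) is hypothetical, and the actual algorithm and data structures in the thesis may differ:

    # Sketch of a serial Lepp traversal over a hypothetical mesh:
    # `points` holds vertex coordinates, `tris` holds vertex-index
    # triples, `edge_map` maps a sorted vertex pair to the triangles
    # sharing that edge.

    def edge_len2(points, a, b):
        """Squared length of the edge between vertices a and b."""
        (ax, ay), (bx, by) = points[a], points[b]
        return (ax - bx) ** 2 + (ay - by) ** 2

    def longest_edge(points, tri):
        """Longest edge of `tri`, as a sorted vertex pair."""
        edges = [(tri[i], tri[(i + 1) % 3]) for i in range(3)]
        a, b = max(edges, key=lambda e: edge_len2(points, *e))
        return tuple(sorted((a, b)))

    def lepp(points, tris, edge_map, t):
        """Follow longest edges from triangle t until a terminal edge
        is found: an edge that is the longest edge of both triangles
        sharing it, or a boundary edge. Returns (path, terminal edge)."""
        path = [t]
        while True:
            cur = path[-1]
            e = longest_edge(points, tris[cur])
            others = [s for s in edge_map[e] if s != cur]
            if not others:                    # boundary edge: terminal
                return path, e
            nb = others[0]
            path.append(nb)
            if longest_edge(points, tris[nb]) == e:
                return path, e                # shared longest edge: terminal

    # Two triangles sharing their common longest edge (the diagonal):
    points = [(0, 0), (4, 0), (0, 3), (4, 3)]
    tris = [(0, 1, 2), (1, 3, 2)]
    edge_map = {}
    for i, tri in enumerate(tris):
        for j in range(3):
            e = tuple(sorted((tri[j], tri[(j + 1) % 3])))
            edge_map.setdefault(e, []).append(i)
    print(lepp(points, tris, edge_map, 0))    # ([0, 1], (1, 2))

In the multithread algorithm, each thread would run such a traversal for its own target triangle, bisect the pair of triangles sharing the terminal edge at the edge midpoint, and repeat until the target triangle itself has been refined.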
Abstract:
Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular, I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was optimized to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was optimized to compare the properties of low-luminosity sources to those of higher-luminosity ones and was thus also used to test emission mechanism models; finally, the XMM–Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample of objects with which to define the average properties of Seyfert galaxies.
Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (between ~2 and 100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, and in particular the photon index ...
Abstract:
Being basic ingredients of numerous daily-life products of significant industrial importance, as well as basic building blocks of biomaterials, charged hydrogels continue to pose a series of unanswered challenges for scientists even after decades of practical applications and intensive research efforts. Despite a rather simple internal structure, it is mainly the unique combination of short- and long-range forces that renders scientific investigation of their characteristic properties quite difficult. Hence, early on, computer simulations were used to link analytical theory and empirical experiments, bridging the gap between the simplifying assumptions of the models and the complexity of real-world measurements. Due to the immense numerical effort, even for high-performance supercomputers, system sizes and time scales were rather restricted until recently, and it has only now become possible to simulate a network of charged macromolecules. This is the topic of the present thesis, which investigates one of the fundamental and at the same time highly fascinating phenomena of polymer research: the swelling behaviour of polyelectrolyte networks. For this purpose an extensible simulation package for research on soft matter systems, ESPResSo for short, was created, which puts particular emphasis on mesoscopic bead-spring models of complex systems. Highly efficient algorithms and a consistent parallelization reduced the computation time needed to solve the equations of motion, even in the presence of long-range electrostatics and large numbers of particles, allowing even expensive calculations and applications to be tackled. Nevertheless, the program has a modular and simple structure, enabling a continuous process of adding new potentials, interactions, degrees of freedom, ensembles, and integrators, while remaining easily accessible to newcomers thanks to a Tcl-script steering level that controls the C-implemented simulation core. Numerous analysis routines provide means to investigate system properties and observables on the fly. Even though analytical theories have agreed on the modeling of networks in past years, our numerical MD simulations show that, even for simple model systems, fundamental theoretical assumptions no longer apply outside a small parameter regime, prohibiting correct predictions of observables. Applying a "microscopic" analysis of the isolated contributions of individual system components, one of the particular strengths of computer simulations, it was then possible to describe the behaviour of charged polymer networks at swelling equilibrium in good solvent and close to the Theta-point by introducing appropriate model modifications. This became possible by enhancing known simple scaling arguments with components our detailed study deemed crucial, from which a generalized model could be constructed. With this model, the final system volume of swollen polyelectrolyte gels could be shown to agree with the results of computer simulations over the entire investigated range of parameters, for different network sizes, charge fractions, and interaction strengths. In addition, the "cell under tension" was presented as a self-regulating approach for predicting the amount of swelling from the system parameters alone: without requiring measured observables as input, minimizing the free energy already suffices to determine the equilibrium behaviour.
In poor solvent the shape of the network chains changes considerably, as their hydrophobicity now counteracts the repulsion of like-charged monomers and drives the polyelectrolytes to collapse. Depending on the chosen parameters a fragile balance emerges, giving rise to fascinating geometrical structures such as the so-called pearl-necklaces. This behaviour, known from single-chain polyelectrolytes under similar environmental conditions and also theoretically predicted, could be detected for the first time for networks as well. An analysis of the total structure factors confirmed the first evidence for the existence of such structures found in experimental results.
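As a toy illustration of the free-energy-minimization idea mentioned above (emphatically not the thesis' "cell under tension" model; all parameter values are illustrative), a schematic gel free energy balancing Gaussian chain elasticity against the ideal-gas entropy of the trapped counterions already reproduces the classic osmotic swelling scaling Q* ~ (fN)^(3/2):

    import math

    # Toy free energy per chain of a charged gel, as a function of the
    # swelling ratio Q = V / V_dry, for chains of N monomers carrying a
    # charged fraction f (illustrative values, in units of kT).
    def free_energy(Q, N=100, f=0.25):
        elastic = 1.5 * Q ** (2.0 / 3.0)            # Gaussian stretching
        counterion = f * N * math.log(f * N / Q)    # counterion entropy
        return elastic + counterion

    # Crude grid minimisation; a proper optimiser would do this better.
    Qs = [10 ** (i / 100) for i in range(500)]      # Q from 1 to ~10^5
    Qstar = min(Qs, key=free_energy)
    print(f"Q* = {Qstar:.0f}")   # ~125 = (f*N)**1.5 for these values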
Abstract:
Flood disasters are a major cause of fatalities and economic losses, and several studies indicate that global flood risk is currently increasing. In order to reduce and mitigate the impact of river flood disasters, the current trend is to integrate existing structural defences with non-structural measures. This calls for a wider application of advanced hydraulic models for flood hazard and risk mapping, engineering design, and flood forecasting systems. Within this framework, two different hydraulic models for large-scale analysis of flood events have been developed. The two models, named CA2D and IFD-GGA, adopt an integrated approach based on the diffusive shallow water equations and a simplified finite volume scheme. The models are also designed for massive code parallelization, which is of key importance in reducing run times in large-scale and high-detail applications. The two models were first applied to several numerical cases to test the reliability and accuracy of different model versions. Then, the most effective versions were applied to different real flood events and flood scenarios. The IFD-GGA model showed serious problems that prevented further applications. On the contrary, the CA2D model proved to be fast and robust, and able to reproduce 1D and 2D flow processes in terms of water depth and velocity. In most applications the accuracy of the model results was good and adequate for large-scale analysis. Where complex flow processes occurred, local errors were observed due to the model approximations; however, they did not compromise the correct representation of the overall flow processes. In conclusion, the CA2D model can be a valuable tool for the simulation of a wide range of flood event types, including lowland and flash flood events.
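A minimal sketch of the kind of diffusive-wave (zero-inertia) cell update that such models build on is given below; the scheme, names, parameters, and the periodic boundaries implied by np.roll are illustrative and are not the actual CA2D discretisation:

    import numpy as np

    def step(h, z, dx=10.0, dt=1.0, n_man=0.05):
        """Advance water depth h (m) by dt on a regular grid with bed
        elevation z (m), exchanging water between adjacent cells via
        Manning's formula driven by the water-surface slope.
        Explicit update: dt must satisfy a CFL-like stability limit."""
        eta = z + h                                  # water surface
        h_new = h.copy()
        for axis in (0, 1):                          # x and y faces
            eta_r = np.roll(eta, -1, axis=axis)
            h_r = np.roll(h, -1, axis=axis)
            slope = (eta - eta_r) / dx
            h_face = np.where(slope > 0, h, h_r)     # upwind depth
            q = np.sign(slope) * h_face ** (5.0 / 3.0) \
                * np.sqrt(np.abs(slope)) / n_man     # unit discharge
            dh = q * dt / dx
            h_new -= dh                              # outflow
            h_new += np.roll(dh, 1, axis=axis)       # neighbour inflow
        return np.maximum(h_new, 0.0)                # crude dry limiter

    # Release a square block of water on a flat bed and let it spread:
    z = np.zeros((50, 50)); h = np.zeros((50, 50)); h[20:30, 20:30] = 1.0
    for _ in range(200):
        h = step(h, z, dt=0.2)

Because each cell exchanges mass only with its immediate neighbours, every cell can be updated independently within a step, which is what makes such schemes natural candidates for the massive code parallelization the thesis targets.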
Abstract:
Massive parallel robots (MPRs) driven by discrete actuators are force-regulated robots that undergo continuous motions despite being commanded through a finite number of states only. Designing a real-time control for such systems requires fast and efficient methods for solving their inverse static analysis (ISA), which is a challenging problem and the subject of this thesis. In particular, five artificial intelligence methods are proposed to investigate the on-line computation and the generalization error of the ISA problem for a class of MPRs featuring three-state force actuators and one degree of revolute motion.
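Purely as an illustration of how an ISA of this kind can be cast as supervised learning (none of the names, the linear stand-in for the statics, or the model choice below come from the thesis), one can sample the forward static map and learn its inverse:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.multioutput import MultiOutputClassifier

    rng = np.random.default_rng(0)
    n_act = 6                                  # three-state actuators

    # Placeholder linear forward statics: actuator states -> pose.
    W = rng.standard_normal((n_act, 3))
    def forward_statics(states):
        return states @ W

    # Sample state combinations and their poses, then learn the inverse
    # map pose -> states (one 3-class output per actuator).
    states = rng.integers(0, 3, size=(5000, n_act))
    poses = forward_statics(states)
    model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100))
    model.fit(poses, states)

    true = rng.integers(0, 3, size=(1, n_act))
    print(true, model.predict(forward_statics(true)))
    # The two need not match exactly: the inverse problem is
    # multi-valued, which is part of what makes the real ISA hard.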
Abstract:
The aim of my thesis is to parallelize the Weighted Histogram Analysis Method (WHAM), a popular algorithm used to calculate the Free Energy of a molecular system in Molecular Dynamics simulations. WHAM works in post-processing, in combination with another algorithm called Umbrella Sampling. Umbrella Sampling adds a bias to the potential energy of the system in order to force the system to sample a specific region of configurational space. N independent simulations are performed in order to sample the whole region of interest. Subsequently, the WHAM algorithm is used to estimate the original system energy starting from the N atomic trajectories. The parallelization of WHAM has been performed with CUDA, a language that allows one to exploit the GPUs of NVIDIA graphics cards, which have a parallel architecture. The parallel implementation can substantially speed up WHAM execution compared with previous serial CPU implementations. However, the WHAM CPU code exhibits timing bottlenecks at very high numbers of iterations. The algorithm has been written in C++ and executed on UNIX systems equipped with NVIDIA graphics cards. The results were satisfactory, with a performance increase when the model was executed on graphics cards of higher compute capability. Nonetheless, the GPUs used to test the algorithm are quite old and not designed for scientific computing. It is likely that a further performance increase would be obtained if the algorithm were executed on clusters of GPUs with a high level of computational efficiency. The thesis is organized as follows: I first describe the mathematical formulation of Umbrella Sampling and the WHAM algorithm, with their applications to the study of ionic channels and to Molecular Docking (Chapter 1); I then present the CUDA architectures used to implement the model (Chapter 2); finally, the results obtained on model systems are presented (Chapter 3).
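For reference, the self-consistent WHAM iteration itself is compact; a serial NumPy sketch (with illustrative inputs: hist[i, b] counts samples of window i in bin b, bias[i, b] is the umbrella potential of window i at bin centre b, beta = 1/kT) is:

    import numpy as np

    def wham(hist, bias, beta, n_iter=2000, tol=1e-10):
        """Standard WHAM equations, iterated to self-consistency.
        Returns the unbiased bin probabilities p and the per-window
        free energies f (gauge-fixed so f[0] = 0)."""
        n_win, n_bin = hist.shape
        N = hist.sum(axis=1)                     # samples per window
        f = np.zeros(n_win)
        for _ in range(n_iter):
            # p_b = sum_i h_ib / sum_i N_i exp(beta * (f_i - U_ib))
            denom = (N[:, None] * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
            p = hist.sum(axis=0) / denom
            # exp(-beta f_i) = sum_b p_b exp(-beta U_ib)
            f_new = -np.log((p[None, :] * np.exp(-beta * bias)).sum(axis=1)) / beta
            f_new -= f_new[0]                    # fix the gauge
            if np.max(np.abs(f_new - f)) < tol:
                break
            f = f_new
        return p / p.sum(), f

    # Tiny smoke test: two harmonic umbrellas on a 1-D coordinate.
    x = np.linspace(-1.0, 1.0, 41)
    centres = np.array([-0.3, 0.3])
    bias = 50.0 * (x[None, :] - centres[:, None]) ** 2
    hist = np.exp(-bias)       # stand-in for sampled histograms
    p, f = wham(hist, bias, beta=1.0)

Within one iteration, every bin's denominator and every window's free-energy update can be computed independently, which is exactly the kind of structure that maps well onto the one-thread-per-element execution model of CUDA.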
Abstract:
In this thesis two related topics are investigated: the first stages of the process of massive star formation, examining the physical conditions and properties of massive clumps in different evolutionary stages and their CO depletion; and the influence that high-mass stars have on the nearby material and on the activity of star formation. I characterise the gas and dust temperature, mass, and density of a sample of massive clumps, and analyse the variation of these properties from quiescent clumps, without any sign of active star formation, to clumps likely hosting a zero-age main-sequence star. I briefly discuss CO depletion and recent observations of several molecular species, tracers of Hot Cores and/or shocked gas, in a subsample of these clumps. The issue of CO depletion is addressed in more detail in a larger sample consisting of the brightest sources in the ATLASGAL survey: using a radiative transfer code, I investigate how the depletion changes from dark clouds to more evolved objects, and compare its evolution to what happens in the low-mass regime. Finally, I derive the physical properties of the molecular gas in the photon-dominated region adjacent to the HII region G353.2+0.9 in the vicinity of Pismis 24, a young, massive cluster containing some of the most massive and hottest stars known in our Galaxy. I derive the IMF of the cluster and study the star formation activity in its surroundings. Much of the data analysis is done with a Bayesian approach; therefore, a separate chapter is dedicated to the concepts of Bayesian statistics.
Abstract:
The recent availability of multi-wavelength data has revealed the presence of large reservoirs of warm and cold gas and dust in the innermost regions of the majority of massive elliptical galaxies. To prove an internal origin of the cold and warm gas, it is of crucial importance to investigate the spatially distributed cooling process that occurs because of non-linear density perturbations and subsequent thermal instabilities. The first goal of this thesis is to investigate the internal origin of the warm and cold phases, with numerical simulations as the tool of analysis. The way in which a spatially distributed cooling process originates has been examined, and the amount of off-centre gas mass that cools under different, differently characterized AGN feedback mechanisms has been quantified. This thesis demonstrates that the aforementioned non-linear density perturbations originate and develop from AGN feedback mechanisms in a natural fashion. An internal origin of the warm phase from the once hot gas is shown to be possible, and the computed velocity dispersions of the ionized and hot gas are similar. The cold gas as well can originate from the cooling process: indeed, it has been estimated that the surrounding stellar radiation, one of the most plausible sources of ionization of the warm gas, does not manage to keep all the gas at 10^4 K ionized. Therefore, the cooled gas undergoes further cooling, which can bring the warm phase to lower temperatures. However, gas that has cooled from the hot phase is expected to be dustless; nonetheless, a large fraction of early-type galaxies have detectable dust in their cores, both concentrated in filamentary and disky structures and spread over larger regions. Therefore, a regularly rotating disk of cold and dusty gas has been included in the simulations, making a new quantitative investigation of the spatially distributed cooling process essential: the included dust embedded in the cold gas does play a role in promoting and enhancing the cooling. The fate of the dust initially embedded in the cold gas has been investigated, as has the role of AGN feedback mechanisms in dragging (when able) cold and dusty gas from the cores of massive ellipticals up to large radii.
Abstract:
One of the most important fields of research involving astrophysicists is the understanding of the Large Scale Structure of the universe. The principles of Structure Formation are by now well established, and they form the basis of the so-called "Standard Cosmological Model". Until the early 2000s, the theory that successfully explained the statistical properties of the universe was so-called "Standard Perturbation Theory". Numerical simulations and observations of improving quality exposed the limits of this theory in describing the behaviour of the power spectrum on scales beyond the linear regime. This pushed theorists to find a new perturbative approach capable of extending the validity of the analytical results. In this thesis the "Renormalized Perturbation Theory" and "Multipoint Propagator" theories are discussed. These new perturbative theories are the theoretical basis of BisTeCca, an original numerical code that computes the power spectrum at 2 loops and the bispectrum at 1 loop in perturbative order. As an application, we used BisTeCca to analyse bispectra in models of the universe beyond the standard LambdaCDM cosmology, introducing a massive-neutrino component. Finally, we show the effects on the power spectrum and bispectrum obtained with our BisTeCca code, and we compare universe models with different neutrino masses.
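For orientation, the standard expressions these approaches build on can be sketched as follows (conventions vary between authors; these are not taken from the thesis itself). RPT splits the nonlinear power spectrum into a damped linear part plus a mode-coupling term, while the multipoint-propagator expansion organises the full spectrum as a sum over propagators Γ^(n):

    % RPT decomposition: damped linear part + mode coupling
    P(k,z) = G^2(k,z)\, P_0(k) + P_{\rm MC}(k,z)

    % Multipoint-propagator (Gamma) expansion, with G = \Gamma^{(1)}
    % and P_0 the initial power spectrum:
    P(k) = \sum_{n \ge 1} n! \int \mathrm{d}^3 q_1 \cdots \mathrm{d}^3 q_n \,
           \delta_{\rm D}\!\left(\mathbf{k} - \mathbf{q}_1 - \cdots - \mathbf{q}_n\right)
           \left[\Gamma^{(n)}(\mathbf{q}_1, \dots, \mathbf{q}_n)\right]^2
           P_0(q_1) \cdots P_0(q_n)

Truncations of this sum at low n yield the loop-level power spectra and bispectra of the kind such codes compute.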
Abstract:
Thesis on an experiment aimed at building a cognitive map of the decision-making process and the cognitive aspects of a person during a match of a MOBA-genre videogame. The thesis also presents elements of the theory of cognitive maps, of network measures, and of the cognitive aspects of videogames in general.
Abstract:
To date, obesity affects a substantial population in industrialised countries. Due to the increased awareness of obesity-related morbidity, efficient dietary regimens, and the recent successes of bariatric surgery, there is now a high demand for body contouring surgery to correct skin redundancies after massive weight loss. The known risks of this type of surgery are mainly wound-healing complications and, more rarely, thromboembolic or respiratory complications. We present two female patients (23 and 39 years of age) who, in spite of standard positioning and precautions, developed sciatic neuropathy after combined body contouring procedures, including abdominoplasty and inner thigh lift. Complete functional loss of the sciatic nerve was found by clinical and electroneurographic examination on the left side in patient one and bilaterally in patient two. Full nerve conductance recovery was obtained after 6 months in both patients. Although the occurrence of spontaneous neuropathies after heavy weight loss is well documented, this is the first report describing the appearance of such a phenomenon following body contouring surgery. One theoretical explanation may be compression of the nerve during the semirecumbent positioning combined with hip flexion and abduction, which was required for abdominal closure and simultaneous access to the inner thighs. We advise avoiding this positioning and including the risk of sciatic neuropathy in the routine preoperative information given to patients scheduled for body contouring surgery after heavy weight loss.
Abstract:
Early prediction of massive transfusion (MT) is critical in the management of severely injured trauma patients. Variables available early after injury, including physiologic, laboratory, and rotational thromboelastometry (ROTEM) parameters, were evaluated as predictors of the need for MT.
Abstract:
Recombinant activated factor VII (rFVIIa) is used off-label for massive bleeding. There is no convincing evidence of the benefits of this practice, and the minimal effective dose is unknown. The aim of this study was to evaluate our in-house guideline recommending a low dose of 60 μg/kg for the off-label use of rFVIIa.