18 results for Decomposition of Ranked Models

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

Holding the major share of stellar mass in galaxies and being old and passively evolving, early-type galaxies (ETGs) are the primary probes for investigating these various evolution scenarios, as well as useful means of providing insights on cosmological parameters. In this thesis work I focused specifically on ETGs and on their capability to constrain galaxy formation and evolution; in particular, the principal aims were to derive some of the ETG evolutionary parameters, such as age, metallicity and star formation history (SFH), and to study their age-redshift and mass-age relations. In order to infer galaxy physical parameters, I used the public code STARLIGHT: this program provides a best fit to the observed spectrum from a combination of many theoretical models defined in user-made libraries. In tests on simulated spectra, the comparison between the output and input light-weighted ages shows good agreement starting from SNRs of ∼ 10, with a bias of ∼ 2.2% and a dispersion of ∼ 3%; metallicities and SFHs are also well reproduced. In the second part of the thesis I performed an analysis on real data, starting from Sloan Digital Sky Survey (SDSS) spectra. I found that galaxies get older with cosmic time and with increasing mass (for a fixed redshift bin); absolute light-weighted ages, moreover, turn out to be independent of the fitting parameters and of the synthetic models used. Metallicities are very similar to each other and clearly consistent with those derived from the Lick indices. The predicted SFH indicates the presence of a double burst of star formation. Velocity dispersions and extinctions are also well constrained, following the expected behaviours. As a further step, I also fitted single SDSS spectra (with SNR ∼ 20) to verify that stacked spectra gave the same results without introducing any bias: this is an important check if one wants to apply the method at higher z, where stacked spectra are necessary to increase the SNR.
Our upcoming aim is to apply this approach to galaxy spectra obtained from higher-redshift surveys, such as BOSS (z ∼ 0.5), zCOSMOS (z ∼ 1), K20 (z ∼ 1), GMASS (z ∼ 1.5) and, eventually, Euclid (z ∼ 2). Indeed, I am currently carrying out a preliminary study to establish the applicability of the method to lower-resolution, as well as higher-redshift (z ∼ 2), spectra such as those of Euclid.
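The core fitting idea can be illustrated with a minimal sketch (a generic least-squares combination of templates, not STARLIGHT's actual algorithm; templates and numbers below are invented):

```python
import numpy as np

# Toy version of full-spectrum fitting: model the observed spectrum as a
# linear combination of template spectra; the best-fit weights trace the
# light-weighted stellar content.
rng = np.random.default_rng(0)
wavelengths = np.linspace(4000.0, 7000.0, 500)  # Angstrom

# Two toy templates: a blue "young" population and a red "old" one.
young = np.exp(-wavelengths / 3000.0)
old = np.exp(-(7000.0 - wavelengths) / 3000.0)
templates = np.column_stack([young, old])

# Synthetic observation: 30% young + 70% old light, noise at SNR ~ 10.
true_weights = np.array([0.3, 0.7])
signal = templates @ true_weights
observed = signal + rng.normal(0.0, signal.mean() / 10.0, signal.size)

# A least-squares fit recovers the light-weighted contributions.
weights, *_ = np.linalg.lstsq(templates, observed, rcond=None)
print(weights)  # close to true_weights
```

Real codes add non-negativity constraints, dust extinction and kinematic broadening on top of this linear core.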

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, numerical methods aiming at determining the eigenfunctions, their adjoints and the corresponding eigenvalues of the two-group neutron diffusion equations representing any heterogeneous system are investigated. First, the classical power iteration method is modified so that the calculation of modes higher than the fundamental mode becomes possible. Thereafter, the Explicitly-Restarted Arnoldi method, belonging to the class of Krylov subspace methods, is touched upon. Although the modified power iteration method is a computationally expensive algorithm, its main advantage is its robustness, i.e. the method always converges to the desired eigenfunctions without requiring the user to set any parameters in the algorithm. On the other hand, the Arnoldi method, which requires some parameters to be defined by the user, is a very efficient method for calculating eigenfunctions of large sparse systems of equations with minimal computational effort. These methods are thereafter used for off-line analysis of the stability of Boiling Water Reactors. Since several oscillation modes are usually excited (global and regional oscillations) when unstable conditions are encountered, characterizing the stability of the reactor using, for instance, the Decay Ratio as a stability indicator might be difficult if the contributions from each of the modes are not separated from each other. Such a modal decomposition is applied to a stability test performed at the Swedish Ringhals-1 unit in September 2002, after using the Arnoldi method to pre-calculate the different eigenmodes of the neutron flux throughout the reactor. The modal decomposition clearly demonstrates the excitation of both the global and regional oscillations. Furthermore, such oscillations are found to be intermittent, with a time-varying phase shift between the first and second azimuthal modes.
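The first method described above can be sketched with a generic textbook version of power iteration with Gram-Schmidt deflation (this is an illustration for a small symmetric matrix, not the thesis implementation for the diffusion equations):

```python
import numpy as np

# Power iteration modified to reach modes beyond the fundamental one: the
# components along already-converged eigenvectors are projected out
# (deflation), so each pass converges to the next-dominant mode.
def power_iteration_deflated(A, n_modes=3, tol=1e-10, max_iter=10000):
    rng = np.random.default_rng(1)
    eigvals, eigvecs = [], []
    for _ in range(n_modes):
        v = rng.normal(size=A.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(max_iter):
            for u in eigvecs:          # deflate previously found modes
                v -= (u @ v) * u
            w = A @ v
            for u in eigvecs:
                w -= (u @ w) * u
            w /= np.linalg.norm(w)
            # converged when the direction stops changing (up to sign)
            if min(np.linalg.norm(w - v), np.linalg.norm(w + v)) < tol:
                v = w
                break
            v = w
        eigvals.append(v @ A @ v)      # Rayleigh quotient
        eigvecs.append(v)
    return np.array(eigvals), np.column_stack(eigvecs)

A = np.diag([4.0, 3.0, 1.0])  # symmetric test matrix with known spectrum
vals, vecs = power_iteration_deflated(A)
print(vals)  # ≈ [4. 3. 1.], dominant modes first
```

Deflation relies on the orthogonality of the eigenvectors, which holds here because the test matrix is symmetric; the adjoint eigenfunctions play the analogous role in the non-self-adjoint diffusion problem.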

Relevance:

100.00%

Publisher:

Abstract:

In the collective imagination, a robot is a human-like machine, like the androids of science fiction. However, the robots you will encounter most frequently are machines that do work that is too dangerous, boring or onerous for humans. Most of the robots in the world are of this type. They can be found in the automotive, medical, manufacturing and space industries. A robot, therefore, is a system that contains sensors, control systems, manipulators, power supplies and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, which include the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object, i.e. the physical attributes that may influence a grasp. Humans solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, finding a grasp for an object using information about already known objects. But humans can select the best grasp from a vast repertoire, not only by considering the physical attributes of the object to grasp, but also by taking into account the effect they want to obtain. This is why, in our case, the study in the area of robot manipulation focuses on grasping and on integrating symbolic tasks with data gained through sensors. The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This data representation has several advantages: it takes into account the uncertainty of the real world, allowing us to deal with sensor noise; it encodes a notion of causality; and it provides a unified network for learning.
Since the network is currently implemented by hand from human expert knowledge, it is very interesting to implement an automated method to learn its structure: in the future, more tasks and object features may be introduced, and a complex network design based only on human expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyze the real data used in the network modeled by the human expert, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
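Score-based structure learning of the kind discussed above can be illustrated with a toy example: two candidate structures over binary variables are compared with the BIC score, which trades off data fit against model complexity. The variables, data and score choice below are generic illustrations, not the thesis network:

```python
import numpy as np
from itertools import product

def bic_score(data, structure):
    """BIC of a discrete Bayesian network; structure maps node -> parents."""
    n = len(next(iter(data.values())))
    loglik, n_params = 0.0, 0
    for node, parents in structure.items():
        n_params += 2 ** len(parents)  # one Bernoulli parameter per parent state
        for pa in product([0, 1], repeat=len(parents)):
            mask = np.ones(n, dtype=bool)
            for p, v in zip(parents, pa):
                mask &= data[p] == v
            m = int(mask.sum())
            if m == 0:
                continue
            k = int((data[node][mask] == 1).sum())
            for c in (k, m - k):       # maximum-likelihood log-probabilities
                if c > 0:
                    loglik += c * np.log(c / m)
    return loglik - 0.5 * n_params * np.log(n)

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2000)
b = np.where(rng.random(2000) < 0.9, a, 1 - a)  # B copies A 90% of the time
data = {"A": a, "B": b}

dependent = {"A": (), "B": ("A",)}
independent = {"A": (), "B": ()}
print(bic_score(data, dependent) > bic_score(data, independent))  # True
```

A hill-climbing search would apply this score repeatedly while adding, removing or reversing edges.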

Relevance:

100.00%

Publisher:

Abstract:

One of the most important research fields involving astrophysicists is the understanding of the Large-Scale Structure of the universe. The principles of Structure Formation are by now well established and form the basis of the so-called "Standard Cosmological Model". Until the early 2000s, the theory that successfully explained the statistical properties of the universe was so-called "Standard Perturbation Theory". Numerical simulations and better-quality observations highlighted the limits of this theory in describing the behaviour of the power spectrum on scales beyond the linear regime. This pushed theorists to look for a new perturbative approach, capable of extending the validity of the analytical results. In this thesis we discuss the "Renormalized Perturbation Theory" and "Multipoint Propagator" theories. These new perturbative theories are the theoretical basis of BisTeCca, an original numerical code that computes the power spectrum at 2 loops and the bispectrum at 1 loop in perturbative order. As an application example, we used BisTeCca to analyse bispectra in universe models beyond the standard LambdaCDM cosmology, introducing a massive neutrino component. Finally, we show the effects on the power spectrum and bispectrum obtained with our BisTeCca code, and we compare universe models with different neutrino masses.

Relevance:

100.00%

Publisher:

Abstract:

The aim of this thesis is to highlight the connections between monoidal categories, the Yang-Baxter equation and the integrability of certain models. The main object of our work has been the Frobenius monoid and its connection with C*-algebras. In this context, all of the proofs rely on the machinery of diagrammatic algebra. In the course of the thesis work, these proofs have been reproduced in the more familiar language of multilinear algebra, in order to make the results accessible to a wider range of potential readers.
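For reference, the Yang-Baxter equation that links these structures to integrability can be written, in the standard notation of the general literature (not taken from the thesis itself):

```latex
% Quantum Yang-Baxter equation on V \otimes V \otimes V,
% where R_{ij} acts on the i-th and j-th tensor factors.
R_{12}\, R_{13}\, R_{23} \;=\; R_{23}\, R_{13}\, R_{12}
```

A solution $R$ of this equation (an R-matrix) is what underlies the integrability of the models mentioned above.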

Relevance:

100.00%

Publisher:

Abstract:

Monomer-dimer models are among the models in statistical mechanics that have found application in many areas of science, ranging from biology to the social sciences. The model describes a many-body system in which monoatomic and diatomic particles subject to hard-core interactions are deposited on a graph. In our work we provide an extension of this model to higher-order particles. The aim of our work is threefold. First, we study the thermodynamic properties of the newly introduced model; we solve some regular cases analytically and find that, unlike the original model, our extension admits phase transitions. Then we tackle the inverse problem, from both an analytical and a numerical perspective. Finally, we propose an application to aggregation phenomena in virtual messaging services.
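As a minimal illustration of a monomer-dimer partition function, consider the classical dimer case on a path graph (this is the textbook baseline, not the higher-order extension introduced in the thesis):

```python
# Monomer-dimer partition function on a path with n vertices, with dimer
# activity x: either the last vertex carries a monomer, or a dimer covers
# the last edge, giving the recursion
#   Z(n) = Z(n-1) + x * Z(n-2),  with Z(0) = Z(1) = 1.
def path_partition_function(n, x=1.0):
    if n <= 1:
        return 1.0
    return path_partition_function(n - 1, x) + x * path_partition_function(n - 2, x)

print(path_partition_function(4))  # 5.0: the five matchings of a 4-vertex path
```

At x = 1 the recursion reproduces the Fibonacci count of matchings; general graphs require summing over all matchings, which is where the hard-core constraint makes the problem interesting.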

Relevance:

100.00%

Publisher:

Abstract:

Upgrading hydrogen to a valuable fuel is a central topic in modern research, due to its high availability and low price. Because of the difficulties of hydrogen storage, different pathways are still under investigation. A promising route is liquid-phase chemical hydrogen storage materials, because they can lead to greener transformation processes with on-line generation of hydrogen for fuel cells. The aim of my work was the optimization of catalysts, prepared by the sol-immobilisation method (a typical colloidal method), for the decomposition of formic acid. Formic acid was selected because it is a versatile, renewable reagent for green synthesis studies. The first aim of my research was the synthesis and optimisation of Pd nanoparticles by sol-immobilisation, in order to achieve better catalytic performance and to investigate the effects of particle size, oxidation state, the role of the stabiliser and the nature of the support. Palladium was chosen because it is a well-known active metal for the catalytic decomposition of formic acid. Noble-metal nanoparticles of palladium were immobilized on carbon (charcoal) and on titania. In the second part, the catalytic performance of the "homemade" Pd/C catalyst was compared with that of a commercial Pd/C, and the effect of different monometallic and bimetallic (AuxPdy) systems on the catalytic decomposition of formic acid was investigated. The training period for the production of this work was carried out at the University of Cardiff (group of Dr. N. Dimitratos).
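For context, formic acid is known to decompose through two competing channels, and selectivity toward dehydrogenation is what makes a catalyst attractive for fuel-cell use (this is textbook chemistry, not a result of the thesis):

```latex
% Desired channel (dehydrogenation):
\mathrm{HCOOH} \;\longrightarrow\; \mathrm{H_2} + \mathrm{CO_2}
% Unwanted channel (dehydration; CO poisons fuel-cell catalysts):
\mathrm{HCOOH} \;\longrightarrow\; \mathrm{H_2O} + \mathrm{CO}
```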

Relevance:

100.00%

Publisher:

Abstract:

As part of their digital transformation, many organizations are adopting new technologies to support the development, deployment and management of their microservice-based architectures in cloud environments and across cloud providers. In this scenario, service and event meshes are emerging as dynamic, configurable infrastructure layers that facilitate complex interactions and the management of microservice-based applications and cloud services. The goal of this work is to analyse open-source mesh solutions (Istio, Linkerd, Apache EventMesh) from a performance standpoint, when used to manage communication between workflow-based microservice applications within a cloud environment. To this end, a system was built to deploy each of the components within a single cluster and in a multi-cluster environment. Metric collection and aggregation were carried out with a custom system compatible with the Prometheus data format. The tests allowed us to evaluate the performance of each component together with its effectiveness. Overall, while the tested service mesh implementations proved to be mature, the event mesh solution we used appeared to be a still-immature technology, owing to numerous operational problems.
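As an illustration of the metric handling involved, a minimal reader for the Prometheus text exposition format (the format the custom collection system is compatible with) might look like the following; the metric names, labels and values are invented:

```python
import re

# One sample per line: metric_name{label="value",...} numeric_value
LINE = re.compile(r'^([A-Za-z_][A-Za-z0-9_]*)(?:\{([^}]*)\})?\s+([-+0-9.eE]+)$')

def parse_metrics(text):
    samples = []
    for raw in text.strip().splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comment metadata
        m = LINE.match(line)
        if m:
            name, labels, value = m.groups()
            samples.append((name, labels or "", float(value)))
    return samples

text = """
# TYPE request_duration_seconds summary
request_duration_seconds{service="istio-proxy",quantile="0.99"} 0.042
request_duration_seconds_count{service="istio-proxy"} 1500
"""
samples = parse_metrics(text)
print(samples)
```

Production systems would use a Prometheus client library rather than hand-rolled parsing; the sketch only shows the shape of the data being aggregated.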

Relevance:

100.00%

Publisher:

Abstract:

The study of the tides of celestial bodies can unveil important information about their interiors as well as their orbital evolution. The most important tidal parameter is the Love number, which defines the deformation of the gravity field due to an external perturbing body. Tidal dissipation is very important because it drives the secular orbital evolution of natural satellites; this is even more important in the case of the Jupiter system, where three of the Galilean moons, Io, Europa and Ganymede, are locked in an orbital resonance in which the ratio of their mean motions is 4:2:1. This is called the Laplace resonance. Tidal dissipation is described by the dissipation ratio k2/Q, where Q is the quality factor, which describes the damping of the system. The goal of this thesis is to analyze and compare the two main tidal dynamical models, Mignard's model and the gravity-field-variation model, in order to understand the differences between them, with a main focus on the single-moon case with Io, which helps in understanding the differences between the two models without overcomplicating the dynamics. In this work we have verified and validated both models, compared them, and pinpointed the main differences and features that characterize each model. Mignard's model treats the tides directly as a force, while the gravity-field-variation model describes the tides through a change of the spherical harmonic coefficients. Finally, we have also briefly analyzed the difference between the single-moon and two-moon cases, and we have confirmed that the governing equations describing the change of semi-major axis and eccentricity no longer hold when more moons are present.
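The resonance lock mentioned above is easy to verify numerically, using approximate mean motions from standard ephemerides (this check is for illustration, not a computation from the thesis):

```python
# Mean motions of Io, Europa and Ganymede in degrees/day (approximate).
n_io, n_europa, n_ganymede = 203.489, 101.375, 50.318

# The Laplace relation n_Io - 3*n_Europa + 2*n_Ganymede vanishes, even
# though the individual period ratios are only approximately 4:2:1.
laplace = n_io - 3.0 * n_europa + 2.0 * n_ganymede
print(round(laplace, 3))            # ≈ 0: the resonance lock
print(round(n_io / n_ganymede, 3))  # ≈ 4.044: only roughly 4:1
```

The exactness of the three-body relation, compared with the inexact pairwise ratios, is precisely what makes the Laplace resonance dynamically special.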

Relevance:

100.00%

Publisher:

Abstract:

The decomposition of Feynman integrals into a basis of independent master integrals is an essential ingredient of high-precision theoretical predictions, and it often represents a major bottleneck when processes with a high number of loops and legs are involved. In this thesis we present a new algorithm for the decomposition of Feynman integrals into master integrals within the formalism of intersection theory. Intersection theory is a novel approach that makes it possible to decompose Feynman integrals into master integrals via projections, based on a scalar product between Feynman integrals called the intersection number. We propose a new, purely rational algorithm for the calculation of intersection numbers of differential $n$-forms that avoids the presence of algebraic extensions. We show how expansions around non-rational poles, which are a bottleneck of existing algorithms for intersection numbers, can be avoided by performing a series expansion around a rational polynomial irreducible over $\mathbb{Q}$, which we refer to as a $p(z)$-adic expansion. The algorithm we developed has been implemented and tested on several diagrams, at both one and two loops.
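Schematically, the projection onto master integrals via intersection numbers follows the master decomposition formula of the intersection-theory literature (notation indicative, not specific to this thesis):

```latex
I \;=\; \sum_{i=1}^{\nu} c_i\, J_i ,
\qquad
c_i \;=\; \sum_{j=1}^{\nu} \langle I \,|\, h_j \rangle \,\bigl(\mathbf{C}^{-1}\bigr)_{ji} ,
\qquad
\mathbf{C}_{ij} \;=\; \langle e_i \,|\, h_j \rangle ,
```

where the $J_i$ are the $\nu$ master integrals, $\langle \cdot \,|\, \cdot \rangle$ denotes the intersection number, and $e_i$, $h_j$ are bases of the relevant twisted (co)homology groups.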

Relevance:

100.00%

Publisher:

Abstract:

The spine is one of the main sites for the development of bone metastases. Metastases alter the mechanical properties of the vertebra, weakening its structure and inducing spinal instability. In silico medicine and finite element (FE) models have found a place in the study of the mechanical behaviour of vertebrae, enabling an assessment of their mechanical properties even in the presence of metastases. In this study I validated the displacement fields predicted by microFE models of human vertebrae, with and without metastases, against displacements measured by Digital Volume Correlation (DVC). Four specimens from human donors were used, each consisting of one healthy vertebra and one vertebra with a lytic metastasis. For each vertebra, a homogeneous, linear, isotropic microFE model was built from high-resolution microCT image stacks (voxel size = 39 μm). The displacements obtained by DVC in the proximal and distal slices of each vertebra were imposed as boundary conditions. The microFE models showed good predictive ability for internal displacements, for both control and metastatic vertebrae. For displacement ranges above 100 μm, R2 was between 0.70 and 0.99 and RMSE% was between 1.01% and 21.88%. Analysis of the strain fields predicted by the microFE models highlighted regions of higher strain in the metastatic vertebrae, particularly near the lesions. These results agree with the experimental DVC measurements. It can therefore be assumed that the homogeneous, linear, isotropic microFE model in the elastic regime yields reliable results for both healthy and metastatic vertebrae. The validation procedure implemented here could be used to further investigate the mechanical properties of blastic lesions.
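The two validation metrics quoted above can be computed as follows; the data are synthetic, and normalising the RMSE by the measured displacement range is an assumption made here for illustration:

```python
import numpy as np

# R^2 and RMSE% between model-predicted and DVC-measured displacements.
rng = np.random.default_rng(3)
measured = rng.uniform(0.0, 150.0, 200)           # DVC displacements [um]
predicted = measured + rng.normal(0.0, 5.0, 200)  # model with small error

residuals = measured - predicted
ss_res = np.sum(residuals**2)
ss_tot = np.sum((measured - measured.mean())**2)
r2 = 1.0 - ss_res / ss_tot
rmse_pct = 100.0 * np.sqrt(np.mean(residuals**2)) / np.ptp(measured)

print(round(r2, 3), round(rmse_pct, 2))
```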

Relevance:

100.00%

Publisher:

Abstract:

The cellular basis of cardiac pacemaking activity, and specifically the quantitative contributions of particular mechanisms, is still debated. Reliable computational models of sinoatrial nodal (SAN) cells may provide mechanistic insights, but competing models are built from different data sets and with different underlying assumptions. To understand quantitative differences between alternative models, we performed thorough parameter sensitivity analyses of the SAN models of Maltsev & Lakatta (2009) and Severi et al. (2012). Model parameters were randomized to generate a population of cell models with different properties; simulations performed with each set of random parameters generated 14 quantitative outputs that characterized cellular activity, and regression methods were used to analyze the population behavior. Clear differences between the two models were observed at every step of the analysis. Specifically: (1) SR Ca2+ pump activity had a greater effect on SAN cell cycle length (CL) in the Maltsev model; (2) conversely, parameters describing the funny current (If) had a greater effect on CL in the Severi model; (3) changes in rapid delayed rectifier conductance (GKr) had opposite effects on action potential amplitude in the two models; (4) within the population, a greater percentage of model cells failed to exhibit action potentials in the Maltsev model (27%) than in the Severi model (7%), implying greater robustness in the latter; (5) confirming this initial impression, bifurcation analyses indicated that smaller relative changes in GKr or Na+-K+ pump activity led to failed action potentials in the Maltsev model. Overall, the results suggest experimental tests that can distinguish between models and alternative hypotheses, and the analysis offers strategies for developing anti-arrhythmic pharmaceuticals by predicting their effects on pacemaking activity.
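The population-based sensitivity workflow described above can be sketched as follows, with a toy power law standing in for the actual SAN cell simulation; all names and exponents are invented for illustration:

```python
import numpy as np

# Randomize parameters around a baseline, collect one output per model
# variant, then regress output on parameters to rank their influence.
rng = np.random.default_rng(42)
n_models, n_params = 500, 4

# Log-normal random scale factors applied to each baseline parameter.
scales = np.exp(rng.normal(0.0, 0.2, size=(n_models, n_params)))

# Toy "cycle length": strongly sensitive to parameter 0, weakly to 3.
cycle_length = 300.0 * scales[:, 0]**-0.8 * scales[:, 3]**-0.1

# Linear regression in log-log space recovers the sensitivity exponents.
X = np.column_stack([np.ones(n_models), np.log(scales)])
coef, *_ = np.linalg.lstsq(X, np.log(cycle_length), rcond=None)
print(np.round(coef[1:], 2))  # ≈ [-0.8, 0, 0, -0.1]
```

In the published analyses the regression coefficients play exactly this role: a larger coefficient for a parameter in one model than in the other flags a mechanistic difference worth testing experimentally.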

Relevance:

100.00%

Publisher:

Abstract:

Urban systems consist of several interlinked sub-systems - social, economic, institutional and environmental - each representing a complex system of its own and affecting all the others at various structural and functional levels. An urban system is represented by a number of "human" agents, such as individuals and households, and "non-human" agents, such as buildings, establishments, transport systems, vehicles and infrastructures. These two categories of agents interact with one another and simultaneously affect the system they interact with. Understanding the types of interactions, and their spatial and temporal localisation, well enough to allow very detailed simulation through models turns out to be a considerable effort, and it is the topic this research deals with. An analysis of urban system complexity is presented here, and a state-of-the-art review of the field of urban models is provided. Finally, six international models - MATSim, MobiSim, ANTONIN, TRANSIMS, UrbanSim, ILUTE - are illustrated and then compared.

Relevance:

100.00%

Publisher:

Abstract:

The BLEVE, an acronym for Boiling Liquid Expanding Vapour Explosion, is one of the most dangerous accidents that can occur in pressure vessels. It can be defined as an explosion resulting from the failure of a vessel containing a pressure-liquefied gas stored at a temperature significantly above its boiling point at atmospheric pressure. This phenomenon frequently occurs when a vessel is engulfed by a fire: the heat causes the internal pressure to rise and the mechanical properties of the wall to degrade, with the consequent rupture of the tank and the instantaneous release of its whole content. After the breakage, the vapour outflows and expands, and the liquid phase starts boiling due to the pressure drop. The formation and propagation of a destructive shock wave may occur, together with the ejection of fragments, the generation of a fireball if the stored fluid is flammable and immediately ignited, or the atmospheric dispersion of a toxic cloud if the fluid contained inside the vessel is toxic. Despite the many studies on the BLEVE mechanism, the exact causes and conditions of its occurrence are still elusive. In order to better understand this phenomenon, the present study first investigates the concept and definition of BLEVE. A historical analysis of the major events that have occurred over the past 60 years is described. An investigation of the principal causes of this event, including an analysis of the substances most frequently involved, is also presented. Afterwards, a description of the main effects of BLEVEs is reported, focusing especially on the overpressure. The major aim of the present thesis, however, is to contribute, with a comparative analysis, to the validation of the main models present in the literature for the calculation and prediction of the overpressure caused by BLEVEs. In line with this purpose, after a short overview of the available approaches, their ability to reproduce the trend of the overpressure is investigated.
The overpressure calculated with the different models is compared with values derived from past events and from ad-hoc experiments, focusing especially on medium- and large-scale phenomena. The ability of the models to account for different filling levels of the reservoir and different substances is analyzed too. The results of these calculations are extensively discussed. Finally, some concluding remarks are reported.
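As one concrete example of the classical ingredients used in BLEVE blast estimates, Brode's expression for the energy of an expanding compressed gas can be converted into a TNT-equivalent mass; the numbers below are illustrative, not data from the thesis:

```python
# Brode's formula: E = (p1 - p0) * V / (gamma - 1), with the conventional
# blast energy of TNT (about 4.68 MJ/kg) used for the equivalence.
p1 = 15.0e5      # vessel pressure at burst [Pa]
p0 = 1.013e5     # atmospheric pressure [Pa]
V = 25.0         # vapour volume at burst [m^3]
gamma = 1.3      # heat-capacity ratio of the vapour (assumed)
E_TNT = 4.68e6   # blast energy per kg of TNT [J/kg]

E = (p1 - p0) * V / (gamma - 1.0)
m_tnt = E / E_TNT
print(round(E / 1e6, 1), "MJ ->", round(m_tnt, 1), "kg TNT equivalent")
```

The TNT-equivalent mass then feeds scaled-distance charts to estimate overpressure versus distance; more refined approaches account for the flashing liquid fraction as well.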

Relevance:

100.00%

Publisher:

Abstract:

The following thesis work focuses on the use and implementation of advanced models for measuring the resilience of water distribution networks. In particular, the functions implemented in GRA Tool, a software package developed by the University of Exeter (UK), and the functions of the EPANET 2.2 Toolkit were investigated. The study of resilience and failure, carried out with GRA Tool and with a methodology based on the combined use of the EPANET 2.2 and MATLAB software, was tested in a first phase on a small water distribution network from the literature, so that the variability of the results could be perceived more clearly and immediately, and then on a more complex network, that of Modena. Specifically, it was decided to recreate a time-deferred failure mode proposed by the GRA Tool software, namely pipe failure, in order to compare the two methodologies. The analysis of hydraulic efficiency was conducted using a synthetic, global network performance index, the Resilience Index, introduced by Todini and developed between 2000 and 2016. This index, being one of the parameters by which the overall state of "hydraulic well-being" of a network can be evaluated, has the advantage of serving as a criterion for selecting improvements to be made to the network itself. Furthermore, these analyses illustrate the analytical development that the formula of the Resilience Index has undergone over time. The final aim of this thesis work was to understand how to improve the resilience of the system in question: the scenario linked to pipe ruptures was designed to identify the most problematic branches, i.e. those whose failure would entail the greatest damage to the network, including the greatest lowering of the Resilience Index.
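For illustration, Todini's Resilience Index for a single demand snapshot can be computed as follows; the toy network values are invented (they are not the Modena data), and pump terms are omitted:

```python
import numpy as np

# I_r = surplus power delivered to users / maximum surplus available:
#   I_r = sum_i q_i (h_i - h_i^req) / (sum_k Q_k H_k - sum_i q_i h_i^req)
q = np.array([10.0, 5.0, 8.0])        # nodal demands [L/s]
h = np.array([35.0, 40.0, 38.0])      # available nodal heads [m]
h_req = np.array([30.0, 30.0, 30.0])  # minimum required heads [m]
Q_res, H_res = q.sum(), 50.0          # reservoir outflow and head

surplus = np.sum(q * (h - h_req))
max_surplus = Q_res * H_res - np.sum(q * h_req)
I_r = surplus / max_surplus
print(round(I_r, 3))  # 0.357
```

An index near 1 indicates large head surpluses available to absorb failures; re-evaluating I_r after removing each pipe is one way to rank the most critical branches, as done in the thesis.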