9 results for Scaling laws

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 20.00%

Abstract:

I have studied entropy profiles obtained for a sample of 24 high-redshift X-ray objects retrieved from the Chandra archive. I discuss the scaling properties of the entropy S, the correlation between the metallicity Z and S, and the profiles of the gas temperature Tgas, and I compare the dark matter 'temperature' with Tgas in order to constrain the non-gravitational processes that affect the thermal history of the gas. Furthermore, I study the scaling relations between X-ray quantities and Sunyaev-Zel'dovich measurements. The observed X-ray scaling laws are steeper than the relations predicted by the adiabatic model. These deviations from the expectations of self-similarity are usually interpreted in terms of feedback processes leading to non-gravitational gas heating, and they suggest a scenario in which the ICM at higher redshift has both lower X-ray luminosity and lower pressure in the central regions than the self-similar model predicts. I have also investigated a joint Bayesian X-ray and Sunyaev-Zel'dovich analysis, which makes it possible to study the external regions of clusters well beyond the volumes resolved by X-ray observations (1/3-1/2 of the virial radius) and to measure the deprojected physical cluster properties, such as temperature, density, entropy, gas mass and total mass, up to the virial radius.
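
As a rough illustration of what "steeper than self-similar" means in practice, the sketch below fits a power law to mock luminosity-temperature pairs in log-log space; the data and the quoted self-similar slope (L_X proportional to T^2 for bremsstrahlung-dominated emission) are illustrative assumptions, not the thesis sample.

```python
# Minimal sketch: fit an X-ray scaling relation in log-log space and
# compare its slope with the self-similar expectation.
# The (T, L) values below are mock numbers, not the thesis data.
import numpy as np

T = np.array([2.1, 3.4, 4.8, 6.2, 7.9, 9.5])     # gas temperature [keV]
L = np.array([0.4, 1.9, 6.0, 14.0, 35.0, 70.0])  # X-ray luminosity [1e44 erg/s]

# Least-squares fit of log L = a + b log T
b, a = np.polyfit(np.log10(T), np.log10(L), 1)

print(f"fitted slope b = {b:.2f}")
print("self-similar (adiabatic) slope = 2.0")  # L_X ~ T^2 for bremsstrahlung
if b > 2.0:
    print("steeper than self-similar -> extra, non-gravitational heating")
```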

Relevance: 20.00%

Abstract:

A pursuer UAV tracking and loitering around a target is the problem analyzed in this thesis. The UAV is assumed to be a fixed-wing vehicle, and constant airspeed together with bounded lateral acceleration are the main constraints of the problem. Three different guidance laws are designed to ensure a continuous overflight of the target. Different proofs are presented to demonstrate the stability properties of the laws. All the algorithms are tested on a 6DoF Pioneer software simulator. Classic control design methods have been adopted to develop the autopilots used to implement the simulation platform on which the guidance laws are tested.
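
For readers unfamiliar with this class of algorithms, here is a minimal sketch of one classic loitering guidance law (a Lyapunov vector-field law with a saturated lateral-acceleration command). It is a generic, assumed example of the kind of law the thesis designs, not one of its three laws; airspeed, radius and acceleration bound are illustrative.

```python
# Hypothetical sketch of a Lyapunov vector-field loiter guidance law.
import numpy as np

V = 20.0          # constant airspeed [m/s] (problem constraint)
R_LOITER = 100.0  # desired loiter radius around the target [m]
A_MAX = 5.0       # bound on lateral acceleration [m/s^2] (problem constraint)

def desired_course(px, py):
    """Vector field converging to a circle of radius R_LOITER
    centred on the target (assumed at the origin)."""
    r = max(np.hypot(px, py), 1e-6)          # guard against the origin
    denom = r * (r**2 + R_LOITER**2)         # normalizes the field to speed V
    vx = -V * (px * (r**2 - R_LOITER**2) + py * 2.0 * r * R_LOITER) / denom
    vy = -V * (py * (r**2 - R_LOITER**2) - px * 2.0 * r * R_LOITER) / denom
    return np.arctan2(vy, vx)

def lateral_accel_cmd(chi, chi_d, k=2.0):
    """Course-tracking command, saturated at the vehicle's bound."""
    err = (chi_d - chi + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi]
    return np.clip(k * V * err, -A_MAX, A_MAX)
```

Far from the target the field points inward; on the circle it becomes purely tangential, which is what produces the continuous overflight under the constant-airspeed constraint.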

Relevance: 20.00%

Abstract:

The objective of this thesis work is the refined estimation of earthquake source parameters. To this purpose we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed the P- and S-wave displacement spectra to estimate the spectral parameters, i.e. the corner frequencies and the low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes corrections for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11-10^14 N·m, recorded at the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when the propagation cannot be modeled accurately. In this case the empirical Green's function approach is a very useful tool for studying the seismic source properties, since Empirical Green's Functions (EGFs) make it possible to represent the contribution of propagation and site effects to the signal without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to calculate the relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to 1) a large event, the Mw 6.3 2009 L'Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes between 3 and 5.6; and 3) a small event, the Mw 2.9 Laviano mainshock (Southern Italy).
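
A minimal sketch of the kind of parametric spectral fit described above, assuming the common omega-square (Brune) source model with a whole-path attenuation term exp(-pi f t*); the spectrum is synthetic and the parametrization is a generic assumption, not the thesis' exact multi-step inversion.

```python
# Hypothetical sketch: fit a Brune-type model to a displacement spectrum
# to recover corner frequency and low-frequency amplitude. Synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def brune_spectrum(f, omega0, fc, t_star):
    """Omega(f) = Omega0 * exp(-pi f t*) / (1 + (f/fc)^2):
    omega-square source model with whole-path attenuation."""
    return omega0 * np.exp(-np.pi * f * t_star) / (1.0 + (f / fc) ** 2)

f = np.logspace(-1, 2, 200)                          # frequency [Hz]
true = brune_spectrum(f, omega0=1e-4, fc=5.0, t_star=0.02)
obs = true * np.random.lognormal(0.0, 0.1, f.size)   # noisy synthetic spectrum

popt, _ = curve_fit(brune_spectrum, f, obs, p0=[1e-4, 1.0, 0.01])
omega0, fc, t_star = popt
print(f"Omega0 = {omega0:.2e}, fc = {fc:.2f} Hz, t* = {t_star:.3f} s")
# Seismic moment then follows from Omega0 (with density, wave speed and
# radiation-pattern factors), and source radius from fc.
```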

Relevance: 20.00%

Abstract:

In this thesis, we extend ideas from statistical physics to describe the properties of human mobility. Using a database containing GPS measurements of individual paths (position, velocity and covered distance, at a spatial scale of 2 km or a time scale of 30 s) that covers 2% of the private vehicles in Italy, we determine statistical empirical laws pointing out "universal" characteristics of human mobility. By developing simple stochastic models that suggest possible explanations of the empirical observations, we are able to identify the key quantities and cognitive features ruling individuals' mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We discuss the implications of Benford's law emerging from the distribution of times elapsed between successive trips. We observe how the daily travel-time budget is related to many aspects of the urban environment, and describe how the daily mobility budget is spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities performed, and the properties of the networks describing people's common use of space to the fractal dimension of the urban territory. We study entropy measures of individual mobility patterns, showing that they carry almost the same information as the related mobility networks, but are also influenced by a hierarchy among the activities performed. We find that Wardrop's principles are violated, as drivers have only incomplete information on the traffic state and therefore rely on knowledge of average travel times. Finally, we propose an assimilation model that resolves the intrinsic scattering of GPS data on the street network, permitting the real-time reconstruction of the traffic state at an urban scale.
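
As a small illustration of the Benford's-law analysis mentioned above, the sketch below compares the first-digit distribution of synthetic inter-trip times with the Benford prediction P(d) = log10(1 + 1/d); the lognormal data are placeholders standing in for the GPS database.

```python
# Minimal Benford's-law check on synthetic inter-trip times.
import numpy as np

def first_digit(x):
    """Leading decimal digit of each positive value."""
    return (np.abs(x) / 10.0 ** np.floor(np.log10(np.abs(x)))).astype(int)

# Synthetic stand-in for times elapsed between successive trips [s]
rng = np.random.default_rng(0)
gaps = rng.lognormal(mean=7.0, sigma=1.5, size=100_000)

digits = first_digit(gaps)
observed = np.bincount(digits, minlength=10)[1:10] / digits.size
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))  # P(d) = log10(1 + 1/d)

for d, (o, b) in enumerate(zip(observed, benford), start=1):
    print(f"digit {d}: observed {o:.3f}  Benford {b:.3f}")
```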

Relevance: 20.00%

Abstract:

Constructing ontology networks typically occurs at design time, at the hands of knowledge engineers who assemble their components statically. There are, however, use cases where ontology networks need to be assembled upon request and processed at runtime, without altering the stored ontologies and without the networks tampering with one another. These are what we call "virtual [ontology] networks", and keeping track of how an ontology changes in each virtual network is called "multiplexing". Issues may arise from the connectivity of ontology networks: in many cases, simple flat import schemes will not work, because many ontology managers can cause property assertions to be erroneously interpreted as annotations and ignored by reasoners. Also, multiple virtual networks should optimize their cumulative memory footprint, and where they cannot, the excess should persist only for very limited periods of time. We claim that these problems should be handled by the software that serves these ontology networks, rather than by ontology engineering methodologies. We propose a method that spreads multiple virtual networks across a 3-tier structure and can reduce the number of erroneously interpreted axioms under certain distributions of raw statements across the ontologies. We assumed OWL as the core language handled by semantic applications in the framework at hand, due to the greater availability of reasoners and rule engines. We also verified that, in common OWL ontology management software, OWL axiom interpretation occurs in the worst-case scenario of a pre-order visit. To measure the effectiveness and space-efficiency of our solution, a Java and RESTful implementation was produced within an Apache project. We verified that a 3-tier structure can accommodate reasonably complex ontology networks better, in terms of the expressivity of OWL axiom interpretation, than flat-tree import schemes can. We measured both the memory overhead of the additional components placed on top of traditional ontology networks and the framework's caching capabilities.
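
To make the notion of a virtual network concrete, here is a minimal sketch using Python's rdflib (not the Java/Apache stack the thesis implements): stored graphs are combined at runtime into a read-only union view, so queries see the assembled network while the stored ontologies are never altered. File names are hypothetical.

```python
# Hypothetical sketch of a "virtual ontology network" as a runtime
# read-only union view over immutable stored graphs.
from rdflib import Graph
from rdflib.graph import ReadOnlyGraphAggregate

# Stored ontologies, loaded once and kept immutable (placeholder names)
core = Graph().parse("core.owl")
domain = Graph().parse("domain.owl")
app = Graph().parse("app-overlay.owl")

# A virtual network: a union view assembled on request, no copies made.
# Another request could aggregate a different subset of the same graphs,
# and the two virtual networks would not tamper with one another.
virtual_net = ReadOnlyGraphAggregate([core, domain, app])

print(len(virtual_net), "triples visible in this virtual network")
```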

Relevance: 20.00%

Abstract:

This study focuses on the radio-frequency inductively coupled thermal plasma (ICP) synthesis of nanoparticles, combining experimental and modelling approaches towards process optimization and industrial scale-up, in the framework of the FP7-NMP SIMBA European project (Scaling-up of ICP technology for continuous production of Metallic nanopowders for Battery Applications). First, the state of the art of nanoparticle production through conventional and plasma routes is summarized. Then, results for the characterization of the plasma source and for the investigation of the nanoparticle synthesis process are presented, aiming to highlight the fundamental process parameters while adopting a design-oriented modelling approach. In particular, an energy balance of the torch and of the reaction chamber, obtained with a calorimetric method, is presented, and results from three- and two-dimensional modelling of an ICP system are compared with calorimetric and enthalpy-probe measurements to validate the temperature field predicted by the model, which is then used to characterize the ICP system under powder-free conditions. Moreover, results from the modelling of the critical phases of the ICP synthesis process, namely precursor evaporation, conversion of the vapour into nanoparticles, and nanoparticle growth, are presented, with the aim of providing useful insights both for the design and optimization of the process and into the underlying physical phenomena. Precursor evaporation, one of the phases with the highest impact on the industrial feasibility of the process, is discussed in detail: by employing models that describe particle trajectories and thermal histories, adapted from those originally developed for other plasma technologies and applications, such as DC non-transferred arc torches and powder spheroidization, the evaporation of a micro-sized solid Si precursor in a laboratory-scale ICP system is investigated. Finally, the role of the thermo-fluid-dynamic fields in nanoparticle formation is discussed, together with a study of the effect of the reaction-chamber geometry on the characteristics of the produced nanoparticles and on the process yield.
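
A minimal sketch of the calorimetric energy-balance idea: the power removed by each cooling circuit is m_dot * c_p * dT of its water, and what remains of the plate power is delivered to the gas. All flow rates, temperatures and the plate power below are illustrative assumptions, not the project's measurements.

```python
# Hypothetical calorimetric energy balance of an ICP torch and chamber.
C_P_WATER = 4186.0  # specific heat of water [J/(kg K)]

def circuit_power(m_dot, t_in, t_out):
    """Heat removed by one cooling circuit [W]."""
    return m_dot * C_P_WATER * (t_out - t_in)

p_plate = 15e3  # RF plate power [W] (illustrative)
# (mass flow [kg/s], inlet T [C], outlet T [C]) per cooled component
circuits = {
    "torch body":       (0.10, 20.0, 28.0),
    "reaction chamber": (0.15, 20.0, 26.0),
}

losses = {name: circuit_power(*c) for name, c in circuits.items()}
p_gas = p_plate - sum(losses.values())  # power delivered to the gas

for name, p in losses.items():
    print(f"{name}: {p/1e3:.2f} kW removed")
print(f"net power to plasma gas: {p_gas/1e3:.2f} kW "
      f"(efficiency {p_gas/p_plate:.1%})")
```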

Relevance: 20.00%

Abstract:

Semiconductor nanowires (NWs) are one-dimensional or quasi-one-dimensional systems whose physical properties are unique compared to bulk materials because of their nanoscale size; they bring together the quantum world and semiconductor devices. NW-based technologies may achieve an impact comparable to that of current microelectronic devices if new challenges are faced. This thesis primarily focuses on two different, cutting-edge aspects of research on semiconductor NW arrays as pivotal components of NW-based devices. The first part deals with the characterization of electrically active defects in NWs. A general procedure has been developed that makes it possible to employ Deep Level Transient Spectroscopy (DLTS) to probe the defects of NW arrays. This procedure has been applied to the characterization of a specific system, namely Schottky barrier diodes based on Reactive Ion Etched (RIE) silicon NW arrays. The study has shed light on whether and how the growth conditions introduce defects in RIE-processed silicon NWs. The second part of the thesis concerns the bowing induced by an electron beam, and the subsequent clustering, of gallium arsenide NWs. After a justified rejection of the mechanisms previously reported in the literature, an original interpretation of the electron-beam-induced bending is illustrated. Moreover, this thesis successfully interprets the formation of NW clusters in the framework of the lateral collapse of fibrillar structures, which are both idealized models and actual artificial structures used to study and mimic the adhesion properties of natural surfaces in lizards and insects (gecko effect). Our conclusions are that the mechanical and surface properties of the NWs, together with the geometry of the NW arrays, play a key role in their post-growth alignment. The same parameters then open the possibility of locally engineering NW arrays in micro- and macro-templates.
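
As context for the DLTS analysis, the sketch below shows the standard Arrhenius extraction of a trap's activation energy from emission-rate data: since e_n is proportional to T^2 * exp(-Ea / kT), the slope of ln(e_n / T^2) versus 1/T gives -Ea/k. The data are synthetic and the procedure is generic, not the thesis' specific NW-array set-up.

```python
# Generic DLTS Arrhenius analysis on synthetic emission-rate data.
import numpy as np

K_B = 8.617e-5  # Boltzmann constant [eV/K]

# Synthetic (T, emission rate) pairs for a trap with Ea = 0.45 eV
ea_true, prefactor = 0.45, 1e9
T = np.linspace(180.0, 280.0, 8)                       # temperature [K]
e_n = prefactor * T**2 * np.exp(-ea_true / (K_B * T))  # emission rate [1/s]

# Arrhenius plot: ln(e_n / T^2) is linear in 1/T with slope -Ea/k
slope, _ = np.polyfit(1.0 / T, np.log(e_n / T**2), 1)
print(f"extracted Ea = {-slope * K_B:.3f} eV (true {ea_true} eV)")
```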

Relevance: 20.00%

Abstract:

The recent financial crisis triggered an increasing demand for financial regulation to counteract the potential negative economic effects of the ever more complex operations and instruments available on financial markets. As a result, insider trading regulation counts amongst the relatively recent but particularly active regulatory battles in Europe and overseas. Claims for more transparency and more equitable securities markets proliferate, ranging from concerns about investor protection to global market stability. The internationalization of the world's securities markets has challenged traditional notions of regulation and enforcement. Considering that insider trading is currently forbidden all over Europe, this study follows a law and economics approach in identifying how this prohibition should be enforced. More precisely, the study investigates, first, whether criminal law is necessary under all circumstances to enforce the insider trading prohibition and, second, whether it should be introduced at EU level. The study lays out the law and economics logic underlying the legal mechanisms that guide the sanctioning and public enforcement of the insider trading prohibition, by identifying the optimal forms, natures and types of sanctions that effectively induce deterrence. The analysis further aims to reveal the economic rationale behind the potential need for harmonization of the criminal enforcement of insider trading laws within the European environment, by means of a comparative analysis of the current legislation of eight selected Member States. This work also assesses the European Union's most recent initiative through a critical analysis of the proposal for a Directive on criminal sanctions for market abuse. Based on the conclusions drawn from this analysis, the study takes on the challenge of assessing whether or not the actual European public enforcement of the laws prohibiting insider trading is coherent with the theoretical law and economics recommendations, and how these enforcement practices could be improved.

Relevance: 20.00%

Abstract:

A method for the automatic scaling of oblique ionograms has been introduced. The method also provides a rejection procedure for ionograms considered to lack sufficient information, and it shows a very good success rate. Examining the Kp index associated with each autoscaled ionogram, it can be seen that the behavior of the autoscaling program does not depend on geomagnetic conditions. The comparison between the MUF values provided by the presented software and those obtained by an experienced operator indicates that the procedure developed for detecting the nose of oblique ionogram traces is sufficiently efficient, and that it becomes much more efficient as the quality of the ionograms improves. These results demonstrate that the program allows the real-time evaluation of the MUF values associated with a particular radio link through an oblique radio sounding. The automatic recognition of part of the trace makes it possible to determine, for certain frequencies, the time taken by the radio wave to travel the path between the transmitter and the receiver. The reconstruction of the ionogram traces suggests the possibility of estimating the electron density between the transmitter and the receiver from an oblique ionogram. The results shown have been obtained with a ray-tracing procedure based on the integration of the eikonal equation, using an analytical ionospheric model with free parameters; this indicates the possibility of applying an adaptive model and a ray-tracing algorithm to estimate the electron density in the ionosphere between the transmitter and the receiver. An additional study has been conducted on a data set of high-quality ionospheric soundings, and another algorithm has been designed for the conversion of an oblique ionogram into a vertical one using Martyn's theorem. This allows a further analysis of oblique soundings through the use of the INGV Autoscala program for the automatic scaling of vertical ionograms.
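
A minimal sketch of the oblique-to-vertical conversion idea via the secant law and Martyn's equivalent-path theorem, under flat-Earth, no-magnetic-field assumptions; it is not the thesis' algorithm, and the link distance and ionogram point used are illustrative.

```python
# Hypothetical oblique-to-vertical ionogram point conversion
# (flat-Earth, no-field approximation).
import math

D = 800e3  # ground range of the oblique link [m] (illustrative)

def oblique_to_vertical(f_ob, group_path):
    """Map one oblique ionogram point (frequency [Hz], group path P' [m])
    to the equivalent vertical point (f_v, virtual height h')."""
    # Flat-earth triangle: reflection at the apex, half the path per leg
    h_virt = math.sqrt((group_path / 2.0) ** 2 - (D / 2.0) ** 2)
    # Angle of incidence at the (virtual) reflection point
    cos_phi = h_virt / (group_path / 2.0)
    # Secant law: the equivalent vertical frequency is f_v = f_ob * cos(phi)
    return f_ob * cos_phi, h_virt

f_v, h_virt = oblique_to_vertical(f_ob=12e6, group_path=880e3)
print(f"equivalent vertical sounding: f_v = {f_v/1e6:.2f} MHz "
      f"at virtual height h' = {h_virt/1e3:.0f} km")
```

Applying this point-by-point to a recognized oblique trace yields a synthetic vertical ionogram that a vertical autoscaling tool such as Autoscala can then process.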