914 results for Multi-phase Modelling
Abstract:
The aim of this work is to conduct a finite-element analysis of a small-size and a full-size concrete beam internally reinforced with BFRP and exposed to elevated temperatures. Experimental tests performed at Kingston University were used to validate the numerical results for the small-size beam. The behaviour of the small-size beam is first investigated at room temperature; in the heating phase, reinforced beams are then tested at 100°C, 200°C and 300°C under load. The finite-element analysis reproduces the three-point bending test carried out in the oven during exposure of the beam at room temperature and at elevated temperatures. Performance and deformability of the reinforced beams are directly correlated with the material properties, so an extensive analysis of the elastic modulus and the coefficient of thermal expansion is given in this work. Developing a good correlation between the numerical model and the experimental test is the main objective of the analysis of the small-size beam; for both models, the aim is also to estimate the deterioration of the material properties due to the heating process and the influence of different parameters on the final result. The focus of the full-size modelling, which forms the last part of this work, is to evaluate the effect of elevated temperatures, the material deterioration and the deflection trend on a reinforced beam of a different size. A comparison between the results of the different models is developed.
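The degradation of stiffness with temperature described above can be represented in a model as a retention factor applied to the room-temperature elastic modulus. The sketch below is a minimal, hypothetical illustration at the test temperatures mentioned (100, 200, 300°C); the retention values in the table are placeholders, not the thesis's measured data.

```python
# Hedged sketch: piecewise-linear interpolation of a stiffness retention
# factor E(T)/E(20 C). All numeric values are illustrative placeholders.
def retention_factor(temp_c, table=((20, 1.00), (100, 0.95), (200, 0.85), (300, 0.70))):
    """Return the interpolated retention factor at temperature temp_c."""
    ts = [t for t, _ in table]
    rs = [r for _, r in table]
    if temp_c <= ts[0]:
        return rs[0]
    if temp_c >= ts[-1]:
        return rs[-1]
    for (t0, r0), (t1, r1) in zip(table, table[1:]):
        if t0 <= temp_c <= t1:
            return r0 + (r1 - r0) * (temp_c - t0) / (t1 - t0)

E20 = 30e9                              # Pa, illustrative room-temperature modulus
E200 = E20 * retention_factor(200)      # degraded modulus at 200 C
```

In a finite-element model, the degraded modulus would simply replace the room-temperature value in the material definition for each heated load case.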
Abstract:
The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergence of the peculiar structures of the individual phenotype. Being able to reproduce the system dynamics at the different levels of such a hierarchy can be very useful for studying this complex phenomenon of self-organisation. The idea is to model the phenomenon as a large, dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. On these premises, the thesis reviews the approaches already developed for modelling problems in developmental biology, as well as the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment/multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. It is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation.
The task is defined as an optimisation problem over the parameter space in which the objective function to be minimised is the distance between the output of the simulator and a target output; the problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised, whose goal is to generate the early spatial pattern of gap-gene expression. The correctness of the models is shown by comparing the simulation results with real gene-expression data with spatial and temporal resolution, acquired from freely available online sources.
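The simulation engine mentioned above is based on Gillespie's direct method. A minimal sketch of a single step of that method (the plain direct method, not the optimised many-species/many-channels version used in MS-BioNET) can be written as:

```python
import math
import random

def gillespie_step(state, reactions, rng):
    """One step of Gillespie's direct method.
    reactions: list of (propensity_fn(state) -> float, update_fn(state) -> state).
    Returns (dt, new_state), or (None, state) if no reaction can fire."""
    props = [p(state) for p, _ in reactions]
    total = sum(props)
    if total <= 0:
        return None, state
    dt = -math.log(rng.random()) / total          # exponential waiting time
    r, acc = rng.random() * total, 0.0
    for a, (_, update) in zip(props, reactions):  # pick a reaction proportionally
        acc += a
        if r <= acc:
            return dt, update(state)
    return dt, reactions[-1][1](state)

# Toy example: irreversible dimerisation 2A -> B with rate constant k
k = 0.01
reactions = [(lambda s: k * s["A"] * (s["A"] - 1) / 2,
              lambda s: {"A": s["A"] - 2, "B": s["B"] + 1})]
rng = random.Random(0)
state, t = {"A": 100, "B": 0}, 0.0
while True:
    dt, state = gillespie_step(state, reactions, rng)
    if dt is None:
        break
    t += dt
```

In a compartmentalised setting such as MS-BioNET, molecule-transfer events between compartments would simply appear as additional reactions in the same list.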
Abstract:
Sports biomechanics describes human movement from a performance-enhancement and an injury-reduction perspective. In this respect, the purpose of sports scientists is to support coaches and physicians with reliable information about athletes' technique. The lack of methods allowing both in-field athlete evaluation and accurate joint force estimates represents, to date, the main limitation to this purpose. The investigations illustrated in the present thesis aimed at contributing to the development of the above-mentioned methods. Two complementary approaches were adopted: a Low Resolution Approach, related to performance assessment, in which wearable inertial measurement units are exploited during different phases of sprint running, and a High Resolution Approach, related to joint kinetics estimation for injury prevention, in which subject-specific, non-rigid constraints are defined for the knee joint kinematic models used in multi-body optimization techniques. Results obtained using the Low Resolution Approach indicated that, thanks to their portability and low cost, inertial measurement systems are a valid alternative to laboratory-based instrumentation for the in-field performance evaluation of sprint running. Using acceleration and angular velocity data, the following quantities were estimated: trunk inclination and angular velocity, instantaneous horizontal velocity and displacement of a point approximating the centre of mass, and stride and support phase durations. Regarding the High Resolution Approach, results indicated that the lengths of the anterior cruciate and lateral collateral ligaments decreased, while that of the deep bundle of the medial collateral ligament increased significantly during flexion. Variations in the lengths of the posterior cruciate ligament and the superficial bundle of the medial collateral ligament were concealed by the experimental indeterminacy.
A mathematical model was provided that allowed the estimate of subject-specific ligament lengths as a function of knee flexion and that can be integrated in a multi-body optimization procedure.
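The idea of expressing a subject-specific ligament length as a function of knee flexion can be sketched with a simple least-squares fit. The data below are synthetic placeholders and the linear model is only illustrative; the thesis's actual model and measurements are not reproduced here.

```python
# Hedged sketch: fit a ligament length l(theta) = a + b*theta to synthetic
# length-vs-flexion data by ordinary least squares (pure Python).
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

flexion_deg = [0, 15, 30, 45, 60, 75, 90]
length_mm = [36.0, 35.1, 34.0, 32.8, 31.5, 30.3, 29.2]  # illustrative ACL-like decrease

a, b = fit_line(flexion_deg, length_mm)
predicted = [a + b * t for t in flexion_deg]
max_residual = max(abs(p - y) for p, y in zip(predicted, length_mm))
```

A fitted curve of this kind can then act as a non-rigid constraint inside the cost function of a multi-body optimisation, penalising deviations from the subject-specific length at each flexion angle.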
Abstract:
This PhD Thesis is devoted to the accurate analysis of the physical properties of Active Galactic Nuclei (AGN) and the AGN/host-galaxy interplay. Due to the broad-band AGN emission (from radio to hard X-rays), a multi-wavelength approach is mandatory. Our research is carried out over the COSMOS field, within the context of the XMM-Newton wide-field survey. To date, the COSMOS field is a unique area for comprehensive multi-wavelength studies, allowing us to define a large and homogeneous sample of QSOs with a well-sampled spectral coverage and to keep selection effects under control. Moreover, the broad-band information contained in the COSMOS database is well-suited for a detailed analysis of AGN SEDs, bolometric luminosities and bolometric corrections. In order to investigate the nature of both obscured (Type-2) and unobscured (Type-1) AGN, the observational approach is complemented with theoretical modelling of the AGN/galaxy co-evolution. The X-ray to optical properties of an X-ray selected Type-1 AGN sample are discussed in the first part. The relationship between X-ray and optical/UV luminosities, parametrized by the spectral index αox, provides a first indication about the nature of the central engine powering the AGN. Since a Type-1 AGN outshines the surrounding environment, it is extremely difficult to constrain the properties of its host-galaxy. Conversely, in Type-2 AGN the host-galaxy light is the dominant component of the optical/near-IR SEDs, severely affecting the recovery of the intrinsic AGN emission. Hence a multi-component SED-fitting code is developed to disentangle the emission of the stellar population of the galaxy from that associated with mass accretion. Bolometric corrections, luminosities, stellar masses and star-formation rates, correlated with the morphology of Type-2 AGN hosts, are presented in the second part, while the final part concerns a physically-motivated model for the evolution of spheroidal galaxies with a central SMBH.
The model is able to reproduce two important stages of galaxy evolution, namely the obscured cold-phase and the subsequent quiescent hot-phase.
Abstract:
The PhD activity described in this document was carried out at the Microsatellite and Microsystem Laboratory of the II Faculty of Engineering, University of Bologna. The main objective is the design and development of a GNSS receiver for the orbit determination of microsatellites in low Earth orbit. The development starts from the electronic design and extends to the implementation of the navigation algorithms, covering all the aspects involved in this type of application. The use of GPS receivers for orbit determination is a consolidated application used in many space missions, but the deployment of new GNSS systems within a few years, such as the European Galileo, the Chinese COMPASS and the modernized Russian GLONASS, poses new challenges and offers new opportunities to improve orbit determination performance. The evaluation of the improvements coming from the new systems, together with the implementation of a receiver compatible with at least one of them, are the main activities of the PhD. The activities can be divided into three sections: receiver requirements definition and prototype implementation, design and analysis of the GNSS signal tracking algorithms, and design and analysis of the navigation algorithms. The receiver prototype is based on a Xilinx Virtex FPGA and includes a PowerPC processor. The architecture follows the software-defined-radio paradigm, so most of the signal processing is performed in software, while only what is strictly necessary is done in hardware. The tracking algorithms are implemented as a combination of a Phase Locked Loop and a Frequency Locked Loop for the carrier, and a Delay Locked Loop with variable bandwidth for the code. The navigation algorithm is based on the extended Kalman filter and includes an accurate LEO orbit model.
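The navigation algorithm above wraps a predict/update Kalman cycle around an orbit model. The sketch below shows only the structure of that cycle, on a linear constant-velocity toy problem with position-only measurements; the thesis's EKF replaces these linear maps with an accurate nonlinear LEO orbit model and its Jacobians.

```python
def kf_cv_step(x, v, P, z, q=1e-4, r=0.01, dt=1.0):
    """One predict/update cycle of a Kalman filter for a constant-velocity
    model with position-only measurements. P is the 2x2 state covariance
    as [[p00, p01], [p10, p11]]. Illustrative toy, not an orbit filter."""
    # Predict: x' = x + v*dt, v' = v; P' = F P F^T + Q with F = [[1, dt], [0, 1]]
    x_p, v_p = x + v * dt, v
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    # Update with a measurement z of the position (H = [1, 0])
    s = p00 + r                  # innovation covariance
    k0, k1 = p00 / s, p10 / s    # Kalman gain
    y = z - x_p                  # innovation
    x_n, v_n = x_p + k0 * y, v_p + k1 * y
    P_n = [[(1 - k0) * p00, (1 - k0) * p01],
           [p10 - k1 * p00, p11 - k1 * p01]]
    return x_n, v_n, P_n

# Noiseless toy trajectory: true position k, true velocity 1.0
x, v, P = 0.0, 0.9, [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 20):
    x, v, P = kf_cv_step(x, v, P, z=float(k))
```

In the orbital case the predict step integrates the satellite dynamics and the measurements are GNSS pseudoranges, but the gain and covariance bookkeeping follows the same pattern.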
Abstract:
The Capacitated Location-Routing Problem (CLRP) is an NP-hard problem, since it generalizes two well-known NP-hard problems: the Capacitated Facility Location Problem (CFLP) and the Capacitated Vehicle Routing Problem (CVRP). The Multi-Depot Vehicle Routing Problem (MDVRP) is also NP-hard, since it is a generalization of the well-known Vehicle Routing Problem (VRP), which arises in the single-depot case. This thesis proposes heuristic algorithms based on the well-known granular search idea introduced by Toth and Vigo (2003) to solve the CLRP and the MDVRP. Extensive computational experiments on benchmark instances for both problems have been performed to determine the effectiveness of the proposed algorithms. This work is organized as follows: Chapter 1 gives a detailed overview and a methodological review of the literature on the CLRP and the MDVRP. Chapter 2 describes a two-phase hybrid heuristic algorithm to solve the CLRP. Chapter 3 presents a computational comparison of heuristic algorithms for the CLRP. Chapter 4 presents a hybrid granular tabu search approach for solving the MDVRP.
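The granular search idea of Toth and Vigo (2003) restricts local-search moves to a sparse subgraph containing only "short" edges (plus the edges of the incumbent solution). A minimal sketch of such a sparsification step, with an illustrative threshold based on the average arc cost and a hypothetical granularity parameter beta, might look like:

```python
# Hedged sketch of granular neighbourhood construction: keep only arcs
# whose cost is below beta times the average arc cost, plus the arcs of
# the incumbent solution. beta = 1.2 is an illustrative value.
def granular_edges(dist, beta=1.2, incumbent_edges=()):
    n = len(dist)
    offdiag = [dist[i][j] for i in range(n) for j in range(n) if i != j]
    threshold = beta * sum(offdiag) / len(offdiag)
    keep = {(i, j) for i in range(n) for j in range(n)
            if i != j and dist[i][j] <= threshold}
    keep.update(incumbent_edges)   # never lose the incumbent solution's arcs
    return keep

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
sparse = granular_edges(dist)
```

Local-search moves (e.g. 2-opt or relocate) are then generated only from arcs in the sparse set, which is what makes granular tabu search fast on large instances.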
Abstract:
Beamforming entails the joint processing of multiple signals received or transmitted by an array of antennas. This thesis addresses the implementation of beamforming in two distinct systems: a distributed network of independent sensors, and a broad-band multi-beam satellite network. With the rising popularity of wireless sensors, scientists are taking advantage of the flexibility of these devices, which come with very low implementation costs. Simplicity, however, is intertwined with scarce power resources, which must be carefully rationed to ensure successful measurement campaigns throughout the whole duration of the application. In this scenario, distributed beamforming is a cooperative communication technique that allows the nodes in the network to emulate a virtual antenna array, seeking power gains on the order of the size of the network itself when required to deliver a common message signal to the receiver. To achieve a desired beamforming configuration, however, all nodes in the network must agree upon the same phase reference, which is challenging in a distributed set-up where all devices are independent. The first part of this thesis presents new algorithms for phase alignment that prove to be more energy efficient than existing solutions. With the ever-growing demand for broad-band connectivity, satellite systems have great potential to guarantee service where terrestrial systems cannot reach. In order to satisfy the constantly increasing demand for throughput, satellites are equipped with multi-fed reflector antennas to resolve spatially separated signals. However, increasing the number of feeds on the payload burdens the link between the satellite and the gateway with an extensive amount of signaling, and may call for much more expensive multiple-gateway infrastructures.
This thesis focuses on an on-board non-adaptive signal processing scheme denoted as Coarse Beamforming, whose objective is to reduce the communication load on the link between the ground station and space segment.
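As context for the phase-alignment problem described above, the classic one-bit-feedback baseline (randomly perturb the transmit phases, keep the perturbation whenever the receiver reports higher power) can be sketched as follows. This is the standard reference scheme against which distributed-beamforming algorithms are usually compared, not the thesis's more energy-efficient algorithms.

```python
import cmath
import random

# Hedged sketch of the one-bit-feedback phase alignment baseline.
# Node counts, step sizes and iteration budget are illustrative.
def one_bit_feedback(n_nodes=8, iters=3000, seed=1):
    rng = random.Random(seed)
    phases = [rng.uniform(-cmath.pi, cmath.pi) for _ in range(n_nodes)]

    def rx_power(ph):
        return abs(sum(cmath.exp(1j * p) for p in ph)) ** 2

    best = rx_power(phases)
    step = 0.5
    for _ in range(iters):
        trial = [p + rng.uniform(-step, step) for p in phases]
        power = rx_power(trial)
        if power > best:        # the receiver broadcasts a single feedback bit
            phases, best = trial, power
        step *= 0.999           # slowly shrink the perturbation size
    return best / n_nodes ** 2  # fraction of the coherent bound n^2

gain = one_bit_feedback()
```

Perfect alignment reaches the coherent power bound n², so the returned fraction approaching 1 indicates the virtual array is nearly phase-coherent.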
Abstract:
This thesis is divided into three chapters. In the first chapter we analyse the results of the worldwide forecasting experiment run by the Collaboratory for the Study of Earthquake Predictability (CSEP). We take the opportunity of this experiment to contribute to the definition of a more robust and reliable statistical procedure for evaluating earthquake forecasting models. We first present the models and the target earthquakes to be forecast, then explain the consistency and comparison tests used in CSEP experiments to evaluate the performance of the models. Introducing a methodology to create ensemble forecasting models, we show that models, when properly combined, almost always perform better than any single model. In the second chapter we discuss in depth one of the basic features of probabilistic seismic hazard analysis (PSHA): the declustering of the seismicity rates. We first introduce the Cornell-McGuire method for PSHA and present the different motivations behind the need to decluster seismic catalogs. Using a theorem of modern probability theory (Le Cam's theorem), we show that declustering is not necessary to obtain the Poissonian behaviour of the exceedances that is usually considered fundamental for transforming exceedance rates into exceedance probabilities in the PSHA framework. We present a method to correct PSHA for declustering, building a more realistic PSHA. In the last chapter we explore the methods commonly used to take into account the epistemic uncertainty in PSHA. The most widely used is the logic tree, which stands at the basis of the most advanced seismic hazard maps. We illustrate the probabilistic structure of the logic tree, and show that this structure is not adequate to describe the epistemic uncertainty. We then propose a new probabilistic framework based on ensemble modelling that properly accounts for epistemic uncertainties in PSHA.
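The Poissonian assumption discussed in the second chapter is what lets PSHA turn exceedance rates into exceedance probabilities: for a Poisson process with annual rate λ, the probability of at least one exceedance in t years is 1 − exp(−λt). The example below recovers the conventional ~475-year return period from the standard "10% in 50 years" design level.

```python
import math

# Under the Poisson assumption used in classical PSHA, convert an annual
# exceedance rate into a probability of at least one exceedance in t years.
def exceedance_probability(rate_per_year, t_years):
    return 1.0 - math.exp(-rate_per_year * t_years)

# The "10% probability of exceedance in 50 years" design level corresponds
# to a return period of about 475 years:
rate = -math.log(1.0 - 0.10) / 50.0   # invert the formula for the rate
return_period = 1.0 / rate
```

The thesis's point is that this rate-to-probability step is justified by Le Cam's theorem even without declustering the catalog, which is why declustering can be dropped or corrected for.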
Abstract:
An extensive study of the morphology and dynamics of the equatorial ionosphere over South America is presented here. A multi-parametric approach is used to describe the physical characteristics of the ionosphere in the regions where the combination of the thermospheric electric field and the horizontal geomagnetic field creates the so-called Equatorial Ionization Anomalies. Ground-based measurements from GNSS receivers are used to link the Total Electron Content (TEC), its spatial gradients and the phenomenon known as scintillation, which can lead to GNSS signal degradation or even to a 'loss of lock'. A new algorithm to highlight the features characterizing the TEC distribution is developed in the framework of this thesis, and the results obtained are validated and used to improve the performance of a GNSS positioning technique (long-baseline RTK). In addition, the correlation between scintillation and the dynamics of the ionospheric irregularities is investigated. By means of software implemented for this work, the velocity of the ionospheric irregularities is evaluated using high-sampling-rate GNSS measurements. The results highlight the parallel behaviour of the occurrence of the amplitude scintillation index (S4) and the zonal velocity of the ionospheric irregularities, at least under severe scintillation conditions (post-sunset hours). This suggests that scintillations are driven by TEC gradients as well as by the dynamics of the ionospheric plasma. Finally, given the importance of such studies for technological applications (e.g. high-precision GNSS applications), a validation of the NeQuick model (the model used in the new GALILEO satellites for TEC modelling) is performed. The NeQuick performance improves dramatically when data from HF radar sounding (ionograms) are ingested. A custom-designed algorithm, based on image recognition techniques, is developed to properly select the ingested data, leading to further improvement of the NeQuick performance.
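The amplitude scintillation index S4 mentioned above is defined from the normalised variance of the detrended signal intensity, S4² = (⟨I²⟩ − ⟨I⟩²)/⟨I⟩², commonly computed over 60-second windows. A minimal computation over a window of intensity samples (the sample values are illustrative, and detrending is assumed to have been done already):

```python
import math

# S4 from detrended, normalised signal-intensity samples over one window.
def s4_index(intensity):
    n = len(intensity)
    mean_i = sum(intensity) / n
    mean_i2 = sum(x * x for x in intensity) / n
    return math.sqrt(max(0.0, mean_i2 - mean_i ** 2)) / mean_i

calm = [1.0, 1.02, 0.98, 1.01, 0.99]       # quiet conditions: S4 near 0
disturbed = [1.0, 1.8, 0.3, 1.5, 0.4]      # deep fading: S4 well above 0.4
```

Thresholds around S4 ≈ 0.4-0.7 are often used to flag moderate-to-severe scintillation, the regime in which the thesis reports the parallel behaviour with the zonal irregularity velocity.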
Abstract:
This thesis is focused on Smart Grid applications in medium-voltage distribution networks. For the development of new applications, it is useful to have simulation tools able to model the dynamic behaviour of both the power system and the communication network. Such a co-simulation environment allows the assessment of the feasibility of using a given network technology to support communication-based Smart Grid control schemes on an existing segment of the electrical grid, and the determination of the range of control schemes that different communication technologies can support. For this reason, a co-simulation platform is presented, built by linking the Electromagnetic Transients Program simulator (EMTP v3.0) with a telecommunication network simulator (OPNET-Riverbed v18.0). The simulator is used to design and analyse the coordinated use of Distributed Energy Resources (DERs) for voltage/var control (VVC) in distribution networks. The thesis focuses on a control structure based on the use of phasor measurement units (PMUs). In order to limit the required reinforcement of the communication infrastructures currently adopted by Distribution Network Operators (DNOs), the study focuses on leader-less multi-agent system (MAS) schemes that do not assign special coordinating rules to specific agents. Leader-less MAS are expected to produce more uniform communication traffic than centralized approaches that include a moderator agent, and to be less affected by the limitations and constraints of individual communication links. The developed co-simulator has allowed the definition of specific countermeasures against the limitations of the communication network, with particular reference to latency and loss of information, for both wired and wireless communication networks.
Moreover, the co-simulation platform has also been coupled with a mobility simulator in order to study specific countermeasures against the negative effects on the medium-voltage distribution network caused by the concurrent connection of electric vehicles.
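The coupling pattern of such a co-simulation can be sketched as a lock-step loop in which the electrical solver and the communication-network model exchange delayed messages at each synchronisation step. The stand-in dynamics and controller below are trivial placeholders for EMTP and OPNET, chosen only to show the message-passing structure, including the latency the thesis identifies as a critical limitation.

```python
# Hedged sketch of a co-simulation loop: a grid model and a network model
# advance in lock-step; PMU reports reach the controller only after a
# latency of delay_steps synchronisation intervals.
def run_cosimulation(steps, grid_step, net_step, delay_steps=2):
    v = 1.05                 # per-unit voltage state of the toy grid model
    command = 0.0            # latest control command from the agents
    in_flight = []           # (delivery_step, payload) messages in the network
    trace = []
    for k in range(steps):
        v = grid_step(v, command)                 # advance electrical dynamics
        in_flight.append((k + delay_steps, v))    # PMU report enters the network
        delivered = [m for t, m in in_flight if t <= k]
        in_flight = [(t, m) for t, m in in_flight if t > k]
        command = net_step(delivered)             # agents react to delayed data
        trace.append(v)
    return trace

# Toy stand-ins: the grid relaxes toward (1 + command) p.u.; the controller
# nudges the setpoint toward 1.0 p.u. using the last delivered measurement.
grid = lambda v, u: v + 0.1 * ((1.0 + u) - v)
ctrl = lambda msgs: 0.0 if not msgs else 0.2 * (1.0 - msgs[-1])
trace = run_cosimulation(100, grid, ctrl)
```

Increasing `delay_steps` or dropping messages in the `delivered` filter is how latency and packet-loss countermeasures would be stress-tested in this skeleton.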
Abstract:
This thesis evaluates the performance of a multi-antenna localisation system for passive radio frequency identification (RFID) tags in an indoor environment. The system, composed of a moving reader that follows a known trajectory, aims to localise the tag through phase measurements; more precisely, through the phase difference between the interrogation signal emitted by the reader and the received signal reflected by the tag, which is correlated with the distance between them. After a survey of the state of the art of these techniques and the derivation of the maximum likelihood (ML) criterion for the system, its performance was evaluated through Matlab simulations, together with the influence of various factors on the localisation result. As a final step, a measurement campaign was carried out, testing the system in a real environment. The localisation results of all the proposed algorithms were compared for a reader moving along a straight trajectory and along an angular trajectory, in order to understand how to improve the results.
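For a backscatter link, the measured phase is related to the reader-tag distance by φ = (4πd/λ) mod 2π, and a maximum-likelihood-style estimate can be obtained by searching candidate positions for the one minimising the circular phase residuals along the known trajectory. The grid search below is a simplified, noiseless sketch with illustrative values, not the thesis's algorithms.

```python
import math

WAVELEN = 0.327  # m, roughly a 915 MHz UHF RFID carrier (illustrative)

def phase(d):
    """Round-trip backscatter phase for a reader-tag distance d."""
    return (4 * math.pi * d / WAVELEN) % (2 * math.pi)

def locate(reader_xy, phases, grid):
    """Grid search minimising the sum of squared circular phase residuals."""
    def cost(p):
        c = 0.0
        for (rx, ry), ph in zip(reader_xy, phases):
            pred = phase(math.hypot(p[0] - rx, p[1] - ry))
            diff = (pred - ph + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
            c += diff * diff
        return c
    return min(grid, key=cost)

# Reader moves along a straight trajectory; tag at (1.0, 2.0) m
readers = [(x * 0.1, 0.0) for x in range(30)]
tag = (1.0, 2.0)
meas = [phase(math.hypot(tag[0] - rx, tag[1] - ry)) for rx, ry in readers]
grid = [(i * 0.05, j * 0.05) for i in range(60) for j in range(60)]
est = locate(readers, meas, grid)
```

The diversity of reader positions along the trajectory is what resolves the phase-wrap ambiguity (the phase repeats every λ/2 of distance), which is why the trajectory shape matters in the comparisons the thesis describes.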
Abstract:
Galina Kovaleva. The Formation of the Exchange Rate on the Russian Market: Dynamics and Modelling. The Russian financial market is fast becoming one of the major sectors of the Russian economy. Assets have been increasing steadily, while new market segments and new financial market instruments have emerged. Kovaleva attempted to isolate the factors influencing exchange rates, determine patterns in the dynamic changes to the rouble/dollar exchange rate, construct models of the processes and, on the basis of these activities, make forecasts. She studied the significance of the economic indicators influencing the rouble/dollar exchange rate at different times, and developed multi-factor econometric models. In order to reveal the inner structure of the financial indicators and to work out ex-post forecasts for different time intervals, she carried out a series of calculations aimed at constructing trend-cyclical (TC) models, harmonic models, and Box-Jenkins models. She found that: 1. The Russian financial market is dependent on the rouble/dollar exchange rate. Its dynamics are formed under the influence of the short-term state treasury notes and government bonds markets, interbank loans, the rouble/DM exchange rate, the inflation rate, and the DM/dollar exchange rate. The exchange rate is influenced by sales on the Moscow Interbank Currency Exchange and the mechanism of those sales. 2. The TC model makes it possible to conduct an in-depth study of the structure of the processes and to make forecasts of the dynamic changes to currency indicators. 3. The Russian market is increasingly influenced by the world currency market, and its prospects are of crucial interest for the world financial community.
Abstract:
Multi-site time series studies of air pollution and mortality and morbidity have figured prominently in the literature as comprehensive approaches for estimating the acute effects of air pollution on health. Hierarchical models are generally used to combine site-specific information and to estimate pooled air pollution effects, taking into account both within-site statistical uncertainty and across-site heterogeneity. Within a site, characteristics of time series data on air pollution and health (small pollution effects, missing data, highly correlated predictors, nonlinear confounding, etc.) make modelling all sources of uncertainty challenging. One potential consequence is underestimation of the statistical variance of the site-specific effects to be combined. In this paper we investigate the impact of variance underestimation on the pooled relative rate estimate. We focus on two-stage normal-normal hierarchical models and on underestimation of the statistical variance at the first stage. By mathematical considerations and simulation studies, we found that variance underestimation does not affect the pooled estimate substantially. However, some sensitivity of the pooled estimate to variance underestimation is observed when the number of sites is small and the underestimation is severe. These simulation results are applicable to any two-stage normal-normal hierarchical model for combining information from site-specific results, and they can easily be extended to more general hierarchical formulations. We also examined the impact of variance underestimation on the national average relative rate estimate from the National Morbidity, Mortality, and Air Pollution Study, and found that variance underestimation of as much as 40% has little effect on the national average.
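The second stage of the two-stage normal-normal model pools site-specific estimates by inverse-variance weighting, with the across-site heterogeneity entering each weight; underestimating the first-stage variances perturbs those weights, which is the mechanism the paper studies. A minimal sketch of the pooling step, using a method-of-moments (DerSimonian-Laird) heterogeneity estimate and illustrative numbers:

```python
# Hedged sketch: pool site-specific effect estimates beta_i with first-stage
# variances v_i under a normal-normal model. Numbers are illustrative.
def pooled_estimate(betas, variances):
    n = len(betas)
    w = [1.0 / v for v in variances]
    fixed = sum(wi * b for wi, b in zip(w, betas)) / sum(w)   # fixed-effect pool
    q = sum(wi * (b - fixed) ** 2 for wi, b in zip(w, betas)) # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (n - 1)) / c)        # across-site heterogeneity variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * b for wi, b in zip(w_star, betas)) / sum(w_star)
    var_pooled = 1.0 / sum(w_star)
    return pooled, var_pooled, tau2

betas = [0.2, 0.9, 0.5, 1.4, 0.1]             # site-specific log relative rates
variances = [0.04, 0.09, 0.02, 0.16, 0.05]    # first-stage statistical variances
est, var_est, tau2 = pooled_estimate(betas, variances)
```

Shrinking every `variances[i]` by a common factor (mimicking first-stage underestimation) and re-running the pooling is the kind of sensitivity exercise whose outcome the paper quantifies.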
Abstract:
PURPOSE: To prospectively determine quantitatively and qualitatively the timing of maximal enhancement of the normal small-bowel wall by using contrast material-enhanced multi-detector row computed tomography (CT). MATERIALS AND METHODS: This HIPAA-compliant study was approved by the institutional review board. After information on radiation risk was given, written informed consent was obtained from 25 participants with no history of small-bowel disease (mean age, 58 years; 19 men) who had undergone single-level dynamic CT. Thirty seconds after the intravenous administration of contrast material, a serial dynamic acquisition, consisting of 10 images obtained 5 seconds apart, was performed. Enhancement measurements were obtained over time from the small-bowel wall and the aorta. Three independent readers qualitatively assessed small-bowel conspicuity. Quantitative and qualitative data were analyzed during the arterial phase, the enteric phase (which represented peak small-bowel mural enhancement), and the venous phase. Statistical analysis included paired Student t test and Wilcoxon signed rank test with Bonferroni correction. A P value less than .05 was used to indicate a significant difference. RESULTS: The mean time to peak enhancement of the small-bowel wall was 49.3 seconds +/- 7.7 (standard deviation) and 13.5 seconds +/- 7.6 after peak aortic enhancement. Enhancement values were highest during the enteric phase (P < .05). Regarding small-bowel conspicuity, images obtained during the enteric phase were most preferred qualitatively; there was a significant difference between the enteric and arterial phases (P < .001) but not between the enteric and venous phases (P = .18). CONCLUSION: At multi-detector row CT, peak mural enhancement of the normal small bowel occurs on average about 50 seconds after intravenous administration of contrast material or 14 seconds after peak aortic enhancement.
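Given the acquisition protocol described above (first image 30 seconds after injection, then one image every 5 seconds), the time to peak enhancement is read directly off the attenuation-time curve. The sketch below uses illustrative HU values, not the study's measurements.

```python
# Hedged sketch: recover time-to-peak enhancement from a serial dynamic
# acquisition starting 30 s after injection with a 5 s interval.
def time_to_peak(hu_values, start_s=30.0, interval_s=5.0):
    peak_idx = max(range(len(hu_values)), key=hu_values.__getitem__)
    return start_s + peak_idx * interval_s

# Illustrative enhancement curves over the 10 images (30-75 s post-injection)
bowel_hu = [55, 63, 74, 85, 93, 97, 95, 90, 84, 79]
aorta_hu = [180, 260, 310, 295, 270, 240, 215, 195, 180, 170]
delay = time_to_peak(bowel_hu) - time_to_peak(aorta_hu)
```

With these placeholder curves the bowel peaks roughly 15 seconds after the aorta, in the same range as the ~13.5-second delay the study reports.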
Abstract:
Internal combustion engines are, and will continue to be, a primary mode of power generation for ground transportation. Challenges exist in meeting fuel consumption regulations and emission standards while upholding performance, as fuel prices rise, and resource depletion and environmental impacts are of increasing concern. Diesel engines are advantageous due to their inherent efficiency advantage over spark ignition engines; however, their NOx and soot emissions can be difficult to control and reduce due to an inherent tradeoff. Diesel combustion is spray and mixing controlled providing an intrinsic link between spray and emissions, motivating detailed, fundamental studies on spray, vaporization, mixing, and combustion characteristics under engine relevant conditions. An optical combustion vessel facility has been developed at Michigan Technological University for these studies, with detailed tests and analysis being conducted. In this combustion vessel facility a preburn procedure for thermodynamic state generation is used, and validated using chemical kinetics modeling both for the MTU vessel, and institutions comprising the Engine Combustion Network international collaborative research initiative. It is shown that minor species produced are representative of modern diesel engines running exhaust gas recirculation and do not impact the autoignition of n-heptane. Diesel spray testing of a high-pressure (2000 bar) multi-hole injector is undertaken including non-vaporizing, vaporizing, and combusting tests, with sprays characterized using Mie back scatter imaging diagnostics. Liquid phase spray parameter trends agree with literature. Fluctuations in liquid length about a quasi-steady value are quantified, along with plume to plume variations. Hypotheses are developed for their causes including fuel pressure fluctuations, nozzle cavitation, internal injector flow and geometry, chamber temperature gradients, and turbulence. 
These are explored using a mixing-limited vaporization model with an equation-of-state approach for thermophysical properties. This model is also applied to single- and multi-component surrogates. Results include the development of the combustion research facility and a validated thermodynamic state generation procedure. The developed equation-of-state approach provides an application for improving surrogate fuels, both single- and multi-component, in terms of diesel spray liquid length, with knowledge of only critical fuel properties. Experimental studies are coupled with modeling incorporating improved thermodynamic non-ideal gas and fuel