20 results for Simulation and System analysis
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The objective of the Ph.D. thesis is to lay the foundations of an all-embracing link analysis procedure that may form a general reference scheme for the future state of the art of RF/microwave link design: it is basically meant as a circuit-level simulation of an entire radio link, with the (generally multiple) transmitting and receiving antennas examined by EM analysis. In this way the influence of mutual couplings on the frequency-dependent near-field and far-field performance of each element is fully accounted for. The set of transmitters is treated as a single nonlinear system loaded by the multiport antenna, and is analyzed by nonlinear circuit techniques. In order to establish the connection between transmitters and receivers, the far fields incident on the receivers are evaluated by EM analysis and combined by extending an available Ray Tracing technique to the link study. EM theory is used to describe the receiving array as a linear active multiport network. Link performance in terms of bit error rate (BER) is eventually verified a posteriori by a fast system-level algorithm. In order to validate the proposed approach, four heterogeneous application contexts are provided. A complete MIMO link design in a realistic propagation scenario constitutes the reference case study. The second context regards the design, optimization and testing of various types of rectennas for power generation from common RF sources. The last two concern the design and implementation of two types of radio identification tags, at X-band and V-band respectively. In all cases an exhaustive nonlinear/electromagnetic co-simulation and co-design is shown to be essential for any accurate system performance prediction.
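To illustrate the kind of a-posteriori system-level BER check mentioned above (a toy sketch, not the thesis' co-simulation procedure: the channel here is an assumed i.i.d. Rayleigh model rather than one obtained from EM/ray-tracing analysis of the real antennas), a minimal Monte Carlo estimate for a 2x2 MIMO link with QPSK and zero-forcing detection could look as follows.

```python
import numpy as np

def qpsk_ber_mimo_zf(snr_db, n_tx=2, n_rx=2, n_sym=20_000, seed=0):
    """Monte Carlo BER of a flat-fading MIMO link with QPSK and zero-forcing.

    Toy system-level check only: i.i.d. Rayleigh channel, ideal detection,
    no coupling, nonlinearity or antenna model.
    """
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, size=(2 * n_tx, n_sym))          # 2 bits/symbol/antenna
    sym = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)
    snr = 10 ** (snr_db / 10)
    errors = 0
    for k in range(n_sym):
        H = (rng.standard_normal((n_rx, n_tx)) +
             1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
        noise = (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
        noise *= np.sqrt(n_tx / (2 * snr))                       # set receive SNR
        y = H @ sym[:, k] + noise
        x_hat = np.linalg.pinv(H) @ y                            # zero-forcing equalizer
        bits_hat = np.stack([x_hat.real > 0, x_hat.imag > 0], axis=1).astype(int).ravel()
        errors += np.count_nonzero(bits_hat != bits[:, k])
    return errors / (2 * n_tx * n_sym)

if __name__ == "__main__":
    for snr in (0, 5, 10, 15):
        print(f"SNR {snr:2d} dB -> BER ~ {qpsk_ber_mimo_zf(snr):.3e}")
```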
Abstract:
The world of communication has changed quickly in the last decade, resulting in a rapid increase in the pace of people's lives. This is due to the explosion of mobile communication and the internet, which has now reached all levels of society. With such pressure for access to communication there is increased demand for bandwidth. Photonic technology is the right solution for high-speed networks that have to supply wide bandwidth to new communication service providers. In particular, this Ph.D. dissertation deals with DWDM optical packet-switched networks. The subject raises a huge number of problems, from the physical layer up to the transport layer; here it is tackled from the network-level perspective. The long-term solution represented by optical packet switching has been fully explored in these years together with the Network Research Group at the Department of Electronics, Computer Science and Systems of the University of Bologna. Several national and international projects supported this research, such as the Network of Excellence (NoE) e-Photon/ONe, funded by the European Commission in the Sixth Framework Programme, and the INTREPIDO project (End-to-end Traffic Engineering and Protection for IP over DWDM Optical Networks), funded by the Italian Ministry of Education, University and Scientific Research. Optical packet switching for DWDM networks is studied at the single-node level as well as at the network level. In particular, the techniques discussed are intended to be implemented in a long-haul transport network that connects local and metropolitan networks around the world. The main issues faced are contention resolution in an asynchronous, variable-packet-length environment, adaptive routing, wavelength conversion and node architecture. Characteristics that a network must assure, such as quality of service and resilience, are also explored at both the node and the network level. Results are mainly evaluated via simulation and analysis.
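As a minimal illustration of how contention at a single node can be evaluated both by simulation and analytically (a generic toy model, not the simulator used in the thesis; all parameter values are assumptions), the sketch below estimates the packet-loss probability of one output fibre carrying W wavelengths with full wavelength conversion, under Poisson arrivals and exponential packet lengths, and compares it with the Erlang-B value.

```python
import heapq
import random

def simulate_loss(wavelengths=8, arrival_rate=6.0, mean_len=1.0,
                  n_packets=200_000, seed=1):
    """Toy event-driven model of an OPS output port with full wavelength
    conversion: an arriving packet is dropped only if all wavelengths are busy."""
    rng = random.Random(seed)
    busy_until = []          # min-heap of departure times of busy wavelengths
    t, dropped = 0.0, 0
    for _ in range(n_packets):
        t += rng.expovariate(arrival_rate)                 # next asynchronous arrival
        while busy_until and busy_until[0] <= t:           # release finished packets
            heapq.heappop(busy_until)
        if len(busy_until) >= wavelengths:
            dropped += 1                                    # contention: packet lost
        else:
            heapq.heappush(busy_until, t + rng.expovariate(1.0 / mean_len))
    return dropped / n_packets

def erlang_b(servers, offered_load):
    """Erlang-B blocking probability, the analytical benchmark for this model."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

if __name__ == "__main__":
    print("simulated loss :", simulate_loss())
    print("Erlang-B value :", erlang_b(8, 6.0 * 1.0))   # load = rate * mean length
```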
Abstract:
The habenular nuclei are diencephalic structures present in Vertebrates and they form, with the associated fiber systems, part of the system that connects the telencephalon to the ventral mesencephalon (Concha M. L. and Wilson S. W., 2001). In representative species of almost all classes of Vertebrates the habenular nuclei are asymmetric, both in size and in neuronal and neurochemical organization, although different types of asymmetry follow different evolutionary courses. Previous studies have analyzed the spread and diversity of the asymmetry in species for which data are not clear (Kemali M. et al., 1980). Nevertheless, the evolution of the phenomenon is still not fully understood, and the ontogenetic mechanisms that have led to the development of habenular asymmetry are not clear (Smeets W.J. et al., 1983). For the present study 14 species of Elasmobranchs and 15 species of Teleosts have been used. Brains removed from the animals were fixed in 4% paraformaldehyde in phosphate buffer and analyzed with different techniques; histological, immunohistochemical and ultrastructural analyses were used to describe this asymmetry. My results confirm data previously obtained in other Elasmobranch species, in which the left habenula is larger than the right one; the Teleosts show slight differences in the size of the habenular ganglia, with the left habenular nucleus larger than the right in some species. In the course of these studies, a correlation between lifestyle and diencephalic asymmetry seems to emerge: among the Teleosts analyzed, the species with benthic habits (such as Lepidorhombus boscii, Platichthys flesus, Solea vulgaris) seem to possess a slight asymmetry, analogous to that of the Elasmobranchs, while in the other species (such as Liza aurata, Anguilla anguilla, Trisopterus minutus) the habenulae are symmetrical. However, various aspects of the neuroanatomical asymmetries of the epithalamus have not yet been studied in enough depth to obtain a complete picture of the evolution of this phenomenon, and new studies are needed to examine the species without clear asymmetry, in order to understand the spread and diversity of the asymmetry of the habenulae among Vertebrates.
Abstract:
Flicker is a power quality phenomenon that refers to the cyclic instability of light intensity resulting from supply voltage fluctuation, which in turn can be caused by disturbances introduced during power generation, transmission or distribution. The standard EN 61000-4-15, recently adopted also by the IEEE as IEEE Standard 1453, relies on the analysis of the supply voltage, which is processed according to a suitable model of the lamp / human eye / brain chain. As for the lamp, an incandescent 60 W, 230 V, 50 Hz source is assumed. As far as the human eye / brain model is concerned, it is represented by the so-called flicker curve. Such a curve was determined several years ago by statistically analyzing the results of tests in which people were subjected to flicker with different combinations of magnitude and frequency. The limitations of this standard approach to flicker evaluation are essentially two. First, the provided index of annoyance, Pst, can be related to actual fatigue of the human visual system only if such an incandescent lamp is used. Moreover, the implemented response to flicker is "subjective", given that it relies on people's answers about their feelings. In the last 15 years, many scientific contributions have tackled these issues by investigating the possibility of developing a novel model of the eye-brain response to flicker and overcoming the strict dependence of the standard on the kind of light source. In this light, this thesis aims at presenting an important contribution towards a new flickermeter. An improved visual system model using a physiological parameter, the mean value of the pupil diameter, is presented, thus allowing a more "objective" representation of the response to flicker. The system used both to generate flicker and to measure the pupil diameter is illustrated, along with the results of several experiments performed on volunteers. The intent is to demonstrate that the measurement of this geometrical parameter can give reliable information about the response of the human visual system to light flicker.
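For context, the short-term flicker severity index produced by the EN 61000-4-15 flickermeter is obtained from percentiles of the instantaneous flicker sensation recorded over a 10-minute window; it is commonly written as

```latex
P_{st} = \sqrt{0.0314\,P_{0.1} + 0.0525\,P_{1s} + 0.0657\,P_{3s} + 0.28\,P_{10s} + 0.08\,P_{50s}}
```

where P_{0.1}, P_{1s}, P_{3s}, P_{10s} and P_{50s} are the (smoothed) flicker levels exceeded for 0.1%, 1%, 3%, 10% and 50% of the observation time. The pupil-diameter approach proposed in the thesis targets the perceptual model behind this index, not the formula itself.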
Abstract:
The general aim of this work is to contribute to the energy performance assessment of ventilated façades through the simultaneous use of experimental data and numerical simulations. A significant amount of experimental work was carried out on different types of naturally ventilated façades. The measurements were taken on a test building, a tower whose external walls are rainscreen ventilated façades, with ventilation grills located at the top and at the bottom. In this work the modelling of the test building with a dynamic thermal simulation program (ESP-r) is presented and the main results are discussed. In order to investigate the best summer thermal performance of a rainscreen ventilated skin façade, different setups of rainscreen walls were studied. In particular, the influence of ventilation grills, air cavity thickness, skin colour, skin material and façade orientation was investigated. It is shown that some rainscreen ventilated façade typologies are capable of lowering the cooling energy demand by a few percentage points.
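As a rough illustration of the buoyancy-driven ventilation on which such rainscreen cavities rely (this is not part of the thesis model, which uses ESP-r; geometry, discharge coefficient and temperatures below are assumed placeholder values), a first-order stack-effect estimate of the cavity airflow could be sketched as follows.

```python
import math

def stack_airflow(area_m2=0.05, height_m=6.0, t_cavity_c=45.0, t_out_c=30.0, cd=0.6):
    """First-order stack-effect estimate of buoyancy-driven flow in a ventilated
    cavity: Q = Cd * A * sqrt(2 * g * H * dT / T_mean).  Toy numbers and assumed
    geometry; a dynamic tool such as ESP-r resolves this far more accurately."""
    g = 9.81
    t_mean = 273.15 + (t_cavity_c + t_out_c) / 2.0
    dt = t_cavity_c - t_out_c
    return cd * area_m2 * math.sqrt(2.0 * g * height_m * dt / t_mean)   # m^3/s

if __name__ == "__main__":
    print(f"estimated cavity airflow: {stack_airflow() * 1000:.1f} l/s")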
Abstract:
From the institutional point of view, the legal system of intellectual property rights (hereafter, IPR) is one of the incentive institutions of innovation and plays a very important role in economic development. According to the law, the owner of an IPR enjoys a kind of exclusive right to use his intellectual property (hereafter, IP); in other words, he enjoys a kind of legal monopoly position in the market. How to properly protect IPR and at the same time regulate its abuse is a topic of great interest in this knowledge-oriented market, and it is the basic research question of this dissertation. In this work, by way of comparative study and law-and-economics analysis, and based on the theories of the Austrian School of Economics, the author argues that there is no contradiction between IPR and competition law. However, in the new economy (high-technology industries) there is a real possibility that the owner of an IPR will abuse his dominant position. Given the characteristics of the new economy, such as high rates of innovation, "instant scalability", network externalities and lock-in effects, the IPR "will vest the dominant undertakings with the power not just to monopolize the market but to shift such power from one market to another, to create strong barriers to enter and, in so doing, granting the perpetuation of such dominance for quite a long time."1 Therefore, in order to preserve the order of the market, to vitalize competition and innovation, and to benefit consumers, it is common practice in the EU and the US to apply competition law to regulate IPR abuse. From the perspective of the Austrian School, and especially of Schumpeterian theory, innovation, competition, monopoly and entrepreneurship are interrelated; therefore, a dynamic antitrust model based on these theories should be applied to analyse the relationship between IPR and competition law. China is still a developing country with a relatively low capacity for innovation. Therefore, at present, protecting IPR and making good use of the incentive mechanism of the IPR legal system is the first important task for the Chinese government. However, according to investigation reports,2 based on their IPR advantage and capital advantage, some multinational companies have in fact obtained dominant or monopoly market positions in some sectors of some industries, and some IPR abuses have been conducted by such companies. The Chinese government should therefore pay close attention to regulating any IPR abuse. However, as to how to effectively regulate IPR abuse by way of competition law in the Chinese context, from the perspectives of law-and-economics theory, legislation and judicial practice, China still has a long way to go.
Abstract:
Climate change has been acknowledged as a threat to humanity. Most scholars agree that to avert dangerous climate change and to transform economies into low-carbon societies, deep global emission reductions are required by the year 2050. Under the framework of the Kyoto Protocol, the Clean Development Mechanism (CDM) is the only market-based instrument that encourages industrialised countries to pursue emission reductions in developing countries. The CDM aims to pay the incremental finance necessary to operationalize emission reduction projects which are otherwise not financially viable. According to the objectives of the Kyoto Protocol, the CDM should finance projects that are additional to those which would have happened anyway, contribute to sustainable development in the countries hosting the projects, and be cost-effective. To enable the identification of such projects, an institutional framework has been established by the Kyoto Protocol which lays out responsibilities for public and private actors. This thesis examines whether the CDM has achieved these objectives in practice and can thus be considered an effective tool to reduce emissions. To complete this investigation, the book applies economic theory and analyses the CDM from two perspectives. The first perspective is the supply dimension, which answers the question of how, in practice, the CDM system identified additional, cost-effective, sustainable projects and generated emission reductions. The main contribution of this book is the second perspective, the compliance dimension, which answers the question of whether industrialised countries effectively used the CDM for compliance with their Kyoto targets. The application of the CDM in the European Union Emissions Trading Scheme (EU ETS) is used as a case study. Where the analysis identifies inefficiencies within the supply or the compliance dimension, potential improvements of the legal framework are proposed and discussed.
Abstract:
Atmospheric aerosol particles directly impact air quality and participate in controlling the climate system. Organic Aerosol (OA) in general accounts for a large fraction (10-90%) of the global submicron (PM1) particulate mass. Chemometric methods for source identification are used in many disciplines, but methods relying on the analysis of NMR datasets are rarely used in atmospheric sciences. This thesis provides an original application of NMR-based chemometric methods to atmospheric OA source apportionment. The method was tested on chemical composition databases obtained from samples collected in different environments in Europe, hence exploring the impact of a great diversity of natural and anthropogenic sources. We focused on sources of water-soluble OA (WSOA), for which NMR analysis provides substantial advantages compared to alternative methods. Different factor analysis techniques were applied independently to NMR datasets from nine field campaigns of the EUCAARI project and allowed the identification of recurrent source contributions to WSOA in the European background troposphere: 1) marine SOA; 2) aliphatic amines from ground sources (agricultural activities, etc.); 3) biomass burning POA; 4) biogenic SOA from terpene oxidation; 5) "aged" SOA, including humic-like substances (HULIS); 6) other factors, possibly including contributions from primary biological aerosol particles and products of cooking activities. Biomass burning POA accounted for more than 50% of WSOC in the winter months. Aged SOA associated with HULIS was predominant (> 75%) during spring and summer, suggesting that secondary sources and transboundary transport become more important in those seasons. The complex aerosol measurements carried out, involving several foreign research groups, provided the opportunity to compare the source apportionment results obtained by NMR analysis with those provided by the more widespread Aerodyne aerosol mass spectrometer (AMS) techniques, whose OA categorization schemes are now becoming a standard for atmospheric chemists. The results emerging from this thesis partly confirm the AMS classification and partly challenge it.
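To give a flavour of how factor-analysis techniques decompose a "samples x spectral bins" matrix into source profiles and contributions (a generic sketch with synthetic data, not the specific methods applied to the EUCAARI NMR datasets), one might use non-negative matrix factorization as below.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic example: a samples-by-spectral-bins matrix built from 3 hidden sources.
rng = np.random.default_rng(0)
true_profiles = rng.random((3, 120))                  # source spectral fingerprints
true_contrib = rng.random((60, 3))                    # per-sample source strengths
X = true_contrib @ true_profiles + 0.01 * rng.random((60, 120))

# Non-negative factorization: X ~ W @ H, with W >= 0 (contributions per sample)
# and H >= 0 (factor, i.e. source, profiles).
model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)
H = model.components_

# Normalize each profile to unit sum so W carries the mass contribution.
scale = H.sum(axis=1)
W_scaled = W * scale[None, :]
print("relative contribution of each factor, first 5 samples:")
print(np.round(W_scaled[:5] / W_scaled[:5].sum(axis=1, keepdims=True), 2))
```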
Abstract:
In the framework of micro-CHP (Combined Heat and Power) energy systems and the Distributed Generation (DG) concept, an Integrated Energy System (IES) able to meet the energy and thermal requirements of specific users was conceived and built, using different types of fuel to feed several micro-CHP energy sources, with the integration of electric generators based on renewable energy sources (RES), electrical and thermal storage systems and a control system. A 5 kWel Polymer Electrolyte Membrane Fuel Cell (PEMFC) has been studied. Using experimental data obtained from various measurement campaigns, the electrical and CHP performance of the PEMFC system has been determined. The effect of the water management of the anodic exhaust at variable FC loads has been analyzed, and the purge process programming logic was optimized, leading also to the determination of the optimal flooding times as the AC power delivered by the cell varies. Furthermore, the degradation mechanisms of the PEMFC system, in particular those due to flooding of the anodic side, have been assessed using an algorithm that treats the FC as a black box and is able to determine the amount of unreacted H2 and, therefore, the causes that produce it. Using experimental data covering a two-year time span, the ageing suffered by the FC system has been assessed and analyzed.
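The hydrogen balance underlying such a black-box assessment can be illustrated with Faraday's law (a generic sketch with invented figures, not the algorithm of the thesis): the H2 actually converted by the stack is fixed by the delivered current, so comparing it with the metered feed gives the unreacted fraction lost through purges, crossover or flooding-related waste.

```python
F = 96485.0           # C/mol, Faraday constant

def unreacted_h2_fraction(stack_current_a, n_cells, h2_feed_mol_s):
    """Fraction of the hydrogen feed that does not react electrochemically.

    Faraday's law: each H2 molecule supplies 2 electrons, so the stack
    consumption rate is I * n_cells / (2 F) in mol/s.  Whatever exceeds this
    in the metered feed is lost (purge, crossover, flooding-related waste).
    """
    h2_consumed = stack_current_a * n_cells / (2.0 * F)
    return max(0.0, (h2_feed_mol_s - h2_consumed) / h2_feed_mol_s)

if __name__ == "__main__":
    # Purely illustrative numbers for a ~5 kWel-class PEMFC stack.
    frac = unreacted_h2_fraction(stack_current_a=120.0, n_cells=80,
                                 h2_feed_mol_s=0.055)
    print(f"unreacted H2 fraction: {frac:.1%}")
```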
Abstract:
The thesis analyses the hydrodynamics induced by an array of Wave Energy Converters (WECs) from both an experimental and a numerical point of view. WECs can be considered an innovative solution able to contribute to the green energy supply and, at the same time, to protect the rear coastal area under marine spatial planning considerations. This research activity essentially arises from this combined concept. The WEC under examination is a floating device belonging to the Wave Activated Bodies (WAB) class. Experimental tests were performed at Aalborg University at different scales and layouts, and the performance of the models was analysed under a variety of irregular wave attacks. The numerical simulations were performed with the codes MIKE 21 BW and ANSYS-AQWA. Experimental results were also used to calibrate the numerical parameters and/or were directly compared to numerical results, in order to extend the experimental database. The results of the research activity are summarized in terms of device performance and guidelines for a future wave farm installation. The device length should be "tuned" based on the local climate conditions. The wave transmission behind the devices is rather high, suggesting that the tested layout should be considered as a module of a wave farm installation. Indications on the minimum inter-distance among the devices are provided. Furthermore, a CALM mooring system leads to lower wave transmission and also larger power production than a spread mooring. The two numerical codes have different potentialities: the hydrodynamics around single and multiple devices is obtained with MIKE 21 BW, while wave loads and motions for a single moored device are derived from ANSYS-AQWA. Combining the experimental and numerical results, it is suggested, for both coastal protection and energy production, to adopt a staggered layout, which maximises the device density and minimizes the marine space required for the installation.
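As a small example of how the wave transmission quoted above is commonly quantified in wave-flume practice (a generic sketch, not the Aalborg data processing; the spectra below are invented shapes standing in for wave-gauge records in front of and behind the array), the transmission coefficient can be computed from the zeroth spectral moments of the incident and transmitted spectra.

```python
import numpy as np

def significant_wave_height(spectrum, freqs):
    """Hm0 = 4 * sqrt(m0), with m0 the zeroth moment of the variance spectrum."""
    m0 = np.trapz(spectrum, freqs)
    return 4.0 * np.sqrt(m0)

def transmission_coefficient(s_incident, s_transmitted, freqs):
    """Kt = Hm0,transmitted / Hm0,incident on the same frequency axis."""
    return (significant_wave_height(s_transmitted, freqs) /
            significant_wave_height(s_incident, freqs))

if __name__ == "__main__":
    f = np.linspace(0.05, 1.0, 200)
    s_in = np.exp(-((f - 0.25) / 0.06) ** 2)   # illustrative incident spectrum
    s_tr = 0.7 * s_in                          # assumed 30% energy reduction
    print(f"Kt = {transmission_coefficient(s_in, s_tr, f):.2f}")
```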
Abstract:
Coastal sand dunes are a valuable resource, first of all in terms of defence against storm waves and saltwater ingression; moreover, these morphological elements constitute a unique ecosystem of transition between the marine and the terrestrial environment. Research on dune systems has been a substantial part of coastal science since the last century. Nowadays this branch has assumed even greater importance for two reasons: on one side, the advent of new technologies, especially those related to Remote Sensing, has widened the researchers' possibilities; on the other side, today's intense urbanization has strongly limited the possibilities of dune development and fragmented what remained from the last century. This is particularly true in the Ravenna area, where industrialization combined with the tourist economy and intense subsidence has left only a few residual dune ridges still active. In this work three different foredune ridges along the Ravenna coast have been studied with Laser Scanner technology. This research was not limited to analysing volume or spatial differences, but also tried to find new ways and new features to monitor this environment. Moreover, the author planned a series of tests to validate data from the Terrestrial Laser Scanner (TLS), with the additional aim of finalizing a methodology to test 3D survey accuracy. Data acquired by TLS were then applied, on one hand, to test some recent applications, such as the Digital Shoreline Analysis System (DSAS) and Computational Fluid Dynamics (CFD), to prove their efficacy in this field; on the other hand, the author used TLS data to look for correlations with meteorological indexes (forcing factors) linked to sea and wind (Fryberger's method), applying statistical tools such as Principal Component Analysis (PCA).
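A minimal sketch of the kind of PCA used to relate morphological change to meteo-marine forcing factors follows; it is illustrative only, and the variables, units and values are invented placeholders rather than the thesis data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical monthly table: wind drift potential (Fryberger-style), storm-wave
# height, surge level, and the dune volume change measured by repeated TLS surveys.
rng = np.random.default_rng(42)
n = 24
wind_dp = rng.normal(50, 15, n)              # drift potential [vector units]
wave_h = rng.normal(2.0, 0.6, n)             # significant storm wave height [m]
surge = rng.normal(0.4, 0.2, n)              # surge level [m]
dv = -0.8 * wave_h - 0.5 * surge + 0.3 * wind_dp / 50 + rng.normal(0, 0.3, n)

X = np.column_stack([wind_dp, wave_h, surge, dv])
X_std = StandardScaler().fit_transform(X)     # PCA on standardized variables

pca = PCA()
scores = pca.fit_transform(X_std)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
print("PC1 loadings (wind, wave, surge, dV):", np.round(pca.components_[0], 2))
```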
Abstract:
The research work reported in this Thesis was carried out along two main lines of research. The first and main line concerns the synthesis of heteroaromatic compounds with increasing steric hindrance, with the aim of preparing stable atropisomers. The main tools used for the study of these dynamic systems, as described in the Introduction, are DNMR, coupled with line-shape simulation, and DFT calculations aimed at conformational analysis for the prediction of the geometries and of the energy barriers to the transition states. These techniques have been applied to the research projects on:
• atropisomers of arylmaleimides;
• atropisomers of 4-arylpyrazolo[3,4-b]pyridines;
• the intramolecular NO2/CO interaction in solution;
• 2-arylpyridines.
In parallel with the main project, and in collaboration with other groups, a second line of research concerned the determination of absolute configuration. The products, deriving from organocatalytic reactions, in many cases could not be analyzed by means of X-ray diffraction, making it necessary to develop a protocol based on spectroscopic methodologies: NMR, circular dichroism and computational tools (DFT, TD-DFT) have been employed for this purpose. This Thesis reports the determination of the absolute configuration of:
• substituted 1,2,3,4-tetrahydroquinolines;
• compounds from the enantioselective Friedel-Crafts alkylation-acetalization cascade of naphthols with α,β-unsaturated cyclic ketones;
• substituted 3,4-annulated indoles.
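Barriers extracted from DNMR line-shape simulation are typically converted from exchange rate constants via the Eyring equation; the helper below is standard physical chemistry rather than thesis-specific code, and the rate and temperature in the example are illustrative values.

```python
import math

R = 8.314462618        # J mol^-1 K^-1
KB = 1.380649e-23      # J K^-1
H = 6.62607015e-34     # J s

def eyring_barrier(k_exchange_s, temperature_k):
    """Free-energy barrier dG# in kJ/mol from an exchange rate constant,
    assuming a transmission coefficient of 1:
        k = (kB*T/h) * exp(-dG#/(R*T))  =>  dG# = R*T*ln(kB*T/(h*k))
    """
    return R * temperature_k * math.log(KB * temperature_k / (H * k_exchange_s)) / 1000.0

if __name__ == "__main__":
    # e.g. an exchange rate of 50 s^-1 measured at 298 K
    print(f"dG# ~ {eyring_barrier(50.0, 298.0):.1f} kJ/mol")
```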
Abstract:
This work is focused on the study of saltwater intrusion in coastal aquifers, and in particular on the development of conceptual schemes to evaluate the associated risk. Saltwater intrusion depends on different natural and anthropic factors, both showing strongly random behaviour, which should be considered for an optimal management of the territory and of water resources. Given the uncertainty of the problem parameters, the risk associated with salinization needs to be cast in a probabilistic framework. On the basis of a widely adopted sharp-interface formulation, key hydrogeological problem parameters are modeled as random variables, and global sensitivity analysis is used to determine their influence on the position of the saltwater interface. The analyses presented in this work rely on an efficient model reduction technique, based on Polynomial Chaos Expansion, able to provide an accurate description of the model without a large computational burden. When the assumptions of classical analytical models are not satisfied, as happens in several applications to real case studies, including the area analyzed in the present work, data-driven techniques can be adopted, based on the analysis of the data characterizing the system under study. A model can then be defined on the basis of the connections between the system state variables, with only a limited number of assumptions about the "physical" behaviour of the system.
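For orientation, sharp-interface formulations of the kind mentioned above build on the classical Ghyben-Herzberg relation, which links the depth of the freshwater/saltwater interface below sea level to the freshwater head:

```latex
z = \frac{\rho_f}{\rho_s - \rho_f}\, h \;\approx\; 40\, h
\qquad (\rho_f \approx 1000~\mathrm{kg/m^3},\ \rho_s \approx 1025~\mathrm{kg/m^3})
```

In the probabilistic setting described above, the hydrogeological parameters entering such an interface model are treated as random variables, and the Polynomial Chaos surrogate propagates their uncertainty to the interface position at a fraction of the cost of repeated full-model runs.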
Abstract:
This thesis presents ORC (Organic Rankine Cycle) technology, its advantages and the related problems. In particular, it provides an analysis of ORC waste heat recovery systems in different and innovative scenarios, focusing on cases from the largest to the smallest scale. Both industrial and residential ORC applications are considered. In both applications, the installation of a subcritical, recuperated ORC system is examined. Moreover, heat recovery is considered in the absence of an intermediate heat transfer circuit: this solution improves the recovery efficiency, but requires safety precautions. Possible integrations of ORC systems with renewable sources are also presented and investigated to improve the exploitation of non-programmable sources. In particular, the offshore oil and gas sector has been selected as a promising large-scale industrial ORC application. Starting from the design of ORC systems coupled with Gas Turbines (GTs) as topping systems, the dynamic behavior of the innovative GT+ORC combined cycles has been analyzed by developing a dynamic model of all the components considered. The dynamic behavior is caused by the integration with a wind farm. The electric and thermal aspects have been examined to identify the advantages related to the installation of the waste heat recovery system. Moreover, an experimental test rig has been built to test the performance of a micro-scale ORC prototype. The prototype recovers heat from a low-temperature water stream, available for instance as industrial or residential waste heat. In the test bench, various sensors have been installed and an acquisition system has been developed in the LabVIEW environment to fully analyze the ORC behavior. Data collected in real time and corresponding to the system's dynamic behavior have been used to evaluate the system performance through selected indexes. Moreover, various steady-state operating conditions have been identified and operating maps have been produced for a complete characterization of the system and to detect the optimal operating conditions.
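As a toy example of the kind of performance index evaluated from acquired test-rig data (simple first-law bookkeeping with invented numbers, not the prototype's actual measurements or the indexes selected in the thesis):

```python
def orc_indexes(q_evap_kw, w_expander_kw, w_pump_kw):
    """Net power and first-law thermal efficiency of an ORC from measured
    heat input and expander/pump powers.  Values used below are placeholders."""
    w_net = w_expander_kw - w_pump_kw
    eta_th = w_net / q_evap_kw
    return w_net, eta_th

if __name__ == "__main__":
    # Hypothetical micro-ORC operating point fed by a low-temperature water stream.
    w_net, eta = orc_indexes(q_evap_kw=30.0, w_expander_kw=2.6, w_pump_kw=0.3)
    print(f"net power: {w_net:.1f} kW, thermal efficiency: {eta:.1%}")
```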
Abstract:
Quantitative imaging in oncology aims at developing imaging biomarkers for the diagnosis and prediction of cancer aggressiveness and therapy response before any morphological change becomes visible. This Thesis exploits Computed Tomography perfusion (CTp) and multiparametric Magnetic Resonance Imaging (mpMRI) for investigating diverse cancer features in different organs. I developed a voxel-based image analysis methodology in CTp and extended its use to mpMRI, in order to perform precise and accurate analyses at the single-voxel level. This is expected to improve the reproducibility of measurements, the comprehension of cancer mechanisms and clinical interpretability. CTp has not yet entered clinical routine, despite its usefulness in monitoring cancer angiogenesis, because different perfusion computing methods yield irreproducible results. Machine learning applications in mpMRI, useful to detect imaging features representative of cancer heterogeneity, are instead mostly limited to clinical research, because the variability of the results and their difficult interpretability make clinicians not confident in clinical applications. In hepatic CTp, I investigated whether, and under what conditions, two widely adopted perfusion methods, Maximum Slope (MS) and Deconvolution (DV), could yield reproducible parameters. To this end, I developed signal processing methods to model the first-pass kinetics and remove any numerical cause hampering reproducibility. In mpMRI, I proposed a new approach to extract local first-order features, aiming at preserving the spatial reference and making their interpretation easier. In CTp, I found the cause of the MS and DV non-reproducibility: MS and DV represent two different states of the system. Transport delays invalidate the MS assumptions and, by correcting the MS formulation, I obtained the voxel-based equivalence of the two methods. In mpMRI, the developed predictive models allowed (i) detecting rectal cancers responding to neoadjuvant chemoradiation, which show, at pre-therapy, sparse coarse subregions with altered density, and (ii) predicting clinically significant prostate cancers, stemming from the disproportion between high- and low-diffusivity gland components.
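The Maximum Slope method referred to above estimates blood flow at each voxel as the peak slope of the tissue enhancement curve divided by the peak of the arterial input curve; a minimal voxel-wise sketch follows, with synthetic curves and without the delay correction developed in the thesis.

```python
import numpy as np

def maximum_slope_bf(tissue_curves, arterial_curve, dt_s):
    """Voxel-wise Maximum Slope perfusion estimate.

    tissue_curves : (n_voxels, n_times) contrast-enhancement curves [HU]
    arterial_curve: (n_times,) arterial input function [HU]
    Returns blood flow in ml/min/100ml (the factor 6000 converts 1/s).
    The classic MS assumptions (no venous outflow during the first pass,
    no arrival delay) are exactly what the thesis revisits and corrects.
    """
    slope = np.gradient(tissue_curves, dt_s, axis=1).max(axis=1)   # max dC_t/dt per voxel
    return 6000.0 * slope / arterial_curve.max()

if __name__ == "__main__":
    t = np.arange(0, 40, 0.5)
    aif = 300 * np.exp(-((t - 12) / 4.0) ** 2)          # synthetic arterial input
    tissue = np.vstack([a * np.exp(-((t - 16) / 6.0) ** 2) for a in (30, 45, 60)])
    print(np.round(maximum_slope_bf(tissue, aif, dt_s=0.5), 1))
```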