918 results for Input-Output Modelling


Relevance:

30.00%

Publisher:

Abstract:

This paper proposes and analyzes a digital hysteresis modulation using an FPGA (Field Programmable Gate Array) device and VHDL (VHSIC Hardware Description Language), applied to a hybrid three-phase rectifier with near-unity input power factor, composed of parallel SEPIC-controlled single-phase rectifiers connected to each leg of a standard six-pulse uncontrolled diode rectifier. The digital control allows a programmable THD (Total Harmonic Distortion) of the input currents, and it makes it possible for the power rating of the parallel-connected switch-mode converters to be a small fraction of the total average output power, yielding a compact converter, reduced input-current THD and near-unity input power factor. Finally, the proposed digital control, using an FPGA device and VHDL, offers important flexibility for the associated control technique, making it possible to obtain a programmable PFC (Power Factor Correction) hybrid three-phase rectifier that complies with the international standards (IEC and IEEE) imposing limits on the THD of the AC (alternating current) line input currents. The proposed strategy is verified experimentally. © 2008 IEEE.
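As a rough illustration of the hysteresis principle described above (a Python sketch of the control law only, not the authors' FPGA/VHDL implementation; the plant constants are invented), the comparator keeps a measured current inside a programmable band around a sinusoidal reference, the band width being the knob that trades switching frequency against input-current THD:

import math

def hysteresis_step(i_meas, i_ref, band, switch_on):
    """One step of a hysteresis current comparator: close the switch when
    the current falls below the lower band limit, open it above the upper
    limit, otherwise keep the previous state (band = half-width)."""
    if i_meas < i_ref - band:
        return True
    if i_meas > i_ref + band:
        return False
    return switch_on

# Toy plant: the current rises while the switch is on and decays when off.
i, on = 0.0, False
dt, band = 1e-5, 0.05
for k in range(2000):
    t = k * dt
    i_ref = math.sin(2 * math.pi * 50 * t)   # 50 Hz sinusoidal reference
    on = hysteresis_step(i, i_ref, band, on)
    i += (400.0 if on else -400.0) * dt      # crude inductor dynamics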

Relevance:

30.00%

Publisher:

Abstract:

Includes bibliography

Relevance:

30.00%

Publisher:

Abstract:

The Caribbean region remains highly vulnerable to the impacts of climate change. In order to assess the social and economic consequences of climate change for the region, the Economic Commission for Latin America and the Caribbean (ECLAC) has developed the Climate Impact Assessment Model (ECLAC-CIAM), a tool that can simultaneously assess multiple sectoral climate impacts specific to the Caribbean as a whole and to individual countries. To achieve this goal, an Integrated Assessment Model (IAM) with a Computable General Equilibrium core was developed, comprising three modules to be executed sequentially: the first defines the type and magnitude of economic shocks on the basis of a climate change scenario, the second is a global Computable General Equilibrium (CGE) model with a special regional and industrial classification, and the third processes the output of the CGE model to obtain more disaggregated results. The model has the potential to produce several economic estimates, but the current default results include the percentage change in real national income for individual Caribbean states, which provides a simple measure of welfare impacts. With some modifications, the model can also be used to consider the effects of single sectoral shocks (land, labour, capital and tourism) on the percentage change in real national income. Ultimately, the model is envisioned as an evolving tool for assessing the impact of climate change in the Caribbean and as a guide to policy responses with respect to adaptation strategies.
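A minimal sketch of the three-stage sequential structure described above (all function names, sectors and numbers are hypothetical placeholders, not ECLAC-CIAM internals):

def shocks_from_scenario(scenario):
    """Module 1: translate a climate change scenario into sectoral shocks
    (illustrative productivity shocks by sector)."""
    return {"tourism": -0.08, "agriculture": -0.05, "capital": -0.02}

def run_cge(shocks):
    """Module 2: stand-in for the global CGE core, returning an aggregate
    welfare proxy (percentage change in real national income)."""
    weights = {"tourism": 0.4, "agriculture": 0.3, "capital": 0.3}
    return {"real_income_pct": 100 * sum(weights[s] * v for s, v in shocks.items())}

def disaggregate(cge_output, countries):
    """Module 3: post-process the CGE output into per-country estimates."""
    return {c: cge_output["real_income_pct"] for c in countries}

results = disaggregate(run_cge(shocks_from_scenario("high-emissions")),
                       ["Barbados", "Jamaica", "Saint Lucia"])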

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with two important research aspects concerning radio frequency (RF) microresonators and switches. First, a new approach for compact modeling and simulation of these devices is presented; then, a combined process flow for their simultaneous fabrication on an SOI substrate is proposed. Compact models for microresonators and switches are extracted by applying mathematical model order reduction (MOR) to the devices' finite element (FE) description in ANSYS. The behaviour of these devices includes nonlinearities; however, an approximation is introduced in the creation of the FE model that enables the use of linear model order reduction. Microresonators are modeled with the introduction of transducer elements, which allow for direct coupling of the electrical and mechanical domains. The coupled-system element matrices are linearized around an operating point and reduced. The resulting macromodel is valid for small-signal analysis around the bias point, such as harmonic pre-stressed analysis, which is extremely useful for characterizing the frequency response of resonators. Compact modelling of switches preserves the nonlinearity of the device behaviour. Nonlinear reduced-order models are obtained by reducing the number of nonlinearities in the system and handling them as inputs to the system; in this way, the system can be reduced using linear MOR techniques, and the nonlinearities are reintroduced directly in the reduced-order model. Reducing the number of system nonlinearities implies approximating all distributed forces in the model with lumped forces. Both for microresonators and for switches, a procedure for matrix extraction has been developed so that the reduced-order models include the effects of electrical and mechanical pre-stress. The extraction process is fast and can be performed automatically from ANSYS binary files. The method has been applied to the simulation of several devices at both device and circuit level. Simulation results have been compared with full-model simulations and, when available, with experimental data. The reduced-order models have proven to preserve the accuracy of the finite element method and to give a good description of the overall device behaviour, despite the introduced approximations; in addition, simulation is very fast, both at device and at circuit level. A combined process flow for the integrated fabrication of microresonators and switches has been defined by merging two processes that are optimized for the independent fabrication of these devices. The major advantage of this process is the possibility to create on-chip circuit blocks that include both microresonators and switches; an application is, for example, a switched filter bank for a wireless transceiver. The process for microresonator fabrication is characterized by the use of silicon-on-insulator (SOI) wafers, by a deep reactive ion etching (DRIE) step for the creation of the vibrating structures in single-crystal silicon, and by a sacrificial oxide layer for the definition of the resonator-to-electrode distance. The fabrication of switches is characterized by the use of two different conductive layers for the definition of the actuation electrodes and by the use of a photoresist as a sacrificial layer for the creation of the suspended structure. Both processes include a gold electroplating step for the creation of the resonator electrodes, transmission lines and suspended structures.
The combined process flow is designed so that it preserves the basic properties of the original processes: neither the performance of the resonators nor that of the switches is affected by the simultaneous fabrication. Moreover, common fabrication steps are shared, which allows for cheaper and faster fabrication.
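The linear MOR step described above can be illustrated with a generic projection-based reduction (a sketch of the general technique, not the thesis' extraction procedure; the toy matrices stand in for linearized FE matrices):

import numpy as np

def krylov_reduce(A, B, C, r):
    """Project the linear state-space model x' = A x + B u, y = C x onto
    the r-dimensional Krylov subspace span{B, AB, ..., A^(r-1) B}, built
    with a one-sided Arnoldi iteration, returning the reduced macromodel
    (Ar, Br, Cr) = (V^T A V, V^T B, C V)."""
    n = A.shape[0]
    V = np.zeros((n, r))
    v = (B / np.linalg.norm(B)).ravel()
    for j in range(r):
        V[:, j] = v
        w = A @ v
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # Gram-Schmidt step
        v = w / np.linalg.norm(w)
    return V.T @ A @ V, V.T @ B, C @ V

# Toy 200-state "linearized FE model" reduced to 10 states.
rng = np.random.default_rng(0)
A = -np.eye(200) + 0.01 * rng.standard_normal((200, 200))
B = rng.standard_normal((200, 1))
C = rng.standard_normal((1, 200))
Ar, Br, Cr = krylov_reduce(A, B, C, 10)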

Relevance:

30.00%

Publisher:

Abstract:

The distortion of the perceived distance between two point stimuli applied to the skin of different body regions is known as Weber's illusion. This illusion has been observed and verified in many experiments in which subjects were asked to judge the distance between two stimuli applied to the skin of different body parts; these experiments show that the same physical distance between the stimuli is judged differently across body regions. The notion that distance on the skin is often perceived in a distorted way is widely accepted, but the neural mechanisms driving this illusion remain largely unknown. In particular, it is not yet clear how the distance between two simultaneous point stimuli is interpreted, nor which cortical areas are involved in this processing. Weber's illusion can be partly explained by the different mechanoreceptor densities of the various body regions and by the distorted image of our body residing in primary somatosensory cortex (the homunculus). However, these mechanisms appear insufficient to explain the observed phenomenon: according to results from a century of experiments, the actual distortions in distance judgements are much smaller than those that primary cortex would suggest. In other words, the illusion observed in tactile experiments is much smaller than the effect produced by the differing receptor densities of the body parts, or by their cortical extent. This has led to the hypothesis that tactile distance perception requires an additional cortical area, and additional mechanisms, that rescale, at least partially, the information coming from primary cortex, so as to maintain a degree of constancy in perceived tactile distance across the body surface. A sort of "rescaling process" has thus been proposed, which acts to reduce this illusion towards a more veridical percept. The existence of this process is supported by many researchers in neuroscience, in particular by Dr Matthew Longo, neuroscientist at the Department of Psychological Sciences (Birkbeck, University of London), whose research on tactile distance perception and body representation appears to confirm this hypothesis. However, the neural mechanisms and circuits underlying this putative rescaling process are still largely unknown. The aim of this thesis was to clarify the possible network organization and the neural mechanisms that give rise to Weber's illusion and to the rescaling process, using a neural network model. Most of the work was carried out in the Department of Psychological Sciences of Birkbeck, University of London, under the supervision of Dr M. Longo, who contributed mainly to the interpretation of the model results, suggesting how to process them so as to obtain clearer information, and who provided useful guidance for validating the results through statistical tests.
To replicate Weber's illusion and the rescaling process, the neural network was organized into two main layers of neurons, corresponding to two different cortical functional areas: • First layer of neurons (initial processing of the external stimuli): this layer can be thought of as part of primary somatosensory cortex, affected by cortical magnification (the homunculus). • Second layer of neurons (further processing of the information coming from the first layer): this layer may represent a higher cortical area involved in implementing the rescaling process. The networks were built including synaptic connections within each layer (lateral synapses) and connections between the two layers (feed-forward synapses), further assuming that the activity of each neuron depends on its input through a static sigmoidal relationship, as well as on first-order dynamics. In particular, using the structure just described, two different neural networks were implemented for two different body regions (for example, hand and arm), characterized by different tactile resolution and different cortical magnification, so as to replicate Weber's illusion and the rescaling process. These models can help explain the mechanism of Weber's illusion and thus offer a possible account of the rescaling process; in addition, they provide a valid contribution to understanding the strategy adopted by the brain in interpreting distance on the skin surface. Beyond explanation, such models could also be employed to formulate predictions that could later be verified in vivo on real subjects through tactile perception experiments. It is important to stress that the implemented models are to be regarded purely as functional models and do not aim to replicate physiological and anatomical details. The main results obtained with these models are the reproduction of Weber's illusion for two different body regions, hand and arm, as reported in the many articles on tactile illusions (for example, "The perception of distance and location for dual tactile pressures" by Barry G. Green). Weber's illusion was recorded through the output of the neural networks and then plotted, seeking to explain the reasons for these results.
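A minimal sketch of the neuron model just described (parameter values invented): each unit's activity follows first-order dynamics towards a static sigmoid of its total input, with two layers coupled by excitatory feed-forward synapses:

import numpy as np

def sigmoid(u, slope=1.0, theta=0.0):
    """Static sigmoidal activation around the threshold theta."""
    return 1.0 / (1.0 + np.exp(-slope * (u - theta)))

def step(x1, x2, stim, W12, L1, L2, tau=0.01, dt=0.001):
    """One Euler step of the two-layer network: each layer relaxes
    (first-order dynamics) towards the sigmoid of its input, which sums
    the external stimulus, lateral input within the layer and
    feed-forward input from the layer below."""
    x1 = x1 + dt / tau * (-x1 + sigmoid(stim + L1 @ x1))
    x2 = x2 + dt / tau * (-x2 + sigmoid(W12 @ x1 + L2 @ x2))
    return x1, x2

n = 100
x1, x2 = np.zeros(n), np.zeros(n)
W12 = np.eye(n)                            # illustrative feed-forward weights
L1 = L2 = np.zeros((n, n))                 # lateral weights (placeholder)
stim = np.zeros(n); stim[[30, 60]] = 1.0   # two point stimuli on the skin
for _ in range(200):
    x1, x2 = step(x1, x2, stim, W12, L1, L2)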

Relevance:

30.00%

Publisher:

Abstract:

Our interaction with the surrounding environment depends both on the various types of external stimuli we perceive (tactile, visual, acoustic, etc.) and on their processing by our nervous system. Sometimes, however, the integration and processing of these inputs can produce illusory effects. This occurs, for example, in tactile perception: the perception of tactile distance varies with the body region considered. The fact that distances on the skin are frequently misperceived was discovered about a century ago by Weber. In particular, a given physical distance is perceived as larger on body parts with a higher density of mechanoreceptors than on body parts with a lower density. Besides this illusion, an important phenomenon observed in vivo is that perceived tactile distance depends on the orientation of the stimuli applied to the skin: the distance perceived on a skin region changes as the orientation of the applied stimuli changes. Recently, Longo and Haggard (Longo & Haggard, J. Exp. Psychol. Hum. Percept. Perform. 37: 720-726, 2011), in order to investigate how our body is represented in the brain, compared tactile distances at different orientations on the hand and concluded that the distance between two point stimuli is perceived as larger when applied across the hand than along it. This illusion is known as the orientation-dependent tactile illusion, and several results in the literature show that it depends on the distance between the two point stimuli on the skin: Green reports (Green, Percept. Psychophys. 31: 315-323, 1982) that the larger the applied distance, the larger the resulting illusory effect. Weber's illusion and the orientation-dependent tactile illusion are explained in the literature in terms of differences in receptor density, cortical magnification effects in primary somatosensory cortex (cortical regions of different sizes are devoted to different body regions), and differences in the size and shape of receptive fields. However, these illusory effects are much weaker than would be expected from the physiological mechanisms listed above alone. This suggests that the tactile information processed in primary somatosensory cortex undergoes further processing stages in higher-level cortical areas, which act to reduce the gap between transversely and longitudinally perceived distances, making them more similar to each other. This process is called the "rescaling process". The neural mechanisms that implement the rescaling process in the brain remain largely unknown. The aim of my thesis project was therefore to build a neural network model that simulates tactile perception, the orientation-dependent illusion and the rescaling process, putting forward possible hypotheses about the neural mechanisms underlying them. The computational model consists of two neural layers that process tactile information.
One layer represents a lower-level cortical area (called Area1) in which a first, distorted tactile representation is formed; this layer could therefore correspond to an area of primary somatosensory cortex, where the representation of tactile distance is significantly distorted by receptive-field anisotropy and cortical magnification. The second layer (called Area2) represents a higher-level area that receives the tactile information from the first layer and reduces its distortion through the rescaling process; it could correspond to higher cortical areas (for example parietal or temporal cortex) that also contribute to the perception of tactile distances and are implicated in the rescaling process. In the model, neurons in Area1 receive input from the external stimuli (applied to the skin) and send information to neurons in Area2 through excitatory feed-forward synapses, while neurons within the same layer communicate through lateral synapses with a Mexican-hat profile. It is important to state that the implemented network is mainly a conceptual model that does not claim to reproduce the physiological and anatomical structures accurately; it should be read at an abstract level of implementation, without an exact correspondence between the model layers and anatomical brain regions. Nevertheless, the mechanisms included in the model are biologically plausible, so the network can be useful for a better understanding of the multiple mechanisms at work in our brain when processing different tactile inputs. Indeed, the model is able to reproduce several results reported in the articles by Green and by Longo & Haggard.
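The Mexican-hat lateral synapses mentioned above are commonly written as a difference of Gaussians: short-range excitation minus broader-range inhibition. A sketch with illustrative widths and gains (not the values used in the thesis):

import numpy as np

def mexican_hat(n, sigma_ex=2.0, sigma_in=6.0, k_ex=1.5, k_in=1.0):
    """Lateral weight matrix as a difference of Gaussians of the distance
    between unit positions: nearby units excite each other, units at
    intermediate distance inhibit each other."""
    idx = np.arange(n)
    d2 = (idx[:, None] - idx[None, :]) ** 2
    w = (k_ex * np.exp(-d2 / (2 * sigma_ex ** 2))
         - k_in * np.exp(-d2 / (2 * sigma_in ** 2)))
    np.fill_diagonal(w, 0.0)          # no self-connection
    return w

L1 = mexican_hat(100)                 # lateral weights for one layer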

Relevance:

30.00%

Publisher:

Abstract:

Several countries have acquired, over the past decades, large amounts of area-covering airborne electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving how appropriate these systems are for large-scale, efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the "full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, proper processing, inversion, post-processing, data integration and data calibration constitute the approach capable of providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of integrating several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area and can be further used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as for hydrogeological flow model prediction. In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it to having only a ground-based TEM dataset and/or only borehole data.
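In heavily simplified form, the inversion step can be illustrated as a Tikhonov-regularized least-squares problem (a generic sketch, not the actual AEM inversion workflow: a linear blurring operator stands in for the nonlinear electromagnetic forward model, and a smoothness penalty stabilizes the recovered profile):

import numpy as np

def tikhonov_invert(G, d, alpha):
    """Solve min ||G m - d||^2 + alpha ||D m||^2 for the model m, where
    D is a first-difference (roughness) operator."""
    n = G.shape[1]
    D = np.eye(n) - np.eye(n, k=1)
    return np.linalg.solve(G.T @ G + alpha * (D.T @ D), G.T @ d)

# Toy problem: a smooth "resistivity" profile seen through a blurring kernel.
rng = np.random.default_rng(1)
n = 50
m_true = 1.0 + np.exp(-((np.arange(n) - 25) / 5.0) ** 2)
G = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 4.0)
d = G @ m_true + 0.01 * rng.standard_normal(n)
m_est = tikhonov_invert(G, d, alpha=0.1)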

Relevance:

30.00%

Publisher:

Abstract:

Basic concepts and definitions relative to Lagrangian Particle Dispersion Models (LPDMs) for the description of turbulent dispersion are introduced. The study focuses on LPDMs that use, as input for the large-scale motion, fields produced by Eulerian models, with the small-scale motions described by Lagrangian Stochastic Models (LSMs). Data from two different dynamical models have been used: a Large Eddy Simulation (LES) and a General Circulation Model (GCM). After reviewing the small-scale closure adopted by the Eulerian model, the development and implementation of appropriate LSMs are outlined. The basic requirement of every LPDM used in this work is its fulfilment of the Well Mixed Condition (WMC). For the dispersion description in the GCM domain, a stochastic model of Markov order 0, consistent with the eddy-viscosity closure of the dynamical model, is implemented. An LSM of Markov order 1, more suitable for shorter timescales, has been implemented for the description of the unresolved motion of the LES fields, with different assumptions on the small-scale correlation time. Tests of the LSM on GCM fields suggest that the use of an interpolation algorithm able to maintain analytical consistency between the diffusion coefficient and its derivative is mandatory if the model is to satisfy the WMC. A dynamical time-step selection scheme based on the shape of the diffusion coefficient is also introduced, and the criteria for the integration-step selection are discussed. Absolute and relative dispersion experiments are carried out with various unresolved-motion settings for the LSM on LES data, and the results are compared with laboratory data. The study shows that the unresolved turbulence parameterization has a negligible influence on the absolute dispersion, while it affects the contributions of relative dispersion and meandering to absolute dispersion, as well as the Lagrangian correlation.
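For the Markov order 0 case, a standard form consistent with an eddy-diffusivity closure is the one-dimensional random displacement model, in which a drift term proportional to the diffusivity gradient is exactly what keeps an initially well-mixed particle distribution well mixed. A sketch with an invented K(z) profile (as the abstract notes, the derivative should be analytically consistent with K; a finite difference stands in here only for brevity):

import numpy as np

H = 1000.0                            # boundary-layer depth [m]

def K(z):
    """Illustrative parabolic eddy-diffusivity profile."""
    return 1.0 + 50.0 * (z / H) * np.maximum(1.0 - z / H, 0.0)

def dKdz(z, eps=0.1):
    return (K(z + eps) - K(z - eps)) / (2.0 * eps)

def rdm_step(z, dt, rng):
    """Random displacement model: dz = K'(z) dt + sqrt(2 K(z) dt) xi,
    with xi a standard Gaussian; the K'(z) drift enforces the WMC."""
    xi = rng.standard_normal(z.shape)
    return z + dKdz(z) * dt + np.sqrt(2.0 * K(z) * dt) * xi

rng = np.random.default_rng(2)
z = rng.uniform(0.0, H, 10000)        # well-mixed initial positions
for _ in range(500):
    z = rdm_step(z, dt=10.0, rng=rng)
    z = np.abs(z)                     # reflect at the ground
    z = np.where(z > H, 2.0 * H - z, z)   # reflect at the top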

Relevance:

30.00%

Publisher:

Abstract:

Holding the major share of stellar mass in galaxies and being old and passively evolving, early-type galaxies (ETGs) are the primary probes for investigating galaxy evolution scenarios, as well as useful means to provide insights into cosmological parameters. In this thesis work I focused specifically on ETGs and on their capability to constrain galaxy formation and evolution; in particular, the principal aims were to derive some of the ETG evolutionary parameters, such as age, metallicity and star formation history (SFH), and to study their age-redshift and mass-age relations. In order to infer galaxy physical parameters, I used the public code STARLIGHT: this program provides a best fit to the observed spectrum from a combination of many theoretical models defined in user-made libraries. Tests on simulated spectra show good agreement between output and input light-weighted ages starting from SNRs of ∼ 10, with a bias of ∼ 2.2% and a dispersion of 3%; metallicities and SFHs are also well reproduced. In the second part of the thesis I performed an analysis on real data, starting from Sloan Digital Sky Survey (SDSS) spectra. I found that galaxies get older with cosmic time and with increasing mass (for a fixed redshift bin), while absolute light-weighted ages are independent of the fitting parameters or the synthetic models used. Metallicities are very similar to each other and clearly consistent with the ones derived from the Lick indices. The predicted SFH indicates the presence of a double burst of star formation. Velocity dispersions and extinctions are also well constrained, following the expected behaviours. As a further step, I also fitted single SDSS spectra (with SNR ∼ 20) to verify that stacked spectra gave the same results without introducing any bias: this is an important check if one wants to apply the method at higher z, where stacked spectra are necessary to increase the SNR. Our upcoming aim is to adopt this approach also on galaxy spectra obtained from higher-redshift surveys, such as BOSS (z ∼ 0.5), zCOSMOS (z ∼ 1), K20 (z ∼ 1), GMASS (z ∼ 1.5) and, eventually, Euclid (z ∼ 2). Indeed, I am currently carrying out a preliminary study to establish the applicability of the method to lower-resolution, as well as higher-redshift (z ∼ 2), spectra, just like the Euclid ones.
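The core of such a fit (the general idea only; STARLIGHT itself also models extinction and kinematics and uses its own optimization scheme) is a non-negative superposition of template spectra, sketched here with invented templates:

import numpy as np
from scipy.optimize import nnls

def fit_population(templates, observed):
    """Find non-negative weights x minimizing ||templates @ x - observed||;
    the normalized weights give the light fraction contributed by each
    simple stellar population template."""
    x, residual = nnls(templates, observed)
    return x / x.sum(), residual

# Toy example: three fake templates over 500 wavelength pixels.
rng = np.random.default_rng(3)
lam = np.linspace(3500.0, 7000.0, 500)
ssp = np.stack([np.exp(-lam / s) for s in (1500.0, 3000.0, 6000.0)], axis=1)
observed = ssp @ np.array([0.2, 0.5, 0.3]) + 1e-3 * rng.standard_normal(500)
weights, res = fit_population(ssp, observed)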

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a new Artificial Neural Network (ANN) able to predict at once the main parameters representative of wave-structure interaction processes, i.e. the wave overtopping discharge, the wave transmission coefficient and the wave reflection coefficient. The new ANN has been specifically developed to provide managers and scientists with a tool that can be used efficiently for design purposes. The development of this ANN started with the preparation of a new, extended and homogeneous database that collects all the available tests reporting at least one of the three parameters, for a total amount of 16,165 data points. The variety of structure types and wave attack conditions in the database includes smooth, rock and armour-unit slopes, berm breakwaters, vertical walls, low-crested structures and oblique wave attacks. Some of the existing ANNs were compared and improved, leading to the selection of a final ANN whose architecture was optimized through an in-depth sensitivity analysis of the ANN training parameters. Each of the 15 selected input parameters represents a physical aspect of the wave-structure interaction process, describing the wave attack (wave steepness and obliquity, breaking and shoaling factors), the structure geometry (submergence, straight or non-straight slope, with or without berm or toe, presence or not of a crown wall), or the structure type (smooth or covered by an armour layer, with permeable or impermeable core). The advanced ANN proposed here provides accurate predictions for all three parameters and proves able to overcome the limits imposed by the traditional formulae and by the approach adopted so far by some of the existing ANNs. The possibility to adopt just one model to obtain a handy and accurate evaluation of the overall performance of a coastal or harbor structure represents the most important and exportable result of the work.
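A minimal sketch of the mapping just described (layer sizes, activation and weights are illustrative, not the optimized architecture of the thesis): 15 scaled physical inputs pass through one hidden layer to the three target quantities:

import numpy as np

rng = np.random.default_rng(4)

def init_mlp(n_in=15, n_hidden=20, n_out=3):
    """Random initial weights for a single-hidden-layer perceptron."""
    return {"W1": 0.1 * rng.standard_normal((n_hidden, n_in)),
            "b1": np.zeros(n_hidden),
            "W2": 0.1 * rng.standard_normal((n_out, n_hidden)),
            "b2": np.zeros(n_out)}

def forward(params, x):
    """Forward pass: tanh hidden layer, linear output; the three outputs
    stand for overtopping discharge, transmission and reflection
    coefficients (after suitable scaling of inputs and targets)."""
    h = np.tanh(params["W1"] @ x + params["b1"])
    return params["W2"] @ h + params["b2"]

params = init_mlp()
x = rng.standard_normal(15)           # 15 scaled wave/structure parameters
q_star, k_t, k_r = forward(params, x)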

Relevance:

30.00%

Publisher:

Abstract:

Forest models are tools for explaining and predicting the dynamics of forest ecosystems. They simulate forest behavior by integrating information on the underlying processes in trees, soil and atmosphere. Bayesian calibration is the application of probability theory to parameter estimation; it is a method, applicable to all models, that quantifies output uncertainty and identifies key parameters and variables. This study aims at testing the Bayesian calibration procedure on different types of forest models, to evaluate their performances and the uncertainties associated with them. In particular, we aimed at 1) applying a Bayesian framework to calibrate forest models and test their performances in different biomes and different environmental conditions, 2) identifying and solving structure-related issues in simple models, and 3) identifying the advantages of the additional information made available when calibrating forest models with a Bayesian approach. In Chapter 2 we applied the Bayesian framework to calibrate the Prelued model on eight Italian eddy-covariance sites. The ability of Prelued to reproduce the estimated Gross Primary Productivity (GPP) was tested over contrasting natural vegetation types that represented a wide range of climatic and environmental conditions. The issues related to Prelued's multiplicative structure were the main topic of Chapter 3: several different MCMC-based procedures were applied within a Bayesian framework to calibrate the model, and their performances were compared. A more complex model was applied in Chapter 4, focusing on the application of the physiology-based model HYDRALL to the forest ecosystem of Lavarone (IT), to evaluate the importance of additional information in the calibration procedure and its impact on model performances, model uncertainties and parameter estimation. Overall, the Bayesian technique proved to be an excellent and versatile tool for successfully calibrating forest models of different structure and complexity, on different kinds and numbers of variables and with different numbers of parameters involved.
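A bare-bones sketch of the calibration loop (a random-walk Metropolis sampler with a Gaussian likelihood and uniform priors; the MCMC-based procedures actually compared in the thesis are more elaborate, and the two-parameter "forest model" below is invented):

import numpy as np

rng = np.random.default_rng(5)

def log_posterior(theta, model, data, sigma, lo, hi):
    """Uniform prior on [lo, hi] plus a Gaussian data likelihood."""
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf
    resid = data - model(theta)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(model, data, sigma, theta0, lo, hi, steps=5000, prop=0.05):
    theta = theta0.copy()
    lp = log_posterior(theta, model, data, sigma, lo, hi)
    chain = []
    for _ in range(steps):
        cand = theta + prop * rng.standard_normal(theta.size)
        lp_cand = log_posterior(cand, model, data, sigma, lo, hi)
        if np.log(rng.uniform()) < lp_cand - lp:   # accept/reject step
            theta, lp = cand, lp_cand
        chain.append(theta.copy())
    return np.array(chain)

def toy_gpp(theta, light=np.linspace(0.0, 1.0, 30)):
    """Invented saturating light-response curve standing in for a model."""
    return theta[0] * light / (theta[1] + light)

data = toy_gpp(np.array([10.0, 0.3])) + 0.2 * rng.standard_normal(30)
chain = metropolis(toy_gpp, data, 0.2, np.array([5.0, 0.5]),
                   lo=np.array([0.0, 0.01]), hi=np.array([50.0, 5.0]))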

Relevance:

30.00%

Publisher:

Abstract:

Stable isotope composition of atmospheric carbon monoxide: a modelling study.

This study aims at an improved understanding of the stable carbon and oxygen isotope composition of carbon monoxide (CO) in the global atmosphere by means of numerical simulations. At first, a new kinetic chemistry tagging technique for the most complete parameterisation of isotope effects was introduced into the Modular Earth Submodel System (MESSy) framework. Incorporated into the ECHAM/MESSy Atmospheric Chemistry (EMAC) general circulation model, an explicit treatment of the isotope effects on the global scale is now possible. The expanded model system has been applied to simulate the chemical system containing up to five isotopologues of all carbon- and oxygen-bearing species, which ultimately determine the δ13C, δ18O and Δ17O isotopic signatures of atmospheric CO. As model input, a new stable isotope-inclusive emission inventory for the relevant trace gases has been compiled. The uncertainties of the emission estimates and of the resulting simulated mixing and isotope ratios have been analysed. The simulated CO mixing and stable isotope ratios have been compared to in-situ measurements from ground-based observatories and from the civil-aircraft-mounted CARIBIC−1 measurement platform.

The systematically underestimated 13CO/12CO ratios of earlier, simplified modelling studies can now be partly explained. The EMAC simulations do not support the inference of those studies that CO receives a reduced input from its methane oxidation source, which is highly depleted in 13C. In particular, a high average yield of 0.94 CO per reacted methane (CH4) molecule is simulated in the troposphere, to a large extent due to the competition between the deposition and convective transport processes affecting the intermediates of the CH4-to-CO reaction chain. None of the other factors assumed or disregarded in previous studies, though hypothesised to have the potential to enrich tropospheric CO in 13C, were found significant when explicitly simulated. The inaccurate surface emissions, likely underestimated over East Asia, are responsible for roughly half of the discrepancies between the simulated and observed 13CO in the northern hemisphere (NH), whereas the compositions in the remote southern hemisphere (SH) suggest an underestimated fractionation during the oxidation of CO by the hydroxyl radical (OH). A reanalysis of the kinetic isotope effect (KIE) in this reaction contrasts with the conventional assumption of a mere pressure dependence, and instead suggests an additional temperature dependence of the 13C KIE, driven by changes in the partitioning of the reaction exit channels. This result is yet to be confirmed in the laboratory.

Apart from 13CO, for the first time the atmospheric distribution of the oxygen mass-independent fractionation (MIF) in CO, Δ17O, has been consistently simulated on the global scale with EMAC. The applicability of Δ17O(CO) observations to unravelling changes in the tropospheric CH4-CO-OH system has been scrutinised, as well as the implications of the ozone (O3) input for the CO oxygen isotope budget. Δ17O(CO) is confirmed to be the principal signal of the CO photochemical age, thus providing a measure of the OH chiefly involved in the sink of CO. The highly mass-independently fractionated O3 oxygen is estimated to comprise around 2% of the overall tropospheric CO source, which has implications for the δ18O budget, but less likely for the Δ17O budget.
Finally, additional sensitivity simulations with EMAC corroborate the nearly equal net effects of the present-day CH4 and CO burdens in removing tropospheric OH, as well as the large turnover and the stability of the abundance of the latter. The simulated CO isotopologues nonetheless hint at a likely insufficient OH regeneration in the NH high latitudes and in the upper troposphere / lower stratosphere (UTLS).
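For reference, the isotope signatures used above are conventional delta values, deviations of an isotope ratio in the sample from that of a standard; the oxygen mass-independent fractionation is the deviation of δ17O from the mass-dependent scaling of δ18O (the slope 0.52 below is one common convention, stated here as an assumption rather than a value taken from the thesis):

\[
\delta^{13}\mathrm{C} \;=\; \left( \frac{({}^{13}\mathrm{C}/{}^{12}\mathrm{C})_{\mathrm{sample}}}{({}^{13}\mathrm{C}/{}^{12}\mathrm{C})_{\mathrm{standard}}} - 1 \right) \times 1000\ \text{‰},
\qquad
\Delta^{17}\mathrm{O} \;=\; \delta^{17}\mathrm{O} - 0.52\,\delta^{18}\mathrm{O}.
\]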

Relevance:

30.00%

Publisher:

Abstract:

Globalization has increased the pressure on organizations and companies to operate in the most efficient and economic way. This tendency leads companies to concentrate more and more on their core businesses and to outsource less profitable departments and services to reduce costs. In contrast to earlier times, companies are highly specialized and have a low real net output ratio. To be able to provide consumers with the right products, these companies have to collaborate with other suppliers and form large supply chains. A drawback of large supply chains is high stocks and the associated stockholding costs. This fact has led to the rapid spread of just-in-time logistic concepts aimed at minimizing stock while maintaining high availability of products. These competing goals, minimizing stock while simultaneously keeping product availability high, call for high availability of the production systems, so that an incoming order can be processed immediately. Besides design aspects and the quality of the production system, maintenance has a strong impact on production system availability. In the last decades, there have been many attempts to create maintenance models for availability optimization. Most of them concentrated on the availability aspect only, without incorporating further aspects such as logistics and the profitability of the overall system. However, the production system operator's main intention is to optimize the profitability of the production system, not its availability. Thus, classic models, limited to representing and optimizing maintenance strategies in the light of availability, fail; a novel approach, incorporating all financially relevant processes of and around a production system, is needed. The proposed model is subdivided into three parts, a maintenance module, a production module and a connection module; this subdivision provides easy maintainability and simple extensibility. Within these modules, all aspects of the production process are modeled. The main part of the work lies in the extended maintenance and failure module, which offers a representation of different maintenance strategies and also incorporates the effects of over-maintaining and of failed maintenance (maintenance-induced failures). Order release and seizing of the production system are modeled in the production part. Due to limitations in computational power, it was not possible to run the simulation and the optimization with the fully developed production model; the production model was therefore reduced to a black box without a higher degree of detail.
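The shift from availability-only to profitability-oriented optimization can be illustrated with a toy Monte Carlo model (all rates and costs below are hypothetical): profit, not uptime, is the objective, so the optimal preventive maintenance interval balances maintenance cost, downtime and maintenance-induced failures:

import numpy as np

rng = np.random.default_rng(6)

def simulate_profit(pm_interval, horizon=10000.0, mtbf=300.0,
                    repair=20.0, pm_dur=5.0, pm_fail_prob=0.02,
                    revenue_rate=10.0, pm_cost=200.0, repair_cost=1000.0):
    """Toy profitability model: the machine earns revenue while up; random
    failures cause long repairs; preventive maintenance (PM) is cheaper
    but takes time and can itself induce a failure (over-maintaining)."""
    t, profit = 0.0, 0.0
    while t < horizon:
        ttf = rng.exponential(mtbf)
        run = min(ttf, pm_interval)
        profit += revenue_rate * run
        t += run
        if ttf < pm_interval:                 # breakdown before PM is due
            t += repair; profit -= repair_cost
        else:                                 # scheduled PM
            t += pm_dur; profit -= pm_cost
            if rng.uniform() < pm_fail_prob:  # maintenance-induced failure
                t += repair; profit -= repair_cost
    return profit

intervals = [50, 100, 200, 400, 800]
best = max(intervals, key=lambda i: np.mean([simulate_profit(i) for _ in range(50)]))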

Relevance:

30.00%

Publisher:

Abstract:

New designs of user input systems have resulted from developing technologies and specialized user demands. Conventional keyboard and mouse input devices still dominate in input speed, but other input mechanisms are demanded in special application scenarios. Touch screen and stylus input methods have been widely adopted by PDAs and smartphones, and reduced keypads are necessary for mobile phones. A new design trend explores the design space of applications requiring single-handed, even eyes-free, input on small mobile devices. This requires as few keys as possible on the input device to make it feasible to operate, but representing many characters with fewer keys can make the input ambiguous. Accelerometers embedded in mobile devices provide opportunities to combine device movements with keys for input signal disambiguation, and recent research has explored this design space for text input. In this dissertation an accelerometer-assisted single-key positioning input system is developed. It utilizes the tilt directions of the input device as input signals and maps their sequences to output characters and functions. A generic positioning model is developed as a guideline for designing positioning input systems. A calculator prototype and a text input prototype on the 4+1 (5 positions) and the 8+1 (9 positions) positioning input systems are implemented using accelerometer readings on a smartphone; users operate with one physical key, and feedback is audible. Controlled experiments are conducted to evaluate the feasibility, learnability and design space of the accelerometer-assisted single-key positioning input system. This research can provide inspiration and innovative references for researchers and practitioners in positioning user input design, in applications of accelerometer readings, and in the development of new standard machine-readable sign languages.
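A sketch of the core disambiguation idea for the 8+1 (9 positions) variant (the dead-zone threshold and the character code below are invented for illustration, not the dissertation's mapping): classify the device tilt from accelerometer x/y readings into one of nine positions, then look position sequences up in a code table:

import math

def tilt_position(ax, ay, dead_zone=0.3):
    """Map accelerometer x/y readings (in g) to one of 9 positions:
    0 = neutral (no significant tilt), 1..8 = the eight tilt directions
    in 45-degree sectors."""
    if math.hypot(ax, ay) < dead_zone:
        return 0
    angle = math.degrees(math.atan2(ax, ay)) % 360.0
    return int(((angle + 22.5) % 360.0) // 45.0) + 1

# Illustrative code table: a sequence of two positions selects a character.
CODE = {(1, 0): 'a', (1, 1): 'b', (1, 2): 'c', (2, 0): 'd'}

def decode(seq):
    return CODE.get(tuple(seq), '?')

assert tilt_position(0.0, 0.9) == 1   # device tilted in direction 1
assert decode([1, 0]) == 'a'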

Relevance:

30.00%

Publisher:

Abstract:

The combustion strategy in a diesel engine has an impact on the emissions, the fuel consumption and the exhaust temperatures. The PM mass retained in the CPF is a function of the NO2 and PM concentrations in addition to the exhaust temperatures and flow rates. Thus the engine combustion strategy affects the exhaust characteristics, which in turn have an impact on CPF operation and on the PM mass retained and oxidized. In this report, a process has been developed to simulate the relationship between engine calibration, performance, and HC and PM oxidation in the DOC and CPF, respectively. Fuel rail pressure (FRP) and start of injection (SOI) sweeps were carried out at five steady-state engine operating conditions. These data, along with data from a previously carried out surrogate HD-FTP cycle [1], were used to create a transfer-function model that estimates the engine-out emissions, flow rates and temperatures for varied FRP and SOI over a transient cycle. Four different calibrations (test cases) were considered in this study and simulated through the transfer-function model and the DOC model [1, 2]. The DOC outputs were then fed into a model that simulates the NO2-assisted and thermal PM oxidation inside a CPF. Finally, the results were analyzed with respect to how the engine calibration impacts the engine fuel consumption, the HC oxidation in the DOC and the PM oxidation in the CPF. Active regeneration was also simulated for the various test cases, and a comparative analysis of the fuel penalties involved was carried out.
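A sketch of the kind of retained-PM balance such a CPF model evaluates (the Arrhenius parameters, gas concentrations and engine-out PM rate below are placeholders, not the calibrated values of the cited models): the retained mass grows with engine-out PM and decays through parallel NO2-assisted and thermal (O2) oxidation paths:

import math

R = 8.314                              # gas constant [J/(mol K)]

def arrhenius(A, Ea, T):
    """Arrhenius rate constant at temperature T [K]."""
    return A * math.exp(-Ea / (R * T))

def cpf_step(m, dt, T, y_no2, y_o2, pm_in,
             A_no2=1.0e3, Ea_no2=60.0e3, A_o2=1.0e6, Ea_o2=150.0e3):
    """One explicit time step of the retained-PM mass balance:
    dm/dt = pm_in - (k_NO2 * y_NO2 + k_O2 * y_O2) * m."""
    k = arrhenius(A_no2, Ea_no2, T) * y_no2 + arrhenius(A_o2, Ea_o2, T) * y_o2
    return m + dt * (pm_in - k * m)

m = 20.0                               # PM retained in the filter [g]
for _ in range(3600):                  # one hour at 1 s steps
    m = cpf_step(m, dt=1.0, T=620.0, y_no2=2e-4, y_o2=0.08, pm_in=2e-4)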