877 results for Non-stationary iterative method


Relevance:

100.00%

Publisher:

Abstract:

Stricter environmental policies are necessary to ensure effective control of pollutant emissions. In 2015, at the 21st United Nations Climate Change Conference (COP21), Brazil is expected to commit to a low-carbon economy. This positioning affects the industrial sector, making it necessary to search for new, less environmentally aggressive technologies so that compliance with the new emission policies does not harm production. Almost all processes in the steel industry require burning fuel, and the resulting flue gases are released into the atmosphere. This work discusses the use of heat exchangers to preheat the combustion air by recovering part of the heat available in the flue gases of a given industrial process. Preheating the combustion air lowers the energy requirement, i.e., reduces fuel consumption and, consequently, the amount of pollutants emitted. Because they fit the process better, spiral plate heat exchangers are studied. The heat exchanger is sized by an iterative method implemented in Microsoft Excel. The gains are then analyzed in terms of the improvement in the thermal efficiency of the process and the percentage of fuel saved; the latter implies an equal percentage reduction in greenhouse gas emissions.
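The abstract does not spell out the sizing algorithm; below is a minimal sketch of one common iterative approach (size the area from the LMTD rate equation with a guessed overall coefficient U, re-estimate U from the resulting geometry, repeat). The estimate_u correlation and all process numbers are illustrative assumptions, not values from the work.

```python
# Minimal iterative sizing sketch for a counterflow exchanger (illustrative).
from math import log

def lmtd(th_in, th_out, tc_in, tc_out):
    """Log-mean temperature difference for counterflow operation."""
    dt1, dt2 = th_in - tc_out, th_out - tc_in
    return (dt1 - dt2) / log(dt1 / dt2)

def estimate_u(area):
    """Hypothetical geometry-dependent overall coefficient U [W/(m^2 K)];
    stands in for the spiral-plate correlations (channel velocity,
    Nusselt number, wall resistance)."""
    return 25.0 + 40.0 / (1.0 + 0.05 * area)

q = 150e3                      # duty [W] from the air-side energy balance
th_in, th_out = 873.0, 623.0   # flue-gas inlet/outlet temperatures [K]
tc_in, tc_out = 300.0, 520.0   # combustion-air inlet/outlet temperatures [K]

area = 10.0                    # initial guess [m^2]
for _ in range(100):
    u = estimate_u(area)
    new_area = q / (u * lmtd(th_in, th_out, tc_in, tc_out))
    if abs(new_area - area) < 1e-6:
        break
    area = new_area
print(f"converged area: {area:.2f} m^2 at U = {u:.1f} W/(m^2 K)")
```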

Relevance:

100.00%

Publisher:

Abstract:

Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the deep Earth's interior. Tomographic models obtained at global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines, often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution devised to overcome this shortcoming of Fourier analysis. The fundamental idea behind this analysis is to study a signal according to scale. Wavelets, in fact, are mathematical functions that cut data up into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes containing multi-scale features, discontinuities and sharp spikes. Wavelets are essentially used in two ways when applied to the study of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. Both types of application of wavelets in the geophysical field are the object of study of this work. We first use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface-wave phase-velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the Continuous Wavelet Transform in spectral analysis, starting again with synthetic tests to evaluate its sensitivity and capability, and then apply the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
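As a concrete illustration of the analysis-kernel use of wavelets, here is a minimal sketch of a continuous wavelet transform with the PyWavelets package on a synthetic non-stationary signal; the Morlet wavelet and the scale range are arbitrary illustrative choices, not the parameterization used in the thesis.

```python
# Minimal CWT sketch with PyWavelets: a synthetic signal whose frequency
# changes halfway through, i.e. a non-stationary feature that a plain
# Fourier spectrum would smear out but the scalogram localizes in time.
import numpy as np
import pywt

fs = 200.0                                   # sampling rate [Hz]
t = np.arange(0, 4, 1 / fs)
signal = np.where(t < 2,
                  np.sin(2 * np.pi * 5 * t),    # 5 Hz in the first half
                  np.sin(2 * np.pi * 20 * t))   # 20 Hz in the second half

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(signal, scales, 'morl', sampling_period=1 / fs)

# |coeffs| is a (scale x time) scalogram: energy sits near 5 Hz for
# t < 2 s and near 20 Hz afterwards, with resolution matched to scale.
peak_scale = np.abs(coeffs).mean(axis=1).argmax()
print(f"dominant pseudo-frequency: {freqs[peak_scale]:.1f} Hz")
```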

Relevance:

100.00%

Publisher:

Abstract:

Several diagnostic techniques are presented for the detection of electrical faults in induction-motor variable-speed drives. These techniques are developed taking into account the impact of the control system on the machine variables and the non-stationary operating conditions.
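The summary leaves the techniques unspecified; one widely used ingredient for fault detection under non-stationary (variable-speed) conditions is time-frequency analysis of the stator current. A minimal sketch with SciPy follows; the current model and the 0.8x fault sideband are illustrative assumptions only.

```python
# Minimal sketch: short-time Fourier transform of a simulated stator current.
import numpy as np
from scipy.signal import stft

fs = 2000.0
t = np.arange(0, 2, 1 / fs)
f_supply = 30.0 + 10.0 * t               # supply frequency ramping 30 -> 50 Hz
phase = 2 * np.pi * np.cumsum(f_supply) / fs
current = np.sin(phase) + 0.05 * np.sin(0.8 * phase)  # weak fault sideband

f, tt, Zxx = stft(current, fs=fs, nperseg=256)
# |Zxx| is a time-frequency map: ridge-following can track the swept supply
# component and the 0.8x sideband separately, which a plain FFT cannot do
# under variable-speed operation.
print(Zxx.shape)
```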

Relevance:

100.00%

Publisher:

Abstract:

SUMMARY: This dissertation deals with the determination of the chemical and physical properties of aerosol particles in the Amazon Basin, measured during periods of biomass burning and under background conditions. The measurements were carried out during two campaigns as part of the European contribution to the LBA-EUSTACH experiment in Amazonia. The data comprise measurements of number concentrations, size distributions and optical properties, as well as the elemental composition and carbon content of the collected aerosols. The composition of the aerosol pointed to three sources: natural biogenic aerosol, mineral dust and pyrogenic aerosol. All three components contributed significantly to the extinction of sunlight. Overall, the measured values increased by about a factor of ten during the dry season compared with the wet season, attributable to a massive injection of sub-micrometre smoke particles into the atmosphere during the dry season. Accordingly, the single-scattering albedo decreased from about 0.97 to 0.91. The refractive index of the aerosol particles was calculated with a new iterative method based on Mie theory, yielding average values of 1.42 - 0.006i for the wet season and 1.41 - 0.013i for the dry season. Further climatically relevant parameters for background and biomass-burning aerosols were: asymmetry parameters of 0.63 ± 0.02 and 0.70 ± 0.03, respectively, and backscatter ratios of 0.12 ± 0.01 and 0.08 ± 0.01, respectively. These changes have the potential to influence the regional and global climate by altering both the extinction of solar radiation and cloud properties.
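The abstract names the iterative Mie-based method without detail; below is a minimal sketch of one such scheme, assuming the miepython package: bisect the imaginary part k of the refractive index until the Mie single-scattering albedo matches a measured value. Monodisperse particles, the size parameter x, the real index and the target albedo are all illustrative assumptions; the dissertation's method works with measured size distributions.

```python
import miepython

def albedo(k, n_real=1.41, x=2.0):
    """Single-scattering albedo qsca/qext of a sphere with index n - ik
    and size parameter x (miepython uses the n - ik sign convention)."""
    qext, qsca, qback, g = miepython.mie(complex(n_real, -k), x)
    return qsca / qext

target = 0.91                 # measured dry-season single-scattering albedo
lo, hi = 0.0, 0.1             # bracket for k; albedo falls as k grows
for _ in range(60):           # bisection on the monotone albedo(k)
    mid = 0.5 * (lo + hi)
    if albedo(mid) > target:
        lo = mid              # still too bright -> needs more absorption
    else:
        hi = mid
print(f"retrieved imaginary index k = {0.5 * (lo + hi):.4f}")
```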

Relevance:

100.00%

Publisher:

Abstract:

This thesis begins by presenting the main characteristics and application fields of AlGaN/GaN HEMT technology, focusing on reliability aspects essentially due to the presence of low-frequency dispersive phenomena, which limit in several ways the microwave performance of this kind of device. Based on an equivalent-voltage approach, a new low-frequency device model is presented in which the dynamic nonlinearity of the trapping effects is taken into account for the first time, allowing considerable improvements in the prediction of quantities that are very important for power amplifier design, such as power-added efficiency, dissipated power and internal device temperature. An innovative and low-cost measurement setup for the characterization of the device under low-frequency, large-amplitude sinusoidal excitation is also presented. This setup allows the identification of the new low-frequency model through suitable procedures explained in detail. The thesis also describes a new non-invasive empirical method for compact electrothermal modeling and thermal resistance extraction. The new contribution of the proposed approach concerns the nonlinear dependence of the channel temperature on the dissipated power. This is very important for GaN devices, since they are capable of operating at relatively high temperatures with high power densities, and the dependence of the thermal resistance on temperature is quite relevant. Finally, a novel method for device thermal simulation is investigated: based on the analytical solution of the three-dimensional heat equation, a Visual Basic program has been developed to estimate, in real time, the temperature distribution on the hottest surface of planar multilayer structures. The developed solver is particularly useful for peak-temperature estimation at the design stage, when critical decisions about circuit design and packaging have to be made. It facilitates layout optimization and reliability improvement, allowing the correct choice of device geometry and configuration to achieve the best possible thermal performance.
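The nonlinear dependence of channel temperature on dissipated power can be illustrated with a temperature-dependent thermal resistance solved by fixed-point iteration; here is a minimal sketch under assumed parameter values (the linear Rth(T) model and all numbers are illustrative, not the thesis's extracted model).

```python
def rth(t_ch, rth0=30.0, t_ref=300.0, c=0.002):
    """Thermal resistance [K/W] rising linearly with channel temperature,
    mimicking the drop in thermal conductivity at high temperature."""
    return rth0 * (1.0 + c * (t_ch - t_ref))

t_base, p_diss = 300.0, 5.0     # baseplate temperature [K], power [W]
t_ch = t_base                   # start from the isothermal guess
for _ in range(100):
    t_new = t_base + rth(t_ch) * p_diss   # T_ch = T_base + Rth(T_ch) * P
    if abs(t_new - t_ch) < 1e-6:
        break
    t_ch = t_new
# A constant Rth would give 300 + 30 * 5 = 450 K; the nonlinear model
# converges higher (about 514 K here), which is the effect a compact
# electrothermal model must capture.
print(f"channel temperature: {t_ch:.1f} K")
```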

Relevance:

100.00%

Publisher:

Abstract:

My work concerns two different systems of equations used in the mathematical modeling of semiconductors and plasmas: the Euler-Poisson system and the quantum drift-diffusion system. The first is given by the Euler equations for the conservation of mass and momentum, together with a Poisson equation for the electrostatic potential. The second takes into account the physical effects due to the smallness of the devices (quantum effects); it is a simple extension of the classical drift-diffusion model, which consists of two continuity equations for the charge densities together with a Poisson equation for the electrostatic potential. Using an asymptotic expansion method, we study (in the steady-state case for a potential flow) the limit to zero of the three physical parameters which arise in the Euler-Poisson system: the electron mass, the relaxation time and the Debye length. For each limit, we prove the existence and uniqueness of profiles for the asymptotic expansion, together with error estimates. For a vanishing electron mass or a vanishing relaxation time, this method gives a new approach to the convergence of the Euler-Poisson system to the incompressible Euler equations. For a vanishing Debye length (also called the quasineutral limit), we obtain a new approach to the existence of solutions when boundary layers can appear (i.e., when no compatibility condition is assumed). Moreover, using an iterative method together with a finite volume scheme or a penalized mixed finite volume scheme, we numerically exhibit the smallness condition on the electron mass needed for the existence of solutions to the system, a condition which has already been established in the literature. In the quantum drift-diffusion model, for the transient bipolar case in one space dimension, we show, using a time discretization and energy estimates, the existence of solutions (for a general doping profile). We also rigorously prove the quasineutral limit (for a vanishing doping profile). Finally, using a new time discretization and an algorithmic construction of entropies, we prove regularity properties for the solutions of the equation obtained in the quasineutral limit (for a vanishing pressure). This new regularity permits us to prove the positivity of solutions to this equation, at least for sufficiently large times.
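For orientation, here is a common scaled form of the Euler-Poisson system containing the three parameters studied above (electron mass ε, momentum relaxation time τ, Debye length λ, doping profile C(x)); this is a standard textbook scaling, not necessarily the thesis's exact formulation.

```latex
\[
\begin{aligned}
&\partial_t n + \nabla\cdot(n u) = 0,\\
&\varepsilon\,\bigl(\partial_t (n u) + \nabla\cdot(n u \otimes u)\bigr)
   + \nabla p(n) = n\,\nabla\phi - \frac{n u}{\tau},\\
&\lambda^2\,\Delta\phi = n - C(x).
\end{aligned}
\]
```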

Relevance:

100.00%

Publisher:

Abstract:

During my PhD, starting from the original formulations proposed by Bertrand et al., 2000 and Emolo & Zollo, 2005, I developed inversion methods and applied them to different earthquakes. In particular, large efforts have been devoted to the study of model resolution and to the estimation of the model parameter errors. To study the kinematic source characteristics of the Christchurch earthquake, we performed a joint inversion of strong-motion, GPS and InSAR data using a non-linear inversion method. Considering the complexity highlighted by the surface deformation data, we adopted a fault model consisting of two partially overlapping segments, with dimensions 15×11 and 7×7 km², having different faulting styles. This two-fault model allows a better reconstruction of the complex shape of the surface deformation data. The total seismic moment resulting from the joint inversion is 3.0×10²⁵ dyne·cm (Mw = 6.2), with an average rupture velocity of 2.0 km/s. Errors associated with the kinematic model have been estimated at around 20-30%. The 2009 L'Aquila sequence was characterized by an intense aftershock sequence that lasted several months. In this study, we applied an inversion method that uses the apparent Source Time Functions (aSTFs) as data to a Mw 4.0 aftershock of the L'Aquila sequence. The aSTFs were estimated using the deconvolution method proposed by Vallée et al., 2004. The inversion results show a heterogeneous slip distribution, characterized by two main slip patches located NW of the hypocenter, and a variable rupture-velocity distribution (mean value of 2.5 km/s) showing a rupture-front acceleration between the two high-slip zones. Errors of about 20% characterize the final estimated parameters.
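As a quick consistency check of the quoted magnitude, the standard Hanks-Kanamori relation converts the seismic moment (in dyne·cm) to the moment magnitude:

```python
from math import log10

m0 = 3.0e25                           # seismic moment [dyne*cm] from the inversion
mw = (2.0 / 3.0) * log10(m0) - 10.7   # Hanks & Kanamori (1979), CGS units
print(f"Mw = {mw:.2f}")               # -> Mw = 6.28, matching the quoted Mw 6.2
```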

Relevance:

100.00%

Publisher:

Abstract:

This thesis aimed at addressing some of the issues that, at the current state of the art, prevent P300-based brain-computer interface (BCI) systems from moving from research laboratories to end users' homes. An innovative asynchronous classifier has been defined and validated. It relies on the introduction of a set of thresholds into the classifier; these thresholds have been assessed considering the distributions of the score values relating to target stimuli, non-target stimuli and epochs of voluntary no-control. With the asynchronous classifier, a P300-based BCI system can adapt its speed to the current state of the user and can automatically suspend control when the user diverts attention from the stimulation interface. Since EEG signals are non-stationary and show inherent variability, in order to make long-term use of a BCI possible it is important to track changes in the ongoing EEG activity and to adapt the BCI model parameters accordingly. To this aim, the asynchronous classifier has subsequently been improved by introducing a self-calibration algorithm for the continuous and unsupervised recalibration of the subjective control parameters. Finally, an index for the online monitoring of EEG quality has been defined and validated in order to detect potential problems and system failures. The thesis ends with the description of a translational work involving end users (people with amyotrophic lateral sclerosis, ALS). Following the user-centered design approach, the phases relating to the design, development and validation of an innovative assistive device are described. The proposed assistive technology (AT) has been specifically designed to meet the needs of people with ALS during the different phases of the disease (i.e., the degree of motor impairment). Indeed, the AT can be accessed with several input devices, either conventional (mouse, touchscreen) or alternative (switches, headtracker), up to a P300-based BCI.
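A minimal sketch of the thresholding idea behind an asynchronous P300 classifier follows; the score model, the single threshold and the no-control rule are illustrative assumptions rather than the thesis's calibrated, distribution-derived parameters.

```python
# Accumulate a per-item score over stimulus repetitions; emit a selection
# only when the best total clears the threshold, otherwise report
# voluntary no-control and suspend output.
import numpy as np

T_SELECT = 4.0                  # assumed decision threshold on summed scores
rng = np.random.default_rng(0)

def classify(scores, threshold=T_SELECT):
    """scores: (n_items, n_repetitions) classifier scores per stimulus.
    Returns the selected item index, or None for voluntary no-control."""
    totals = scores.sum(axis=1)
    best = int(totals.argmax())
    return best if totals[best] >= threshold else None

# Simulated epochs: item 2 is attended (score offset) in the first block;
# in the second block the user ignores the interface (pure noise), so the
# classifier should suspend output instead of forcing a selection.
attended = rng.normal(0.0, 0.5, size=(6, 8))
attended[2] += 0.9
no_control = rng.normal(0.0, 0.5, size=(6, 8))
print(classify(attended))       # -> 2
print(classify(no_control))     # -> None (with high probability)
```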

Relevance:

100.00%

Publisher:

Abstract:

The dynamic character of proteins strongly influences biomolecular recognition mechanisms. With the development of the main models of ligand recognition (lock-and-key, induced-fit and conformational-selection theories), the role of protein plasticity has become increasingly relevant. In particular, both major structural changes involving large deviations of the protein backbone and slight movements such as side-chain rotations are now carefully considered in drug discovery and development. It is of great interest to identify multiple protein conformations as a preliminary step in a screening campaign. Protein flexibility has been investigated, in terms of both local and global motions, in two different biological systems. On one side, Replica Exchange Molecular Dynamics has been exploited as an enhanced sampling method to collect multiple conformations of Lactate Dehydrogenase A (LDHA), an emerging anticancer target. The aim of this project was the development of an ensemble-based virtual screening protocol in order to find novel potent inhibitors. On the other side, a preliminary study of the local flexibility of opioid receptors has been carried out with the ALiBERO approach, an iterative method based on Elastic Network-Normal Mode Analysis and Monte Carlo sampling. Comparison of virtual screening performance using single or multiple conformations confirmed that including protein flexibility in screening protocols increases the probability of recognizing novel or known active compounds early.
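A minimal sketch of the ensemble-docking logic that underlies ensemble-based virtual screening (dock each ligand against every conformation and keep the best score) follows; dock_score is a hypothetical placeholder, and all names and the aggregation rule are illustrative assumptions, not the project's actual protocol.

```python
import random
import zlib

def dock_score(ligand, conformation):
    """Hypothetical docking score [kcal/mol]; lower is better. A stand-in
    for a real docking engine run against one receptor conformation."""
    rng = random.Random(zlib.crc32(f"{ligand}|{conformation}".encode()))
    return rng.uniform(-12.0, -4.0)

ligands = ["lig_001", "lig_002", "lig_003"]
ensemble = ["conf_A", "conf_B", "conf_C"]   # e.g. REMD snapshots of LDHA

# Ensemble aggregation: a ligand is ranked by its best score over the
# ensemble, so it is retained if any conformation accommodates it well.
best = {lig: min(dock_score(lig, c) for c in ensemble) for lig in ligands}
for lig in sorted(best, key=best.get):
    print(f"{lig}: best ensemble score {best[lig]:.1f} kcal/mol")
```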

Relevance:

100.00%

Publisher:

Abstract:

The thesis is divided into two parts. The first is devoted to the determination of the Deflection of the Vertical (DdV) in Medicina (BO). Three methods for determining the components of the DdV are presented. The first uses geometric levelling and the GNSS system; the second, carried out by Dr. Serantoni, uses the QDaedalus system developed at ETH Zurich; and the third approach uses the ConvER program, made available by the Emilia-Romagna region. The second part presents a method for determining the Atmospheric Refraction Coefficient (CRA). The computation procedure is iterative and uses the measured distances in addition to the zenith angles. The method was tested in two study areas: the first in the city of Limassol (Cyprus), in an urban environment, in autumn 2013; the second in the Venice lagoon during summer 2014.
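The iterative procedure itself is not given in the abstract; for orientation, below is the classical reciprocal-zenith-angle relation from which a refraction coefficient can be computed, in the simplified form assuming equal k at both stations (the thesis's scheme, which also uses the measured distances iteratively, is more elaborate; the numbers are illustrative).

```python
# Classical reciprocal-observation estimate of the refraction coefficient k:
# for simultaneous zenith angles z_ab, z_ba over a line of length s on a
# sphere of radius R, with the same k at both ends,
#   z_ab + z_ba = pi + (s/R) * (1 - k)  =>  k = 1 - (z_ab + z_ba - pi) * R / s
from math import pi, radians

R = 6_371_000.0                 # mean Earth radius [m]
s = 5_000.0                     # measured distance [m]
z_ab = radians(90.020)          # zenith angle A -> B
z_ba = radians(90.019)          # zenith angle B -> A

k = 1.0 - (z_ab + z_ba - pi) * R / s
print(f"refraction coefficient k = {k:.3f}")   # ~0.13, a typical daytime value
```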

Relevance:

100.00%

Publisher:

Abstract:

In the first chapter, I develop a panel no-cointegration test which extends the bounds test of Pesaran, Shin and Smith (2001) to the panel framework by considering the individual regressions in a Seemingly Unrelated Regression (SUR) system. This makes it possible to take into account unobserved common factors that contemporaneously affect all units of the panel while providing, at the same time, unit-specific test statistics. Moreover, the approach is particularly suited to the case in which the number of individuals in the panel is small relative to the number of time-series observations. I develop the algorithm to implement the test and use Monte Carlo simulation to analyze its properties. The small-sample properties of the test are remarkable compared to its single-equation counterpart. I illustrate the use of the test with a test of Purchasing Power Parity in a panel of EU15 countries. In the second chapter, I verify the Expectation Hypothesis of the Term Structure (EHTS) in the repurchase agreement (repo) market with a new testing approach. I consider an "inexact" formulation of the EHTS, which models a time-varying component in the risk premia, and I treat the interest rates as a non-stationary cointegrated system. The effect of heteroskedasticity is controlled by means of testing procedures (bootstrap and heteroskedasticity correction) which are robust to variance and covariance shifts over time. I find that the long-run implications of the EHTS are verified. A rolling-window analysis clarifies that the EHTS is only rejected in periods of turbulence in financial markets. The third chapter introduces the Stata command "bootrank", which implements the bootstrap likelihood ratio rank test algorithm developed by Cavaliere et al. (2012). The command is illustrated through an empirical application on the term structure of interest rates in the US.

Relevance:

100.00%

Publisher:

Abstract:

Recent developments in clinical radiology have led to corresponding developments in the field of forensic radiology. After the implementation of cross-sectional radiology and optical surface documentation in forensic medicine, difficulties in the validation and analysis of the acquired data were encountered. To address this problem, and to allow comparison of autopsy and radiological data, a centralized, internet-based database for forensic cases was created. The main goals of the database are (1) the creation of a digital and standardized documentation tool for forensic-radiological and pathological findings; (2) the establishment of a basis for the validation of forensic cross-sectional radiology as a non-invasive examination method in forensic medicine, that is, comparing and evaluating the radiological and autopsy data and analyzing the accuracy of such data; and (3) the provision of a conduit for continuing research and education in forensic medicine. Considering the infrequent availability of CT or MRI to forensic institutions and the heterogeneous nature of case material in forensic medicine, an evaluation of the benefits and limitations of cross-sectional imaging with respect to particular forensic features by a single institution may be of limited value. A centralized database permitting international forensic and cross-disciplinary collaborations may provide important support for forensic-radiological casework and research.

Relevance:

100.00%

Publisher:

Abstract:

An extrusion die is used to continuously produce parts with a constant cross section, such as sheets, pipes, tire components and more complex shapes such as window seals. When polymers are used, the die is fed by a screw extruder, which melts, mixes and pressurizes the material through the rotation of either a single or a double screw. The polymer can then be continuously forced through the die, producing a long part in the shape of the die outlet, which is then cut to the desired length. Generally, the primary target of a well-designed die is to produce a uniform outlet velocity without excessively raising the pressure required to extrude the polymer through the die. Other properties, such as temperature uniformity and residence time, are also important but are not directly considered in this work. Designing dies for optimal outlet velocity variation using simple analytical equations is feasible for basic die geometries or simple channels; due to the complexity of die geometries and of polymer material properties, the design of complex dies by analytical methods is difficult, and iterative methods must be used. An automated iterative method is therefore desired for die optimization. To automate the design and optimization of an extrusion die, two issues must be dealt with. The first is how to generate a new mesh for each iteration. In this work, this is approached by modifying a Parasolid file that describes a CAD part; this file is then used in commercial meshing software. Skewing the initial mesh to produce a new geometry was also employed as a second option. The second issue is an optimization problem in the presence of noise stemming from variations in the mesh and cumulative truncation errors. In this work, a simplex method and a modified trust-region method were employed for the automated optimization of die geometries. For the trust-region method, a discrete derivative and a BFGS Hessian approximation were used. To deal with the noise in the objective function, the trust-region method was modified to automatically adjust the discrete-derivative step size and the trust region based on changes in noise and function contour. Generally, the uniformity of the velocity at the exit of the extrusion die can be improved by increasing the resistance across the die, but this is limited by the pressure capabilities of the extruder. In the optimization, a penalty factor that increases exponentially beyond the pressure limit is applied. This penalty can be applied in two different ways: the first only to designs which exceed the pressure limit, the second to designs both above and below the pressure limit. Both methods were tested and compared in this work.
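A minimal sketch of the exponential pressure penalty in the two variants described above; the toy objective, pressure limit and growth scale are illustrative assumptions.

```python
# Two penalty variants keyed to the extruder's pressure limit: (a) applied
# only above the limit, (b) applied on both sides, so the optimizer feels
# the constraint before violating it. Numbers are illustrative.
from math import exp

P_LIMIT = 30.0e6      # assumed extruder pressure capability [Pa]
SCALE = 2.0e6         # assumed penalty growth scale [Pa]

def penalty_one_sided(pressure):
    """Penalize only designs that exceed the pressure limit."""
    if pressure <= P_LIMIT:
        return 0.0
    return exp((pressure - P_LIMIT) / SCALE) - 1.0

def penalty_two_sided(pressure):
    """Penalize designs both above and below the limit."""
    return exp((pressure - P_LIMIT) / SCALE)

def objective(velocity_variation, pressure, two_sided=False):
    pen = penalty_two_sided(pressure) if two_sided else penalty_one_sided(pressure)
    return velocity_variation + pen

print(objective(0.05, 31.0e6))                  # one-sided, over the limit
print(objective(0.05, 29.0e6, two_sided=True))  # two-sided, under the limit
```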

Relevance:

100.00%

Publisher:

Abstract:

High-strength, weight-reduced tension and suspension elements made of high-modulus (HM) and high-tenacity (HT) fibres are to be validated, examining both round and flat, belt-like structures. This enables more efficient conveyor systems and the overcoming of technical limits. In addition, the main criterion for a broad range of applications is to be established: a recognized, non-destructive testing method with which the replacement or maintenance date of the textile suspension element can be determined. If the above points can be addressed successfully, textile structures will be extended into the domain of force-transmitting machine elements. Field trials in conveyor installations in mining and intralogistics are intended to provide, for the first time, complete proof that such textile structures can be used in technical applications. This proof comprises the validation of a large number of individual aspects, such as the development of an endless manufacturing technology and end connection, the dimensioning of the suspension elements, the provision of strength verifications, the drafting of regulations and the testing of the condition-monitoring methods.