18 results for Power series models

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 40.00%

Abstract:

Computer-aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the electron device physics; however, they are numerically efficient and quite accurate. These characteristics make them very suitable for MMIC design in the framework of commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from the nonlinear effects (intrinsic effects). Thus an empirical active device model is generally described by an extrinsic linear part which accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task circuit designers face is evaluating the ultimate potential of a device for specific applications: once the technology has been selected, the designer must choose the best device for the particular application and for the different blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of devices of different sizes, good scalability properties of the model are required. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, the proper measurements for the characterization of the device are performed in order to identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the characterization phase) and their reconstruction (in the identification or simulation phase) are two of the most important aspects of empirical modelling. This thesis presents an original contribution to nonlinear electron device empirical modelling, treating the issues of model scalability and reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should preserve the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks, together with simple linear scaling rules, cannot provide accurate scalable models, the literature offers either complicated technology-dependent scaling rules or computationally inefficient distributed models. This thesis shows how these problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects which occur in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, which is identified by means of the EM simulation of the device layout, allowing for better frequency extrapolation and scalability than conventional empirical models.
Concerning the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for use in the framework of empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements. According to this criterion, nonlinear empirical device modelling can be carried out by applying, in the sampled voltage domain, typical methods of time-domain sampling theory.
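To make the analogy concrete, here is a minimal sketch of Whittaker-Shannon (sinc) reconstruction applied in the sampled voltage domain, exactly as one would rebuild a band-limited time signal from its samples; the tanh-shaped I-V characteristic and the uniform measurement grid are illustrative assumptions, not the thesis data.

```python
# Sketch: a device nonlinearity i(v), measured on a uniform voltage grid, is
# reconstructed at arbitrary voltages with sinc interpolation, the voltage
# grid step playing the role of the sampling period.
import numpy as np

def sinc_reconstruct(v_grid, i_samples, v_eval):
    """Continuous approximation of i(v) from samples on a uniform grid."""
    dv = v_grid[1] - v_grid[0]            # grid step = "sampling period"
    # One sinc kernel per measurement sample, evaluated at the query voltages
    kernels = np.sinc((v_eval[:, None] - v_grid[None, :]) / dv)
    return kernels @ i_samples

v_grid = np.linspace(-2.0, 2.0, 41)       # measurement grid (sampled voltage domain)
i_meas = np.tanh(2.0 * v_grid)            # stand-in for a measured I-V characteristic
v_fine = np.linspace(-1.5, 1.5, 301)      # voltages requested by a circuit simulator
i_hat = sinc_reconstruct(v_grid, i_meas, v_fine)
print(np.max(np.abs(i_hat - np.tanh(2.0 * v_fine))))  # small interior error
```

A denser measurement grid corresponds to a higher sampling rate, so the usual sampling-theory trade-offs carry over directly to the characterization phase.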

Relevance: 40.00%

Abstract:

Investigation of impulsive signals originated by Partial Discharge (PD) phenomena is an effective tool for preventing electric failures in High Voltage (HV) and Medium Voltage (MV) systems. The determination of both sensor and instrument bandwidths is the key to achieving meaningful measurements, that is, obtaining the maximum Signal-to-Noise Ratio (SNR). The optimum bandwidth depends on the characteristics of the system under test, which can often be represented as a transmission line characterized by signal attenuation and dispersion. It is therefore necessary to develop both models and techniques which can accurately characterize the PD propagation mechanisms in each system and work out the frequency characteristics of the PD pulses at the detection point, in order to design sensors able to carry out on-line PD measurements with maximum SNR. Analytical models will be devised in order to predict PD propagation in MV apparatuses. Furthermore, simulation tools will be used where complex geometries make analytical models unfeasible. In particular, PD propagation in MV cables, transformers and switchgear will be investigated, taking into account both radiated and conducted signals associated with PD events, in order to design suitable sensors.
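As an illustration of the propagation mechanism described above, the sketch below pushes a synthetic PD pulse through a lossy, dispersive line modeled in the frequency domain as H(f) = exp(-(α(f) + jβ(f))L); the Gaussian pulse, the skin-effect-like attenuation coefficient and the phase velocity are illustrative assumptions, not measured cable parameters.

```python
# Sketch: frequency-domain propagation of a PD pulse along a cable of length L.
import numpy as np

fs = 1e9                                    # 1 GS/s sampling rate
t = np.arange(4096) / fs
pulse = np.exp(-((t - 200e-9) / 10e-9) ** 2)    # Gaussian stand-in for a PD pulse

f = np.fft.rfftfreq(t.size, 1 / fs)
L = 500.0                                   # cable length in metres (assumed)
k_a = 2e-7                                  # attenuation: alpha(f) = k_a * sqrt(f)
v_p = 1.7e8                                 # phase velocity in the cable (m/s)
H = np.exp(-(k_a * np.sqrt(f) + 2j * np.pi * f / v_p) * L)

received = np.fft.irfft(np.fft.rfft(pulse) * H, n=t.size)
# The received pulse is lower and wider: high frequencies are attenuated most,
# which is why sensor bandwidth must be matched to the system under test.
print(pulse.max(), received.max())
```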

Relevance: 40.00%

Abstract:

This thesis provides a necessary and sufficient condition for the asymptotic efficiency of a nonparametric estimator of the generalised autocovariance function of a Gaussian stationary random process. The generalised autocovariance function is the inverse Fourier transform of a power transformation of the spectral density, and encompasses the traditional and inverse autocovariance functions. Its nonparametric estimator is based on the inverse discrete Fourier transform of the same power transformation of the pooled periodogram. The general result is then applied to the class of Gaussian stationary ARMA processes and its implications are discussed. We illustrate that, for a class of contrast functionals and spectral densities, the minimum contrast estimator of the spectral density satisfies a Yule-Walker system of equations in the generalised autocovariance estimator. Selection of the pooling parameter, which characterizes the nonparametric estimator of the generalised autocovariance and controls its resolution, is addressed by using a multiplicative periodogram bootstrap to estimate the finite-sample distribution of the estimator. A multivariate extension of recently introduced spectral models for univariate time series is considered, and an algorithm for the coefficients of a power transformation of matrix polynomials is derived, which allows the Wold coefficients to be obtained from the matrix coefficients characterizing the generalised matrix cepstral models. This algorithm also allows the definition of the matrix variance profile, providing important quantities for vector time series analysis. A nonparametric estimator based on a transformation of the smoothed periodogram is proposed for estimation of the matrix variance profile.
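A minimal sketch of the estimator's structure follows, with two simplifications flagged loudly: the raw periodogram replaces the pooled periodogram, and the zero-frequency ordinate (annihilated by demeaning) is patched with its neighbour so that negative powers stay finite. Setting p = 1 recovers the usual (circular) sample autocovariances; p = -1 targets the inverse autocovariances.

```python
# Sketch: generalised autocovariance of order p as the inverse DFT of the
# p-th power of the periodogram, tried on a simulated Gaussian AR(1).
import numpy as np

rng = np.random.default_rng(0)
n = 2048
x = np.zeros(n)
for t in range(1, n):                        # Gaussian AR(1) with phi = 0.7
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()

def generalised_autocov(x, p, max_lag):
    x = x - x.mean()
    I = np.abs(np.fft.fft(x)) ** 2 / len(x)  # periodogram ordinates I(omega_j)
    I[0] = I[1]        # demeaning zeroes I(0); patch it before the power transform
    gamma_p = np.fft.ifft(I ** p).real       # inverse DFT of the p-th power
    return gamma_p[: max_lag + 1]

print(generalised_autocov(x, p=1, max_lag=3))   # ~ sample autocovariances
print(generalised_autocov(x, p=-1, max_lag=3))  # ~ inverse autocovariances
```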

Relevance: 40.00%

Abstract:

Silicon-based discrete high-power devices need to be designed with optimal performance up to several thousand volts and amperes to reach power ratings ranging from a few kW to beyond the 1 GW mark. To this purpose, a key element is the improvement of the junction termination (JT), since it drastically reduces the surface electric field peaks which may lead to early device failure. This thesis focuses mostly on the negative bevel termination, which has for several years been a standard processing step in bipolar production lines. A simple methodology to realize its planar counterpart, a JT with variation of the lateral doping concentration (VLD), is also described. On the JT a thin layer of a semi-insulating material is usually deposited; it acts as a passivation layer, reducing the interface defects and contributing to increased device reliability. A thorough understanding of how the passivation layer properties affect the breakdown voltage and the leakage current of a fast-recovery diode is fundamental to preserving the ideal termination effect and providing a stable blocking capability. More recently, amorphous carbon, also called diamond-like carbon (DLC), has been used as a robust surface passivation material. By using a commercial TCAD tool, a detailed physical explanation of the DLC electrostatic and transport properties has been provided. The proposed approach is able to predict the breakdown voltage and the leakage current of a negative-beveled power diode passivated with DLC, as confirmed by successful validation against the available experiments. In addition, the VLD JT proposed to overcome the limitations of the negative bevel architecture has been simulated, showing a breakdown voltage very close to the ideal one with a much smaller area consumption. Finally, the effect of a low junction depth on the formation of current filaments has been analyzed by performing reverse-recovery simulations.

Relevance: 40.00%

Abstract:

Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge on the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks preserving their main advantages. We develop several highly efficient methods based on both model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to inverse problems involving images and time series. For each task, we show that the proposed schemes improve on many existing methods in terms of both computational burden and solution quality. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models, for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme based on a deep-learning denoiser trained on the gradient domain. In the second part, we address natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep-learning-based methods. We boost the performance of both supervised strategies, such as trained convolutional and recurrent networks, and unsupervised ones, such as the Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
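The following sketch shows the generic shape of a Plug-and-Play proximal-gradient iteration of the kind discussed above: a data-fidelity gradient step followed by a denoiser acting as an implicit regularizer. A simple Gaussian filter stands in for the learned denoiser, and the toy deblurring operator and step size are illustrative assumptions; the thesis's scheme operates on the gradient domain with a trained network.

```python
# Sketch: Plug-and-Play iteration x <- D_sigma(x - tau * A^T (A x - y)).
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_restore(y, A, At, tau=0.9, sigma=0.6, n_iter=100):
    x = At(y)                                    # start from a back-projection
    for _ in range(n_iter):
        grad = At(A(x) - y)                      # gradient of 0.5*||Ax - y||^2
        x = gaussian_filter(x - tau * grad, sigma)   # denoiser replaces the prox
    return x

# Toy deblurring problem: A is a (self-adjoint) Gaussian blur.
rng = np.random.default_rng(0)
x_true = np.zeros((64, 64)); x_true[24:40, 24:40] = 1.0
A = lambda u: gaussian_filter(u, 1.5)
y = A(x_true) + 0.01 * rng.standard_normal(x_true.shape)
x_hat = pnp_restore(y, A, A)
print(np.linalg.norm(x_hat - x_true), np.linalg.norm(y - x_true))
```

Since the blur has spectral norm at most 1, a step size tau below 1 keeps the gradient step stable; the convergence analysis in the thesis concerns exactly this interplay between step size and denoiser properties.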

Relevance: 30.00%

Abstract:

Assessment of brain connectivity among different brain areas during cognitive or motor tasks is a crucial problem in neuroscience today. The aim of this research is to use neural mass models to assess the effect of various connectivity patterns on cortical EEG power spectral density (PSD), and to investigate the possibility of deriving connectivity circuits from EEG data. To this end, two different models have been built. In the first model, an individual region of interest (ROI) is built as the parallel arrangement of three populations, each exhibiting a unimodal spectrum at low, medium or high frequency. Connectivity among ROIs includes three parameters, which specify the strength of connection in the different frequency bands. Subsequent studies demonstrated that a single population can exhibit many different simultaneous rhythms, provided that some of these come from external sources (for instance, from remote regions). For this reason, in the second model an individual ROI is simulated with a single population only. Both models have been validated by comparing the simulated power spectral density with that computed in several cortical regions during cognitive and motor tasks. Another research study focuses on multisensory integration of tactile and visual stimuli in the representation of the near space around the body (peripersonal space). This work describes an original neural network simulating the representation of the peripersonal space around the hands, in basal conditions and after training with a tool used to reach the far space. The model is composed of three areas for each hand: two unimodal areas (visual and tactile) connected to a third bimodal area (visual-tactile), which is activated only when a stimulus falls within the peripersonal space. Results show that the peripersonal space, which includes just a small visual space around the hand in normal conditions, becomes elongated in the direction of the tool after training, thanks to a reinforcement of synapses.
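A toy illustration of the first model's central idea (three populations in parallel, each contributing one unimodal spectral peak, so the summed cortical signal shows a multimodal PSD): band-pass-filtered noise stands in for each population's rhythm. The resonator centre frequencies and weights are illustrative assumptions, not the neural mass equations used in the thesis.

```python
# Sketch: a ROI as the parallel arrangement of three unimodal rhythms.
import numpy as np
from scipy import signal

fs = 256.0
rng = np.random.default_rng(1)
white = rng.standard_normal((3, int(60 * fs)))   # independent inputs, 60 s

def rhythm(noise, f0, bw):
    """Band-pass filtered noise: a unimodal rhythm centred at f0 Hz."""
    b, a = signal.butter(2, [f0 - bw, f0 + bw], btype="band", fs=fs)
    return signal.lfilter(b, a, noise)

roi = (rhythm(white[0], 10, 2)          # low-frequency population
       + 0.7 * rhythm(white[1], 20, 3)  # medium-frequency population
       + 0.4 * rhythm(white[2], 40, 5)) # high-frequency population

f, psd = signal.welch(roi, fs=fs, nperseg=1024)
peaks, _ = signal.find_peaks(psd, prominence=psd.max() / 20)
print(f[peaks])                          # expect peaks near 10, 20 and 40 Hz
```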

Relevance: 30.00%

Abstract:

This work provides a forward step in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we take the reader through the fundamental notions of probability and stochastic processes, stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focus on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modelling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We study LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric methods of estimation. Then, we introduce the theory of fractional integrals and derivatives, which turns out to be very appropriate for studying and modelling systems with long-memory properties. After introducing the basic concepts, we provide many examples and applications. For instance, we investigate the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. Then, we focus on the study of generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations are obtained by using fractional integrals and derivatives of distributed orders. In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduce and study the generalized grey Brownian motion (ggBm), a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work we remark many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t); all these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focus on the subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space; however, we are able to provide a characterization that is independent of the underlying probability space. We also point out that the generalized grey Brownian motion is a direct generalization of a Gaussian process, and in particular that it generalizes both Brownian motion and fractional Brownian motion. Finally, we introduce and analyze a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We start from the forward drift equation, which is made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation is interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then consider the subordinated process Y(t)=X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation involving the same memory kernel K(t). We develop several applications and derive exact solutions. Moreover, we consider different stochastic models for the given equations, providing path simulations.
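A minimal sketch of exact fGn simulation, the discrete increment process of fBm discussed above, using the closed-form autocovariance gamma(k) = 0.5(|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}) and a Cholesky factorization (fine for short paths; circulant embedding scales better). The choice H = 0.8, which gives long-range dependence, and the path length are illustrative.

```python
# Sketch: exact fractional Gaussian noise via Cholesky of its Toeplitz covariance.
import numpy as np

def fgn(n, H, rng):
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]   # Toeplitz covariance matrix
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

rng = np.random.default_rng(0)
noise = fgn(512, H=0.8, rng=rng)
fbm = np.concatenate([[0.0], np.cumsum(noise)])    # fBm path as cumulated fGn
# Positive, slowly decaying sample autocorrelations signal long memory:
print([np.corrcoef(noise[:-k], noise[k:])[0, 1] for k in (1, 10, 50)])
```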

Relevance: 30.00%

Abstract:

Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for the determination of the 3D structure of the Earth's deep interior. Tomographic models obtained at global and regional scales are a fundamental tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, which defines the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. With this work we focus our attention on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines. This sum is often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution for overcoming the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study a signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components, and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities and sharp spikes. Wavelets are essentially used in two ways when applied to the study of geophysical processes or signals: 1) as a basis for the representation or characterization of the process; 2) as an integration kernel for analysis, to extract information about the process. These two types of application are the object of study of this work. First, we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface-wave phase velocity maps and evaluating its capabilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the Continuous Wavelet Transform in spectral analysis, starting again with synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
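The first application (wavelets as a representation basis) can be sketched as follows: a synthetic velocity-anomaly map is expanded on a wavelet basis and compressed by keeping only the largest coefficients. The checkerboard test model, the PyWavelets library and the Haar basis are illustrative choices, not the thesis setup.

```python
# Sketch: sparse wavelet representation of a checkerboard tomographic test model.
import numpy as np
import pywt

tiles = np.indices((64, 64)) // 16                   # 16-pixel checkerboard tiles
model = np.where((tiles[0] + tiles[1]) % 2 == 0, 1.0, -1.0)

coeffs = pywt.wavedec2(model, "haar", level=4)       # multiscale decomposition
arr, slices = pywt.coeffs_to_array(coeffs)
thresh = np.quantile(np.abs(arr), 0.90)              # keep the largest 10 percent
arr_sparse = np.where(np.abs(arr) >= thresh, arr, 0.0)
recon = pywt.waverec2(
    pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedec2"), "haar")
print(np.count_nonzero(arr_sparse), "of", arr.size, "coefficients kept")
print(np.abs(recon - model).max())                   # near zero: the model is sparse
```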

Relevance: 30.00%

Abstract:

My project explores and compares different forms of gender performance in contemporary art and visual culture from a perspective centered on photography. Thanks to its attesting power, this medium can work as a ready-made: during the 20th century it played a key role in the cultural emancipation of the body, which (to use Michel Foucault's expression) has now become «the zero point of the world». Through performance the body proves to be a living material of expression and communication, while photography ensures the recording of any ephemeral event that happens in time and space. My questioning approach considers the gender-constructed imagery from the 1990s to the present in order to investigate how photography's strong aura of realism promotes and allows fantasies of transformation. The contemporary fascination with gender (especially in art and fashion) represents a crucial issue in the global context of postmodernity and is manifested in a variety of visual media, from photography to video and film. Moreover, the internet, along with its digital transmission of images, has deeply affected our world (from culture to everyday life), leading to a postmodern preference for performativity over the more traditional and linear forms of narrativity. As a consequence, individual borders get redefined by the skin itself, which (dissected through instant vision) turns into a ductile material of mutation and hybridization in the service of identity. My critical assumptions are drawn from the most relevant changes that occurred in philosophy during the last two decades through the contributions of Jacques Lacan, Michel Foucault, Jacques Derrida and Gilles Deleuze, who developed a cross-disciplinary and comparative approach to interpreting the crisis of modernity. They have profoundly influenced feminist studies, so that the category of gender has been reassessed in contrast with sex (as a biological connotation) and in relation to history, culture and society. The ideal starting point of my research is the year 1990. I chose it as the approximate historical moment when the intersection of race, class and gender was placed at the forefront of international artistic production concerned with identity, diversity and globalization. Such issues had been explored throughout the 1970s, but it was only from the mid-1980s onward that they began to be articulated more consistently. Published in 1990, the book "Gender Trouble: Feminism and the Subversion of Identity" by Judith Butler marked an important breakthrough by linking gender to performance and by investigating the intricate connections between theory and practice, embodiment and representation. It inspired subsequent research in a variety of disciplines, art history included. In the same year Teresa de Lauretis launched the definition of queer theory to challenge the academic perspective in gay and lesbian studies. In the meantime, the rise of Third Wave Feminism in the US introduced a racially and sexually inclusive vision of the global situation in order to reflect on subjectivity, new technologies and popular culture in connection with gender representation. These conceptual tools have enabled prolific readings of contemporary cultural production, whether in the fine arts or the mass media. After discussing the appropriate framework of my project and taking into account the postmodern globalization of the visual, I have turned to photography to map gender representation both in art and in fashion.
Therefore I have been creating an archive of images around specific topics. I decided to include fashion photography because in the 1990s this genre moved away from the paradigm of an idealized, classical beauty toward a new vernacular allied with lifestyles, art practices, and pop and youth culture; as one might expect, the dominant narrative modes in fashion photography are now mainly influenced by cinema and the snapshot. These strategies originate story lines and interrupted narratives, using models' performance to convey a particular imagery where identity issues emerge as an essential part of the fashion spectacle. Focusing on the intersections of gender identities with socially and culturally produced identities, my approach underlines how the fashion world has turned to current trends in art photography and, in some cases, to the artists themselves. The growing fluidity of the categories that distinguish art from fashion photography represents a particularly fruitful moment of visual exchange. Though varying over time, the dialogue between these two fields has always been vital; nowadays it can be studied as a result of the close relationship between the contemporary art world and consumer culture. Due to the saturation of postmodern imagery, the feedback between art and fashion has become much more immediate, and hence increasingly significant for anyone who wants to investigate the construction of gender identity through performance. In addition, many magazines founded in the 1990s bridged the worlds of art and fashion because some of their designers and even editors were art-school graduates who encouraged innovation. The inclusion of art within such magazines aimed at validating them as a form of art in themselves, supporting a dynamic intersection of music, fashion, design and youth culture: an intersection that also contributed to creating and spreading different gender stereotypes. This general interest in fashion produced many exhibitions of and about fashion itself at major international venues such as the Victoria and Albert Museum in London, the Metropolitan Museum of Art and the Solomon R. Guggenheim Museum in New York. Since then, this celebrated success of fashion has been regarded as a typical element of postmodern culture. Accordingly, I have also based my analysis on some important exhibitions dealing with gender performance, such as "Féminin-Masculin" at the Centre Pompidou in Paris (1995), "Rrose is a Rrose is a Rrose. Gender Performance in Photography" at the Solomon R. Guggenheim Museum in New York (1997), "Global Feminisms" at the Brooklyn Museum (2007), and "Female Trouble" at the Pinakothek der Moderne in München, together with the workshops dedicated to "Performance: Gender and Identity" held in June 2005 at Tate Modern in London. Since 2003, Italy has had Gender Bender - an international festival held annually in Bologna - to explore the gender imagery stemming from contemporary culture. Over a few days the festival offers a series of events ranging from visual arts, performance, cinema and literature to conferences and music. Aware that no method of research is either race or gender neutral, I have traced these critical paths to question gender identity from a multicultural perspective, taking the political implications into account as well. In fact, if visibility may be equated with exposure, we can also read these images as points of intersection of visibility with social power.
Since gender assignations rely so heavily on the visual, the postmodern dismantling of gender certainty through performance has wide-ranging effects that need to be analyzed. In some sense this practice can even contest the dominance of the visual within postmodernism. My visual map of contemporary art and fashion photography includes artists like Nan Goldin, Cindy Sherman, Hellen van Meene, Rineke Dijkstra, Ed Templeton, Ryan McGinley, Anne Daems, Miwa Yanagi, Tracey Moffat, Catherine Opie, Tomoko Sawada, Vanessa Beecroft, Yasumasa Morimura and Collier Schorr, among others.

Relevance: 30.00%

Abstract:

In the present work we perform an econometric analysis of the Tribal art market. To this aim, we use a unique and original database that includes information on Tribal art auctions worldwide from 1998 to 2011. In the literature, art prices are modelled through the hedonic regression model, a classic fixed-effects model. The main drawback of the hedonic approach is the large number of parameters, since, in general, art data include many categorical variables. In this work, we propose a multilevel model for the analysis of Tribal art prices that takes into account the influence of time on artwork prices. It is natural to assume that time exerts an influence over the price dynamics in various ways. Nevertheless, since the set of objects changes at every auction date, we do not have repeated measurements of the same items over time. Hence, the dataset does not constitute a proper panel; rather, it has a two-level structure in which items, the level-1 units, are grouped in time points, the level-2 units. The main theoretical contribution is the extension of classical multilevel models to cope with the case described above. In particular, we introduce a model with time-dependent random effects at the second level. We propose a novel specification of the model, derive the maximum likelihood estimators and implement them through the E-M algorithm. We test the finite-sample properties of the estimators and the validity of our own R code by means of a simulation study. Finally, we show that the new model considerably improves the fit of the Tribal art data with respect to both the hedonic regression model and the classic multilevel model.
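The two-level structure can be sketched with a standard random-intercept multilevel model, which is the baseline the thesis extends with time-dependent level-2 random effects and a dedicated E-M estimator; the simulated hedonic covariate, group sizes and statsmodels fit below are illustrative assumptions, not the thesis's specification or data.

```python
# Sketch: items (level 1) grouped in auction dates (level 2), random intercepts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_auctions, n_items = 40, 25                # level-2 units group level-1 units
auction = np.repeat(np.arange(n_auctions), n_items)
u = rng.normal(0, 0.5, n_auctions)          # auction-level random effects
size = rng.normal(0, 1, n_auctions * n_items)   # a hedonic characteristic
log_price = 3.0 + 0.4 * size + u[auction] + rng.normal(0, 0.3, auction.size)

df = pd.DataFrame({"log_price": log_price, "size": size, "auction": auction})
fit = smf.mixedlm("log_price ~ size", df, groups="auction").fit()
print(fit.params["size"], fit.cov_re)       # slope ~ 0.4, level-2 variance ~ 0.25
```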

Relevance: 30.00%

Abstract:

The main objective of the thesis is the development of a short-term empirical forecasting model capable of providing precise and reliable hourly forecasts of electricity consumption in the Italian market. The model distils the knowledge acquired and the experience gained during my current work at Romagna Energia S.C.p.A., one of the major Italian players in the energy market. Over the last two decades there have been drastic changes in the structure of the electricity market worldwide. In most industrialized countries the electricity sector has moved from its original monopoly configuration to a liberalized competitive market, where consumers are free to choose their supplier. The modelling and forecasting of the electricity consumption time series have therefore taken on a very important role in the market, both for policy makers and for operators. Building on the existing literature, and exploiting the knowledge acquired in the field together with some intuitions, a triangular modelling structure, entirely novel in this area of research, has been analyzed and developed, suggested precisely by the physical mechanism through which electricity is produced and consumed over the 24 hours. This triangular scheme can be seen as a particular VARMA model and has a twofold utility: interpretative of the phenomenon on the one hand, and predictive on the other. New leading indicators linked to meteorological factors are also introduced, with the aim of improving its forecasting performance. Using the Italian electricity consumption time series from 1 March 2010 to 30 March 2012, the parameters of the proposed forecasting scheme were estimated, and the forecasts for the period from 1 April 2012 to 30 April 2012 were evaluated by comparing them with those provided by official sources.
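As a simplified stand-in for the triangular VARMA scheme, the sketch below fits a seasonal ARMA-type model with an exogenous temperature regressor (a meteorological leading indicator) to hourly load and forecasts one day ahead; the synthetic series and model orders are illustrative assumptions.

```python
# Sketch: short-term hourly load forecasting with a temperature regressor.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
hours = np.arange(21 * 24)                  # three weeks of hourly data
temp = 15 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
load = (1000 + 200 * np.sin(2 * np.pi * (hours - 8) / 24)   # daily cycle
        + 5 * np.abs(temp - 18)                             # heating/cooling demand
        + rng.normal(0, 20, hours.size))

train = slice(0, 20 * 24)                   # fit on 20 days, forecast day 21
model = SARIMAX(load[train], exog=temp[train],
                order=(1, 0, 1), seasonal_order=(1, 0, 0, 24))
fit = model.fit(disp=False)
pred = fit.forecast(steps=24, exog=temp[20 * 24:])
print(np.mean(np.abs(pred - load[20 * 24:])))   # out-of-sample MAE
```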

Relevance: 30.00%

Abstract:

Semiconductor technologies are rapidly evolving, driven by the need for higher performance demanded by applications. Thanks to the numerous advantages it offers, gallium nitride (GaN) is quickly becoming the reference technology in the field of power amplification at high frequency. The RF power density of AlGaN/GaN HEMTs (High Electron Mobility Transistors) is an order of magnitude higher than that of gallium arsenide (GaAs) transistors. The first demonstration of GaN devices dates back only to 1993, and although over the past few years some commercial products have become available, the development of a new technology is a long process: AlGaN/GaN HEMT technology is not yet fully mature, and issues related to dispersive phenomena and to reliability are still present. Dispersive phenomena, also referred to as long-term memory effects, have a detrimental impact on RF performance and are due both to the presence of traps in the device structure and to self-heating effects. A better understanding of these problems is needed to further improve the obtainable performance, and new device models that take these effects into account are necessary for accurate circuit design. New characterization techniques are thus needed, both to gain insight into these problems and improve the technology, and to develop more accurate device models. This thesis presents the research conducted on the development of new characterization and modelling methodologies for GaN-based devices and on the use of this technology for high-frequency power amplifier applications.

Relevance: 30.00%

Abstract:

The advances that have characterized spatial econometrics in recent years are mostly theoretical and have not yet found extensive empirical application. In this work we aim to supply a review of the main tools of spatial econometrics and to show an empirical application of one of the most recently introduced estimators. Despite the numerous alternatives that econometric theory provides for the treatment of spatial (and spatiotemporal) data, empirical analyses are still limited by the lack of availability of the corresponding routines in statistical and econometric software. Spatiotemporal modeling represents one of the most recent developments in spatial econometric theory, and the finite-sample properties of the estimators that have been proposed are currently being tested in the literature. We provide a comparison between some estimators (a quasi-maximum likelihood, QML, estimator and some GMM-type estimators) for a fixed-effects dynamic panel data model under certain conditions, by means of a Monte Carlo simulation analysis. We focus on different settings, characterized either by fully stable or by quasi-unit-root series. We also investigate the extent of the bias caused by a non-spatial estimation of a model when the data are characterized by different degrees of spatial dependence. Finally, we provide an empirical application of a QML estimator for a time-space dynamic model which includes a temporal, a spatial and a spatiotemporal lag of the dependent variable. This is done by choosing a relevant and prolific field of analysis, in which spatial econometrics has so far found only limited space, in order to explore the value added of considering the spatial dimension of the data. In particular, we study the determinants of cropland value in the Midwestern U.S.A. in the years 1971-2009, taking the present value model (PVM) as the theoretical framework of analysis.
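For reference, a time-space dynamic specification of the kind referred to above can be sketched in the standard notation of the spatial-panel literature; the notation is an assumption, not necessarily the thesis's exact formulation:

```latex
y_t = \tau\, y_{t-1} + \rho\, W y_t + \eta\, W y_{t-1} + X_t \beta + \mu + \varepsilon_t ,
```

where \(y_t\) stacks the dependent variable over spatial units at time \(t\), \(W\) is the spatial weights matrix, \(\tau\), \(\rho\) and \(\eta\) are the temporal, spatial and spatiotemporal autoregressive parameters, \(X_t\) collects the regressors, \(\mu\) the fixed effects and \(\varepsilon_t\) the disturbances.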

Relevance: 30.00%

Abstract:

The thesis is concerned with local trigonometric regression methods. The aim was to develop a method for the extraction of cyclical components from time series. The main results of the thesis are the following. First, a generalization of the filter proposed by Christiano and Fitzgerald is furnished for the smoothing of ARIMA(p,d,q) processes. Second, a local trigonometric filter is built and its statistical properties are derived. Third, the convergence properties of trigonometric estimators and the problem of choosing the order of the model are discussed. A large-scale simulation experiment has been designed in order to assess the performance of the proposed models and methods. The results show that local trigonometric regression may be a useful tool for periodic time series analysis.
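The starting point that the thesis generalizes, the Christiano-Fitzgerald band-pass filter, can be sketched with the implementation available in statsmodels; the simulated series, cycle period and band limits are illustrative assumptions.

```python
# Sketch: extracting a cyclical component with the Christiano-Fitzgerald filter.
import numpy as np
from statsmodels.tsa.filters.cf_filter import cffilter

rng = np.random.default_rng(0)
t = np.arange(400)
cycle_true = 2.0 * np.sin(2 * np.pi * t / 40)   # a 40-period cycle
series = 0.05 * t + cycle_true + rng.normal(0, 0.5, t.size)

# Keep fluctuations with periods between 30 and 50 observations
cycle, trend = cffilter(series, low=30, high=50, drift=True)
print(np.corrcoef(cycle, cycle_true)[0, 1])     # close to 1
```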

Relevance: 30.00%

Abstract:

In this dissertation some novel indices for the vulnerability and robustness assessment of power grids are presented. Such indices are defined mainly from the structure of transmission power grids, with the aim of blackout (BO) prevention and mitigation. Numerical experiments showing how they could be used, alone or in coordination with pre-existing indices, to reduce the effects of BOs are discussed. The indices are introduced within three different subjects. The first subject examines economic aspects of grid operation and their effects on BO propagation. Basically, the simulations support the claim that operating the grid in the most profitable way can increase the size or frequency of BOs, while some uneconomical ways of supplying energy turn out to be less affected by BO phenomena. In the second subject, new topological indices are devised to address the question of which buses are the best locations for distributed generation. The combined use of two indices is shown to be a promising alternative for extracting the grid's significant features regarding robustness against BOs in the presence of distributed generation; for this purpose, a new index based on outage shift factors is used along with a previously defined electric centrality index. The third subject is the static robustness analysis of electric networks from a purely structural point of view. A pair of existing topological indices (namely the degree index and the clustering coefficient) are combined to show how the degradation of the network structure can be accelerated. Blackout simulations were carried out using the DC power flow method and models of transmission networks from the USA and Europe.
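The DC power flow at the heart of such blackout simulations can be sketched in a few lines: bus angles solve B'θ = P, and line flows follow from the angle differences divided by the line reactances. The 4-bus network, its reactances and injections below are illustrative assumptions.

```python
# Sketch: DC power flow on a small test network.
import numpy as np

lines = [(0, 1, 0.1), (0, 2, 0.2), (1, 2, 0.2), (1, 3, 0.1), (2, 3, 0.25)]
n_bus = 4
injections = np.array([1.5, 0.5, -0.8, -1.2])   # generation minus load (p.u.)

B = np.zeros((n_bus, n_bus))                    # susceptance (B') matrix
for i, j, x in lines:
    b = 1.0 / x
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

# Fix bus 0 as the slack/reference (theta_0 = 0) and solve the reduced system
theta = np.zeros(n_bus)
theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])

flows = {(i, j): (theta[i] - theta[j]) / x for i, j, x in lines}
print(flows)   # overloads found here drive a cascading-failure simulation
```

Lines whose computed flow exceeds their rating are tripped, the flow is re-solved, and the cascade is iterated; structural indices such as degree and clustering coefficient are then evaluated on the surviving network.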