918 results for Time domain simulation tools
Abstract:
This monograph takes a descriptive look at corporate culture and its relationship to organizational performance from the perspective of complexity science. It first presents an overview of the definition of culture and characterizes complex systems, then examines how certain complexity phenomena are reflected in culture, reviewing the proposal of Dolan et al., who put forward values as attractors of performance. It further examines different forms and definitions of organizational performance and identifies studies pointing to a correlation between strong cultures and performance. However, Gordon & DiTomaso conclude that how the relationship works, beyond mere correlation, is not well understood. Finally, it concludes that complexity offers a way to explain how the relationship between culture and performance may operate, through values as a cultural element that leads to emergence. Open questions remain about strategies for applying these findings in organizations and about the use of simulation tools to deepen the research.
Abstract:
Under anthropogenic climate change it is possible that the increased radiative forcing and associated changes in mean climate may affect the “dynamical equilibrium” of the climate system; leading to a change in the relative dominance of different modes of natural variability, the characteristics of their patterns or their behavior in the time domain. Here we use multi-century integrations of version three of the Hadley Centre atmosphere model coupled to a mixed layer ocean to examine potential changes in atmosphere-surface ocean modes of variability. After first evaluating the simulated modes of Northern Hemisphere winter surface temperature and geopotential height against observations, we examine their behavior under an idealized equilibrium doubling of atmospheric CO2. We find no significant changes in the order of dominance, the spatial patterns or the associated time series of the modes. Having established that the dynamic equilibrium is preserved in the model on doubling of CO2, we go on to examine the temperature pattern of mean climate change in terms of the modes of variability; the motivation being that the pattern of change might be explicable in terms of changes in the amount of time the system resides in a particular mode. In addition, if the two are closely related, we might be able to assess the relative credibility of different spatial patterns of climate change from different models (or model versions) by assessing their representation of variability. Significant shifts do appear to occur in the mean position of residence when examining a truncated set of the leading order modes. However, on examining the complete spectrum of modes, it is found that the mean climate change pattern is close to orthogonal to all of the modes and the large shifts are a manifestation of this orthogonality. The results suggest that care should be exercised in using a truncated set of variability EOFs to evaluate climate change signals.
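The projection diagnostic described above can be illustrated with a minimal NumPy sketch on synthetic data: a change pattern deliberately built to be nearly orthogonal to the leading EOFs appears poorly captured by a truncated mode set but is fully recovered by the complete spectrum. All data here are synthetic stand-ins, not model output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic anomaly field: 200 time steps, 50 spatial points
X = rng.standard_normal((200, 50))
X -= X.mean(axis=0)  # remove the time mean

# EOFs are the right singular vectors of the anomaly matrix
_, _, eofs = np.linalg.svd(X, full_matrices=False)  # rows are modes

# A hypothetical mean-change pattern, deliberately constructed to be
# nearly orthogonal to the leading modes of variability
change = eofs[-1] + 0.05 * eofs[0]

def captured(pattern, modes):
    """Fraction of the pattern's variance captured by the given modes."""
    coeffs = modes @ pattern
    return np.sum(coeffs**2) / np.dot(pattern, pattern)

frac_truncated = captured(change, eofs[:5])  # leading 5 EOFs only
frac_full = captured(change, eofs)           # complete spectrum
```

The truncated projection is tiny while the full-spectrum projection is complete, mirroring the paper's caution about evaluating climate-change signals in a truncated EOF basis.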
Abstract:
The complexity inherent in climate data makes it necessary to introduce more than one statistical tool to the researcher to gain insight into the climate system. Empirical orthogonal function (EOF) analysis is one of the most widely used methods to analyze weather/climate modes of variability and to reduce the dimensionality of the system. Simple structure rotation of EOFs can enhance interpretability of the obtained patterns but cannot provide anything more than temporal uncorrelatedness. In this paper, an alternative rotation method based on independent component analysis (ICA) is considered. The ICA is viewed here as a method of EOF rotation. Starting from an initial EOF solution rather than rotating the loadings toward simplicity, ICA seeks a rotation matrix that maximizes the independence between the components in the time domain. If the underlying climate signals have an independent forcing, one can expect to find loadings with interpretable patterns whose time coefficients have properties that go beyond simple noncorrelation observed in EOFs. The methodology is presented and an application to monthly means sea level pressure (SLP) field is discussed. Among the rotated (to independence) EOFs, the North Atlantic Oscillation (NAO) pattern, an Arctic Oscillation–like pattern, and a Scandinavian-like pattern have been identified. There is the suggestion that the NAO is an intrinsic mode of variability independent of the Pacific.
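As a minimal sketch of the EOF step that the ICA rotation builds on (synthetic data, not the SLP field used in the paper): the principal component time series obtained from an SVD are mutually uncorrelated, which is all plain EOFs guarantee; the rotation to independence goes beyond this.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly anomalies: 240 months x 100 grid points
X = rng.standard_normal((240, 100))
X -= X.mean(axis=0)

# EOF analysis via SVD: U*S gives the principal component time series,
# rows of Vt give the spatial EOF patterns
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pcs = U * S
explained = S**2 / np.sum(S**2)  # fraction of variance per mode

# Sample covariance of the PCs is diagonal: temporal uncorrelatedness,
# the most that unrotated EOFs provide
cov = pcs.T @ pcs / (len(X) - 1)
off_diag = cov - np.diag(np.diag(cov))
```

An ICA rotation would then seek an orthogonal matrix making the rotated time coefficients statistically independent, not merely uncorrelated.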
Abstract:
We introduce a technique for assessing the diurnal development of convective storm systems based on outgoing longwave radiation fields. Using the size distribution of the storms measured from a series of images, we generate an array in the lengthscale-time domain based on the standard score statistic. It demonstrates succinctly the size evolution of storms as well as the dissipation kinematics. It also provides evidence related to the temperature evolution of the cloud tops. We apply this approach to a test case comparing observations made by the Geostationary Earth Radiation Budget instrument to output from the Met Office Unified Model run at two resolutions. The 12km resolution model produces peak convective activity on all lengthscales significantly earlier in the day than shown by the observations and no evidence for storms growing in size. The 4km resolution model shows realistic timing and growth evolution although the dissipation mechanism still differs from the observed data.
Abstract:
There is a need for better links between hydrology and ecology, specifically between landscapes and riverscapes to understand how processes and factors controlling the transport and storage of environmental pollution have affected or will affect the freshwater biota. Here we show how the INCA modelling framework, specifically INCA-Sed (the Integrated Catchments model for Sediments) can be used to link sediment delivery from the landscape to sediment changes in-stream. INCA-Sed is a dynamic, process-based, daily time step model. The first complete description of the equations used in the INCA-Sed software (version 1.9.11) is presented. This is followed by an application of INCA-Sed made to the River Lugg (1077 km²) in Wales. Excess suspended sediment can negatively affect salmonid health. The Lugg has a large and potentially threatened population of both Atlantic salmon (Salmo salar) and Brown Trout (Salmo trutta). With the exception of the extreme sediment transport processes, the model satisfactorily simulated both the hydrology and the sediment dynamics in the catchment. Model results indicate that diffuse soil loss is the most important sediment generation process in the catchment. In the River Lugg, the mean annual Guideline Standard for suspended sediment concentration, proposed by UKTAG, of 25 mg l⁻¹ is only slightly exceeded during the simulation period (1995–2000), indicating only minimal effect on the Atlantic salmon population. However, the daily time step simulation of INCA-Sed also allows the investigation of the critical spawning period. It shows that the sediment may have a significant negative effect on the fish population in years with high sediment runoff. It is proposed that the fine settled particles probably do not affect the salmonid egg incubation process, though suspended particles may damage the gills of fish and make the area unfavourable for spawning if the conditions do not improve.
Abstract:
Asynchronous Optical Sampling (ASOPS) [1,2] and frequency comb spectrometry [3] based on dual Ti:sapphire resonators operated in a master/slave mode have the potential to improve the signal-to-noise ratio in THz transient and IR spectrometry. The multimode Brownian oscillator time-domain response function described by state-space models is a mathematically robust framework that can be used to describe the dispersive phenomena governed by Lorentzian, Debye and Drude responses. In addition, the optical properties of an arbitrary medium can be expressed as a linear combination of simple multimode Brownian oscillator functions. The suitability of a range of signal processing schemes adopted from the Systems Identification and Control Theory community for further processing the recorded THz transients in the time or frequency domain will be outlined [4,5]. Since a femtosecond duration pulse is capable of persistent excitation of the medium within which it propagates, such an approach is perfectly justifiable. Several de-noising routines based on system identification will be shown. Furthermore, specifically developed apodization structures will be discussed. These are necessary because, due to dispersion issues, the time-domain background and sample interferograms are non-symmetrical [6-8]. These procedures can lead to a more precise estimation of the complex insertion loss function. The algorithms are applicable to femtosecond spectroscopies across the EM spectrum. Finally, a methodology for femtosecond pulse shaping using genetic algorithms, aiming to map and control molecular relaxation processes, will be mentioned.
Abstract:
We discuss the use of pulse shaping for optimal excitation of samples in time-domain THz spectroscopy. Pulse shaping can be performed in a 4f optical system to specifications from state space models of the system's dynamics. Subspace algorithms may be used for the identification of the state space models.
Abstract:
We model the large scale fading of wireless THz communications links deployed in a metropolitan area taking into account reception through direct line of sight, ground or wall reflection and diffraction. The movement of the receiver in the three dimensions is modelled by an autonomous dynamic linear system in state-space whereas the geometric relations involved in the attenuation and multi-path propagation of the electric field are described by a static non-linear mapping. A subspace algorithm in conjunction with polynomial regression is used to identify a Wiener model from time-domain measurements of the field intensity.
Classification of lactose and mandelic acid THz spectra using subspace and wavelet-packet algorithms
Abstract:
This work compares classification results of lactose, mandelic acid and dl-mandelic acid, obtained on the basis of their respective THz transients. The performance of three different pre-processing algorithms applied to the time-domain signatures obtained using a THz-transient spectrometer is contrasted by evaluating the classifier performance. A range of amplitudes of zero-mean white Gaussian noise are used to artificially degrade the signal-to-noise ratio of the time-domain signatures to generate the data sets that are presented to the classifier for both learning and validation purposes. This gradual degradation of interferograms by increasing the noise level is equivalent to performing measurements assuming a reduced integration time. Three signal processing algorithms were adopted for the evaluation of the complex insertion loss function of the samples under study: a) standard evaluation by ratioing the sample with the background spectra, b) a subspace identification algorithm and c) a novel wavelet-packet identification procedure. Within-class and between-class dispersion metrics are adopted for the three data sets. A discrimination metric evaluates how well the three classes can be distinguished within the frequency range 0.1–1.0 THz using the above algorithms.
Abstract:
This work compares and contrasts results of classifying time-domain ECG signals with pathological conditions taken from the MIT-BIH arrhythmia database. Linear discriminant analysis and a multi-layer perceptron were used as classifiers. The neural network was trained by two different methods, namely back-propagation and a genetic algorithm. Converting the time-domain signal into the wavelet domain reduced the dimensionality of the problem at least 10-fold. This was achieved using wavelets from the db6 family as well as using adaptive wavelets generated using two different strategies. The wavelet transforms used in this study were limited to two decomposition levels. A neural network with evolved weights proved to be the best classifier, with a maximum of 99.6% accuracy when optimised wavelet-transform ECG data was presented to its input and 95.9% accuracy when the signals presented to its input were decomposed using db6 wavelets. The linear discriminant analysis achieved a maximum classification accuracy of 95.7% when presented with optimised and 95.5% with db6 wavelet coefficients. It is shown that the much simpler signal representation of a few wavelet coefficients obtained through an optimised discrete wavelet transform facilitates the task of classifying non-stationary time-variant signals considerably. In addition, the results indicate that wavelet optimisation may improve the classification ability of a neural network. (c) 2005 Elsevier B.V. All rights reserved.
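The dimensionality reduction through wavelet decomposition can be sketched with the orthonormal Haar wavelet (a simpler stand-in for the db6 family used in the study) on a synthetic ECG-like segment; two decomposition levels reduce 256 samples to 64 approximation coefficients while preserving the signal energy across all coefficients.

```python
import numpy as np

def haar_step(x):
    # One level of the orthonormal Haar DWT: scaled pairwise sums
    # (approximation) and differences (detail)
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    return s, d

# Synthetic beat-like segment: a sharp peak plus noise (not real ECG data)
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256, endpoint=False)
ecg = np.exp(-((t - 0.5) ** 2) / 0.002) + 0.05 * rng.standard_normal(256)

# Two decomposition levels, as in the study
a1, d1 = haar_step(ecg)
a2, d2 = haar_step(a1)

# Keeping only the level-2 approximation reduces 256 samples to 64
features = a2
```

An orthonormal transform preserves total energy, so the approximation coefficients concentrate most of the signal content that a classifier needs.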
Abstract:
We provide a system identification framework for the analysis of THz-transient data. The subspace identification algorithm for both deterministic and stochastic systems is used to model the time-domain responses of structures under broadband excitation. Structures with additional time delays can be modelled within the state-space framework using additional state variables. We compare the numerical stability of the commonly used least-squares ARX models to that of the subspace N4SID algorithm by using examples of fourth-order and eighth-order systems under pulse and chirp excitation conditions. These models correspond to structures having two and four modes simultaneously propagating, respectively. We show that chirp excitation combined with the subspace identification algorithm can provide a better identification of the underlying mode dynamics than the ARX model does as the complexity of the system increases. The use of an identified state-space model for mode demixing, upon transformation to a decoupled realization form, is illustrated. Applications of state-space models and the N4SID algorithm to THz transient spectroscopy as well as to optical systems are highlighted.
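The least-squares ARX fit referred to above can be sketched on a synthetic second-order system (all coefficients here are illustrative, not from the paper): stack lagged outputs and inputs as regressors and solve for the parameters in one least-squares step.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a stable second-order ARX system:
# y[k] = 1.5 y[k-1] - 0.7 y[k-2] + 1.0 u[k-1] + 0.5 u[k-2] + e[k]
N = 2000
u = rng.standard_normal(N)          # persistently exciting input
e = 0.01 * rng.standard_normal(N)   # small equation-error noise
y = np.zeros(N)
for k in range(2, N):
    y[k] = 1.5*y[k-1] - 0.7*y[k-2] + 1.0*u[k-1] + 0.5*u[k-2] + e[k]

# Least-squares ARX fit: regressor matrix of lagged outputs and inputs
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
# theta estimates [1.5, -0.7, 1.0, 0.5]
```

Subspace methods such as N4SID instead estimate a state-space realization directly from input-output data, which the paper finds numerically better behaved as model order grows.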
Abstract:
A quadratic programming optimization procedure for designing asymmetric apodization windows tailored to the shape of time-domain sample waveforms recorded using a terahertz transient spectrometer is proposed. By artificially degrading the waveforms, the performance of the designed window in both the time and the frequency domains is compared with that of conventional rectangular, triangular (Mertz), and Hamming windows. Examples of window optimization assuming Gaussian functions as the building elements of the apodization window are provided. The formulation is sufficiently general to accommodate other basis functions. (C) 2007 Optical Society of America
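A stripped-down sketch of the window-design idea: fit a weighted sum of Gaussian basis functions to a target asymmetric envelope. The paper poses this as a constrained quadratic program; an unconstrained least-squares fit is used here as the simplest analogue, and the exponential target is an illustrative stand-in for a measured THz waveform envelope.

```python
import numpy as np

# Target envelope: asymmetric, matched to a waveform whose energy is
# concentrated early in the record (a stand-in for a THz transient)
n = 512
t = np.arange(n)
target = np.exp(-t / 150.0)

# Gaussian basis functions spread over the record
centers = np.linspace(0, n - 1, 12)
width = 40.0
B = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width**2))

# Unconstrained least-squares fit of the Gaussian weights; a QP would
# add constraints (e.g. non-negativity, endpoint values) on w
w, *_ = np.linalg.lstsq(B, target, rcond=None)
window = B @ w

rms_error = np.sqrt(np.mean((window - target) ** 2))
```

Because the basis is generic, the same formulation accommodates other building elements, as the abstract notes.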
Abstract:
The large scale fading of wireless mobile communications links is modelled assuming the mobile receiver motion is described by a dynamic linear system in state-space. The geometric relations involved in the attenuation and multi-path propagation of the electric field are described by a static non-linear mapping. A Wiener system subspace identification algorithm in conjunction with polynomial regression is used to identify a model from time-domain estimates of the field intensity assuming a multitude of emitters and an antenna array at the receiver end.
Abstract:
A quasi-optical deembedding technique for characterizing waveguides is demonstrated using wide-band time-resolved terahertz spectroscopy. A transfer function representation is adopted for the description of the signal in the input and output port of the waveguides. The time-domain responses were discretized and the waveguide transfer function was obtained through a parametric approach in the z-domain after describing the system with an AutoRegressive with eXogenous input (ARX) model, as well as with a state-space model. Prior to the identification procedure, filtering was performed in the wavelet domain to minimize both signal distortion and the noise propagating in the ARX and subspace models. The optimal filtering procedure used in the wavelet domain for the recorded time-domain signatures is described in detail. The effect of filtering prior to the identification procedures is elucidated with the aid of pole-zero diagrams. Models derived from measurements of terahertz transients in a precision WR-8 waveguide adjustable short are presented.
Abstract:
A quasi-optical technique for characterizing micromachined waveguides is demonstrated with wideband time-resolved terahertz spectroscopy. A transfer-function representation is adopted for the description of the relation between the signals in the input and output port of the waveguides. The time-domain responses were discretized, and the waveguide transfer function was obtained through a parametric approach in the z domain after describing the system with an autoregressive with exogenous input model. The a priori assumption of the number of modes propagating in the structure was inferred from comparisons of the theoretical with the measured characteristic impedance as well as with parsimony arguments. Measurements for a precision WR-8 waveguide-adjustable short as well as for G-band reduced-height micromachined waveguides are presented. (C) 2003 Optical Society of America.