54 results for Time domain simulation tools


Relevance: 100.00%

Abstract:

This research examines dynamics associated with new representational technologies in complex organizations through a study of the use of a Single Model Environment, prototyping and simulation tools in the mega-project to construct Terminal 5 at Heathrow Airport, London. The ambition of the client, BAA, was to change industrial practices, reducing project costs and time to delivery through new contractual arrangements and new digitally-enabled collaborative ways of working. The research highlights changes over time and addresses two areas of 'turbulence' in the use of: 1) technologies, where there is a dynamic tension between desires to constantly improve, change and update digital technologies and the need to standardise practices, maintaining and defending the overall integrity of the system; and 2) representations, where dynamics result from the responsibilities and liabilities associated with the sharing of digital representations and a lack of trust in the validity of data from other firms. These dynamics are tracked across three stages of this well-managed and innovative project and indicate the generic need to treat digital infrastructure as an ongoing strategic issue.

Relevance: 100.00%

Abstract:

This chapter aims to provide an overview of building simulation in a theoretical and practical context. The following sections demonstrate the importance of simulation programs at a time when society is shifting towards a low carbon future and the practice of sustainable design becomes mandatory. The initial sections acquaint the reader with basic terminology and comment on the capabilities and categories of simulation tools before discussing the historical development of programs. The main body of the chapter considers the primary benefits and users of simulation programs, looks at the role of simulation in the construction process and examines the validity and interpretation of simulation results. The latter half of the chapter looks at program selection and discusses software capability, product characteristics, input data and output formats. The inclusion of a case study demonstrates the simulation procedure and key concepts. Finally, the chapter closes with a look into the future, commenting on the development of simulation capability, user interfaces and how simulation will continue to empower building professionals as society faces new challenges in a rapidly changing landscape.

Relevance: 100.00%

Abstract:

Single-carrier (SC) block transmission with frequency-domain equalisation (FDE) offers a viable transmission technology for combating the adverse effects of long dispersive channels encountered in high-rate broadband wireless communication systems. However, for high bandwidth-efficiency and high power-efficiency systems, the channel can generally be modelled by the Hammerstein system that includes the nonlinear distortion effects of the high power amplifier (HPA) at the transmitter. For such nonlinear Hammerstein channels, the standard SC-FDE scheme no longer works. This paper advocates a complex-valued (CV) B-spline neural network based nonlinear SC-FDE scheme for Hammerstein channels. Specifically, we model the nonlinear HPA, which represents the CV static nonlinearity of the Hammerstein channel, by a CV B-spline neural network, and we develop two efficient alternating least squares schemes for estimating the parameters of the Hammerstein channel, including both the channel impulse response coefficients and the parameters of the CV B-spline model. We also use another CV B-spline neural network to model the inversion of the nonlinear HPA, and the parameters of this inverting B-spline model can easily be estimated using the standard least squares algorithm based on the pseudo training data obtained as a natural byproduct of the Hammerstein channel identification. Equalisation of the SC Hammerstein channel can then be accomplished by the usual one-tap linear equalisation in the frequency domain together with the inverse B-spline neural network model obtained in the time domain. Extensive simulation results are included to demonstrate the effectiveness of our nonlinear SC-FDE scheme for Hammerstein channels.
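The alternating least squares idea above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: a two-term polynomial stands in for the complex-valued B-spline network, and the channel taps, nonlinearity coefficients and noise level are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def basis(x):
    # Two-term polynomial basis standing in for the CV B-spline model
    return np.stack([x, x * np.abs(x)**2], axis=1)

def delayed_cols(v, L):
    # Regression matrix whose columns are delayed copies of v (causal FIR)
    V = np.zeros((len(v), L), dtype=complex)
    for l in range(L):
        V[l:, l] = v[:len(v) - l]
    return V

N, L = 2000, 3
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
a_true = np.array([1.0, 0.2 + 0.05j])        # invented HPA nonlinearity
h_true = np.array([1.0, 0.5 - 0.3j, 0.2j])   # invented channel impulse response
y = np.convolve(basis(x) @ a_true, h_true)[:N]
y += 0.001 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

a = np.array([1.0, 0.0], dtype=complex)      # start from a purely linear model
for _ in range(10):
    # Step 1: with the nonlinearity fixed, least squares for the channel
    h = np.linalg.lstsq(delayed_cols(basis(x) @ a, L), y, rcond=None)[0]
    # Step 2: with the channel fixed, least squares for the nonlinearity
    Phi = np.stack([np.convolve(basis(x)[:, k], h)[:N] for k in range(2)], axis=1)
    a = np.linalg.lstsq(Phi, y, rcond=None)[0]
    h, a = h * a[0], a / a[0]                # resolve the scale ambiguity
```

The rescaling in the last line is needed because a Hammerstein cascade only determines the nonlinearity and the channel up to a common scale factor.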

Relevance: 100.00%

Abstract:

A practical single-carrier (SC) block transmission with frequency domain equalisation (FDE) system can generally be modelled by the Hammerstein system that includes the nonlinear distortion effects of the high power amplifier (HPA) at the transmitter. For such Hammerstein channels, the standard SC-FDE scheme no longer works. We propose a novel B-spline neural network based nonlinear SC-FDE scheme for Hammerstein channels. In particular, we model the nonlinear HPA, which represents the complex-valued static nonlinearity of the Hammerstein channel, by two real-valued B-spline neural networks, one for modelling the nonlinear amplitude response of the HPA and the other for the nonlinear phase response of the HPA. We then develop an efficient alternating least squares algorithm for estimating the parameters of the Hammerstein channel, including the channel impulse response coefficients and the parameters of the two B-spline models. Moreover, we also use another real-valued B-spline neural network to model the inversion of the HPA's nonlinear amplitude response, and the parameters of this inverting B-spline model can be estimated using the standard least squares algorithm based on the pseudo training data obtained as a byproduct of the Hammerstein channel identification. Equalisation of the SC Hammerstein channel can then be accomplished by the usual one-tap linear equalisation in the frequency domain together with the inverse B-spline neural network model obtained in the time domain. The effectiveness of our nonlinear SC-FDE scheme for Hammerstein channels is demonstrated in a simulation study.
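The inverse-model step can be sketched similarly. Here an invented compressive cubic AM/AM curve plays the role of the identified amplitude response, and an ordinary polynomial replaces the real-valued B-spline network; the pseudo training pairs are exactly the (distorted, original) amplitudes that the identification makes available.

```python
import numpy as np

rng = np.random.default_rng(1)

def am_am(r):
    # Mildly compressive cubic AM/AM curve, an invented stand-in for the
    # identified amplitude response of the HPA
    return r - 0.15 * r**3

# Pseudo training data: (distorted amplitude, original amplitude) pairs,
# obtained "for free" once the Hammerstein channel has been identified
r_in = rng.uniform(0.0, 1.0, 400)
r_out = am_am(r_in)

# Fit the inverse AM/AM curve by ordinary least squares on a polynomial
# basis (the paper uses a real-valued B-spline network instead)
P = np.vander(r_out, 8, increasing=True)
c = np.linalg.lstsq(P, r_in, rcond=None)[0]

# Applying the fitted inverse after the distortion recovers the amplitude
r_test = np.linspace(0.1, 0.9, 9)
recovered = np.vander(am_am(r_test), 8, increasing=True) @ c
```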

Relevance: 100.00%

Abstract:

We discuss the feasibility of wireless terahertz communications links deployed in a metropolitan area and model the large-scale fading of such channels. The model takes into account reception through direct line of sight, ground and wall reflection, as well as diffraction around a corner. The movement of the receiver is modeled by an autonomous dynamic linear system in state space, whereas the geometric relations involved in the attenuation and multipath propagation of the electric field are described by a static nonlinear mapping. A subspace algorithm in conjunction with polynomial regression is used to identify a single-output Wiener model from time-domain measurements of the field intensity when the receiver motion is simulated using a constant angular speed and an exponentially decaying radius. The identification procedure is validated by using the model to perform q-step ahead predictions. The sensitivity of the algorithm to small-scale fading, detector noise, and atmospheric changes is discussed. The performance of the algorithm is tested in the diffraction zone assuming a range of emitter frequencies (2, 38, 60, 100, 140, and 400 GHz). Extensions of the simulation results to situations where a more complicated trajectory describes the motion of the receiver are also implemented, providing information on the performance of the algorithm under a worst case scenario. Finally, a sensitivity analysis to model parameters for the identified Wiener system is proposed.
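A full subspace identification is beyond a short sketch, but the Wiener structure itself (linear dynamics followed by a static nonlinearity) can be illustrated with the classical Bussgang correlation estimator standing in for the paper's subspace-plus-polynomial-regression procedure; the FIR filter and the tanh intensity mapping below are invented toys.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Wiener system: FIR linear dynamics followed by a static nonlinearity
g_true = np.array([1.0, 0.6, 0.25])          # invented impulse response
f = np.tanh                                  # invented intensity mapping

N = 20000
u = rng.standard_normal(N)                   # Gaussian excitation
y = f(np.convolve(u, g_true)[:N])            # measured "field intensity"

# Bussgang's theorem: for Gaussian input, the input/output cross-correlation
# is proportional to the linear part's impulse response
g_hat = np.array([u[:N - l] @ y[l:] for l in range(3)]) / N
g_hat *= g_true[0] / g_hat[0]                # fix the unknown gain (toy shortcut)

# With the linear part in hand, fit the static map by polynomial regression
v_hat = np.convolve(u, g_hat)[:N]
coef = np.polynomial.polynomial.polyfit(v_hat, y, 7)
resid = np.mean((np.polynomial.polynomial.polyval(v_hat, coef) - y)**2)
```

In a real identification the gain ambiguity is absorbed into the nonlinearity rather than resolved against the true filter, which is of course unknown.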

Relevance: 100.00%

Abstract:

We show that an analysis of the mean and variance of discrete wavelet coefficients of coaveraged time-domain interferograms can be used as a specification for determining when to stop coaveraging. We also show that, if a prediction model built in the wavelet domain is used to determine the composition of unknown samples, a stopping criterion for the coaveraging process can be developed with respect to the uncertainty tolerated in the prediction.
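A minimal sketch of such a stopping rule, assuming a one-level Haar transform, an invented toy interferogram and an arbitrary tolerance (the paper builds its actual specification from the mean and variance of the wavelet coefficients of the coaveraged interferograms):

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_level1(x):
    # One-level Haar DWT (orthonormal): approximation and detail coefficients
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

n, sigma, tol = 256, 0.5, 0.02
t = np.linspace(-1, 1, n)
clean = np.exp(-200 * t**2) * np.cos(40 * t)    # toy interferogram centre-burst

coeffs, running, k = [], np.zeros(n), 0
while True:
    k += 1
    scan = clean + sigma * rng.standard_normal(n)  # one noisy interferogram
    running += (scan - running) / k                # running coaverage
    a, _ = haar_level1(scan)
    coeffs.append(a)
    # Stop when the standard error of the mean wavelet coefficient falls
    # below the tolerated uncertainty
    if k >= 8 and np.max(np.std(coeffs, axis=0, ddof=1)) / np.sqrt(k) < tol:
        break
```

The number of coaveraged scans `k` at the stopping point is then directly tied to the uncertainty tolerated in whatever downstream prediction uses the coefficients.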

Relevance: 100.00%

Abstract:

Under anthropogenic climate change it is possible that the increased radiative forcing and associated changes in mean climate may affect the “dynamical equilibrium” of the climate system; leading to a change in the relative dominance of different modes of natural variability, the characteristics of their patterns or their behavior in the time domain. Here we use multi-century integrations of version three of the Hadley Centre atmosphere model coupled to a mixed layer ocean to examine potential changes in atmosphere-surface ocean modes of variability. After first evaluating the simulated modes of Northern Hemisphere winter surface temperature and geopotential height against observations, we examine their behavior under an idealized equilibrium doubling of atmospheric CO2. We find no significant changes in the order of dominance, the spatial patterns or the associated time series of the modes. Having established that the dynamic equilibrium is preserved in the model on doubling of CO2, we go on to examine the temperature pattern of mean climate change in terms of the modes of variability; the motivation being that the pattern of change might be explicable in terms of changes in the amount of time the system resides in a particular mode. In addition, if the two are closely related, we might be able to assess the relative credibility of different spatial patterns of climate change from different models (or model versions) by assessing their representation of variability. Significant shifts do appear to occur in the mean position of residence when examining a truncated set of the leading order modes. However, on examining the complete spectrum of modes, it is found that the mean climate change pattern is close to orthogonal to all of the modes and the large shifts are a manifestation of this orthogonality. The results suggest that care should be exercised in using a truncated set of variability EOFs to evaluate climate change signals.
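The final caveat, that a truncated EOF set can mislead when the change pattern is nearly orthogonal to the variability modes, is easy to reproduce on synthetic data (all fields and patterns below are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "climate" field: 500 time steps on 64 grid points whose
# variability lives in three fixed spatial modes plus weak noise
ntime, nspace = 500, 64
modes = np.linalg.qr(rng.standard_normal((nspace, 3)))[0]   # orthonormal patterns
pcs = rng.standard_normal((ntime, 3)) * np.array([3.0, 2.0, 1.0])
X = pcs @ modes.T + 0.1 * rng.standard_normal((ntime, nspace))

# EOFs are the right singular vectors of the centred (time x space) matrix
X = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(X, full_matrices=False)

# A unit-norm mean-change pattern built orthogonal to the variability modes
change = np.linalg.qr(np.c_[modes, rng.standard_normal(nspace)])[0][:, 3]

# Fraction of the change pattern captured by the three leading EOFs
frac = np.sum((Vt[:3] @ change)**2)
```

Even though the three leading EOFs explain almost all of the variability, they capture essentially none of the change pattern, so residence shifts computed in that truncated basis would be artefacts of the projection.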

Relevance: 100.00%

Abstract:

The complexity inherent in climate data makes it necessary to introduce more than one statistical tool to the researcher to gain insight into the climate system. Empirical orthogonal function (EOF) analysis is one of the most widely used methods to analyze weather/climate modes of variability and to reduce the dimensionality of the system. Simple structure rotation of EOFs can enhance interpretability of the obtained patterns but cannot provide anything more than temporal uncorrelatedness. In this paper, an alternative rotation method based on independent component analysis (ICA) is considered. The ICA is viewed here as a method of EOF rotation. Starting from an initial EOF solution, rather than rotating the loadings toward simplicity, ICA seeks a rotation matrix that maximizes the independence between the components in the time domain. If the underlying climate signals have an independent forcing, one can expect to find loadings with interpretable patterns whose time coefficients have properties that go beyond the simple noncorrelation observed in EOFs. The methodology is presented and an application to the monthly mean sea level pressure (SLP) field is discussed. Among the rotated (to independence) EOFs, the North Atlantic Oscillation (NAO) pattern, an Arctic Oscillation–like pattern, and a Scandinavian-like pattern have been identified. There is the suggestion that the NAO is an intrinsic mode of variability independent of the Pacific.
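The rotation-to-independence idea can be sketched with a small symmetric FastICA iteration applied to whitened EOF components; the two Laplace-distributed "climate signals", the loading patterns and the tanh contrast are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two independent, non-Gaussian signals mixed into a spatial field
T, nspace = 5000, 40
S = rng.laplace(size=(2, T))                     # independent sources
A = rng.standard_normal((nspace, 2))             # spatial loading patterns
X = (A @ S).T + 0.01 * rng.standard_normal((T, nspace))

# EOF step: PCA of the centred field; keep two whitened components
X = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Z = U[:, :2].T * np.sqrt(T)                      # unit-variance PC time series

# ICA step: rotate the whitened EOF components to maximise independence
# (symmetric FastICA with a tanh contrast)
W = np.linalg.qr(rng.standard_normal((2, 2)))[0]
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = (G @ Z.T) / T - np.diag((1 - G**2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W_new)
    W = u @ vt                                   # re-orthogonalise each sweep
Y = W @ Z                                        # rotated (independent) components

# Each rotated component should match one source up to sign and order
C = np.abs(np.corrcoef(np.vstack([S, Y]))[:2, 2:])
```

Because `W` is orthogonal, the result is exactly what the abstract describes: a rotation of the EOF solution, not a new decomposition.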

Relevance: 100.00%

Abstract:

We introduce a technique for assessing the diurnal development of convective storm systems based on outgoing longwave radiation fields. Using the size distribution of the storms measured from a series of images, we generate an array in the lengthscale-time domain based on the standard score statistic. It demonstrates succinctly the size evolution of storms as well as the dissipation kinematics. It also provides evidence related to the temperature evolution of the cloud tops. We apply this approach to a test case comparing observations made by the Geostationary Earth Radiation Budget instrument to output from the Met Office Unified Model run at two resolutions. The 12 km resolution model produces peak convective activity on all lengthscales significantly earlier in the day than shown by the observations and no evidence for storms growing in size. The 4 km resolution model shows realistic timing and growth evolution although the dissipation mechanism still differs from the observed data.
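The standard-score array itself is a one-liner once storm counts are binned by lengthscale and time; the diurnal cycle below is invented purely to show the mechanics.

```python
import numpy as np

hours = np.arange(24)
scales = np.array([50, 100, 200, 400])          # storm lengthscales, km (toy bins)
# Toy diurnal cycle: counts peak later in the day for larger storms
peak_hour = np.array([14, 15, 16, 17])
counts = np.exp(-0.5 * ((hours[None, :] - peak_hour[:, None]) / 3.0)**2) * 100

# Standard-score array in the lengthscale-time domain: each lengthscale
# row is normalised by its own diurnal mean and standard deviation
z = (counts - counts.mean(axis=1, keepdims=True)) / counts.std(axis=1, keepdims=True)

print(z.argmax(axis=1))   # → [14 15 16 17]
```

Reading the peak hour along each row of `z` makes the growth of the dominant lengthscale through the day immediately visible, which is the comparison drawn between the 12 km and 4 km model runs.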

Relevance: 100.00%

Abstract:

There is a need for better links between hydrology and ecology, specifically between landscapes and riverscapes, to understand how processes and factors controlling the transport and storage of environmental pollution have affected or will affect the freshwater biota. Here we show how the INCA modelling framework, specifically INCA-Sed (the Integrated Catchments model for Sediments), can be used to link sediment delivery from the landscape to sediment changes in-stream. INCA-Sed is a dynamic, process-based, daily time step model. The first complete description of the equations used in the INCA-Sed software (version 1.9.11) is presented. This is followed by an application of INCA-Sed made to the River Lugg (1077 km²) in Wales. Excess suspended sediment can negatively affect salmonid health. The Lugg has a large and potentially threatened population of both Atlantic salmon (Salmo salar) and Brown Trout (Salmo trutta). With the exception of the extreme sediment transport processes, the model satisfactorily simulated both the hydrology and the sediment dynamics in the catchment. Model results indicate that diffuse soil loss is the most important sediment generation process in the catchment. In the River Lugg, the mean annual Guideline Standard for suspended sediment concentration proposed by UKTAG, 25 mg l⁻¹, is only slightly exceeded during the simulation period (1995–2000), indicating only a minimal effect on the Atlantic salmon population. However, the daily time step simulation of INCA-Sed also allows the investigation of the critical spawning period. It shows that the sediment may have a significant negative effect on the fish population in years with high sediment runoff. It is proposed that the fine settled particles probably do not affect the salmonid egg incubation process, though suspended particles may damage the gills of fish and make the area unfavourable for spawning if the conditions do not improve.
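INCA-Sed itself is a full catchment model, but the value of a daily time step for checking the guideline is easy to illustrate on an invented daily series (the storm frequency, concentrations and spawning window below are all made up):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy daily suspended-sediment series: low base concentration plus storm spikes
ssc = 12 + 8 * rng.random(365)                 # mg/l, invented baseline
storms = rng.choice(365, 12, replace=False)    # a dozen invented storm days
ssc[storms] += rng.uniform(80, 200, 12)

guideline = 25.0                               # UKTAG mean annual standard, mg/l
annual_mean = ssc.mean()

# An annual mean can sit near the guideline while a daily series still
# reveals exceedances inside a (hypothetical) autumn spawning window
spawning = ssc[274:365]
n_exceed = int((spawning > guideline).sum())
print(round(annual_mean, 1), n_exceed)
```

The point of the sketch is the contrast: the annual-mean check alone would miss the storm days that fall inside the spawning window, which is exactly the diagnostic the daily model enables.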

Relevance: 100.00%

Abstract:

Asynchronous Optical Sampling (ASOPS) [1,2] and frequency comb spectrometry [3] based on dual Ti:sapphire resonators operated in a master/slave mode have the potential to improve the signal-to-noise ratio in THz transient and IR spectrometry. The multimode Brownian oscillator time-domain response function described by state-space models is a mathematically robust framework that can be used to describe the dispersive phenomena governed by Lorentzian, Debye and Drude responses. In addition, the optical properties of an arbitrary medium can be expressed as a linear combination of simple multimode Brownian oscillator functions. The suitability of a range of signal processing schemes adopted from the Systems Identification and Control Theory community for further processing the recorded THz transients in the time or frequency domain will be outlined [4,5]. Since a femtosecond duration pulse is capable of persistent excitation of the medium within which it propagates, such an approach is perfectly justifiable. Several de-noising routines based on system identification will be shown. Furthermore, specifically developed apodization structures will be discussed. These are necessary because, due to dispersion issues, the time-domain background and sample interferograms are non-symmetrical [6-8]. These procedures can lead to a more precise estimation of the complex insertion loss function. The algorithms are applicable to femtosecond spectroscopies across the EM spectrum. Finally, a methodology for femtosecond pulse shaping using genetic algorithms aiming to map and control molecular relaxation processes will be mentioned.
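The state-space description of a single Lorentzian (damped-oscillator) mode plus a Debye (first-order relaxation) mode, combined linearly as the abstract describes, can be sketched as follows; all parameters are arbitrary.

```python
import numpy as np

def expm_series(M, terms=20):
    # Matrix exponential by truncated power series (fine for small ||M||)
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

dt, n = 0.01, 2000
t = np.arange(n) * dt

# Lorentzian mode: damped oscillator in state-space form x' = A x
w0, gamma = 5.0, 0.4                    # invented resonance and damping
A_lor = np.array([[0.0, 1.0], [-w0**2, -2 * gamma]])
Ad = expm_series(A_lor * dt)            # exact one-step transition matrix
x = np.array([0.0, 1.0])                # impulse excites the velocity state
h_lor = np.empty(n)
for i in range(n):
    h_lor[i] = x[0]
    x = Ad @ x

# Debye mode: first-order relaxation, discretised exactly
tau = 0.5
h_debye = np.exp(-t / tau)

# The medium's response as a linear combination of the two modes
h = 0.7 * h_lor + 0.3 * h_debye
```

Because the discretisation uses the exact transition matrix, the simulated Lorentzian response matches the analytic damped sinusoid to machine precision, which is what makes the state-space form convenient for identification.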

Relevance: 100.00%

Abstract:

We discuss the use of pulse shaping for optimal excitation of samples in time-domain THz spectroscopy. Pulse shaping can be performed in a 4f optical system to specifications from state space models of the system's dynamics. Subspace algorithms may be used for the identification of the state space models.

Relevance: 100.00%

Abstract:

We model the large-scale fading of wireless THz communications links deployed in a metropolitan area, taking into account reception through direct line of sight, ground or wall reflection, and diffraction. The movement of the receiver in three dimensions is modelled by an autonomous dynamic linear system in state-space, whereas the geometric relations involved in the attenuation and multi-path propagation of the electric field are described by a static non-linear mapping. A subspace algorithm in conjunction with polynomial regression is used to identify a Wiener model from time-domain measurements of the field intensity.

Relevance: 100.00%

Abstract:

This work compares classification results of lactose, mandelic acid and dl-mandelic acid, obtained on the basis of their respective THz transients. The performance of three different pre-processing algorithms applied to the time-domain signatures obtained using a THz-transient spectrometer is contrasted by evaluating the classifier performance. A range of amplitudes of zero-mean white Gaussian noise is used to artificially degrade the signal-to-noise ratio of the time-domain signatures to generate the data sets that are presented to the classifier for both learning and validation purposes. This gradual degradation of interferograms by increasing the noise level is equivalent to performing measurements assuming a reduced integration time. Three signal processing algorithms were adopted for the evaluation of the complex insertion loss function of the samples under study: a) standard evaluation by ratioing the sample with the background spectra; b) a subspace identification algorithm; and c) a novel wavelet-packet identification procedure. Within-class and between-class dispersion metrics are adopted for the three data sets. A discrimination metric evaluates how well the three classes can be distinguished within the frequency range 0.1–1.0 THz using the above algorithms.
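Option (a), ratioing the sample with the background spectra, is the simplest of the three and can be sketched directly; the Gaussian transients, attenuation and delay below are invented.

```python
import numpy as np

n, dt = 1024, 0.05                      # sampling step in ps (toy numbers)
t = np.arange(n) * dt
background = np.exp(-0.5 * ((t - 10) / 0.3)**2)       # reference THz transient

alpha, delay = 0.6, 2.0                 # invented sample attenuation and delay (ps)
sample = alpha * np.exp(-0.5 * ((t - 10 - delay) / 0.3)**2)

# Complex insertion loss by ratioing sample and background spectra
B, S = np.fft.rfft(background), np.fft.rfft(sample)
f = np.fft.rfftfreq(n, dt)              # THz
H = S / B

# Inspect the usable 0.1-1.0 THz window: flat magnitude = attenuation,
# linear phase slope = group delay
band = (f > 0.1) & (f < 1.0)
phase = np.unwrap(np.angle(H[band]))
delay_est = -np.polyfit(f[band], phase, 1)[0] / (2 * np.pi)
```

Dividing out the background removes the instrument response, leaving the attenuation in |H| and the optical delay in the phase slope; the subspace and wavelet-packet alternatives in (b) and (c) aim to make this estimate more noise-robust.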

Relevance: 100.00%

Abstract:

This work compares and contrasts results of classifying time-domain ECG signals with pathological conditions taken from the MIT-BIH arrhythmia database. Linear discriminant analysis and a multi-layer perceptron were used as classifiers. The neural network was trained by two different methods, namely back-propagation and a genetic algorithm. Converting the time-domain signal into the wavelet domain reduced the dimensionality of the problem at least 10-fold. This was achieved using wavelets from the db6 family as well as using adaptive wavelets generated using two different strategies. The wavelet transforms used in this study were limited to two decomposition levels. A neural network with evolved weights proved to be the best classifier, with a maximum of 99.6% accuracy when optimised wavelet-transform ECG data was presented to its input and 95.9% accuracy when the signals presented to its input were decomposed using db6 wavelets. The linear discriminant analysis achieved a maximum classification accuracy of 95.7% when presented with optimised and 95.5% with db6 wavelet coefficients. It is shown that the much simpler signal representation of a few wavelet coefficients obtained through an optimised discrete wavelet transform considerably facilitates the task of classifying non-stationary time-variant signals. In addition, the results indicate that wavelet optimisation may improve the classification ability of a neural network.
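The reduce-dimensionality-then-classify pipeline can be sketched with a Haar transform standing in for db6 and Fisher's linear discriminant for the LDA step, on invented two-class "beats".

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_approx(x, levels=2):
    # Two-level Haar DWT; keep only the coarse approximation as features
    for _ in range(levels):
        x = (x[0::2] + x[1::2]) / np.sqrt(2)
    return x

def make_beat(kind, n=128):
    # Invented "normal" vs "broad" beat shapes with additive noise
    t = np.linspace(0, 1, n)
    width = 0.03 if kind == 0 else 0.08
    return np.exp(-0.5 * ((t - 0.5) / width)**2) + 0.05 * rng.standard_normal(n)

X = np.array([haar_approx(make_beat(k % 2)) for k in range(400)])
y = np.arange(400) % 2

# Fisher linear discriminant on the 32 wavelet features (128 -> 32, a 4x cut)
m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
w = np.linalg.solve(Sw + 1e-6 * np.eye(32), m1 - m0)
thresh = w @ (m0 + m1) / 2
pred = (X @ w > thresh).astype(int)
acc = (pred == y).mean()
```

Even this crude stand-in shows the mechanism the abstract relies on: a handful of wavelet coefficients carry enough shape information that a linear classifier separates the classes easily.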