865 results for "Time equivalent approach"
Abstract:
This paper presents the H∞ synchronization problem for the master and slave structure of second-order neutral master-slave systems with time-varying delays. Delay-dependent sufficient conditions for the design of a delayed output-feedback control are derived by the Lyapunov-Krasovskii method in terms of a linear matrix inequality (LMI). A controller that guarantees H∞ synchronization of the master and slave structure is then developed using some free weighting matrices. A numerical example is given to show the effectiveness of the method.
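The delay-dependent conditions mentioned above are posed as linear matrix inequality (LMI) feasibility problems. As a much simpler illustration of that machinery (not the paper's actual synchronization conditions), the sketch below checks a plain Lyapunov LMI, finding P > 0 with A'P + PA < 0, using cvxpy; the system matrix A and tolerance eps are invented for the example.

```python
# Minimal LMI feasibility sketch (illustrative only): find P > 0 with A'P + PA < 0.
# This is a plain Lyapunov LMI, far simpler than the delay-dependent conditions
# referenced in the abstract; A and eps are hypothetical.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # example stable system matrix (assumed)
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print("LMI feasible:", prob.status == "optimal")
print("P =", P.value)
```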
Abstract:
This document examines the time-series properties of the wage differentials that arise between the public and private sector in Colombia during the sample period 1984 to 2005. We find conflicting results in unit-root and stationarity tests when looking at wage differentials at an aggregate level (such as for men, women or both). However, when we analyse wage differentials at higher levels of disaggregation, treat them jointly as a panel of data, and allow for the presence of potential cross-sectional dependence, there is more supportive evidence for the view that wage differentials are stationary. This implies that although wage differentials do exist, they have not been consistently increasing (or decreasing) over time.
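The unit-root versus stationarity question described above is commonly examined with tests such as the augmented Dickey-Fuller test. The sketch below applies a minimal version to a synthetic wage-differential series using statsmodels; the series, its AR(1) parameters and the lag selection are invented for illustration and are not the paper's data or test battery.

```python
# Minimal unit-root test sketch on a synthetic series (not the paper's data).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
# Hypothetical stationary wage-differential series: AR(1) around a constant gap.
diff = np.empty(264)                      # ~22 years of monthly observations
diff[0] = 0.15
for t in range(1, diff.size):
    diff[t] = 0.03 + 0.8 * diff[t - 1] + 0.01 * rng.standard_normal()

stat, pvalue, usedlag, nobs, crit, _ = adfuller(diff, autolag="AIC")
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
# A small p-value rejects the unit-root null, i.e. supports stationarity.
```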
Abstract:
In this paper, investment cost asymmetry is introduced in order to test whether this kind of asymmetry can account for asymmetries in business cycles. By using a smooth transition function, asymmetric investment cost is modeled and introduced in a canonical RBC model. Simulations of the model with the Perturbation Method (PM) are very close to simulations through the Parameterized Expectations Algorithm (PEA), which allows the use of the former for the sake of reducing time and computational costs. Both symmetric and asymmetric models were simulated and compared. Deterministic and stochastic impulse-response exercises revealed that it is possible to adequately reproduce asymmetric business cycles by modeling asymmetric investment costs. Simulations also showed that higher-order moments are insufficient to detect asymmetries. Instead, methods such as Generalized Impulse Response Analysis (GIRA) and nonlinear econometrics prove to be more efficient diagnostic tools.
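A common way to make an adjustment cost asymmetric is to weight two cost regimes with a logistic smooth transition function. The sketch below illustrates that idea with invented parameter values; it is not the paper's exact specification or calibration.

```python
# Illustrative asymmetric investment-cost function built from a logistic
# smooth transition between a "low cost" and a "high cost" regime.
# Parameter values (gamma, c, phi_pos, phi_neg) are hypothetical.
import numpy as np

def smooth_transition(x, gamma=25.0, c=0.0):
    """Logistic weight in [0, 1]; gamma sets smoothness, c the threshold."""
    return 1.0 / (1.0 + np.exp(-gamma * (x - c)))

def investment_cost(delta_i, phi_pos=4.0, phi_neg=1.0):
    """Quadratic adjustment cost whose curvature differs for increases
    (delta_i > 0) and decreases (delta_i < 0) in investment."""
    w = smooth_transition(delta_i)
    phi = w * phi_pos + (1.0 - w) * phi_neg
    return 0.5 * phi * delta_i ** 2

for d in (-0.05, -0.01, 0.01, 0.05):
    print(f"delta_i = {d:+.2f} -> cost = {investment_cost(d):.6f}")
```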
Abstract:
Abstract based on the one from the publication.
Abstract:
The objective of this paper is to introduce a different approach, called the ecological-longitudinal, to carrying out pooled analysis in time-series ecological studies. Because it gives a larger number of data points and, hence, increases the statistical power of the analysis, this approach, unlike conventional ones, allows the accommodation of features such as random effect models, lags, and interactions between pollutants and between pollutants and meteorological variables, which are hard to implement in conventional approaches. Design—The approach is illustrated by providing quantitative estimates of the short-term effects of air pollution on mortality in three Spanish cities, Barcelona, Valencia and Vigo, for the period 1992–1994. Because the dependent variable was a count, a Poisson generalised linear model was first specified. Several modelling issues are worth mentioning. Firstly, because the relations between mortality and explanatory variables were nonlinear, cubic splines were used for covariate control, leading to a generalised additive model (GAM). Secondly, the effects of the predictors on the response were allowed to occur with some lag. Thirdly, the residual autocorrelation, due to imperfect control, was controlled for by means of an autoregressive Poisson GAM. Finally, the longitudinal design demanded consideration of individual heterogeneity, requiring the use of mixed models. Main results—The estimates of the relative risks obtained from the individual analyses varied across cities, particularly those associated with sulphur dioxide. The highest relative risks corresponded to black smoke in Valencia. These estimates were higher than those obtained from the ecological-longitudinal analysis. Relative risks estimated from this latter analysis were practically identical across cities, 1.00638 (95% confidence interval 1.0002, 1.0011) for a black smoke increase of 10 μg/m3 and 1.00415 (95% CI 1.0001, 1.0007) for an increase of 10 μg/m3 of sulphur dioxide. Because the statistical power is higher than in the individual analysis, more interactions were statistically significant, especially those among air pollutants and meteorological variables. Conclusions—Air pollutant levels were related to mortality in the three cities of the study, Barcelona, Valencia and Vigo. These results were consistent with similar studies in other cities and with other multicentric studies, and coherent with both the previous individual analyses for each city and the multicentric studies for all three cities.
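As a rough illustration of the modelling chain described above (a Poisson regression with a spline for a meteorological covariate and a lagged pollutant term), the sketch below fits such a model with statsmodels on simulated data; the data, spline degrees of freedom and lag are invented, and the specification is far simpler than the autoregressive mixed GAM used in the study.

```python
# Toy Poisson regression with a cubic-spline temperature term and a lagged
# pollutant exposure, loosely mimicking the modelling steps in the abstract.
# All data are simulated; this is not the study's model.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1096                                   # ~3 years of daily counts
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
smoke = np.clip(40 + rng.normal(0, 10, n), 5, None)        # black smoke, ug/m3
lam = np.exp(3.0 + 0.0006 * np.roll(smoke, 1) + 0.01 * np.abs(temp - 20))
deaths = rng.poisson(lam)

df = pd.DataFrame({"deaths": deaths, "temp": temp,
                   "smoke_lag1": np.roll(smoke, 1)}).iloc[1:]  # drop roll artefact

model = smf.glm("deaths ~ bs(temp, df=4) + smoke_lag1", data=df,
                family=sm.families.Poisson()).fit()
rr_10 = np.exp(10 * model.params["smoke_lag1"])   # relative risk per +10 ug/m3
print(f"Estimated RR for +10 ug/m3 of black smoke (lag 1): {rr_10:.5f}")
```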
Abstract:
Compositional data, also called multiplicative ipsative data, are common in survey research instruments in areas such as time use, budget expenditure and social networks. Compositional data are usually expressed as proportions of a total, whose sum can only be 1. Owing to their constrained nature, statistical analysis in general, and estimation of measurement quality with a confirmatory factor analysis model for multitrait-multimethod (MTMM) designs in particular, are challenging tasks. Compositional data are highly non-normal, as they range within the 0-1 interval. One component can only increase if some other(s) decrease, which results in spurious negative correlations among components which cannot be accounted for by the MTMM model parameters. In this article we show how researchers can use the correlated uniqueness model for MTMM designs in order to evaluate the measurement quality of compositional indicators. We suggest using the additive log-ratio transformation of the data, discuss several approaches to deal with zero components and explain how the interpretation of MTMM designs differs from the application to standard unconstrained data. We illustrate the method on data of social network composition expressed in percentages of partner, family, friends and other members, in which we conclude that the face-to-face collection mode is generally superior to the telephone mode, although primacy effects are higher in the face-to-face mode. Compositions of strong ties (such as partner) are measured with higher quality than those of weaker ties (such as other network members).
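The additive log-ratio transformation mentioned above maps a D-part composition to D-1 unconstrained variables by taking logs of ratios against a reference component. The sketch below shows a minimal version, with a simple small-value replacement for zero components; the replacement strategy and the example shares are assumptions for illustration, not the article's choices.

```python
# Minimal additive log-ratio (ALR) transform for compositional data.
# Zero handling here (small-value replacement followed by re-closure) is one
# common option, not necessarily the approach taken in the article.
import numpy as np

def alr(composition, zero_replacement=0.005):
    """ALR transform using the last component as the reference part."""
    x = np.asarray(composition, dtype=float)
    x = np.where(x == 0, zero_replacement, x)    # replace zeros (assumed strategy)
    x = x / x.sum(axis=-1, keepdims=True)         # re-close so each row sums to 1
    return np.log(x[..., :-1] / x[..., -1:])

# Shares of partner, family, friends, other network members (made-up rows).
network = np.array([[0.40, 0.30, 0.25, 0.05],
                    [0.10, 0.50, 0.00, 0.40]])
print(alr(network))   # 3 unconstrained coordinates per row, "other" as reference
```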
Abstract:
This thesis aims to explore computationally reliable and efficient approaches to contractive MPC for discrete-time systems. Two types of contractive MPC have been studied: MPC with a mandatory contractive constraint and MPC with a contractive sequence of controllable sets. Techniques based on convex optimisation and interval analysis are applied to deal with linear and nonlinear contractive MPC, respectively. Classical interval analysis is extended to zonotope geometry in order to design a terminal control invariant set for dual-mode MPC. It is also extended to modal intervals to take modality into account when computing robust controllable sets with a clear semantic interpretation. The tools of convex optimisation and interval analysis have been combined to improve the efficiency of contractive MPC for several classes of constrained uncertain nonlinear discrete-time systems. Finally, the two types of contractive MPC addressed have been applied to control a Micro Robot World Cup Soccer Tournament (MiroSot) robot and a Continuous Stirred Tank Reactor (CSTR), respectively.
Abstract:
The characteristics of service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of the network resources and the desired quality of service for higher-layer applications. Window flow control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the utilisation of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of PC utilisation is compared with QoS parameters in buffer-less environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for CAC in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well. Furthermore, this estimation is always conservative, allowing the retention of the network performance guarantees. Several experiments have been carried out and investigated to explain the deviation between the proposed method and the simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments to confine the limits of the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay is a limit that cannot be dismissed for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method used in bandwidth allocation. This method gives enough accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations. To overcome these drawbacks, a new method of evaluation is analysed: the Enhanced Convolution Approach (ECA). In ECA, traffic is grouped in classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each class of traffic is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage requirements, especially in complex scenarios. Sorting is the dominant factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj); an expression for the CLRj evaluation is also presented. We can conclude that, by combining the ECA method with cut-off mechanisms, utilisation of ECA in real-time CAC environments as a single-level scheme is always possible.
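The class-based convolution idea can be pictured numerically: for identical sources within a class the occupancy distribution is binomial, and convolving the class distributions gives the aggregate bandwidth demand, from which a congestion probability follows. The sketch below uses invented source parameters and a simple on/off binomial per-class model, so it only mirrors the structure of the ECA described above rather than reproducing its formulas or cut-off mechanisms.

```python
# Toy class-based convolution for CAC: sources within a class are identical
# on/off sources, so the class occupancy is binomial; class distributions are
# then convolved to get the aggregate demand. Parameters are hypothetical.
import numpy as np
from scipy.stats import binom

def class_distribution(n_sources, p_active, peak_rate):
    """Probability mass over the bandwidth demanded by one traffic class."""
    demand = np.arange(n_sources + 1) * peak_rate
    pmf = binom.pmf(np.arange(n_sources + 1), n_sources, p_active)
    return demand, pmf

def convolve_classes(classes):
    """Convolve per-class distributions into the aggregate demand distribution."""
    agg_demand, agg_pmf = np.array([0.0]), np.array([1.0])
    for demand, pmf in classes:
        agg_demand = (agg_demand[:, None] + demand[None, :]).ravel()
        agg_pmf = (agg_pmf[:, None] * pmf[None, :]).ravel()
    return agg_demand, agg_pmf

classes = [class_distribution(30, 0.2, 2.0),    # class 1: 30 sources, 2 Mb/s peak
           class_distribution(10, 0.5, 10.0)]   # class 2: 10 sources, 10 Mb/s peak
demand, pmf = convolve_classes(classes)

link_capacity = 100.0                           # Mb/s, assumed
p_congestion = pmf[demand > link_capacity].sum()
print(f"Probability of congestion: {p_congestion:.3e}")
```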
Abstract:
The proposal presented in this thesis is to provide designers of knowledge-based supervisory systems for dynamic systems with a framework that facilitates their tasks, avoiding interface problems among tools, data flow and management. The approach is intended to be useful to both control and process engineers in assisting their tasks. The use of AI technologies to diagnose and perform control loops and, of course, to assist process supervisory tasks such as fault detection and diagnosis is within the scope of this work. Special effort has been put into the integration of tools for assisting expert supervisory systems design. With this aim, the experience of Computer Aided Control Systems Design (CACSD) frameworks has been analysed and used to design a Computer Aided Supervisory Systems Design (CASSD) framework. In this sense, some basic facilities are required to be available in this proposed framework: …
Abstract:
The time-of-detection method for aural avian point counts is a new method of estimating abundance, allowing for uncertain probability of detection. The method has been specifically designed to allow for variation in singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording the detection history of the subintervals when each bird sings. The method can be viewed as generating data equivalent to closed capture–recapture information. The method is different from the distance and multiple-observer methods in that it is not required that all the birds sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused both by the very low detection probabilities of all distant individuals and by individuals with low singing rates also having very low detection probabilities.
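The Markovian singing process described above (singing bouts followed by periods of silence) is straightforward to simulate. The sketch below generates detection histories over four 2-minute subintervals for a set of birds; the transition probabilities, time step and detection rule are invented for illustration and are not the settings of the field test.

```python
# Toy simulation of the time-of-detection design: each bird alternates between
# "singing" and "silent" states following a two-state Markov chain, and is
# recorded in a 2-minute subinterval if it sings at least once during it.
# Transition and detection parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n_birds = 50
n_intervals = 4            # four 2-minute subintervals of an 8-minute count
steps_per_interval = 12    # 10-second steps within each subinterval
p_start_singing = 0.10     # silent -> singing per step
p_keep_singing = 0.70      # singing -> singing per step (singing bouts)

histories = np.zeros((n_birds, n_intervals), dtype=int)
for b in range(n_birds):
    singing = False
    for i in range(n_intervals):
        detected = False
        for _ in range(steps_per_interval):
            p = p_keep_singing if singing else p_start_singing
            singing = rng.random() < p
            detected = detected or singing
        histories[b, i] = int(detected)

detected_birds = histories.any(axis=1).sum()
print(f"{detected_birds} of {n_birds} simulated birds detected at least once")
print("example detection histories:\n", histories[:5])
```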
Abstract:
In principle the global mean geostrophic surface circulation of the ocean can be diagnosed by subtracting a geoid from a mean sea surface (MSS). However, because the resulting mean dynamic topography (MDT) is approximately two orders of magnitude smaller than either of the constituent surfaces, and because the geoid is most naturally expressed as a spectral model while the MSS is a gridded product, in practice complications arise. Two algorithms for combining MSS and satellite-derived geoid data to determine the ocean’s MDT are considered in this paper: a pointwise approach, whereby the gridded geoid height field is subtracted from the gridded MSS; and a spectral approach, whereby the spherical harmonic coefficients of the geoid are subtracted from an equivalent set of coefficients representing the MSS, from which the gridded MDT is then obtained. The essential difference is that with the latter approach the MSS is truncated, a form of filtering, just as with the geoid. This ensures that errors of omission resulting from the truncation of the geoid, which are small in comparison to the geoid but large in comparison to the MDT, are matched, and therefore negated, by similar errors of omission in the MSS. The MDTs produced by both methods require additional filtering. However, the spectral MDT requires less filtering to remove noise, and therefore it retains more oceanographic information than its pointwise equivalent. The spectral method also results in a more realistic MDT at coastlines.
1. Introduction. An important challenge in oceanography is the accurate determination of the ocean’s time-mean dynamic topography (MDT). If this can be achieved with sufficient accuracy for combination with the time-dependent component of the dynamic topography, obtainable from altimetric data, then the resulting sum (i.e., the absolute dynamic topography) will give an accurate picture of surface geostrophic currents and ocean transports.
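The essential point of the spectral approach, truncating the MSS to the same maximum degree as the geoid before differencing so that errors of omission cancel, can be pictured with a toy coefficient calculation. The sketch below uses random placeholder coefficient arrays rather than real geoid or MSS models.

```python
# Toy illustration of the spectral MDT approach: the MSS coefficients are
# truncated to the geoid's maximum degree before differencing, so omission
# errors above that degree cancel. All coefficient arrays are random
# placeholders, not real geoid or mean sea surface models.
import numpy as np

rng = np.random.default_rng(7)
n_mss_coeffs = 5000      # coefficients of a (hypothetical) high-degree MSS expansion
n_geoid_coeffs = 2000    # coefficients of a (hypothetical) satellite-only geoid

mss = rng.normal(0.0, 1.0, n_mss_coeffs)
geoid = mss[:n_geoid_coeffs] + rng.normal(0.0, 1e-2, n_geoid_coeffs)  # geoid ~ MSS minus a small MDT signal

# Spectral approach: truncate the MSS to match the geoid before differencing.
mdt_spectral = mss[:n_geoid_coeffs] - geoid
print("RMS of spectral MDT coefficients:", np.sqrt(np.mean(mdt_spectral ** 2)))

# In a pointwise (gridded) difference the MSS signal above the geoid's maximum
# degree has no counterpart in the geoid and leaks into the MDT as noise.
omission_rms = np.sqrt(np.mean(mss[n_geoid_coeffs:] ** 2))
print("RMS of unmatched MSS signal (pointwise leakage):", omission_rms)
```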
Abstract:
Sensible and latent heat fluxes are often calculated from bulk transfer equations combined with the energy balance. For spatial estimates of these fluxes, a combination of remotely sensed and standard meteorological data from weather stations is used. The success of this approach depends on the accuracy of the input data and on the accuracy of two variables in particular: aerodynamic and surface conductance. This paper presents a Bayesian approach to improve estimates of sensible and latent heat fluxes by using a priori estimates of aerodynamic and surface conductance alongside remote measurements of surface temperature. The method is validated for time series of half-hourly measurements in a fully grown maize field, a vineyard and a forest. It is shown that the Bayesian approach yields more accurate estimates of sensible and latent heat flux than traditional methods.
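One way to picture the Bayesian step described above is a grid approximation: place a prior over surface conductance, score each candidate value by how well the modelled surface temperature matches the remotely sensed one, and normalise. The sketch below does this with a deliberately simplified, hypothetical forward model that merely stands in for the bulk transfer and energy balance equations used in the paper.

```python
# Grid-approximation Bayesian update of surface conductance from an observed
# radiometric surface temperature. The "forward model" linking conductance to
# surface temperature is a stand-in toy expression, not the paper's bulk
# transfer and energy balance formulation.
import numpy as np

def toy_surface_temp(g_s, t_air=25.0, rn=400.0):
    """Hypothetical forward model: higher surface conductance -> more
    evaporative cooling -> surface temperature closer to air temperature."""
    return t_air + (rn / 80.0) / (1.0 + g_s / 5.0)   # degrees C, invented form

g_grid = np.linspace(0.5, 30.0, 300)                      # candidate conductances, mm/s
prior = np.exp(-0.5 * ((g_grid - 10.0) / 4.0) ** 2)       # a priori estimate
prior /= prior.sum()

t_obs, sigma_t = 28.0, 0.8                                # remote surface temp and its error
likelihood = np.exp(-0.5 * ((t_obs - toy_surface_temp(g_grid)) / sigma_t) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum()
print(f"Posterior mean conductance: {np.sum(g_grid * posterior):.2f} mm/s")
```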
Abstract:
A pot experiment was conducted to test the hypothesis that decomposition of organic matter in sewage sludge and the consequent formation of dissolved organic compounds (DOC) would lead to an increase in the bioavailability of the heavy metals. Two Brown Earth soils, one with clayey loam texture (CL) and the other a loamy sand (LS), were mixed with sewage sludge at rates equivalent to 0, 10 and 50 t dry sludge ha(-1) and the pots were sown with ryegrass (Lolium perenne L.). The organic matter content and heavy metal availability, assessed with soil extractions with 0.05 M CaCl2, were monitored over a residual period of two years after addition of the sludge, while plant uptake was monitored over one year. It was found that the concentrations of Cd and Ni in both the ryegrass and the soil extracts increased slightly but significantly during the first year. In most cases, this increase was most evident at the higher sludge application rate (50 t ha(-1)). However, in the second year metal availability reached a plateau. Zinc concentrations in the ryegrass did not show an increase, but those in the CaCl2 extracts increased during the first year. In contrast, organic matter content decreased rapidly in the first months of the first year and much more slowly in the second (total decrease of 16%). The concentrations of DOC increased significantly in the more organic-rich CL soil in the course of the two years. The pattern followed by the decomposition of organic matter with time and the production of DOC may provide at least a partial explanation for the trend towards increased metal availability.
Abstract:
Much of the writing on urban regeneration in the UK has been focused on the types of urban spaces that have been created in city centres. Less has been written about the issue of when the benefits of regeneration could and should be delivered to a range of different interests, and the different time frames that exist in any development area. Different perceptions of time have been reflected in dominant development philosophies in the UK and elsewhere. The trickle-down agendas of the 1980s, for example, were criticised for their focus on the short-term time frames and needs of developers, often at the expense of those of local communities. The recent emergence of sustainability discourses, however, ostensibly changes the time focus of development and promotes a broader concern with new imagined futures. This paper draws on the example of development in Salford Quays, in the North West of England, to argue that more attention needs to be given to the politics of space-time in urban development processes. It begins by discussing the importance and relevance of this approach before turning to the case study and the ways in which the local politics of space-time has influenced development agendas and outcomes. The paper argues that such an approach harbours the potential for more progressive, far-reaching, and sustainable development agendas to be developed and implemented.
Abstract:
A new calibration curve for the conversion of radiocarbon ages to calibrated (cal) ages has been constructed and internationally ratified to replace IntCal98, which extended from 0-24 cal kyr BP (Before Present, 0 cal BP = AD 1950). The new calibration data set for terrestrial samples extends from 0-26 cal kyr BP, but with much higher resolution beyond 11.4 cal kyr BP than IntCal98. Dendrochronologically dated tree-ring samples cover the period from 0-12.4 cal kyr BP. Beyond the end of the tree rings, data from marine records (corals and foraminifera) are converted to the atmospheric equivalent with a site-specific marine reservoir correction to provide terrestrial calibration from 12.4-26.0 cal kyr BP. A substantial enhancement relative to IntCal98 is the introduction of a coherent statistical approach based on a random walk model, which takes into account the uncertainty in both the calendar age and the C-14 age to calculate the underlying calibration curve (Buck and Blackwell, this issue). The tree-ring data sets, sources of uncertainty, and regional offsets are discussed here. The marine data sets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed in brief, but details are presented in Hughen et al. (this issue a). We do not make a recommendation for calibration beyond 26 cal kyr BP at this time; however, potential calibration data sets are compared in another paper (van der Plicht et al., this issue).
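Given a calibration curve (radiocarbon age and uncertainty as a function of calendar age), calibrating a measured radiocarbon date amounts to evaluating, for each calendar year, how probable the measurement is under the curve and normalising. The sketch below does this on a small synthetic curve; the curve shape, errors and sample values are invented and are not IntCal04 data.

```python
# Toy radiocarbon calibration: for each candidate calendar age, the likelihood
# of the measured 14C age is Gaussian with variance combining measurement and
# curve uncertainties; normalising gives a probability over calendar ages.
# The "calibration curve" below is synthetic, not IntCal04.
import numpy as np

cal_age = np.arange(0.0, 26000.0, 5.0)                     # cal yr BP grid
curve_c14 = cal_age * 0.97 + 300 * np.sin(cal_age / 800)   # synthetic curve shape
curve_sigma = 20.0 + cal_age / 2000.0                      # synthetic curve error

measured_c14, measured_sigma = 11000.0, 40.0               # hypothetical sample

total_var = measured_sigma ** 2 + curve_sigma ** 2
likelihood = np.exp(-0.5 * (measured_c14 - curve_c14) ** 2 / total_var) / np.sqrt(total_var)
posterior = likelihood / likelihood.sum()                  # uniform grid, so a sum suffices

mean_cal = np.sum(cal_age * posterior)
print(f"Calibrated age (posterior mean): {mean_cal:.0f} cal yr BP")
```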