930 results for Time equivalent approach
Abstract:
2010 Mathematics Subject Classification: 60J80.
Abstract:
We present new methodologies to generate rational function approximations of broadband electromagnetic responses of linear and passive networks of high-speed interconnects, and to construct SPICE-compatible, equivalent circuit representations of the generated rational functions. These new methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process, and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first one relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input early-time transient response data to the late-time response, while at the same time providing a means to both interpolate and extrapolate the frequency-domain data used. This hybrid methodology can be considered a generalization of frequency-domain rational function fitting, which utilizes frequency-domain response data only, and of time-domain rational function fitting, which utilizes transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data. The availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide for a more stable rational function fitting process utilizing a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its utilization of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied. It is shown that, with regard to the computational cost of the rational function fitting process, such element-by-element rational function fitting is more advantageous than full-matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational functions of different elements of the network matrix, such an approach provides improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
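As an aside, the numerical core common to such rational fitting schemes can be illustrated compactly. The sketch below is a generic least-squares illustration, not the authors' algorithm: it fits one network-matrix element with a pole-residue model for a fixed set of assumed stable poles, with a hypothetical sample response and frequency range.

```python
# Minimal sketch: least-squares pole-residue fit of one network-matrix
# element H(s), with a fixed set of assumed stable poles. A full
# vector-fitting-style scheme would also relocate the poles iteratively;
# the response below is synthetic so the fit is essentially exact.
import numpy as np

freqs = np.linspace(1e6, 1e9, 400)          # hypothetical frequency samples (Hz)
s = 2j * np.pi * freqs
poles = -2 * np.pi * np.logspace(6, 9, 8)   # assumed real, stable poles (rad/s)

# Hypothetical "measured" response of one matrix element.
H = 1.0 / (s - poles[2]) + 0.5 / (s - poles[5])

# Solve H(s) ~ d + sum_k r_k / (s - p_k) in the least-squares sense,
# stacking real and imaginary parts to keep the unknowns real.
A = np.column_stack([1.0 / (s - p) for p in poles] + [np.ones_like(s)])
x, *_ = np.linalg.lstsq(
    np.vstack([A.real, A.imag]),
    np.concatenate([H.real, H.imag]),
    rcond=None,
)
residues, d = x[:-1], x[-1]
fit = A @ np.concatenate([residues, [d]])
print("max fit error:", np.max(np.abs(fit - H)))
```

Fixing the poles keeps the element-by-element idea visible in a few lines; each matrix element would get its own fit of this form.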
Abstract:
Markov chain analysis was recently proposed to assess the time scales and preferential pathways in biological or physical networks by computing residence time, first passage time, rates of transfer between nodes, and the number of passages in a node. We propose to adapt an algorithm already published for simple systems to physical systems described with a high-resolution hydrodynamic model. The method is applied to bays and estuaries on the eastern coast of Canada that are of interest for shellfish aquaculture. Current velocities were computed on a two-dimensional grid of elements, and circulation patterns were summarized by averaging Eulerian flows between adjacent elements. Flows and volumes allow computing the probabilities of transition between elements and assessing the average time needed by virtual particles to move from one element to another, the rate of transfer between two elements, and the average residence time of each system. We also combined transfer rates and times to assess the main pathways of virtual particles released in farmed areas and the potential influence of farmed areas on other areas. We suggest that Markov chain analysis is complementary to other sets of ecological indicators proposed to analyse the interactions between farmed areas, e.g. the depletion index and carrying capacity assessment. Markov chain analysis has several advantages with respect to the estimation of connectivity between pairs of sites: it makes it possible to estimate transfer rates and times at once, in a very quick and efficient way, without the need to perform long-term simulations of particle or tracer concentration.
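The quantities mentioned above follow from standard absorbing-chain algebra. The sketch below is a minimal illustration with a hypothetical four-element transition matrix, rather than one derived from a hydrodynamic model's flows and volumes; it computes the mean first passage time from every element to a target element.

```python
# Minimal sketch: mean first passage time to a target element from a
# transition probability matrix. The 4-element system is hypothetical;
# in practice P would be built from inter-element Eulerian flows and
# element volumes.
import numpy as np

P = np.array([
    [0.6, 0.3, 0.1, 0.0],
    [0.2, 0.5, 0.2, 0.1],
    [0.0, 0.3, 0.6, 0.1],
    [0.0, 0.0, 0.2, 0.8],
])
target = 3

# Solve m_i = 1 + sum_{j != target} P_ij * m_j for all i != target,
# i.e. (I - Q) m = 1 with Q the sub-chain that excludes the target.
others = [i for i in range(len(P)) if i != target]
Q = P[np.ix_(others, others)]
m = np.linalg.solve(np.eye(len(Q)) - Q, np.ones(len(Q)))
print(dict(zip(others, m)))     # expected number of steps to reach the target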
Abstract:
In this study, the Schwarz Information Criterion (SIC) is applied to detect change-points in time series of surface water quality variables. Change-point analysis detected changes in both the mean and the variance of the series under study. Time variations in environmental data are complex, and they can hinder the identification of change-points when traditional models are applied to this type of problem. The assumptions of normality and absence of correlation do not hold for some time series, so a simulation study is carried out to evaluate the methodology's performance when applied to data that are non-normal and/or correlated in time.
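For a single change in the mean of a Gaussian series, the SIC comparison reduces to a short computation. The sketch below follows the usual SIC = -2 log L + p log n form on a simulated series; the data and the candidate-range trimming are hypothetical, the exact parameter count for the penalty varies in the literature, and the variance-change case is handled analogously.

```python
# Minimal sketch: SIC-based detection of a single change in the mean of a
# Gaussian series (Chen-Gupta style). The simulated series is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 40)])
n = len(x)

def sic_no_change(x):
    # One mean and one variance; constants common to both models dropped.
    return n * np.log(x.var()) + 2 * np.log(n)

def sic_change_at(x, k):
    # Two means, one variance, plus the change location k (penalty choice
    # for k varies across authors).
    v = (np.sum((x[:k] - x[:k].mean())**2) +
         np.sum((x[k:] - x[k:].mean())**2)) / n
    return n * np.log(v) + 4 * np.log(n)

sics = {k: sic_change_at(x, k) for k in range(10, n - 10)}
k_hat = min(sics, key=sics.get)
if sics[k_hat] < sic_no_change(x):
    print("change-point detected near index", k_hat)
```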
H-infinity control design for time-delay linear systems: a rational transfer function based approach
Abstract:
The aim of this paper is to present new results on H-infinity control synthesis for time-delay linear systems. We extend the use of a finite-order LTI system, called the comparison system, to H-infinity analysis and design. In contrast to other control design methods available in the literature to date, the one presented here treats control design for time-delay systems with classical numerical routines based on the Riccati equations arising from H-infinity theory. The proposed algorithm is simple, efficient, and easy to implement. Examples illustrating state and output feedback design are solved and discussed in order to highlight the most relevant characteristics of the theoretical results. Moreover, a practical application involving a 3-DOF networked control system is presented.
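One way to picture the finite-order idea is that a delay element is replaced by a rational approximation, after which classical frequency-domain machinery applies. The sketch below shows only that simpler step, using a Padé approximant of the delay and a frequency sweep to estimate the H-infinity norm; it does not reproduce the paper's comparison-system construction or its Riccati-based synthesis, and the plant and delay are hypothetical.

```python
# Minimal sketch: replace exp(-s*h) with a 2nd-order Pade approximant to get
# a finite-order rational model of a delayed SISO plant, then estimate its
# H-infinity norm by a frequency sweep. Plant and delay are hypothetical.
import numpy as np
from scipy import signal

h = 0.2                                                   # assumed delay (s)
# 2nd-order Pade of exp(-s*h): (1 - s*h/2 + (s*h)^2/12) / (1 + s*h/2 + (s*h)^2/12)
num_d = [h**2 / 12, -h / 2, 1.0]
den_d = [h**2 / 12,  h / 2, 1.0]
plant = signal.TransferFunction([1.0], [1.0, 2.0, 1.0])   # hypothetical G(s)

# Series connection G(s) * Pade(s) ~ G(s) * exp(-s*h).
num = np.polymul(plant.num, num_d)
den = np.polymul(plant.den, den_d)
w, mag, _ = signal.bode(signal.TransferFunction(num, den),
                        w=np.logspace(-2, 3, 2000))
print("H-infinity norm estimate:", 10**(mag.max() / 20))
```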
An Approach to Manage Reconfigurations and Reduce Area Cost in Hard Real-Time Reconfigurable Systems
Abstract:
This article presents a methodology to build real-time reconfigurable systems that ensure that all the temporal constraints of a set of applications are met, while optimizing the utilization of the available reconfigurable resources. Starting from a static platform that meets all the real-time deadlines, our approach takes advantage of run-time reconfiguration in order to reduce the area needed while guaranteeing that all the deadlines are still met. This goal is achieved by identifying which tasks must always be ready for execution in order to meet the deadlines, and by means of a methodology that further reduces the area requirements.
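The residency question at the heart of such an approach can be caricatured in a few lines: a task may be fetched at run time only if the slack between its execution time and its deadline covers the reconfiguration latency. The sketch below uses hypothetical task data, a single shared on-demand slot, and a fixed reconfiguration latency; it illustrates the trade-off, not the article's methodology.

```python
# Minimal sketch: tasks whose deadline slack does not cover the
# reconfiguration latency must stay loaded; the rest can share area and be
# fetched on demand. All numbers and the single-slot model are hypothetical.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    exec_time: float     # worst-case execution time (ms)
    deadline: float      # relative deadline (ms)
    area: int            # reconfigurable area slots occupied

RECONFIG_LATENCY = 4.0   # assumed time to load one task (ms)

tasks = [Task("t1", 2.0, 5.0, 3), Task("t2", 3.0, 20.0, 2), Task("t3", 1.0, 8.0, 1)]

resident = [t for t in tasks if t.deadline - t.exec_time < RECONFIG_LATENCY]
on_demand = [t for t in tasks if t not in resident]
print("must stay loaded:", [t.name for t in resident])
print("loadable at run time:", [t.name for t in on_demand])
print("static area:", sum(t.area for t in tasks),
      "-> reduced area:", sum(t.area for t in resident)
      + max((t.area for t in on_demand), default=0))
```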
Abstract:
In this work, we further extend the recently developed adaptive data analysis method, the Sparse Time-Frequency Representation (STFR) method. This method is based on the assumption that many physical signals inherently contain AM-FM representations. We propose a sparse optimization method to extract the AM-FM representations of such signals. We prove the convergence of the method for periodic signals under certain assumptions and provide practical algorithms specifically for the non-periodic STFR, which extends the method to tackle problems that former STFR methods could not handle, including stability to noise and non-periodic data analysis. This is a significant improvement since many adaptive and non-adaptive signal processing methods are not fully capable of handling non-periodic signals. Moreover, we propose a new STFR algorithm to study intrawave signals with strong frequency modulation and analyze the convergence of this new algorithm for periodic signals. Such signals have previously remained a bottleneck for all signal processing methods. Furthermore, we propose a modified version of STFR that facilitates the extraction of intrawaves that have overlapping frequency content. We show that the STFR methods can be applied to the realm of dynamical systems and cardiovascular signals. In particular, we present a simplified and modified version of the STFR algorithm that is potentially useful for the diagnosis of some cardiovascular diseases. We further explain some preliminary work on the nature of Intrinsic Mode Functions (IMFs) and how they can have different representations in different phase coordinates. This analysis shows that the uncertainty principle is fundamental to all oscillating signals.
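For readers unfamiliar with the AM-FM model a(t)cos(theta(t)) that STFR assumes, the classical Hilbert-transform baseline below shows how an envelope and an instantaneous frequency are read off a single clean component. This is only the textbook starting point on a synthetic signal, not the sparse-optimization STFR algorithm itself.

```python
# Minimal sketch: envelope a(t) and instantaneous frequency theta'(t) of a
# single synthetic AM-FM component via the analytic signal. STFR replaces
# this with a sparse optimization that also handles noise and multiple modes.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1, 2000, endpoint=False)
a = 1.0 + 0.3 * np.cos(2 * np.pi * 2 * t)                       # envelope
theta = 2 * np.pi * (50 * t + 5 * np.sin(2 * np.pi * 3 * t))    # FM phase
x = a * np.cos(theta)

z = hilbert(x)                                                  # analytic signal
envelope = np.abs(z)
inst_freq = np.diff(np.unwrap(np.angle(z))) / (2 * np.pi * (t[1] - t[0]))
print("max envelope error:", np.max(np.abs(envelope - a)))
print("instantaneous frequency range (Hz):", inst_freq.min(), inst_freq.max())
```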
Abstract:
Objective: To show that complementary laboratory and imaging studies are unnecessary in first-time unprovoked seizures, since they do not change the evolution and prognosis of the disease; and to study our population, its incidence rate, and the proportion of our patients that were studied and given maintenance treatment, so as to determine whether or not our population should follow the recommendations of the American Academy of Pediatrics and the Spanish Pediatric Association. Methods: An observational study including patients diagnosed with first-time unprovoked seizures. They were followed up by the emergency department; information was collected from their clinical histories, and the results of the different studies were compared between patients who suffered just one seizure and those who had recurrent seizures. Results: Thirty-one patients were included, 14 males and 17 females. The average age was 5.5 years. All (100%) of the patients were studied, and the groups were compared. The significant study was the electroencephalogram (EEG), with p=0.02 (significance p<0.05) and an incidence of 41%. Conclusions: The study and diagnosis of first-time unprovoked seizures is based on clinical manifestations. The EEG is important in the study and classification of unprovoked seizures. Our population has incidence and recurrence rates similar to those in the literature; for that reason, this study suggests that the diagnostic and therapeutic guidelines of the American Academy of Pediatrics and the Spanish Pediatric Association should be followed.
Abstract:
The current approach to data analysis for the Laser Interferometry Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data. Originally, these observables were manually generated, starting with LISA as a simple stationary array and then adjusted to incorporate the antenna's motions. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises that simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which occurs in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produced two distinct sets of eigenvalues that can be distinguished by the absence of laser frequency noise from one set. The transformation of the raw data using the corresponding eigenvectors also produced data free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produced the same outcome, that is, data that are free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10x10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, and therefore analysis using principal components should give the same results as that using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, the arm lengths, and the noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which will appear in the covariance matrix and, from our toy-model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix will be destroyed, which will affect any computation methods that take advantage of this structure. In terms of separating the two sets of data for the analysis, this was not necessary because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction in the data containing them after the matrix inversion. In the frequency domain, the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and the non-stationarity do not show up, because of the summation in the Fourier transform.
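The separation of eigenvalues described above is easy to reproduce in a toy setting: when several readings share one very large common noise, the sample covariance has one large eigenvalue, and the remaining eigenvectors define combinations free of that noise. The sketch below uses hypothetical sizes and variances, not a LISA noise model.

```python
# Toy illustration of the principal-component idea: three readings share one
# very large common "laser" noise plus small independent noises. The small
# eigenvalues of the covariance correspond to laser-free combinations.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
laser = rng.normal(0.0, 100.0, n)                 # large common noise
data = np.vstack([laser + rng.normal(0.0, 1.0, n) for _ in range(3)])

cov = np.cov(data)
vals, vecs = np.linalg.eigh(cov)                  # ascending eigenvalues
print("eigenvalues:", vals)                       # two ~1 (laser-free), one ~3e4

# Projecting onto the small-eigenvalue eigenvectors cancels the common noise.
clean = vecs[:, :2].T @ data
print("residual laser correlation:", np.corrcoef(clean[0], laser)[0, 1])
```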
Abstract:
Waiting time at an intensive care unit is a key feature in the assessment of healthcare quality. Nevertheless, its estimation is a difficult task, not only due to the many factors with intricate relations among them, but also with respect to the available data, which may be incomplete, self-contradictory, or even unknown. However, its prediction not only improves patients' satisfaction but also enhances the quality of the healthcare being provided. To fulfill this goal, this work aims at the development of a decision support system that predicts how long a patient should remain at an emergency unit, taking into consideration all the remarks stated above. It is built on top of a Logic Programming approach to knowledge representation and reasoning, complemented with a Case-Based approach to computing.
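The case-based half of such a system can be sketched as nearest-neighbour retrieval over past episodes. The features, weights, and case base below are hypothetical, and the Logic Programming layer that copes with incomplete or contradictory knowledge is not modelled here.

```python
# Minimal sketch: predict a new patient's waiting time from the most similar
# past cases (weighted nearest-neighbour retrieval). All data are hypothetical.
import numpy as np

# Past cases: (triage level, age, hour of arrival) -> waiting time (minutes).
cases = np.array([[2, 65, 14], [4, 30, 22], [3, 50, 9], [1, 80, 3]])
waits = np.array([35.0, 120.0, 60.0, 10.0])
weights = np.array([5.0, 0.1, 0.5])        # assumed feature relevances

def predict_wait(new_case, k=2):
    d = np.sqrt((((cases - new_case) ** 2) * weights).sum(axis=1))
    nearest = np.argsort(d)[:k]            # retrieve the k most similar cases
    return waits[nearest].mean()           # reuse: average their outcomes

print(predict_wait(np.array([3, 55, 10])))  # expect something near 60 minutes
```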
Abstract:
Görgeyite, K2Ca5(SO4)6·H2O, is a very rare monoclinic double salt found in evaporites, related to the slightly more common mineral syngenite. At 1 atmosphere, with increasing external temperature from 25 to 150 °C, the following succession of minerals was formed: first gypsum and K2O, followed at 100 °C by görgeyite. Changes in concentration at 150 °C due to evaporation resulted in the formation of syngenite and finally arcanite. Under hydrothermal conditions, the succession is syngenite at 50 °C, followed by görgeyite at 100 and 150 °C. Increasing the synthesis time at 100 °C and 1 atmosphere showed that initially gypsum was formed, later being replaced by görgeyite. Finally, görgeyite was replaced by syngenite, indicating that görgeyite is a metastable phase under these conditions. Under hydrothermal conditions, syngenite plus a small amount of gypsum was formed, being replaced after two days by görgeyite. No further changes were observed with increasing time. Pure görgeyite showed elongated crystals approximately 500 to 1000 µm in length. The infrared and Raman spectra mainly show the vibrational modes of the sulfate groups and the crystal water (structural water). Water is characterized by OH-stretching modes at 3526 and 3577 cm–1, OH-bending modes at 1615 and 1647 cm–1, and an OH-libration mode at 876 cm–1. The sulfate ν1 mode is weak in the infrared but showed strong bands at 1005 and 1013 cm–1 in the Raman spectrum. The ν2 mode also showed strong bands in the Raman spectrum at 433, 440, 457, and 480 cm–1. The ν3 mode is characterized by a complex set of bands in both the infrared and Raman spectra around 1150 cm–1, whereas ν4 is found at 650 cm–1.
Abstract:
A prospective, consecutive series of 106 patients receiving endoscopic anterior scoliosis correction. The aim was to analyse changes in radiographic parameters and rib hump in the two years following surgery. Endoscopic anterior scoliosis correction is a level-sparing approach; it is therefore important to assess the amount of decompensation which occurs after surgery. All patients received a single anterior rod and vertebral body screws using a standard compression technique. Cleared disc spaces were packed with either mulched femoral head allograft or rib head/iliac crest autograft. Radiographic parameters (major, instrumented, and minor Cobb angles; T5-T12 kyphosis) and rib hump were measured at 2, 6, 12, and 24 months after surgery. Paired t-tests and Wilcoxon signed-rank tests were used to assess the statistical significance of changes between adjacent time intervals. Results: Mean loss of major curve correction from 2 to 24 months after surgery was 4 degrees. Mean loss of rib hump correction was 1.4 degrees. Mean sagittal kyphosis increased from 27 degrees at 2 months to 30.6 degrees at 24 months. Rod fractures and screw-related complications resulted in several degrees less correction than in patients without complications, but overall there was no clinically significant decompensation following complications. The study concluded that there are small changes in deformity measures after endoscopic anterior scoliosis surgery, which are statistically significant but not clinically significant.
Abstract:
Developing an effective impact evaluation framework, managing and conducting rigorous impact evaluations, and developing a strong research and evaluation culture within development communication organisations present many challenges. This is especially so when both the community and organisational contexts are continually changing and the outcomes of programs are complex and difficult to identify clearly. This paper presents a case study from a research project conducted from 2007 to 2010 that aims to address these challenges and issues, entitled Assessing Communication for Social Change: A New Agenda in Impact Assessment. Building on previous development communication projects which used ethnographic action research, this project is developing, trialling and rigorously evaluating a participatory impact assessment methodology for assessing the social change impacts of community radio programs in Nepal. This project is a collaboration between Equal Access – Nepal (EAN), Equal Access – International, local stakeholders and listeners, a network of trained community researchers, and a research team from two Australian universities. A key element of the project is the establishment of an organisational culture within EAN that values and supports the impact assessment process being developed, which is based on continuous action learning and improvement. The paper describes the situation related to monitoring and evaluation (M&E) and impact assessment before the project began, in which EAN was often reliant on time-bound studies and ‘success stories’ derived from listener letters and feedback. We then outline the various strategies used in an effort to develop stronger and more effective impact assessment and M&E systems, and the gradual changes that have occurred to date. These changes include a greater understanding of the value of adopting a participatory, holistic, evidence-based approach to impact assessment. We also critically review the many challenges experienced in this process, including:
• Tension between the pressure from donors to ‘prove’ impacts and the adoption of a bottom-up, participatory approach based on ‘improving’ programs in ways that meet community needs and aspirations.
• Resistance from the content teams to changing their existing M&E practices and to the perceived complexity of the approach.
• Lack of meaningful connection between the M&E and content teams.
• Human resource problems and a lack of capacity in analysing qualitative data and reporting results.
• Contextual challenges, including extreme poverty, wide cultural and linguistic diversity, poor transport and communications infrastructure, and political instability.
• A general lack of acceptance of the importance of evaluation within Nepal, due to accepting everything as fate or ‘natural’ rather than requiring investigation into a problem.
Abstract:
This paper proposes a new approach for delay-dependent robust H-infinity stability analysis and control synthesis of uncertain systems with time-varying delay. The key features of the approach include the introduction of a new Lyapunov–Krasovskii functional, the construction of an augmented matrix with uncorrelated terms, and the employment of a tighter bounding technique. As a result, significant performance improvement is achieved in system analysis and synthesis without using either free weighting matrices or model transformation. Examples are given to demonstrate the effectiveness of the proposed approach.
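For contrast with the delay-dependent machinery described above, the sketch below checks the classical delay-independent Lyapunov-Krasovskii condition for x'(t) = A x(t) + Ad x(t - h) as a semidefinite feasibility problem; the matrices are hypothetical, the condition is far more conservative than the paper's delay-dependent one, and the cvxpy package is assumed to be available.

```python
# Minimal sketch: classical delay-independent stability LMI for a linear
# system with a constant delay, posed as an SDP feasibility problem.
# V = x'Px + integral of x'Qx over the delay window; V' < 0 for every delay
# if the block matrix M below is negative definite.
import cvxpy as cp
import numpy as np

A = np.array([[-3.0, 0.0], [0.0, -3.0]])     # hypothetical system matrix
Ad = np.array([[1.0, 0.0], [0.5, 1.0]])      # hypothetical delayed-term matrix
n = 2

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
M = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
             [Ad.T @ P,            -Q    ]])
eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n), Q >> eps * np.eye(n),
                   M << -eps * np.eye(2 * n)])
prob.solve()
print("stable for any delay" if prob.status == "optimal" else "test inconclusive")
```

Delay-dependent conditions like the paper's trade this simplicity for much less conservatism by exploiting a bound on the delay itself.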