791 results for Nonlinear time-delay systems
Abstract:
Considering that, in the real world, information is transmitted with a time delay, we study an evolutionary spatial prisoner's dilemma game in which agents update strategies according to information they have learned. The game dynamics are classified by the mode of information learning as well as of game interaction, and four different combinations (the mean-field case, case I, case II and the local case) are studied comparatively. It is found that the time delay in case II smooths the phase transition from the absorbing states of C (or D) to their mixed state, and promotes cooperation for most parameter values. Our work provides insight into the temporal behavior of information and the memory of the system, and may be helpful in understanding the cooperative behavior induced by time delay in social and biological systems.
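As a rough illustration of a spatial game with delayed information like the one summarized above, the following is a minimal sketch only: the lattice size, payoff temptation b, Fermi imitation rule, the single shared neighbour direction per round, and all parameter values are illustrative choices, not the paper's actual mean-field/case I/case II/local definitions.

```python
import numpy as np

def run_delayed_pd(L=20, b=1.3, delay=3, steps=50, K=0.1, seed=0):
    """Spatial prisoner's dilemma on an L x L torus. Strategies are
    C (1) or D (0); each agent plays its four neighbours (R=1, T=b,
    S=P=0) and then imitates one random neighbour with a Fermi rule,
    using payoffs computed from the strategy configuration `delay`
    rounds old, mimicking delayed information."""
    rng = np.random.default_rng(seed)
    s = rng.integers(0, 2, size=(L, L))
    history = [s.copy() for _ in range(delay + 1)]  # history[0] is oldest
    shifts = ((0, 1), (0, -1), (1, 1), (1, -1))     # (axis, offset)

    def payoffs(state):
        # a cooperator earns 1 per cooperating neighbour; a defector earns b
        p = np.zeros((L, L))
        for ax, off in shifts:
            nb = np.roll(state, off, axis=ax)
            p += np.where(state == 1, nb, b * nb)
        return p

    for _ in range(steps):
        old = history[0]                 # strategies `delay` rounds ago
        p_old = payoffs(old)
        # one common random neighbour direction per round (simplification)
        ax, off = shifts[rng.integers(len(shifts))]
        nb_s = np.roll(old, off, axis=ax)
        nb_p = np.roll(p_old, off, axis=ax)
        gap = np.clip((p_old - nb_p) / K, -50.0, 50.0)
        adopt = rng.random((L, L)) < 1.0 / (1.0 + np.exp(gap))
        s = np.where(adopt, nb_s, s)
        history.append(s.copy())
        history.pop(0)
    return s.mean()  # final fraction of cooperators
```

Varying `delay` in such a sketch is one way to probe how stale information shifts the cooperation level.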
Abstract:
This paper combines the Smith predictor technique with the inverse Nyquist array method to design discrete-time control systems for multivariable plants with multiple time delays. Controllers designed in this way are easy to implement on a computer, and the system simulation results are satisfactory.
Abstract:
Automatic velocity picking not only improves the efficiency of seismic data processing, but also quickly provides an initial velocity model for prestack depth migration. In this thesis we use the Viterbi algorithm for automatic picking, but the velocities picked directly are often unreasonable. Through careful study and analysis we conclude that, although the Viterbi algorithm can perform automatic picking quickly and effectively, the data supplied to it may not have a continuous derivative over its surface; that is, the surface of the velocity spectrum is not smooth. As a result, the picked velocities may contain irrational velocity information. To solve this problem, we develop a new filtering method based on a nonlinear coordinate transformation combined with a function filter, which we call the Gravity Center Preserved Pulse Compressed Filter (GCPPCF). The main idea of the GCPPCF is as follows: divide a curve, such as a pulse, into several subsections; compute the gravity center (coordinate displacement) of each subsection; and then assign the subsection's value (density) to its gravity center. When the gravity center departs from the center of its subsection, the value assigned to it is smaller than the actual one; only when the gravity center coincides exactly with the subsection center is the assigned value equal to the actual one. Under the new coordinates the curve therefore becomes narrower than the original. This is a nonlinear coordinate transformation, because the gravity center changes with the shape of the subsection. Moreover, the gravity function acts as a filter, since the value assigned from the subsection center to the gravity center is obtained as a weighted mean of the subsection function.
In addition, the filter behaves as an adaptive time-delay-varying filter, because the weight coefficients used in the weighted mean also change with the shape of the subsection. In this thesis the Viterbi algorithm, applied to the automatic picking of stacking velocities, accumulates the maxima of the velocity spectrum ("energy groups") in a forward pass and recovers the optimal solution by backward recursion; it is a convenient tool for automatic velocity picking. The GCPPCF not only preserves the position of the peak value while compressing the velocity spectrum, but can also be used as an adaptive time-delay-varying filter to smooth a target curve or surface. We apply it to smooth the observed sequence so as to obtain favorable source data for the final exact solution. Without the adaptive time-delay-varying filter for optimization we cannot obtain clean source data or valid velocity information; and without the Viterbi algorithm for fast searching we cannot pick velocities automatically. Accordingly, the combination of the two algorithms yields an effective automatic picking method. We apply this automatic velocity-picking method to velocity analysis of the extrapolated wavefield; the results show that the imaging of deep layers with the extrapolated wavefield is markedly improved. The GCPPCF has achieved good results in application: it can be used not only to optimize and smooth velocity spectra, but also to perform related processing on other types of signal. The automatic velocity-picking method developed in this thesis has produced favorable results on a simple model, on a complicated model (the Marmousi model) and on field data, showing that it is both feasible and practical.
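The forward-accumulation/backward-traceback scheme described above can be sketched as a standard Viterbi dynamic program over a velocity spectrum. This is a generic sketch, not the thesis's implementation: the spectrum shape, the `max_jump` continuity constraint, and the pure-sum score are illustrative assumptions.

```python
import numpy as np

def viterbi_pick(spectrum, max_jump=2):
    """Pick one velocity index per time sample by maximizing total
    spectrum energy along a path whose index changes by at most
    `max_jump` between consecutive samples. Forward pass accumulates
    the best score; backward pass traces the optimal path."""
    n_t, n_v = spectrum.shape
    score = np.full((n_t, n_v), -np.inf)
    back = np.zeros((n_t, n_v), dtype=int)
    score[0] = spectrum[0]
    for t in range(1, n_t):
        for v in range(n_v):
            lo, hi = max(0, v - max_jump), min(n_v, v + max_jump + 1)
            prev = int(np.argmax(score[t - 1, lo:hi])) + lo
            score[t, v] = score[t - 1, prev] + spectrum[t, v]
            back[t, v] = prev
    path = np.zeros(n_t, dtype=int)
    path[-1] = int(np.argmax(score[-1]))
    for t in range(n_t - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path
```

On a smooth spectrum this follows the energy ridge; the GCPPCF pre-smoothing step described in the abstract would serve to make the input `spectrum` better behaved before this search runs.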
Abstract:
This study sought predictors of mortality in patients aged ≥75 years with a first ST-segment elevation myocardial infarction (STEMI) and evaluated the validity of the GUSTO-I and TIMI risk models. Clinical variables, treatment and mortality data from 433 consecutive patients were collected. Univariable and multivariable logistic regression analyses were applied to identify baseline factors associated with 30-day mortality. Subsequently, a model predicting 30-day mortality was created and compared with the performance of the GUSTO-I and TIMI models. After adjustment, a higher Killip class was the most important predictor (OR 16.1; 95% CI 5.7-45.6). Elevated heart rate, longer time delay to admission, hyperglycemia and older age were also associated with increased risk. Patients with hypercholesterolemia had a significantly lower risk (OR 0.46; 95% CI 0.24-0.86). Discrimination (c-statistic 0.79, 95% CI 0.75-0.84) and calibration (Hosmer-Lemeshow 6, p = 0.5) of our model were good. The GUSTO-I and TIMI risk scores produced adequate discrimination within our dataset (c-statistic 0.76, 95% CI 0.71-0.81, and 0.77, 95% CI 0.72-0.82, respectively), but calibration was not satisfactory (HL 21.8, p = 0.005 for GUSTO-I; HL 20.6, p = 0.008 for TIMI). In conclusion, short-term mortality in elderly patients with a first STEMI depends most strongly on the initial clinical and hemodynamic status. The GUSTO-I and TIMI models do not provide an accurate estimate of 30-day mortality risk in this population.
Abstract:
This report summarizes the technical presentations and discussions that took place during RTDB'96: the First International Workshop on Real-Time Databases, which was held on March 7 and 8, 1996 in Newport Beach, California. The main goals of the workshop were (1) to review recent advances in real-time database systems research, (2) to promote interaction among real-time database researchers and practitioners, and (3) to evaluate the maturity and directions of real-time database technology.
Abstract:
We recognize and memorize objects by continuously receiving a huge amount of temporal information that includes redundancy and noise. This paper proposes a neural network model that extracts pre-recognized patterns from temporally sequential patterns containing redundancy, and memorizes those patterns temporarily. The model consists of an adaptive resonance system and a recurrent time-delay network. Extraction is carried out by the matching mechanism of the adaptive resonance system, and the temporal information is processed and stored by the recurrent network. Simple simulations are examined to exemplify the extraction property.
Abstract:
We study an optoelectronic time-delay oscillator that displays high-speed chaotic behavior with a flat, broad power spectrum. The chaotic state coexists with a linearly stable fixed point, which, when subjected to a finite-amplitude perturbation, loses stability initially via a periodic train of ultrafast pulses. We derive approximate mappings that do an excellent job of capturing the observed instability. The oscillator provides a simple device for fundamental studies of time-delay dynamical systems and can be used as a building block for ultrawide-band sensor networks.
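The abstract above does not give the oscillator's equations, so the following is only a generic sketch of how a time-delay oscillator of this broad family is often simulated: an Ikeda-type delay differential equation with a sin² nonlinearity, integrated by the Euler method with a history buffer. The model, parameter values and initial history are assumptions, not the authors' device model.

```python
import numpy as np

def simulate_ikeda(beta=4.0, phi=0.25 * np.pi, delay=1.0,
                   dt=1e-3, t_end=50.0, x0=0.01):
    """Euler integration of the Ikeda-type delay equation
        dx/dt = -x(t) + beta * sin^2(x(t - delay) + phi),
    with constant history x(t) = x0 for t <= 0. The delayed sample
    is read from a ring of past values spaced dt apart."""
    n_delay = int(round(delay / dt))
    n_steps = int(round(t_end / dt))
    x = np.full(n_steps + n_delay, x0)      # prefix holds the history
    for i in range(n_delay, n_steps + n_delay - 1):
        x_delayed = x[i - n_delay]
        x[i + 1] = x[i] + dt * (-x[i] + beta * np.sin(x_delayed + phi) ** 2)
    return x[n_delay:]                      # trajectory for t in (0, t_end]
```

For small feedback strength `beta` such a model settles to a fixed point; for larger values it develops the broadband chaotic waveforms that make these oscillators attractive for sensing applications.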
Abstract:
This paper describes work towards the deployment of flexible self-management into real-time embedded systems. A challenging project focusing on the development of a dynamic, adaptive automotive middleware is described, and the specific self-management requirements of this project are discussed. These requirements were identified by refining a wide-ranging set of use cases requiring context-sensitive behaviours; a sample of these use cases is presented to illustrate the extent of the demands for self-management. The strategy adopted to achieve self-management, based on the use of policies, is presented. The embedded and real-time nature of the target system imposes the constraints that dynamic adaptation must not require changes to the run-time code (except during hot update of complete binary modules), that adaptation decisions must have low latency, and, because the target platforms are resource-constrained, that the self-management mechanism must have low resource requirements (especially in terms of processing and memory). Policy-based computing is thus an ideal candidate for achieving self-management, because the policy itself is loaded at run-time and can be replaced or changed later in the same way that a data file is loaded. Policies represent a relatively low-complexity and low-risk means of achieving self-management, with low run-time costs, and can be stored internally in ROM (such as default policies) as well as externally to the system. The architecture of a designed-for-purpose, powerful yet lightweight policy library is described. A suitable evaluation platform, supporting the whole life-cycle of feasibility analysis, concept evaluation, development, rigorous testing and behavioural validation, has been devised and is described.
Abstract:
A variety of short time delays inserted between pairs of subjects were found to affect their ability to synchronize a musical task. The subjects performed a clapping rhythm together from separate sound-isolated rooms via headphones and without visual contact. One-way time delays between pairs were manipulated electronically in the range of 3 to 78 ms. We are interested in quantifying the envelope of time delay within which two individuals produce synchronous performances. The results indicate that there are distinct regimes of mutually coupled behavior, and that 'natural time delay' (delay within the narrow range associated with travel times across spatial arrangements of groups and ensembles) supports the most stable performance. Conditions outside this envelope, with time delays both below and above it, create characteristic interaction dynamics in the mutually coupled actions of the duo. Trials at extremely short delays (corresponding to unnaturally close proximity) had a tendency to accelerate from anticipation. Synchronization lagged at longer delays (larger than usual physical distances), producing an increasingly severe deceleration and then deterioration of performed rhythms. The study has implications for music collaboration over the Internet and suggests that stable rhythmic performance can be achieved by 'wired ensembles' across distances of thousands of kilometers.
Abstract:
Final project for obtaining the degree of Master in Chemical and Biological Engineering, Chemical Processes branch.
Abstract:
The current industry trend is towards using Commercially available Off-The-Shelf (COTS) multicores for developing real-time embedded systems, as opposed to custom-made hardware. In typical implementations of such COTS-based multicores, multiple cores access main memory via a shared bus. This often leads to contention on the shared channel, which increases the response time of the tasks. Analyzing this increased response time while accounting for contention on the shared bus is challenging on COTS-based systems, mainly because bus arbitration protocols are often undocumented and the exact instants at which the shared bus is accessed by tasks are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. This paper makes three contributions towards analyzing tasks scheduled on COTS-based multicores. First, we describe a method to model the memory access patterns of a task. Second, we apply this model to analyze the worst-case response time for a set of tasks. Third, although the parameters of the request profile can be obtained by static analysis, we provide an alternative method to obtain them experimentally using performance monitoring counters (PMCs). We also compare our work against an existing approach and show that ours outperforms it by providing a tighter upper bound on the number of bus requests generated by a task.
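To give a feel for the kind of analysis the abstract above refers to, here is a minimal sketch of the textbook fixed-point response-time recurrence, extended with a per-bus-request stall term. The task model `(C, T, M)`, the single `bus_delay` bound, and the recurrence itself are simplifying assumptions for illustration, not the paper's actual analysis.

```python
from math import ceil

def response_time(task, higher_prio, bus_delay, max_iter=100):
    """Fixed-point iteration on R = C + sum_j ceil(R/T_j) * C_j,
    where each memory request (of the task and of preempting jobs)
    may additionally be stalled for up to `bus_delay` time units.
    Each task is a tuple (C, T, M): WCET, period, and a bound on
    the number of bus requests per job. Returns the converged
    response time, or None if the iteration does not converge."""
    C, T, M = task
    R = C + M * bus_delay
    for _ in range(max_iter):
        interference = sum(
            ceil(R / Tj) * (Cj + Mj * bus_delay)
            for (Cj, Tj, Mj) in higher_prio
        )
        R_new = C + M * bus_delay + interference
        if R_new == R:
            return R
        R = R_new
    return None
```

A tighter bound on M, such as the PMC-measured request profile the paper proposes, directly tightens the computed response time in a recurrence of this shape.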
Abstract:
Discrete-time control systems require sample-and-hold circuits to perform the conversion from digital to analog. Fractional-Order Holds (FROHs) are an interpolation between the classical zero- and first-order holds and can be tuned to produce better system performance. However, the model of the FROH is somewhat hermetic, and the design of the system becomes unnecessarily complicated. This paper addresses the modelling of FROHs using the concepts of Fractional Calculus (FC). For this purpose, two simple fractional-order approximations are proposed whose parameters are estimated by a genetic algorithm. The results are simple to interpret, demonstrating that FC is a useful tool for the analysis of these devices.
Abstract:
ARINC specification 653-2 describes the interface between application software and the underlying middleware in a distributed real-time avionics system. The real-time workload in this system comprises partitions, where each partition consists of one or more processes. Processes incur blocking and preemption overheads and can communicate with other processes in the system. In this work we develop compositional techniques for the automated scheduling of such partitions and processes. At present, system designers schedule partitions manually, based on interactions with the partition vendors. This approach is not only time-consuming but can also result in under-utilization of resources. In contrast, the technique proposed in this paper is a principled approach to scheduling ARINC-653 partitions and should therefore facilitate system integration.
Abstract:
Modelling the fundamental performance limits of wireless sensor networks (WSNs) is of paramount importance for understanding their behaviour under worst-case conditions and for making appropriate design choices. In that direction, this paper contributes a methodology for modelling cluster-tree WSNs with a mobile sink. We propose closed-form recurrent expressions for computing the worst-case end-to-end delays, buffering and bandwidth requirements across any source-destination path in the cluster tree, assuming an error-free channel. We show how to apply our theoretical results to the specific case of IEEE 802.15.4/ZigBee WSNs. Finally, we demonstrate the validity and analyze the accuracy of our methodology through a comprehensive experimental study, thereby validating the theoretical results through experimentation.
Abstract:
Master's degree in Medical Computing and Instrumentation.