796 results for Time delay systems
Abstract:
Pigeons and other animals soon learn to wait (pause) after food delivery on periodic-food schedules before resuming the food-rewarded response. Under most conditions the steady-state duration of the average waiting time, t, is a linear function of the typical interfood interval. We describe three experiments designed to explore the limits of this process. In all experiments, t was associated with one key color and the subsequent food delay, T, with another. In the first experiment, we compared the relation between t (waiting time) and T (food delay) under two conditions: when T was held constant, and when T was an inverse function of t. The pigeons could maximize the rate of food delivery under the first condition by setting t to a consistently short value; optimal behavior under the second condition required a linear relation with unit slope between t and T. Despite this difference in optimal policy, the pigeons in both cases showed the same linear relation, with slope less than one, between t and T. This result was confirmed in a second parametric experiment that added a third condition, in which T + t was held constant. Linear waiting appears to be an obligatory rule for pigeons. In a third experiment we arranged for a multiplicative relation between t and T (positive feedback), and produced either very short or very long waiting times as predicted by a quasi-dynamic model in which waiting time is strongly determined by the just-preceding food delay.
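A minimal illustrative sketch (in Python, with hypothetical parameter values and schedule functions, not the experimental ones) of the quasi-dynamic linear-waiting rule described above, in which each waiting time is a linear function, with slope below one, of the just-preceding food delay:

    # Illustrative sketch only: "linear waiting" as a quasi-dynamic rule,
    # t_next = a*T + b with a < 1, where T is the just-preceding food delay.
    def simulate(schedule, a=0.3, b=1.0, t0=5.0, trials=200):
        """schedule(t) returns the food delay T produced by waiting time t."""
        t = t0
        for _ in range(trials):
            T = schedule(t)        # delay experienced on this cycle
            t = a * T + b          # next wait tracks the last delay
        return t

    # Constant delay (first condition): waiting settles near a*T + b.
    print(simulate(lambda t: 20.0))
    # Multiplicative feedback (third experiment): T proportional to t, so the
    # waits either settle at a short value (a*k < 1) or grow without bound (a*k > 1).
    print(simulate(lambda t: 0.5 * t))
    print(simulate(lambda t: 5.0 * t))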
Abstract:
This paper analyzes a class of common-component allocation rules, termed no-holdback (NHB) rules, in continuous-review assemble-to-order (ATO) systems with positive lead times. The inventory of each component is replenished following an independent base-stock policy. In contrast to the usually assumed first-come-first-served (FCFS) component allocation rule in the literature, an NHB rule allocates a component to a product demand only if it will yield immediate fulfillment of that demand. We identify metrics as well as cost and product structures under which NHB rules outperform all other component allocation rules. For systems with certain product structures, we obtain key performance expressions and compare them to those under FCFS. For general product structures, we present performance bounds and approximations. Finally, we discuss the applicability of these results to more general ATO systems. © 2010 INFORMS.
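A minimal sketch (hypothetical data structures, not taken from the paper) of the no-holdback idea: components are committed to a product demand only when every required component is on hand, so any allocation yields immediate fulfillment of that demand:

    # Sketch of a no-holdback (NHB) allocation check. Nothing is reserved for a
    # demand that cannot be filled immediately from on-hand component stock.
    def nhb_allocate(on_hand, bill_of_materials):
        """on_hand: component -> units in stock; bill_of_materials: component -> units needed.
        Deducts stock and returns True only if the whole demand is filled now."""
        if all(on_hand.get(c, 0) >= q for c, q in bill_of_materials.items()):
            for c, q in bill_of_materials.items():
                on_hand[c] -= q
            return True           # immediate fulfillment
        return False              # under NHB, no component is held back for it

    stock = {"frame": 2, "motor": 1, "panel": 0}
    print(nhb_allocate(stock, {"frame": 1, "motor": 1}))   # True; stock deducted
    print(nhb_allocate(stock, {"frame": 1, "panel": 1}))   # False; frame not reserved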
Abstract:
This paper describes a methodology for deploying flexible dynamic configuration into embedded systems whilst preserving the reliability advantages of static systems. The methodology is based on the concept of decision points (DPs), which are strategically placed to achieve fine-grained distribution of self-management logic that meets application-specific requirements. DP logic can be changed easily, and independently of the host component, enabling self-management behavior to be deferred beyond the point of system deployment. A transparent Dynamic Wrapper (DW) mechanism automatically detects and handles problems arising from the evaluation of self-management logic within each DP, guaranteeing that the dynamic aspects of the system collapse down to statically defined default behavior so that safety and correctness are preserved despite failures. Dynamic context management contributes to flexibility and removes the need for design-time binding of context providers and consumers, thus facilitating run-time composition and incremental component upgrade.
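A minimal sketch, with hypothetical names and no claim to match the paper's implementation, of how a decision point wrapped in this way could collapse to its statically defined default whenever the dynamic logic fails:

    # Sketch: a decision point (DP) whose self-management logic is replaceable at
    # run time, wrapped so that any failure falls back to a static default.
    class DecisionPoint:
        def __init__(self, default_decision):
            self._default = default_decision   # statically defined, safe fallback
            self._logic = None                 # dynamic logic, deployable later

        def set_logic(self, fn):
            self._logic = fn                   # deferred beyond deployment

        def decide(self, context):
            if self._logic is None:
                return self._default
            try:
                return self._logic(context)    # evaluate dynamic self-management logic
            except Exception:
                return self._default           # wrapper: collapse to static behavior

    dp = DecisionPoint(default_decision="local")
    dp.set_logic(lambda ctx: "remote" if ctx["load"] > 0.8 else "local")
    print(dp.decide({"load": 0.9}))    # "remote"
    print(dp.decide({}))               # faulty context -> KeyError -> "local"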
Abstract:
This paper describes a highly flexible component architecture, primarily designed for automotive control systems, that supports distributed, dynamically-configurable, context-aware behaviour. The architecture enforces a separation of design-time and run-time concerns, enabling almost all decisions concerning run-time composition and adaptation to be deferred beyond deployment. Dynamic context management contributes to flexibility. The architecture is extensible, and can embed potentially many different self-management decision technologies simultaneously. The mechanism that implements the run-time configuration has been designed to be very robust, automatically and silently handling problems arising from the evaluation of self-management logic and ensuring that in the worst case the dynamic aspects of the system collapse down to static behaviour in totally predictable ways.
Abstract:
The fundamental controls on the initiation and development of gravel-dominated deposits (beaches and barriers) on paraglacial coasts are particle size and shape, sediment supply, storm wave activity (primarily runup), relative sea-level (RSL) change, and terrestrial basement structure (primarily as it affects accommodation space). This paper examines the stochastic basis for barrier organisation as shown by variation in gravel barrier architecture. We recognise punctuated self-organisation of barrier development that is disrupted by short phases of barrier instability. The latter results from positive feedback causing barrier breakdown when sediment supply is exhausted. We examine published typologies for gravel barriers and advocate a consolidated perspective using rate of RSL change and sediment supply. We also consider the temporal variation in controls on barrier development. These are examined in terms of a simple behavioural model (BARCH) for prograding gravel barrier architecture and its sensitivity to such controls. The nature of macroscale (10²–10³ years) gravel barrier development, including inherited characteristics that influence barrier genesis, as well as forcing from changing RSL, sediment supply, headland control and barrier inertia, is examined in the context of long-surviving barriers along the southern England coastline.
Abstract:
We introduce and characterise time operators for unilateral shifts and exact endomorphisms. The associated shift representation of evolution is related to the spectral representation by a generalized Fourier transform. We illustrate the results for a simple exact system, namely the Renyi map.
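For orientation only (not drawn from the paper, and using one common convention for time operators): the dyadic case of the Renyi map, a standard example of an exact endomorphism, its induced Koopman isometry V, and the intertwining relation a time operator T is typically required to satisfy:

    \[
      S(x) = 2x \pmod{1}, \quad x \in [0,1), \qquad (Vf)(x) = f(S(x)),
    \]
    \[
      V^{*n}\, T\, V^{n} = T + nI, \qquad n = 0, 1, 2, \dots
    \]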
Abstract:
The future convergence of voice, video and data applications on the Internet requires that next generation technology provides bandwidth and delay guarantees. Current technology trends are moving towards scalable aggregate-based systems where applications are grouped together and guarantees are provided at the aggregate level only. This solution alone is not enough for interactive video applications with sub-second delay bounds. This paper introduces a novel packet marking scheme that controls the end-to-end delay of an individual flow as it traverses a network enabled to supply aggregate-granularity Quality of Service (QoS). IPv6 Hop-by-Hop extension header fields are used to track the packet delay encountered at each network node and autonomous decisions are made on the best queuing strategy to employ. The results of network simulations are presented and it is shown that when the proposed mechanism is employed the requested delay bound is met with a 20% reduction in resource reservation and no packet loss in the network.
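A conceptual sketch of the per-hop decision (the field names, thresholds and queue-selection rule below are hypothetical and do not reproduce the paper's marking format): each node reads the delay the packet has accumulated so far, compares it with the remaining end-to-end budget, and decides locally whether to expedite the packet or leave it in the ordinary aggregate queue:

    # Sketch: per-hop queue choice driven by a hop-by-hop delay-tracking field.
    def choose_queue(accumulated_delay_ms, delay_budget_ms, hops_remaining,
                     expected_per_hop_ms):
        """Pick the queue for this hop from the packet's remaining delay slack."""
        if hops_remaining <= 0:
            return "aggregate"
        slack = delay_budget_ms - accumulated_delay_ms
        if slack < hops_remaining * expected_per_hop_ms:
            return "expedited"    # behind schedule: jump the aggregate queue
        return "aggregate"        # on schedule: no extra resources needed

    # 120 ms budget, 3 hops left, ~20 ms expected per hop.
    print(choose_queue(70, 120, 3, 20))   # "expedited"
    print(choose_queue(30, 120, 3, 20))   # "aggregate"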
Abstract:
This paper investigates a queuing system for QoS optimization of multimedia traffic consisting of aggregated streams with diverse QoS requirements transmitted to a mobile terminal over a common downlink shared channel. The queuing system, proposed for buffer management of aggregated single-user traffic in the base station of High-Speed Downlink Packet Access (HSDPA), allows for optimum loss/delay/jitter performance for end-user multimedia traffic with delay-tolerant non-real-time streams and partially loss-tolerant real-time streams. In the queuing system, the real-time stream has non-preemptive priority in service, but the number of its packets in the system is restricted by a constant. The non-real-time stream has no service priority but is allowed unlimited access to the system. Both types of packets arrive according to stationary Poisson flows. Service times follow general distributions that depend on the packet type. The stability condition for the model is derived. The queue length distribution for both types of customers is calculated at arbitrary epochs and at service completion epochs. The loss probability for priority packets is computed. The waiting time distribution, in terms of its Laplace-Stieltjes transform, is obtained for both types of packets. Mean waiting time and jitter are computed. The numerical examples presented demonstrate the effectiveness of the queuing system for QoS optimization of buffered end-user multimedia traffic with aggregated real-time and non-real-time streams.
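A simplified sketch (hypothetical parameters, no attempt to reproduce the analytical model) of the buffer-management rule just described: real-time packets receive non-preemptive service priority but are lost once a fixed number of them are in the system, while non-real-time packets are always admitted:

    # Sketch of the two-stream single-user buffer: capped real-time (RT) queue
    # with non-preemptive priority, unlimited non-real-time (NRT) queue.
    from collections import deque

    class AggregateBuffer:
        def __init__(self, rt_cap):
            self.rt_cap = rt_cap
            self.rt, self.nrt = deque(), deque()

        def arrive(self, packet, is_rt):
            if is_rt:
                if len(self.rt) >= self.rt_cap:
                    return False            # RT loss: cap reached
                self.rt.append(packet)
            else:
                self.nrt.append(packet)     # NRT: unlimited access
            return True

        def next_to_serve(self):
            # Called only when the server frees up (non-preemptive priority).
            if self.rt:
                return self.rt.popleft()
            return self.nrt.popleft() if self.nrt else None

    buf = AggregateBuffer(rt_cap=2)
    for pkt, rt in [("v1", True), ("d1", False), ("v2", True), ("v3", True)]:
        print(pkt, buf.arrive(pkt, rt))     # "v3" is lost: RT cap of 2 reached
    print(buf.next_to_serve())              # "v1" served before "d1"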
Abstract:
The hybrid test method is a relatively recently developed dynamic testing technique that combines numerical modelling with simultaneous physical testing. The concept of substructuring allows the critical or highly nonlinear part of the structure, which is difficult to model numerically with accuracy, to be physically tested whilst the remainder of the structure, which has a more predictable response, is numerically modelled. In this paper, a substructured soft real-time hybrid test is evaluated as an accurate means of performing seismic tests of complex structures. The structure analysed is a three-storey, two-by-one bay concentrically braced frame (CBF) steel structure subjected to seismic excitation. A ground-storey braced-frame substructure whose response is critical to the overall response of the structure is tested, whilst the remainder of the structure is numerically modelled. OpenSees is used for numerical modelling and OpenFresco is used for the communication between the test equipment and the numerical model. A novel approach using OpenFresco to define the complex numerical substructure of an X-braced frame within a hybrid test is also presented. The results of the hybrid tests are compared to purely numerical models using OpenSees and to a simulated test using a combination of OpenSees and OpenFresco. The comparative results indicate that the test method provides an accurate and cost-effective procedure for performing full-scale seismic tests of complex structural systems.
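For illustration only, a toy stepping loop that mimics the substructured hybrid-test idea with software stand-ins (the actual tests used OpenSees, OpenFresco and a physical braced-frame specimen; every name and value below is hypothetical):

    # Sketch: numerical model advanced step by step while the "physical"
    # substructure (emulated here by a simple spring) returns a measured
    # restoring force that is fed back into the equation of motion.
    class PhysicalSubstructure:                  # stand-in for actuator + specimen
        def __init__(self, stiffness):
            self.k = stiffness
        def impose(self, displacement):
            return self.k * displacement         # "measured" restoring force

    def hybrid_test(m, c, k_num, specimen, ground_accel, dt):
        """Semi-implicit Euler stepping of m*u'' + c*u' + k_num*u + f_phys = -m*ag."""
        u, v, history = 0.0, 0.0, []
        for ag in ground_accel:
            f_phys = specimen.impose(u)                      # force from tested part
            a = (-m * ag - c * v - k_num * u - f_phys) / m   # equation of motion
            v += a * dt
            u += v * dt
            history.append(u)
        return history

    spec = PhysicalSubstructure(stiffness=2.0e6)             # N/m, hypothetical
    resp = hybrid_test(m=1.0e4, c=5.0e3, k_num=3.0e6, specimen=spec,
                       ground_accel=[0.1] * 500, dt=0.01)
    print(min(resp), max(resp))                              # interface displacement range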