995 results for Threshold Systems


Relevance: 70.00%

Abstract:

The introduction of delays into ordinary or partial differential equation models is well known to facilitate the production of rich dynamics ranging from periodic solutions through to spatio-temporal chaos. In this paper we consider a class of scalar partial differential equations with a delayed threshold nonlinearity which admits exact solutions for equilibria, periodic orbits and travelling waves. Importantly we show how the spectra of periodic and travelling wave solutions can be determined in terms of the zeros of a complex analytic function. Using this as a computational tool to determine stability we show that delays can have very different effects on threshold systems with negative as opposed to positive feedback. Direct numerical simulations are used to confirm our bifurcation analysis, and to probe some of the rich behaviour possible for mixed feedback.
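
The record does not reproduce the equations, but a minimal caricature of a scalar delayed threshold equation, u'(t) = -u(t) + H(+/-(u(t - tau) - a)), already illustrates the feedback-sign effect described above. The sketch below (all parameters invented) integrates it by Euler stepping:

```python
import numpy as np

def simulate(feedback="negative", tau=1.0, a=0.5, T=40.0, dt=1e-3):
    """Euler integration of u'(t) = -u(t) + f(u(t - tau)), where f is a
    Heaviside threshold nonlinearity; a toy stand-in for the class of
    equations discussed above, with a constant zero history."""
    n_lag, n = int(tau / dt), int(T / dt)
    u = np.zeros(n + 1)
    for i in range(n):
        delayed = u[i - n_lag] if i >= n_lag else 0.0
        if feedback == "negative":
            drive = 1.0 if delayed < a else 0.0  # H(a - u(t - tau))
        else:
            drive = 1.0 if delayed > a else 0.0  # H(u(t - tau) - a)
        u[i + 1] = u[i] + dt * (-u[i] + drive)
    return u

# Delayed negative feedback produces sustained oscillations around the
# threshold; positive feedback instead settles onto one of two equilibria.
print(simulate("negative")[-5:], simulate("positive")[-5:])
```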

Relevance: 60.00%

Abstract:

There is increasing recognition that stochastic processes regulate highly predictable patterns of gene expression in developing organisms, but the implications of stochastic gene expression for understanding haploinsufficiency remain largely unexplored. We have used simulations of stochastic gene expression to illustrate that gene copy number and expression deactivation rates are important variables in achieving predictable outcomes. In gene expression systems with non-zero expression deactivation rates, diploid systems had a higher probability of uninterrupted gene expression than haploid systems and were more successful at maintaining gene product above a very low threshold. Systems with relatively rapid expression deactivation rates (unstable gene expression) had more predictable responses to a gradient of inducer than systems with slow or zero expression deactivation rates (stable gene expression), and diploid systems were more predictable than haploid, with or without dosage compensation. We suggest that null mutations of a single allele in a diploid organism could decrease the probability of gene expression and present the hypothesis that some haploinsufficiency syndromes might result from an increased susceptibility to stochastic delays of gene initiation or interruptions of gene expression.
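
The paper's simulations are not reproduced here, but a minimal Gillespie sketch of an on/off ("telegraph") gene expression model with one or two gene copies makes the haploid/diploid comparison concrete; every rate constant below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def fraction_above_threshold(copies, k_on=0.1, k_off=0.05, k_syn=1.0,
                             k_deg=0.1, threshold=2, T=1000.0):
    """Gillespie simulation of `copies` independent on/off gene copies
    feeding one product pool; returns the fraction of time the product
    count stays above `threshold` (all rates invented)."""
    state = np.zeros(copies, dtype=bool)  # promoter state per gene copy
    product, t, time_above = 0, 0.0, 0.0
    while t < T:
        rates = np.concatenate([
            np.where(state, k_off, k_on),            # promoter switching
            [k_syn * state.sum(), k_deg * product],  # synthesis, decay
        ])
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        if product > threshold:
            time_above += min(dt, T - t)
        t += dt
        r = rng.choice(len(rates), p=rates / total)
        if r < copies:
            state[r] = not state[r]   # toggle that copy's promoter
        elif r == copies:
            product += 1              # one synthesis event
        else:
            product -= 1              # one degradation event
    return time_above / T

# Diploid (two copies) spends more time above the low threshold than haploid.
print(fraction_above_threshold(copies=1), fraction_above_threshold(copies=2))
```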

Relevance: 60.00%

Abstract:

Sensory cells usually transmit information to afferent neurons via chemical synapses, in which the level of noise depends on the applied stimulus. Taking this dependence into account, we model a sensory system as an array of leaky integrate-and-fire (LIF) neurons with a common signal. We show that information transmission is enhanced by a nonzero level of noise. Moreover, we demonstrate a phenomenon similar to suprathreshold stochastic resonance with additive noise. We remark that many properties of information transmission found for the LIF neurons were predicted by us earlier with simple binary units [Phys. Rev. E 75, 021121 (2007)]. This confirmation of our predictions allows us to point out identical roots for the phenomena found in simple threshold systems and in the more complex LIF neurons.
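
As a rough illustration of the setup (a single LIF neuron rather than the paper's array, with invented parameters), the sketch below shows that a subthreshold signal is transmitted only when noise is present:

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_spike_count(noise_sd, T=200.0, dt=1e-3, tau=0.02,
                    v_th=1.0, amp=0.8, freq=2.0):
    """Leaky integrate-and-fire neuron driven by a subthreshold sinusoid
    plus white noise; returns the number of spikes (parameters invented)."""
    n = int(T / dt)
    drive = amp * np.sin(2 * np.pi * freq * dt * np.arange(n))
    v, spikes = 0.0, 0
    for i in range(n):
        v += dt / tau * (drive[i] - v) + noise_sd * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:
            spikes += 1
            v = 0.0  # reset to rest after a spike
    return spikes

# Zero noise: the 0.8-amplitude signal never reaches the threshold of 1,
# so nothing is transmitted; moderate noise lets threshold crossings
# (and hence information about the signal) through.
for sd in (0.0, 1.0, 3.0):
    print(sd, lif_spike_count(sd))
```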

Relevance: 40.00%

Abstract:

We present a new unifying framework for investigating throughput-WIP (work-in-process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: we show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then we can reformulate the problem of optimizing a linear objective of throughput-WIP performance as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. We analyze in this framework the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy with that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; (e) a unified treatment of the time-discounted and time-average cases.
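
As a toy instance of threshold policies tracing a diminishing-returns throughput-WIP frontier, the sketch below evaluates admission thresholds in an M/M/1 queue; this is an assumed stand-in, not the Chen and Yao intensity control model itself:

```python
import numpy as np

def threshold_performance(lam, mu, K):
    """Throughput and mean WIP of an M/M/1 queue that admits jobs only
    while fewer than K are present (a threshold policy), computed from
    the stationary distribution of the resulting M/M/1/K chain."""
    rho = lam / mu
    p = rho ** np.arange(K + 1)
    p /= p.sum()                      # stationary distribution
    throughput = lam * (1 - p[K])     # admitted (= served) rate
    wip = (np.arange(K + 1) * p).sum()
    return throughput, wip

# The performance pairs of successive thresholds trace a concave,
# diminishing-returns frontier: each extra unit of WIP buys less throughput.
for K in range(1, 8):
    tp, wip = threshold_performance(lam=0.8, mu=1.0, K=K)
    print(f"K={K}: throughput={tp:.3f}, WIP={wip:.3f}")
```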

Relevance: 40.00%

Abstract:

In recent decades, there has been increasing interest in systems comprised of several autonomous mobile robots, and as a result there has been substantial development in the field of Artificial Intelligence, especially in Robotics. Several studies in the literature focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, for a single robot to accomplish. In the context of MRS, one of the main challenges is the need to control, coordinate and synchronize the operation of multiple robots to perform a specific task, which requires the development of new strategies and methods that allow us to obtain the desired system behavior in a formal and concise way. This PhD thesis studies the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest in these systems is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and individually, select a particular task so that all tasks are optimally distributed. In general, to distribute the multi-tasks among a team of robots, the robots have to synchronize their actions and exchange information. Under this approach we can speak of multi-task selection instead of multi-task assignment: the agents or robots select the tasks rather than being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds; each robot performs this estimate locally, depending on the load or the number of pending tasks to be performed. In addition, the results of each approach are evaluated by introducing noise into the number of pending loads, in order to simulate the robot's error in estimating the real number of pending tasks. The main contribution of this thesis is the approach based on self-organization and the division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches and the generation of dynamic tasks are presented and discussed. The particular issues studied are the following.

Threshold models: experiments conducted to test the response threshold model, analyzing the system performance index for the problem of the distribution of heterogeneous multi-tasks in multi-robot systems; additive noise was introduced into the number of pending loads, and dynamic tasks were generated over time.

Learning automata methods: experiments testing the learning automata-based probabilistic algorithms, evaluating the system performance index with additive noise and with dynamic task generation for the same distribution problem.

Ant colony optimization: experiments testing the ant colony optimization-based deterministic algorithms for the distribution of heterogeneous multi-tasks in multi-robot systems; the system performance index is evaluated by introducing additive noise and dynamic task generation over time.
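
For concreteness, here is a minimal sketch of the kind of response threshold rule such theses build on (a Bonabeau-style engagement probability s^2 / (s^2 + theta^2)); the stimuli and thresholds below are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

def select_tasks(stimuli, thresholds):
    """Each robot tests tasks in random order and engages in task j with
    probability s_j**2 / (s_j**2 + theta_ij**2), the classic response
    threshold rule; -1 means the robot stays idle this round."""
    n_robots, n_tasks = thresholds.shape
    choice = np.full(n_robots, -1)
    for i in range(n_robots):
        for j in rng.permutation(n_tasks):
            s, theta = stimuli[j], thresholds[i, j]
            if rng.random() < s**2 / (s**2 + theta**2):
                choice[i] = j
                break
    return choice

stimuli = np.array([5.0, 1.0, 3.0])          # e.g. pending loads per task
thresholds = rng.uniform(0.5, 5.0, (6, 3))   # heterogeneous robot thresholds
print(select_tasks(stimuli, thresholds))     # task chosen by each of 6 robots
```

Heterogeneous thresholds yield the division of labor: robots with a low theta for a task respond to weak stimuli, while the rest engage only as pending load builds up.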

Relevance: 40.00%

Abstract:

This paper focuses on the general problem of coordinating multiple robots and, more specifically, addresses the self-selection of heterogeneous specialized tasks by autonomous robots. We focus on a strictly distributed, or decentralized, approach: we are particularly interested in solutions in which the robots themselves, autonomously and individually, are responsible for selecting a particular task so that all the existing tasks are optimally distributed and executed. To this end, we establish an experimental scenario for the corresponding multi-task distribution problem and propose a solution using two different approaches, Response Threshold Models and Learning Automata-based probabilistic algorithms. We evaluate the robustness of the algorithms by perturbing the number of pending loads, to simulate the robot's error in estimating the real number of pending tasks, and by generating loads dynamically over time. The paper ends with a critical discussion of the experimental results.
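
A minimal sketch of the second ingredient, a linear reward-inaction (L_R-I) learning automaton; the reward probabilities are invented and the paper's exact update scheme may differ:

```python
import numpy as np

rng = np.random.default_rng(7)

def lri_update(p, action, reward, a=0.1):
    """Linear reward-inaction update: on a rewarded action, shift
    probability mass toward it (p_i += a*(1 - p_i); p_j -= a*p_j for
    the others); on failure, leave the probabilities unchanged."""
    if reward:
        p = p + a * (np.eye(len(p))[action] - p)
    return p

p = np.full(3, 1 / 3)  # initial task-selection probabilities
for _ in range(200):
    task = rng.choice(3, p=p)
    reward = rng.random() < (0.9 if task == 0 else 0.3)  # task 0 pays off most
    p = lri_update(p, task, reward)
print(p)  # probability mass concentrates on the best-rewarded task
```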

Relevance: 40.00%

Abstract:

Acknowledgments This paper was developed within the scope of the IRTG 1740/TRP 2011/50151-0, funded by the DFG/FAPESP, and supported by the Government of the Russian Federation (Agreement No. 14.Z50.31.0033 with the Institute of Applied Physics RAS). The first author thanks Dr Roman Ovsyannikov for valuable discussions regarding estimation of the mistake probability.

Relevance: 30.00%

Abstract:

Ecological systems are vulnerable to irreversible change when key system properties are pushed over thresholds, resulting in the loss of resilience and the precipitation of a regime shift. Perhaps the most important of such properties in human-modified landscapes is the total amount of remnant native vegetation. In a seminal study, Andren proposed the existence of a fragmentation threshold in the total amount of remnant vegetation, below which landscape-scale connectivity is eroded and local species richness and abundance become dependent on patch size. Despite the fact that species patch-area effects have been a mainstay of conservation science, there has yet to be a robust empirical evaluation of this hypothesis. Here we present and test a new conceptual model describing the mechanisms and consequences of biodiversity change in fragmented landscapes, identifying the fragmentation threshold as a first step in a positive feedback mechanism that has the capacity to impair ecological resilience and drive a regime shift in biodiversity. The model considers that local extinction risk is defined by patch size and immigration rates by landscape vegetation cover, and that recovery from local species losses depends upon the landscape species pool. Using a unique dataset on the distribution of non-volant small mammals across replicate landscapes in the Atlantic forest of Brazil, we found strong evidence for our model predictions: patch-area effects are evident only at intermediate levels of total forest cover, where landscape diversity is still high and opportunities for enhancing biodiversity through local management are greatest. Furthermore, high levels of forest loss can push the native biota through an extinction filter, resulting in the abrupt, landscape-wide loss of forest-specialist taxa, ecological resilience and management effectiveness. The proposed model links hitherto distinct theoretical approaches within a single framework, providing a powerful tool for analysing the potential effectiveness of management interventions.
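
As a deliberately crude caricature of such a threshold (far simpler than the paper's conceptual model), a Levins metapopulation with habitat amount h loses all occupancy once cover drops below e/c; all parameters below are invented:

```python
import numpy as np

def equilibrium_occupancy(h, c=1.0, e=0.3):
    """Levins metapopulation with habitat amount h:
    dp/dt = c*p*(h - p) - e*p  =>  p* = max(0, h - e/c)."""
    return max(0.0, h - e / c)

# Occupancy declines smoothly with habitat loss, then collapses to zero
# below the extinction threshold h = e/c (= 0.3 here).
for h in np.linspace(1.0, 0.0, 11):
    occ = equilibrium_occupancy(h)
    print(f"cover={h:.1f}  occupancy={occ:.2f}  " + "#" * int(40 * occ))
```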

Relevance: 30.00%

Abstract:

This study investigated the energy system contributions of rowers in three different conditions: rowing on an ergometer without and with the slide, and rowing on the water. For this purpose, eight rowers performed 2,000-m race simulations in each of the situations defined above. The fractions of the aerobic (WAER), anaerobic alactic (WPCR) and anaerobic lactic (W[La-]) systems were calculated from the oxygen uptake, the fast component of excess post-exercise oxygen uptake and the change in net blood lactate, respectively. On the water, the metabolic work was significantly higher [851 (82) kJ] than on both the ergometer [674 (60) kJ] and the ergometer with slide [663 (65) kJ] (P <= 0.05). The time on the water [515 (11) s] was longer (P < 0.001) than on the ergometers with [398 (10) s] and without the slide [402 (15) s], resulting in no difference when relative energy expenditure was considered: water [99 (9) kJ min⁻¹], ergometer without the slide [99.6 (9) kJ min⁻¹] and ergometer with the slide [100.2 (9.6) kJ min⁻¹]. The respective contributions of the WAER, WPCR and W[La-] systems were: water = 87 (2), 7 (2) and 6 (2)%; ergometer = 84 (2), 7 (2) and 9 (2)%; and ergometer with the slide = 84 (2), 7 (2) and 9 (1)%. VO2, HR and lactate did not differ among conditions. These results seem to indicate that the ergometer braking system simulates the conditions of a bigger and faster boat rather than a single scull. A 2,500-m ergometer test should probably be used to properly simulate an on-water single-scull race.
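
The partition method referenced above is standard in exercise physiology; the sketch below recomputes the three fractions from hypothetical inputs using assumed literature constants (about 20.9 kJ per litre of O2 and about 3 ml O2 per kg per mmol/L of net lactate), not values taken from the paper:

```python
# Assumed literature constants, not the paper's values.
O2_ENERGY_KJ_PER_L = 20.9      # energy equivalent of oxygen
LACTATE_O2_ML_PER_KG_MM = 3.0  # O2 equivalent of net blood lactate

def energy_fractions(vo2_above_rest_l, epoc_fast_l, delta_lactate_mm, mass_kg):
    """Percentage contributions of the aerobic, anaerobic alactic and
    anaerobic lactic systems from the three measurements named above."""
    w_aer = vo2_above_rest_l * O2_ENERGY_KJ_PER_L   # exercise VO2
    w_pcr = epoc_fast_l * O2_ENERGY_KJ_PER_L        # EPOC fast component
    w_la = (delta_lactate_mm * LACTATE_O2_ML_PER_KG_MM * mass_kg / 1000
            * O2_ENERGY_KJ_PER_L)                   # net lactate accumulation
    total = w_aer + w_pcr + w_la
    return [round(100 * w / total, 1) for w in (w_aer, w_pcr, w_la)]

# Hypothetical inputs for one rower; yields roughly the 87/7/6% split.
print(energy_fractions(vo2_above_rest_l=35.0, epoc_fast_l=3.0,
                       delta_lactate_mm=10.0, mass_kg=80.0))
```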

Relevance: 30.00%

Abstract:

PURPOSE: Walking training is considered the first treatment option for patients with peripheral arterial disease and intermittent claudication (IC). Walking exercise has been prescribed for these patients by relative intensity of peak oxygen uptake (VO2peak), ranging from 40% to 70% of VO2peak, or by pain threshold (PT). However, the relationship between these methods and the anaerobic threshold (AT), which is considered one of the best metabolic markers for establishing training intensity, has not been analyzed. Thus, the aim of this study was to compare, in IC patients, the physiological responses at exercise intensities usually prescribed for training (%VO2peak or %PT) with those observed at AT. METHODS: Thirty-three IC patients performed a maximal graded cardiopulmonary treadmill test to assess exercise tolerance. During the test, heart rate (HR), VO2, and systolic blood pressure were measured, and responses were analyzed at 40% of VO2peak, 70% of VO2peak, AT, and PT. RESULTS: HR and VO2 at 40% and 70% of VO2peak were lower than those at AT (HR: -13 +/- 9% and -3 +/- 8%, P < .01, respectively; VO2: -52 +/- 12% and -13 +/- 15%, P < .01, respectively). Conversely, HR and VO2 at PT were slightly higher than those at AT (HR: +3 +/- 8%, P < .01; VO2: +6 +/- 15%, P = .04). None of the patients achieved the respiratory compensation point. CONCLUSION: Prescribing exercise for IC patients between 40% and 70% of VO2peak will induce a lower stimulus than that at AT, whereas prescribing exercise at PT will result in a stimulus above AT. Thus, prescribing exercise training for IC patients on the basis of PT will probably produce a greater metabolic stimulus, promoting better cardiovascular benefits.

Relevance: 30.00%

Abstract:

In this work we explore the noise characteristics of lithographically defined two-terminal devices containing self-assembled InAs/InP quantum dots. The experimental ensemble of InAs dots shows random telegraph noise (RTN) with tuneable relative amplitude (up to 150%) in well-defined ranges of temperature and source-drain applied voltage. Our numerical simulation indicates that the RTN signature correlates with a very small number of quantum dots acting as effective charge storage centres in the structure at a given applied voltage. The modulation of the relative amplitude can thus be associated with the altered electrostatic potential profile around such centres and the enhanced carrier scattering provided by a charged dot.
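
A two-level telegraph process is the textbook model behind RTN; the sketch below (the rates, current levels, and the "trap = charged dot" reading are all assumptions) generates such a trace:

```python
import numpy as np

rng = np.random.default_rng(3)

def telegraph_trace(rate_capture, rate_emit, T=1.0, dt=1e-5,
                    i_high=1.0, amplitude=0.6):
    """Two-level random telegraph noise: a single trap (standing in for
    a charged quantum dot) captures and emits a carrier at the given
    rates, switching the current between two levels."""
    n = int(T / dt)
    occupied = False
    current = np.empty(n)
    for k in range(n):
        rate = rate_emit if occupied else rate_capture
        if rng.random() < rate * dt:  # Poisson switching in one time step
            occupied = not occupied
        current[k] = i_high * (1 - amplitude) if occupied else i_high
    return current

# amplitude sets the relative RTN amplitude (delta I / I) of the trace.
trace = telegraph_trace(rate_capture=2e3, rate_emit=1e3)
print(trace[:10])
```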

Relevance: 30.00%

Abstract:

Purpose: The objective of this study was to evaluate the stress on the cortical bone around single-body dental implants supporting a mandibular complete fixed denture with a rigid (Neopronto System, Neodent) or semirigid (Barra Distal System, Neodent) splinting system. Methods and Materials: Stress levels on several system components were analyzed through finite element analysis, focusing on stress concentration in the cortical bone around single-body dental implants supporting mandibular complete fixed dentures with rigid or semirigid splinting, after simulated axial and oblique occlusal loading applied to the last cantilever element. Results: Semirigid implant splinting generated lower von Mises stress in the cortical bone under axial loading, whereas rigid implant splinting generated higher von Mises stress in the cortical bone under oblique loading. Conclusion: The use of a semirigid system for the rehabilitation of edentulous mandibles by means of an immediate implant-supported fixed complete denture is recommended, because it reduces stress concentration in the cortical bone; as a consequence, bone level is better preserved and implant survival is improved. Nevertheless, in both situations the integrity of the cortical bone was protected, because the maximum stress levels found were lower than those reported in the literature as harmful. The maximum stress limit for cortical bone (167 MPa) represents the threshold between the elastic and plastic states of the material. If an applied force does not exceed the elastic threshold, the object suffers no permanent deformation and keeps its structural integrity; if the force exceeds the plastic threshold, permanent deformation occurs. In cortical bone, such deformation marks the beginning of bone resorption and/or remodeling processes, which, according to our simulated loading, would not occur. (Implant Dent 2010;19:39-49)

Relevance: 30.00%

Abstract:

The risk of cardiac events in patients undergoing major noncardiac surgery depends on their clinical characteristics and the results of stress testing. The purpose of this study was to develop a composite approach to defining levels of risk and to examine whether different approaches to prophylaxis influenced this prediction of outcome. One hundred forty-five consecutive patients (aged 68 +/- 9 years, 79 men) with >1 clinical risk variable were studied with standard dobutamine-atropine stress echocardiography before major noncardiac surgery. Risk levels were stratified according to the presence of ischemia (new or worsening wall motion abnormality), ischemic threshold (heart rate at development of ischemia), and number of clinical risk variables. Patients were followed for perioperative events (during hospital admission) and for death or infarction over the subsequent 16 +/- 10 months. Ten perioperative events occurred in the 105 patients who proceeded to surgery (10%, 95% confidence interval [CI] 5% to 17%); 40 operations were cancelled because of cardiac or other risk. No ischemia was identified in 56 patients, 1 of whom (1.8%) had a perioperative infarction. Of the 49 patients with ischemia, 22 (45%) had 1 or 2 clinical risk factors; 2 (9%, 95% CI 1% to 29%) had events. Another 15 patients had a high ischemic threshold and 3 or 4 risk factors; 3 (20%, 95% CI 4% to 48%) had events. Twelve patients had a low ischemic threshold and 3 or 4 risk factors; 4 (33%, 95% CI 10% to 65%) had events. Preoperative myocardial revascularization was performed in only 3 patients, none of whom had events. Perioperative and long-term events occurred despite the use of beta blockers; 7 of 41 beta-blocker-treated patients had a perioperative event (17%, 95% CI 7% to 32%), and these treated patients were at higher anticipated risk than untreated patients (20 +/- 24% vs 10 +/- 19%, p = 0.02). The total event rate over late follow-up was 13% and was predicted by the dobutamine-atropine stress echo results and the heart rate response. (C) 2002 by Excerpta Medica, Inc.
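
Purely as a restatement of the strata reported above (not a validated clinical tool), the reported event rates can be folded into a small lookup:

```python
def risk_tier(ischemia, low_ischemic_threshold, risk_factors):
    """Composite stratification sketched from the strata above: event
    rates rose from ~2% without inducible ischemia to ~33% with ischemia
    at a low heart-rate threshold plus 3-4 clinical risk factors."""
    if not ischemia:
        return "low (~2% perioperative events)"
    if risk_factors <= 2:
        return "intermediate (~9%)"
    return "high (~33%)" if low_ischemic_threshold else "elevated (~20%)"

print(risk_tier(ischemia=True, low_ischemic_threshold=True, risk_factors=3))
```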

Relevance: 30.00%

Abstract:

In many European countries, image quality for digital x-ray systems used in screening mammography is currently specified using a threshold-detail detectability method. This is a two-part study that proposes an alternative method based on calculated detectability for a model observer; the first part of the work presents a characterization of the systems. Eleven digital mammography systems were included in the study: four computed radiography (CR) systems and a group of seven digital radiography (DR) detectors, composed of three amorphous selenium-based detectors, three caesium iodide scintillator systems and a silicon wafer-based photon-counting system. The technical parameters assessed included the system response curve, detector uniformity error, pre-sampling modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE). The approximate quantum-noise-limited exposure range was examined using a separation of noise sources based upon standard deviation. Noise separation showed that electronic noise was the dominant noise at low detector air kerma for three systems; the remaining systems showed quantum-noise-limited behaviour between 12.5 and 380 µGy. Greater variation in detector MTF was found for the DR group than for the CR systems; MTF at 5 mm⁻¹ varied from 0.08 to 0.23 for the CR detectors against a range of 0.16-0.64 for the DR units. The needle CR detector had a higher MTF, lower NNPS and higher DQE at 5 mm⁻¹ than the powder CR phosphors. DQE at 5 mm⁻¹ ranged from 0.02 to 0.20 for the CR systems, while DQE at 5 mm⁻¹ for the DR group ranged from 0.04 to 0.41, indicating higher DQE for the DR detectors and the needle CR system than for the powder CR phosphor systems. The technical evaluation section of the study showed that the digital mammography systems were well set up and exhibited typical performance for the detector technology employed in the respective systems.
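
For reference, the frequency-dependent DQE named above is conventionally computed as DQE(f) = MTF(f)^2 / (q * NNPS(f)), with q the photon fluence for the beam quality and air kerma used; the sketch below uses made-up curves and an assumed fluence constant, not the paper's measurements:

```python
import numpy as np

# Hypothetical stand-ins for measured quantities.
f = np.linspace(0.25, 5.0, 20)       # spatial frequency, mm^-1
mtf = np.exp(-0.35 * f)              # hypothetical presampling MTF
nnps = 2e-6 * (1 + 0.1 / f)          # hypothetical NNPS, mm^2
q_per_uGy = 5500.0                   # photons / (mm^2 * uGy), assumed
kerma_uGy = 100.0                    # detector air kerma of the noise image

# DQE(f) = MTF(f)^2 / (q * NNPS(f)); dimensionless since q*NNPS cancels units.
dqe = mtf**2 / (nnps * q_per_uGy * kerma_uGy)
print(f"DQE at 5 mm^-1: {dqe[-1]:.2f}")
```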

Relevance: 30.00%

Abstract:

Secondary accident statistics can be useful for studying the impact of traffic incident management strategies. An easy-to-implement methodology is presented for classifying secondary accidents using data fusion of a police accident database with intranet incident reports. A current method for classifying secondary accidents uses a static threshold that represents the spatial and temporal region of influence of the primary accident, such as two miles and one hour. An accident is considered secondary if it occurs upstream from the primary accident and is within the duration and queue of the primary accident. However, using the static threshold may result in both false positives and negatives because accident queues are constantly varying. The methodology presented in this report seeks to improve upon this existing method by making the threshold dynamic. An incident progression curve is used to mark the end of the queue throughout the entire incident. Four steps in the development of incident progression curves are described. Step one is the processing of intranet incident reports. Step two is the filling in of incomplete incident reports. Step three is the nonlinear regression of incident progression curves. Step four is the merging of individual incident progression curves into one master curve. To illustrate this methodology, 5,514 accidents from Missouri freeways were analyzed. The results show that secondary accidents identified by dynamic versus static thresholds can differ by more than 30%.
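
A minimal sketch of the dynamic-threshold classification described above; the progression curve and all numbers are stand-ins, not fitted to the Missouri data:

```python
from datetime import datetime, timedelta

def queue_end_curve(minutes):
    """Hypothetical incident progression curve: the queue grows for 45
    minutes, peaks at 3 miles, and clears by 90 minutes."""
    if minutes <= 45:
        return 3.0 * minutes / 45
    return max(0.0, 3.0 * (90 - minutes) / 45)

def is_secondary(primary_start, accident_time, upstream_miles,
                 curve=queue_end_curve):
    """An accident is secondary if it occurs while the primary queue is
    still active and within the queue length the progression curve
    predicts at that moment (the dynamic threshold)."""
    elapsed = (accident_time - primary_start).total_seconds() / 60
    if elapsed < 0:
        return False
    queue = curve(elapsed)
    return queue > 0 and 0 <= upstream_miles <= queue

t0 = datetime(2024, 5, 1, 8, 0)
print(is_secondary(t0, t0 + timedelta(minutes=30), upstream_miles=1.5))  # True
print(is_secondary(t0, t0 + timedelta(minutes=30), upstream_miles=2.5))  # False: beyond the queue
```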