55 results for time delay in teleoperation


Relevance: 100.00%

Abstract:

BACKGROUND: Constipation is a significant side effect of opioid therapy. We have previously demonstrated that naloxone-3-glucuronide (NX3G) antagonizes the motility-lowering effect of morphine in the rat colon. AIM: To find out whether oral NX3G is able to reduce the morphine-induced delay in colonic transit time (CTT) without being absorbed and without influencing the analgesic effect. METHODS: Fifteen male volunteers were included. Pharmacokinetics: after oral administration of 0.16 mg/kg NX3G, blood samples were collected over a 6-h period. Pharmacodynamics: NX3G or placebo was given at the start time and every 4 h thereafter. Morphine (0.05 mg/kg) or placebo was injected s.c. 2 h after the start and every 6 h thereafter for 24 h. CTT was measured over a 48-h period by scintigraphy, and pressure pain threshold tests were performed. RESULTS: Neither NX3G nor naloxone was detected in the venous blood. The slowest transit time was observed during the morphine phase, which differed significantly from morphine with NX3G and from placebo. Pain perception was not significantly influenced by NX3G. CONCLUSIONS: Orally administered NX3G is able to reverse the morphine-induced delay of CTT in humans without being detectable in peripheral blood samples. NX3G may therefore improve symptoms of constipation in patients using opioid medication without affecting opioid analgesic effects.

Relevance: 100.00%

Abstract:

OBJECTIVE: Faster time from onset to recanalization (OTR) in acute ischemic stroke using endovascular therapy (ET) has been associated with better outcome. However, previous studies were based on less-effective first-generation devices and analyzed only dichotomized disability outcomes, which may underestimate the full effect of treatment. METHODS: In the combined databases of the SWIFT and STAR trials, we identified patients treated with the Solitaire stent retriever who achieved substantial reperfusion (Thrombolysis in Cerebral Infarction [TICI] 2b-3). Ordinal numbers needed to treat were derived by populating joint outcome tables. RESULTS: Among 202 patients treated with ET with TICI 2b-3 reperfusion, mean age was 68 (±13), 62% were female, and median National Institutes of Health Stroke Scale (NIHSS) score was 17 (interquartile range [IQR]: 14-20). Day 90 modified Rankin Scale (mRS) outcomes for OTR intervals ranging from 180 to 480 minutes showed substantial time-related reductions in disability across the entire outcome range. Shorter OTR was associated with improved mean 90-day mRS (1.4 vs. 2.4 vs. 3.3 for OTR groups of 124-240 vs. 241-360 vs. 361-660 minutes; p < 0.001). The number of patients identified as benefiting from therapy with shorter OTR was 3-fold (range, 1.5-4.7) higher on ordinal than on dichotomized analysis. For every 15-minute acceleration of OTR, 34 per 1,000 treated patients had an improved disability outcome. INTERPRETATION: Analysis of disability over the entire outcome range demonstrates a marked effect of shorter time to reperfusion on improved clinical outcome, substantially larger than binary metrics suggest. For every 5-minute delay in endovascular reperfusion, 1 of 100 patients has a worse disability outcome. Ann Neurol 2015;78:584-593.
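The two headline rates above are mutually consistent under simple linear scaling, which a few lines of arithmetic confirm (a minimal check; the linear per-minute scaling is our reading for illustration, not a model reported in the abstract):

    # Consistency check of the reported effect sizes, assuming the benefit
    # scales roughly linearly with time (illustrative assumption).
    improved_per_1000_per_15min = 34                 # reported per 15-minute acceleration
    per_minute = improved_per_1000_per_15min / 15    # ~2.27 per 1,000 per minute
    per_100_per_5min = per_minute * 5 / 10           # per-1,000 over 5 minutes -> per-100
    print(f"{per_100_per_5min:.1f} per 100 per 5-minute delay")  # ~1.1, matching "1 of 100"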

Relevance: 100.00%

Abstract:

Early treatment in sepsis may improve outcome. The aim of this study was to evaluate how the delay in starting resuscitation influences the severity of sepsis and the treatment needed to achieve hemodynamic stability.

Relevance: 100.00%

Abstract:

Every day, a substantial proportion of the general population experiences the distressing and frightening signs of an emerging psychiatric illness. The consequences can be enormous, because severe psychiatric disorders typically cause loss of the ability to work and often mean a long-term burden for both patients and their families. Even though most developed countries have an exceptionally high density of general practitioners and psychiatrists in private practice, getting a mental health appointment and seeing a doctor is often very difficult for patients with acute psychiatric symptoms. This study aimed to quantify the time delay involved in seeking medical attention when psychiatric disorders begin to develop.

Relevance: 100.00%

Abstract:

Neural dynamic processes correlated over several time scales are found in vivo, in stimulus-evoked as well as spontaneous activity, and are thought to affect the way sensory stimulation is processed. Despite their potential computational consequences, a systematic description of the presence of multiple time scales in single cortical neurons is lacking. In this study, we injected fast-spiking and pyramidal (PYR) neurons in vitro with long-lasting episodes of step-like and noisy, in-vivo-like current. Several processes shaped the time course of the instantaneous spike frequency, which could be reduced to a small number (1-4) of phenomenological mechanisms, either reducing (adapting) or increasing (facilitating) the neuron's firing rate over time. The different adaptation/facilitation processes cover a wide range of time scales, from initial adaptation (<10 ms, PYR neurons only) to fast adaptation (<300 ms), early facilitation (0.5-1 s, PYR only) and slow (or late) adaptation (order of seconds). These processes are characterized by broad distributions of their magnitudes and time constants across cells, showing that multiple time scales are at play in cortical neurons, even in response to stationary stimuli and in the presence of input fluctuations. These processes might form part of a cascade of mechanisms responsible for the power-law behavior of adaptation observed in several preparations, and may have far-reaching computational consequences that have recently been described.
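To make the superposition of time scales concrete, here is a minimal sketch (our illustration in Python, not the authors' fitted model) in which an instantaneous firing rate is shaped by several exponential adaptation/facilitation transients whose amplitudes and time constants loosely match the ranges quoted above:

    import numpy as np

    # Firing rate after a step of input: each process is a transient that
    # decays toward the steady-state rate. Positive amplitudes adapt the
    # rate down over time; negative amplitudes facilitate it up.
    t = np.linspace(0.0, 10.0, 10_000)   # seconds after step onset
    r0 = 40.0                            # steady-state rate in Hz (assumed)
    processes = [
        (+15.0, 0.005),  # initial adaptation, <10 ms (PYR only)
        (+10.0, 0.2),    # fast adaptation, <300 ms
        (-5.0,  0.8),    # early facilitation, 0.5-1 s (PYR only)
        (+8.0,  4.0),    # slow (late) adaptation, order of seconds
    ]
    rate = r0 + sum(a * np.exp(-t / tau) for a, tau in processes)
    print(f"rate at 10 ms: {np.interp(0.01, t, rate):.1f} Hz, "
          f"at 5 s: {np.interp(5.0, t, rate):.1f} Hz")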

Relevance: 100.00%

Abstract:

To test the hypothesis that a delay in the diagnosis of paediatric brain tumours results in decreased survival probability, we compared the prediagnostic period of 315 brain tumour patients (median age 6.7 years; range, 0 to 16 years) with progression-free and overall survival. The median prediagnostic symptomatic interval was 60 days (range, 0 to 3,480 days), with a median parental delay of 14 days (range, 0 to 1,835 days) and a median doctor's delay of 14 days (range, 0 to 3,480 days). The prediagnostic symptomatic interval correlated significantly with patient age, tumour histology, tumour location and year of diagnosis, but not with gender. We then grouped the patients according to histology (low-grade glioma [n=77], medulloblastoma [n=57], high-grade glioma [n=40], craniopharyngioma [n=27], ependymoma [n=20] and germ cell tumours [n=18]). Contrary to common belief, neither a long prediagnostic symptomatic interval nor a long doctor's delay resulted in decreased survival probability in any of these groups. The effect of tumour biology on survival appears to be dominant, overwhelming any possible opposing effect of diagnostic delay.

Relevance: 100.00%

Abstract:

1. The evolution of flowering strategies (when and at what size to flower) in monocarpic perennials is determined by balancing current reproduction against expected future reproduction, both of which are largely determined by size-specific patterns of growth and survival. However, because of the difficulty of following long-lived individuals throughout their lives, this theory has largely been tested using short-lived species (<5 years). 2. Here, we tested this theory using the long-lived monocarpic perennial Campanula thyrsoides, which can live up to 16 years. We used a novel approach that combined permanent-plot and herbchronology data from a 3-year field study to parameterize and validate integral projection models (IPMs). 3. As in other monocarpic species, the rosette leaves of C. thyrsoides wither over winter, so size cannot be measured in the year of flowering. We therefore extended the existing IPM framework to incorporate an additional time delay that arises because flowering demography must be predicted from rosette size in the year before flowering. 4. We found that all main demographic functions (growth, survival probability, flowering probability and fecundity) were strongly size-dependent, and there was a pronounced threshold size for flowering. There was good agreement between the distribution of flowering ages predicted by the IPMs and that estimated in the field. There was also mostly good agreement between the IPM predictions and direct quantitative field measurements of the demographic parameters λ, R₀ and T. We therefore conclude that the model captures the main demographic features of the field populations. 5. Elasticity analysis indicated that changes in the survival and growth function had the largest effect (c. 80%) on λ, considerably larger than in short-lived monocarps. We found only weak selection pressure operating on the observed flowering strategy, which was close to the predicted evolutionarily stable strategy. 6. Synthesis. The extended IPM accurately described the demography of a long-lived monocarpic perennial using data collected over a relatively short period. We show that the evolution of flowering strategies in short- and long-lived monocarps seems to follow the same general rules, but with a longevity-related emphasis on survival over fecundity.
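For readers unfamiliar with IPMs, the standard recipe is to discretize the kernel on a size mesh and read lambda off as the dominant eigenvalue of the resulting matrix. Below is a minimal sketch of that recipe for a monocarp (plants that flower die); all vital-rate functions are hypothetical placeholders rather than the fitted functions of this study, and the paper's one-year time delay (predicting flowering from the previous year's rosette size) is omitted for brevity:

    import numpy as np

    # Midpoint-rule discretization of an integral projection model:
    # K(z', z) = s(z) * (1 - pf(z)) * g(z', z)  +  s(z) * pf(z) * f(z) * c(z')
    # where s = survival, pf = flowering probability, g = growth kernel,
    # f = fecundity and c = recruit size distribution. Placeholders only.
    n, z_min, z_max = 100, 0.0, 10.0
    h = (z_max - z_min) / n
    z = z_min + h * (np.arange(n) + 0.5)   # midpoints of the size mesh

    def survival(z):   return 1.0 / (1.0 + np.exp(-(z - 3.0)))
    def p_flower(z):   return 1.0 / (1.0 + np.exp(-(z - 7.0)))   # threshold-like
    def fecundity(z):  return 50.0 * np.exp(0.1 * z)             # seeds per plant
    def growth(z1, z):   # density of next-year size z1 given current size z
        return np.exp(-0.5 * (z1 - (0.6 * z + 2.0)) ** 2) / np.sqrt(2 * np.pi)
    def recruits(z1):    # size distribution of new recruits
        return np.exp(-0.5 * ((z1 - 1.5) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))

    Z1, Z = np.meshgrid(z, z, indexing="ij")
    P = h * survival(Z) * (1.0 - p_flower(Z)) * growth(Z1, Z)    # monocarpy: flowerers die
    F = h * survival(Z) * p_flower(Z) * fecundity(Z) * recruits(Z1)
    lam = np.max(np.abs(np.linalg.eigvals(P + F)))
    print(f"lambda ~ {lam:.3f}")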

Relevance: 100.00%

Abstract:

BACKGROUND: Bleeding is a frequent complication during surgery. The intraoperative administration of blood products, including packed red blood cells, platelets and fresh frozen plasma (FFP), is often life-saving. Complications of blood transfusions contribute considerably to perioperative costs, and blood product resources are limited. Consequently, strategies to optimize the decision to transfuse are needed. Bleeding during surgery is a dynamic process and may result in major blood loss and coagulopathy due to dilution and consumption. The indication for transfusion should be based on reliable coagulation studies. While hemoglobin levels and platelet counts are available within 15 minutes, standard coagulation studies require one hour, so the decision to administer FFP often has to be made in the absence of any data. Point-of-care testing of prothrombin time ensures that one major parameter of coagulation is available in the operating theatre within minutes. It is fast, easy to perform, inexpensive, and may enable physicians to rationally determine the need for FFP. METHODS/DESIGN: The objective of the POC-OP trial is to determine the effectiveness of point-of-care prothrombin time testing in reducing the administration of FFP. It is a patient- and assessor-blinded, single-centre, randomized controlled parallel-group trial in 220 patients aged between 18 and 90 years undergoing major surgery (any type, except cardiac surgery and liver transplantation) with an estimated intraoperative blood loss exceeding 20% of the calculated total blood volume or a requirement for FFP according to the judgment of the physicians in charge. Patients are randomized to usual care plus point-of-care prothrombin time testing or to usual care alone. The primary outcome is the relative risk of receiving any FFP perioperatively. The inclusion of 110 patients per group will yield more than 80% power to detect a clinically relevant relative risk of 0.60 for receiving FFP in the experimental group as compared with the control group. DISCUSSION: Point-of-care prothrombin time testing in the operating theatre may considerably reduce the administration of FFP, which in turn may decrease the costs and complications usually associated with the administration of blood products. TRIAL REGISTRATION: NCT00656396.
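As an illustration of where a "110 patients per group" figure can come from, here is a minimal power calculation for comparing two proportions via the normal approximation; the 60% control-group FFP rate is purely an assumption for the sketch, since the abstract reports no baseline rate:

    from math import sqrt
    from scipy.stats import norm

    p_control = 0.60            # assumed control-group FFP rate (illustrative only)
    p_treat = 0.60 * p_control  # relative risk of 0.60 -> 36%
    n = 110                     # patients per group
    alpha = 0.05

    p_bar = (p_control + p_treat) / 2
    se0 = sqrt(2 * p_bar * (1 - p_bar) / n)                                    # SE under H0
    se1 = sqrt(p_control * (1 - p_control) / n + p_treat * (1 - p_treat) / n)  # SE under H1
    z_crit = norm.ppf(1 - alpha / 2)
    power = norm.cdf((abs(p_control - p_treat) - z_crit * se0) / se1)
    print(f"power ~ {power:.2f}")   # > 0.80 under these assumptions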

Relevance: 100.00%

Abstract:

A publication entitled “A default mode of brain function” initiated a new way of looking at functional imaging data. In this PET study the authors discussed the often-observed consistent decrease of brain activation in a variety of tasks as compared with the baseline. They suggested that this deactivation is due to a task-induced suspension of a default mode of brain function that is active during rest, i.e., that there exists intrinsic, well-organized brain activity during rest in several distinct brain regions. This suggestion led to a large number of imaging studies on the resting state of the brain and to the conclusion that the study of this intrinsic activity is crucial for understanding how the brain works. The fact that the brain is active during rest has been well known from a variety of EEG recordings for a very long time. Different states of the brain in the sleep–wake continuum are characterized by typical patterns of spontaneous oscillations in different frequency ranges and in different brain regions. Best studied are the evolving states during the different sleep stages, but characteristic EEG oscillation patterns have also been well described during awake periods (see Chapter 1 for details). A highly recommended comprehensive review on the brain's default state defined by oscillatory electrical brain activities is provided in the recent book by György Buzsáki, showing how these states can be measured by electrophysiological procedures at the global brain level as well as at the local cellular level.

Relevance: 100.00%

Abstract:

Back-in-time debuggers are extremely useful tools for identifying the causes of bugs, as they allow us to inspect the past states of objects no longer present in the current execution stack. Unfortunately, the "omniscient" approaches that try to remember all previous states are impractical because they either consume too much space or are far too slow. Several approaches rely on heuristics to limit these penalties, but they ultimately end up throwing away too much relevant information. In this paper we propose a practical approach to back-in-time debugging that attempts to keep track of only the relevant past data. In contrast to other approaches, we keep object history information together with the regular objects in the application memory. Although seemingly counter-intuitive, this approach has the effect that past data that is not reachable from current application objects (and hence no longer relevant) is automatically garbage collected. In this paper we describe the technical details of our approach and present benchmarks demonstrating that memory consumption stays within practical bounds. Furthermore, since our approach works at the virtual machine level, the performance penalty is significantly lower than with other approaches.
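The co-location idea can be sketched in a few lines (ours, in Python for brevity; the paper's implementation works inside the virtual machine): if an object's history lives on the object itself, the history becomes unreachable, and therefore collectible, at exactly the moment the object does:

    import gc
    import weakref

    # Past states are stored on the object itself, so no separate trace
    # store outlives the objects it describes: when the object becomes
    # garbage, its recorded history is collected along with it.
    class Recorded:
        def __init__(self, value):
            self._history = []      # past values live with the object
            self.value = value

        def __setattr__(self, name, val):
            if name == "value" and "value" in self.__dict__:
                self._history.append(self.__dict__["value"])
            super().__setattr__(name, val)

    obj = Recorded(1)
    obj.value = 2
    obj.value = 3
    print(obj._history)             # [1, 2]

    ref = weakref.ref(obj)
    del obj
    gc.collect()
    print(ref())                    # None: the object and its history are gone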

Relevance: 100.00%

Abstract:

Conventional debugging tools present developers with means to explore the run-time context in which an error has occurred. In many cases this is enough to help the developer discover the faulty source code and correct it. However, errors rather often occur due to code that executed in the past, leaving certain objects in an inconsistent state. The actual run-time error only occurs when these inconsistent objects are used later in the program. So-called back-in-time debuggers help developers step back through earlier states of the program and explore execution contexts not available to conventional debuggers. Nevertheless, even back-in-time debuggers do not help answer the question, "Where did this object come from?" The Object-Flow Virtual Machine, which we have proposed in previous work, tracks the flow of objects to answer precisely such questions, but this VM does not provide dedicated debugging support to explore faulty programs. In this paper we present a novel debugger, called Compass, to navigate between conventional run-time stack-oriented control-flow views and object flows. Compass enables a developer to navigate effectively from an object contributing to an error back in time through all the code that has touched the object. We present the design and implementation of Compass, and we demonstrate how flow-centric, back-in-time debugging can be used to effectively locate the source of hard-to-find bugs.
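Flow-centric navigation can be illustrated with a small sketch (ours, in Python; Compass itself builds on the Object-Flow VM at the virtual machine level): each value records the call site that produced it plus a link to its predecessor, giving the debugger a chain to walk from an erroneous object back through everything that touched it:

    import inspect

    # Each Tracked value notes the call site that created it and links to
    # the value it was derived from, forming a walkable provenance chain.
    class Tracked:
        def __init__(self, payload, origin=None):
            caller = inspect.stack()[1]
            self.payload = payload
            self.origin = origin                      # predecessor or None
            self.site = f"{caller.function}:{caller.lineno}"

        def flow(self):
            """Yield the sites this value flowed through, newest first."""
            node = self
            while node is not None:
                yield node.site
                node = node.origin

    def parse(raw):
        return Tracked(int(raw.payload), origin=raw)

    def scale(x):
        return Tracked(x.payload * 10, origin=x)

    value = scale(parse(Tracked("42")))
    print(value.payload)            # 420
    print(list(value.flow()))       # e.g. ['scale:..', 'parse:..', '<module>:..']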

Relevance: 100.00%

Abstract:

Introduction: Nocturnal dreams can be considered a kind of simulation of the real world on a higher cognitive level (Erlacher & Schredl, 2008). Within lucid dreams, the dreamer is aware of the dream state and thus able to control the ongoing dream content. Previous studies have demonstrated that it is possible to practice motor tasks during lucid dreams and that doing so improves performance while awake (Erlacher & Schredl, 2010). Even though lucid dream practice might be a promising kind of cognitive rehearsal in sports, little is known about the characteristics of actions in lucid dreams. The purpose of the present study was to explore the relationship between time in dreams and wakefulness, because in an earlier study (Erlacher & Schredl, 2004) we found that performing squats took lucid dreamers 44.5% more time than in the waking state, while for counting the same participants showed no differences between dreaming and wakefulness. To find out whether task modality, task length or task complexity requires longer times in lucid dreams than in wakefulness, three experiments were conducted. Methods: In the first experiment, five proficient lucid dreamers spent two to three non-consecutive nights in the sleep laboratory with polysomnographic recording to control for REM sleep and determine eye signals. Participants counted from 1-10, 1-20 and 1-30 in wakefulness and in their lucid dreams. While dreaming, they marked the onset of lucidity as well as the beginning and end of the counting task with a left-right-left-right eye movement and reported their dreams after being awakened. The same procedure was used for the second experiment with seven lucid dreamers, except that they had to walk 10, 20 or 30 steps. In the third experiment, nine participants performed an exercise involving gymnastics elements such as various jumps and a roll. To control for task length, the gymnastic exercise in the waking state lasted about the same time as walking 10 steps. Results: As a general result we found, as in the earlier study, that performing a task in a lucid dream requires more time than in wakefulness. This tendency was found for all three tasks. However, there was no difference for task modality (counting vs. motor task). The relative time for the different task lengths likewise showed no difference. Finally, the more complex motor task (the gymnastic routine) did not require more time in lucid dreams than the simple motor task. Discussion/Conclusion: The results show that there is a robust effect of time in lucid dreams compared to wakefulness. The three experiments could not trace these differences back to task modality, task length or task complexity. Therefore, further possible candidates need to be investigated, e.g. experience in lucid dreaming or psychological variables. References: Erlacher, D. & Schredl, M. (2010). Practicing a motor task in a lucid dream enhances subsequent performance: A pilot study. The Sport Psychologist, 24(2), 157-167. Erlacher, D. & Schredl, M. (2008). Do REM (lucid) dreamed and executed actions share the same neural substrate? International Journal of Dream Research, 1(1), 7-13. Erlacher, D. & Schredl, M. (2004). Time required for motor activity in lucid dreams. Perceptual and Motor Skills, 99, 1239-1242.

Relevance: 100.00%

Abstract:

If change over time is compared across several groups, it is important to take baseline values into account so that the comparison is carried out under the same preconditions. Because the observed baseline measurements are distorted by measurement error, it may not be sufficient to include them as a covariate. By fitting a longitudinal mixed-effects model to all data, including the baseline observations, and subsequently calculating the expected change conditional on the underlying baseline value, a solution to this problem has recently been provided, so that groups with the same baseline characteristics can be compared. In this article, we present an extended approach in which a broader set of models can be used. Specifically, it is possible to include any desired set of interactions between the time variable and the other covariates, and time-dependent covariates can also be included. Additionally, we extend the method to adjust for baseline measurement error in other time-varying covariates. We apply the methodology to data from the Swiss HIV Cohort Study to address the question of whether co-infection with HIV-1 and hepatitis C virus leads to a slower increase of CD4 lymphocyte counts over time after the start of antiretroviral therapy.
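The general strategy (fit a longitudinal mixed-effects model to all observations, baseline included, then compare model-based trajectories between groups) can be sketched as follows; this is a minimal sketch on synthetic data, and the column names and statsmodels' MixedLM are stand-ins rather than the authors' software or the actual cohort data:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in: CD4 counts after ART start for mono- vs co-infected
    # patients, with a latent (error-free) baseline and per-patient slopes.
    rng = np.random.default_rng(0)
    n = 80
    times = np.array([0.0, 0.5, 1.0, 1.5, 2.0])          # years since ART start
    coinf_p = rng.integers(0, 2, n)
    base_p = rng.normal(350.0, 80.0, n)
    slope_p = 120.0 - 40.0 * coinf_p + rng.normal(0.0, 15.0, n)

    df = pd.DataFrame({
        "patient": np.repeat(np.arange(n), times.size),
        "years": np.tile(times, n),
        "coinfected": np.repeat(coinf_p, times.size),
    })
    df["cd4"] = (np.repeat(base_p, times.size)
                 + np.repeat(slope_p, times.size) * df["years"]
                 + rng.normal(0.0, 30.0, len(df)))

    # Random intercept and slope per patient; the years:coinfected term
    # estimates the difference in CD4 slope between the two groups.
    model = smf.mixedlm("cd4 ~ years * coinfected", df,
                        groups=df["patient"], re_formula="~years")
    fit = model.fit()
    print(fit.params[["years", "years:coinfected"]])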

Relevance: 100.00%

Abstract:

Adding to the ongoing debate regarding vegetation recolonisation in Europe (more particularly its timing) and climate change since the Lateglacial, this study investigates a long sediment core (LL081) from Lake Ledro (652 m a.s.l., southern Alps, Italy). Environmental changes were reconstructed using multiproxy analyses (pollen-based vegetation and climate reconstructions, lake levels, magnetic susceptibility and X-ray fluorescence (XRF) measurements) recording climate and land-use changes during the Lateglacial and early-middle Holocene. The well-dated, high-resolution pollen record of Lake Ledro is compared with vegetation records from the southern and northern Alps to trace the history of tree species distribution. An altitude-dependent progressive time delay in the first continuous occurrence of Abies (fir) and in the development of Larix (larch) has been observed since the Lateglacial in the southern Alps. This pattern suggests that the mid-altitude Lake Ledro area was not a refuge and that trees originated from lowlands or hilly areas (e.g. the Euganean Hills) in northern Italy. Preboreal oscillations (ca. 11 000 cal BP), Boreal oscillations (ca. 10 200 and 9300 cal BP) and the 8.2 kyr cold event suggest centennial-scale climate forcing in the studied area. Picea (spruce) expansion occurred preferentially around 10 200 and 8200 cal BP in the south-eastern Alps and therefore reflects the long-lasting cumulative effects of the successive Boreal oscillations and the 8.2 kyr cold event. The extension of Abies is contemporaneous with the 8.2 kyr event, but its development in the southern Alps benefited from the wettest interval, 8200-7300 cal BP, evidenced in high lake levels, flood activity and pollen-based climate reconstructions. Since ca. 7500 cal BP, a weak pollen-based signal of anthropogenic activity suggests limited human impact. The period between ca. 5700 and ca. 4100 cal BP is considered a transition to colder and wetter conditions (particularly during summers) that favoured the development of a dense beech (Fagus) forest, which in turn caused a distinctive yew (Taxus) decline. We conclude that climate was the dominant factor controlling vegetation changes and erosion processes during the early and middle Holocene (up to ca. 4100 cal BP).