963 results for Time Limit
Abstract:
BACKGROUND The severity of aortic regurgitation can be estimated using the pressure half-time (PHT) of the aortic regurgitation flow velocity, but the correlation between regurgitant fraction and PHT is weak. AIM To test the hypothesis that the association between PHT and regurgitant fraction is substantially influenced by left ventricular relaxation. METHODS In 63 patients with aortic regurgitation, subdivided into a group without (n = 22) and a group with (n = 41) left ventricular hypertrophy, regurgitant fraction was calculated using the difference between right and left ventricular cardiac outputs. Left ventricular relaxation was assessed using the early to late diastolic Doppler tissue velocity ratio of the mitral annulus (E/A(DTI)), the E/A ratio of mitral inflow (E/A(M)), and the E deceleration time (E-DT). Left ventricular hypertrophy was assessed using the M-mode derived left ventricular mass index. RESULTS The overall correlation between regurgitant fraction and PHT was weak (r = 0.36, p < 0.005). In patients without left ventricular hypertrophy, there was a significant correlation between regurgitant fraction and PHT (r = 0.62, p < 0.005), but not in patients with left ventricular hypertrophy. In patients with a left ventricular relaxation abnormality (defined as E/A(DTI) < 1, E/A(M) below the age-corrected lower limit, or E-DT ≥ 220 ms), no associations between regurgitant fraction and PHT were found, whereas in patients without left ventricular relaxation abnormalities the regurgitant fraction to PHT relations were significant (normal E/A(M): r = 0.57, p = 0.02; E-DT < 220 ms: r = 0.50, p < 0.001; E/A(DTI) ≥ 1: r = 0.57, p = 0.02). CONCLUSIONS Only normal left ventricular relaxation allows a significant decay of PHT with increasing aortic regurgitation severity. In abnormal relaxation, which is usually present in left ventricular hypertrophy, wide variation in prolonged backward left ventricular filling may cause dissociation between the regurgitant fraction and PHT. Thus the PHT method should only be used in the absence of left ventricular relaxation abnormalities.
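For reference, the regurgitant fraction obtained from the two ventricular outputs is conventionally defined as below; this is a standard formulation rather than a quotation from the study, which only states that the difference between the right and left ventricular cardiac outputs was used:

$$ \mathrm{RF} \;=\; \frac{\mathrm{CO}_{\mathrm{LV}} - \mathrm{CO}_{\mathrm{RV}}}{\mathrm{CO}_{\mathrm{LV}}} \times 100\% $$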
Abstract:
The accuracy of Global Positioning System (GPS) time series is degraded by the presence of offsets. To assess the effectiveness of methods that detect and remove these offsets, we designed and managed the Detection of Offsets in GPS Experiment. We simulated time series that mimicked realistic GPS data consisting of a velocity component, offsets, and white and flicker noise (1/f spectrum noise) combined in an additive model. The data set was made available to the GPS analysis community without revealing the offsets, and several groups conducted blind tests with a range of detection approaches. The results show that, at present, manual methods (where offsets are hand picked) almost always give better results than automated or semi-automated methods (two automated methods give velocity biases quite similar to the best manual solutions). For instance, the 5th to 95th percentile range in velocity bias for automated approaches is 4.2 mm/yr (most commonly ±0.4 mm/yr from the truth), whereas it is 1.8 mm/yr for the manual solutions (most commonly 0.2 mm/yr from the truth). The magnitude of offsets detectable by manual solutions is smaller than for automated solutions, with the smallest detectable offset for the best manual and automated solutions equal to 5 mm and 8 mm, respectively. Assuming the simulated time series noise levels are representative of real GPS time series, geophysical interpretation of individual site velocities lower than 0.2–0.4 mm/yr is therefore not robust, and a limit nearer 1 mm/yr would be a more conservative choice. Further work to improve offset detection in GPS coordinate time series is required before we can routinely interpret sub-mm/yr velocities for single GPS stations.
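To make the simulation setup concrete, here is a minimal, purely illustrative Python sketch (not the experiment's actual code; all noise amplitudes, offset epochs, and offset sizes are assumptions) of a series with a velocity, step offsets, white noise, and approximate flicker noise, together with the velocity bias left by a naive fit that ignores the offsets:

```python
# Illustrative sketch: simulate a GPS-like daily position series with a linear
# velocity, step offsets, white noise and approximate flicker (1/f) noise,
# then measure the velocity bias of a fit that ignores the offsets.
import numpy as np

rng = np.random.default_rng(0)
n_days = 3650                      # ten years of daily solutions
t = np.arange(n_days) / 365.25     # time in years

def flicker_noise(n, sigma, rng):
    """Approximate 1/f noise by shaping white noise in the frequency domain."""
    white = rng.normal(size=n)
    spec = np.fft.rfft(white)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                    # avoid division by zero at DC
    spec /= np.sqrt(f)             # amplitude ~ f^(-1/2)  =>  power ~ 1/f
    series = np.fft.irfft(spec, n)
    return sigma * series / series.std()

true_velocity = 3.0                              # mm/yr (assumed)
offsets = {1200: 8.0, 2500: -6.0}                # day index -> offset size in mm (assumed)
series = true_velocity * t + rng.normal(0, 2.0, n_days) + flicker_noise(n_days, 3.0, rng)
for day, size in offsets.items():
    series[day:] += size                         # unmodelled jumps

# Naive linear fit that ignores the offsets: the slope is biased by the jumps.
slope, _ = np.polyfit(t, series, 1)
print(f"estimated velocity = {slope:.2f} mm/yr, bias = {slope - true_velocity:.2f} mm/yr")
```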
Abstract:
Although heterogeneity and time are central aspects of economic activity, it was predominantly the Austrian School of economics that emphasized these two aspects. In this paper we argue that the explicit consideration of heterogeneity and time is of growing importance given the increasing environmental and resource problems faced by humankind today. It is shown that neo-Austrian capital theory, which revived Austrian ideas employing a formal approach in the 1970s, is not only well suited to address issues of structural change and of accompanying unemployment induced by technical progress, but can also be employed for the encompassing ecological-economic analysis demanded by ecological economics. However, complexity, uncertainty, and real ignorance limit the applicability of formal economic analysis. Therefore, we conclude that economic analysis has to be supplemented by considerations of political philosophy. Copyright 2006 American Journal of Economics and Sociology, Inc.
Abstract:
An updated search is performed for gluino, top squark, or bottom squark R-hadrons that have come to rest within the ATLAS calorimeter, and decay at some later time to hadronic jets and a neutralino, using 5.0 and 22.9 fb⁻¹ of pp collisions at 7 and 8 TeV, respectively. Candidate decay events are triggered in selected empty bunch crossings of the LHC in order to remove pp collision backgrounds. Selections based on jet shape and muon system activity are applied to discriminate signal events from cosmic ray and beam-halo muon backgrounds. In the absence of an excess of events, improved limits are set on gluino, stop, and sbottom masses for different decays, lifetimes, and neutralino masses. With a neutralino of mass 100 GeV, the analysis excludes gluinos with mass below 832 GeV (with an expected lower limit of 731 GeV), for a gluino lifetime between 10 μs and 1000 s in the generic R-hadron model with equal branching ratios for decays to qq̄χ̃⁰ and gχ̃⁰. Under the same assumptions for the neutralino mass and squark lifetime, top squarks and bottom squarks in the Regge R-hadron model are excluded with masses below 379 and 344 GeV, respectively.
Abstract:
The analytic continuation needed for the extraction of transport coefficients necessitates in principle a continuous function of the Euclidean time variable. We report on progress towards achieving the continuum limit for 2-point correlator measurements in thermal SU(3) gauge theory, with specific attention paid to scale setting. In particular, we improve upon the determination of the critical lattice coupling and the critical temperature of pure SU(3) gauge theory, estimating r₀Tc ≃ 0.7470(7) after a continuum extrapolation. As an application, the determination of the heavy quark momentum diffusion coefficient from a correlator of colour-electric fields attached to a Polyakov loop is discussed.
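For orientation, a continuum extrapolation of this kind is typically performed by computing r₀Tc at several temporal lattice extents N_τ and removing the leading O(a²) cutoff effects; the following schematic ansatz is an assumption for illustration, not a formula quoted from the proceedings:

$$ r_0 T_c(N_\tau) \;=\; \left. r_0 T_c \right|_{\mathrm{cont}} \;+\; \frac{c}{N_\tau^{2}} \;+\; \mathcal{O}\!\left(N_\tau^{-4}\right), \qquad a\,T_c = \frac{1}{N_\tau}. $$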
Abstract:
BACKGROUND Flavobacterium psychrophilum is the agent of Bacterial Cold Water Disease and Rainbow Trout Fry Syndrome, two diseases leading to high mortality. Pathogen detection is mainly carried out using cultures, and more rapid and sensitive methods are needed. RESULTS We describe a qPCR technique based on the single-copy gene rpoC, which encodes the β' subunit of the DNA-dependent RNA polymerase. Its detection limit was 20 gene copies and its quantification limit 10³ gene copies per reaction. Tests on spiked spleens with known concentrations of F. psychrophilum (10⁶ to 10¹ cells per reaction) showed no cross-reactions between the spleen tissue and the primers and probe. Screening of water samples and spleens from symptomless and infected fishes indicated that the pathogen was already present before the outbreaks, but F. psychrophilum was only quantifiable in spleens from diseased fishes. CONCLUSIONS This qPCR can be used as a highly sensitive and specific method to detect F. psychrophilum in different sample types without the need for culturing. qPCR allows reliable detection and quantification of F. psychrophilum in samples with low pathogen densities. Quantitative data on F. psychrophilum abundance could be useful for investigating risk factors linked to infections and as an early warning system prior to potentially devastating outbreaks.
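As an illustration of how absolute quantification against limits like these is typically carried out, here is a generic standard-curve sketch in Python; the dilution series and Cq values are made up for the example and are not data from the study:

```python
# Generic qPCR standard-curve sketch (illustrative values only): fit Cq against
# log10(copy number) for a dilution series, report amplification efficiency,
# and quantify an unknown sample from its Cq.
import numpy as np

# Hypothetical dilution series: copies per reaction and measured Cq values.
copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
cq     = np.array([15.1, 18.5, 21.9, 25.3, 28.8])

slope, intercept = np.polyfit(np.log10(copies), cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0          # ~1.0 corresponds to 100 % efficiency

def copies_from_cq(cq_value):
    """Invert the standard curve to estimate copies per reaction."""
    return 10 ** ((cq_value - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print(f"unknown with Cq 23.0 ≈ {copies_from_cq(23.0):.0f} copies per reaction")
```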
Abstract:
BACKGROUND Fetal weight estimation (FWE) is an important factor for clinical management decisions, especially in imminent preterm birth at the limit of viability between 23 0/7 and 26 0/7 weeks of gestation. It is crucial to detect and eliminate factors that have a negative impact on the accuracy of FWE. DATA SOURCES In this systematic literature review, we investigated 14 factors that may influence the accuracy of FWE, in particular in preterm neonates born at the limit of viability. RESULTS We found that gestational age, maternal body mass index, amniotic fluid index and ruptured membranes, presentation of the fetus, location of the placenta, and the presence of multiple fetuses do not seem to have an impact on FWE accuracy. The influence of the examiner's level of experience and that of fetal gender remains controversial. Fetal weight, the time interval between estimation and delivery, and the use of different formulas seem to have an evident effect on FWE accuracy. No results were obtained on the impact of active labor. DISCUSSION This review reveals that only a few studies have investigated factors possibly influencing the accuracy of FWE in preterm neonates at the limit of viability. Further research on potential confounding factors in this specific age group is needed.
Abstract:
Seizure freedom in patients suffering from pharmacoresistant epilepsies is still not achieved in 20–30% of all cases. Hence, current therapies need to be improved, based on a more complete understanding of ictogenesis. In this respect, the analysis of functional networks derived from intracranial electroencephalographic (iEEG) data has recently become a standard tool. Functional networks, however, are purely descriptive models and thus are conceptually unable to predict fundamental features of iEEG time series, e.g., in the context of therapeutic brain stimulation. In this paper we present some first steps towards overcoming the limitations of functional network analysis, by showing that its results are implied by a simple predictive model of time-sliced iEEG time series. More specifically, we learn distinct graphical models (so-called Chow–Liu (CL) trees) as models for the spatial dependencies between iEEG signals. Bayesian inference is then applied to the CL trees, allowing for an analytic derivation/prediction of functional networks, based on thresholding of the absolute-value Pearson correlation coefficient (CC) matrix. Using various measures, the networks obtained in this way are then compared to those derived in the classical way from the empirical CC matrix. In the high-threshold limit we find (a) an excellent agreement between the two networks and (b) key features of periictal networks as they have previously been reported in the literature. Apart from functional networks, both matrices are also compared element-wise, showing that the CL approach leads to a sparse representation, by setting small correlations to values close to zero while preserving the larger ones. Overall, this paper shows the validity of CL trees as simple, spatially predictive models for periictal iEEG data. Moreover, we suggest straightforward generalizations of the CL approach for modeling the temporal features of iEEG signals as well.
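The two constructions compared in this abstract can be sketched in a few lines of Python; the snippet below uses random data rather than iEEG, builds the thresholded |CC| functional network, and builds a Chow–Liu tree as a maximum spanning tree over pairwise mutual information (using the Gaussian shortcut MI = -0.5 log(1 - r²)). The threshold value and the Gaussian assumption are choices made only for this illustration, and the paper's Bayesian prediction step is not reproduced here.

```python
# Illustrative sketch of a thresholded correlation network and a Chow-Liu tree
# (random data standing in for one iEEG time slice; not the paper's pipeline).
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_channels, n_samples = 8, 500
signals = rng.normal(size=(n_channels, n_samples))    # stand-in for iEEG channels

cc = np.corrcoef(signals)                              # empirical Pearson CC matrix

# "Functional network": threshold the absolute correlation matrix.
threshold = 0.1                                        # assumed value for the example
functional = nx.Graph()
functional.add_nodes_from(range(n_channels))
functional.add_edges_from(
    (i, j)
    for i in range(n_channels)
    for j in range(i + 1, n_channels)
    if abs(cc[i, j]) >= threshold
)

# Chow-Liu tree: maximum spanning tree over pairwise mutual information.
# For jointly Gaussian variables, MI(i, j) = -0.5 * log(1 - r_ij^2).
mi_graph = nx.Graph()
for i in range(n_channels):
    for j in range(i + 1, n_channels):
        mi = -0.5 * np.log(1.0 - cc[i, j] ** 2)
        mi_graph.add_edge(i, j, weight=mi)
chow_liu_tree = nx.maximum_spanning_tree(mi_graph)

print("thresholded |CC| network:", functional.number_of_edges(), "edges")
print("Chow-Liu tree edges:", sorted(chow_liu_tree.edges()))
```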
Abstract:
We introduce a multistable subordinator, which generalizes the stable subordinator to the case of time-varying stability index. This enables us to define a multifractional Poisson process. We study properties of these processes and establish the convergence of a continuous-time random walk to the multifractional Poisson process.
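For orientation, one common way to formalise such a process (an assumed, generic formulation rather than a quotation of the authors' definition) is as an independent-increment process W whose Laplace exponent integrates a time-varying stability index α(t) ∈ (0, 1):

$$ \mathbb{E}\!\left[e^{-\lambda W(t)}\right] \;=\; \exp\!\left(-\int_{0}^{t} \lambda^{\alpha(s)}\, \mathrm{d}s\right), \qquad \lambda \ge 0, $$

which reduces to the stable subordinator, with $\mathbb{E}[e^{-\lambda W(t)}] = e^{-t\lambda^{\alpha}}$, when $\alpha(s) \equiv \alpha$ is constant.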
Abstract:
Climatic relationships were established in two ²¹⁰Pb-dated pollen sequences from small mires closely surrounded by forest just below the actual forest limits (but about 300 m below the potential climatic forest limits) in the northern Swiss Alps (suboceanic in climate; mainly with Picea) and the central Swiss Alps (subcontinental; mainly Pinus cembra and Larix), at annual or near-annual resolution from AD 1901 to 1996. Effects of vegetational succession were removed by splitting the time series into early and late periods and by linear detrending. Both pollen concentrations detrended by the depth-age model and modified percentages (in which counts of dominant pollen types are down-weighted) are correlated by simple linear regression with smoothed climatic parameters at one- and two-year time lags, including average monthly and April/September daylight air temperatures and seasonal and annual precipitation sums. Results from detrended pollen concentrations suggest that peat accumulation is favoured in the northern-Alpine mire either by early snowmelt or by summer precipitation, but in the central-Alpine mire by increased precipitation and cooler summers, suggesting a position of the northern-Alpine mire near the upper altitudinal limit of peat formation, but of the central-Alpine mire near the lower limit. Results from modified pollen percentages indicate that pollen production by plants growing near their upper altitudinal limit is limited by insufficient warmth in summer, and pollen production by plants growing near their lower altitudinal limit is limited by excessively high temperatures. Only weakly significant pollen/climate relationships were found for Pinus cembra and Larix, probably because they experience little climatic stress growing 300 m below the potential climatic forest limit.
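The detrend-and-correlate step described above can be illustrated with a short generic Python sketch; the synthetic series, noise levels, and one-year lag handling below are assumptions made only for the example and do not reproduce the mire data or the modified-percentage transformation.

```python
# Generic detrend-and-correlate sketch (synthetic numbers, not the mire data):
# remove a linear trend from an annual pollen series and correlate the residuals
# with the previous year's climate value, as in a simple lagged linear regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
years = np.arange(1901, 1997)
climate = rng.normal(size=years.size)                 # e.g. a summer temperature anomaly
pollen = 50 + 0.3 * (years - years[0]) + 5 * np.roll(climate, 1) + rng.normal(0, 2, years.size)

# Linear detrending of the pollen series.
trend = np.polyval(np.polyfit(years, pollen, 1), years)
pollen_detrended = pollen - trend

# Correlate detrended pollen values with last year's climate (one-year lag).
r, p = stats.pearsonr(pollen_detrended[1:], climate[:-1])
print(f"r = {r:.2f}, p = {p:.3g}")
```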
Abstract:
OBJECTIVE: To evaluate serum concentrations of biochemical markers and survival time in dogs with protein-losing enteropathy (PLE). DESIGN: Prospective study. ANIMALS: 29 dogs with PLE and 18 dogs with food-responsive diarrhea (FRD). PROCEDURES: Data regarding serum concentrations of various biochemical markers at the initial evaluation were available for 18 of the 29 dogs with PLE and compared with findings for dogs with FRD. Correlations between biochemical marker concentrations and survival time (interval between time of initial evaluation and death or euthanasia) for dogs with PLE were evaluated. RESULTS: Serum C-reactive protein concentration was high in 13 of 18 dogs with PLE and in 2 of 18 dogs with FRD. Serum concentration of canine pancreatic lipase immunoreactivity was high in 3 dogs with PLE but within the reference interval in all dogs with FRD. Serum α1-proteinase inhibitor concentration was less than the lower reference limit in 9 dogs with PLE and 1 dog with FRD. Compared with findings in dogs with FRD, values of those 3 variables in dogs with PLE were significantly different. Serum calprotectin (measured by radioimmunoassay and ELISA) and S100A12 concentrations were high but did not differ significantly between groups. Seventeen of the 29 dogs with PLE were euthanized owing to this disease; median survival time was 67 days (range, 2 to 2,551 days). CONCLUSIONS AND CLINICAL RELEVANCE: Serum C-reactive protein, canine pancreatic lipase immunoreactivity, and α1-proteinase inhibitor concentrations differed significantly between dogs with PLE and FRD. Most initial biomarker concentrations were not predictive of survival time in dogs with PLE.
Abstract:
Through dedicated measurements in the optical regime, we demonstrate that ptychography can be applied to reconstruct complex-valued object functions that vary with time from a sequence of spectral measurements. A probe pulse of approximately 1 ps duration, time delayed in increments of 0.25 ps, is shown to recover dynamics on a ten-times-faster time scale, with an experimental limit of approximately 5 fs.
Abstract:
We investigate the transition from unitary to dissipative dynamics in the relativistic O(N) vector model with the λ(φ²)² interaction using the nonperturbative functional renormalization group in the real-time formalism. In thermal equilibrium, the theory is characterized by two scales, the interaction range for coherent scattering of particles and the mean free path determined by the rate of incoherent collisions with excitations in the thermal medium. Their competition determines the renormalization group flow and the effective dynamics of the model. Here we quantify the dynamic properties of the model in terms of the scale-dependent dynamic critical exponent z in the limit of large temperatures and in 2 ≤ d ≤ 4 spatial dimensions. We contrast our results with the behavior expected at vanishing temperature and address the question of the appropriate dynamic universality class for the given microscopic theory.
Abstract:
Can the early identification of the species of Staphylococcus responsible for infection by the use of real-time PCR (RT-PCR) technology influence the approach to the treatment of these infections? This study was a retrospective cohort study in which two groups of patients were compared. The first group, 'Physician Aware', consisted of patients whose physicians were informed of the specific staphylococcal species and antibiotic sensitivity (using RT-PCR) at the time of notification of the Gram stain. The second group, 'Physician Unaware', consisted of patients whose treating physicians received the same information 24–72 hours later as a result of blood culture and antibiotic sensitivity determination. The approach to treatment was compared between the 'Physician Aware' and 'Physician Unaware' groups for three different microbiological diagnoses, namely MRSA, MSSA and no-SA (coagulase-negative Staphylococcus). For a diagnosis of MRSA, the mean time interval to the initiation of vancomycin therapy was 1.08 hours in the 'Physician Aware' group as compared to 5.84 hours in the 'Physician Unaware' group (p = 0.34). For a diagnosis of MSSA, the mean time interval to the initiation of specific anti-MSSA therapy with nafcillin was 5.18 hours in the 'Physician Aware' group as compared to 49.8 hours in the 'Physician Unaware' group (p = 0.007). Also, for the same diagnosis, the mean duration of empiric therapy in the 'Physician Aware' group was 19.68 hours as compared to 80.75 hours in the 'Physician Unaware' group (p = 0.003). For a diagnosis of no-SA or coagulase-negative Staphylococcus, the mean duration of empiric therapy was 35.65 hours in the 'Physician Aware' group as compared to 44.38 hours in the 'Physician Unaware' group (p = 0.07). However, when treatment was considered a categorical variable, and after exclusion of all cases where anti-MRS therapy was used for unrelated conditions, only 20 of 72 cases in the 'Physician Aware' group received treatment as compared to 48 of 106 cases in the 'Physician Unaware' group. CONCLUSIONS Earlier diagnosis of MRSA may not alter final treatment outcomes; however, earlier identification may lead to the earlier institution of measures to limit the spread of infection. The early diagnosis of MSSA infection does lead to treatment with specific antibiotic therapy at an earlier stage of treatment, and the duration of empiric therapy is greatly reduced by early diagnosis. The early diagnosis of coagulase-negative staphylococcal infection leads to a lower rate of unnecessary treatment for these infections, as they are commonly considered contaminants.
Abstract:
Of the large clinical trials evaluating screening mammography efficacy, none included women ages 75 and older. Recommendations on an upper age limit at which to discontinue screening are based on indirect evidence and are not consistent. Screening mammography is evaluated using observational data from the SEER-Medicare linked database. Measuring the benefit of screening mammography is difficult due to the impact of lead-time bias, length bias and over-detection. The underlying conceptual model divides the disease into two stages: pre-clinical (T₀) and symptomatic (T₁) breast cancer. Treating the times spent in these phases as a pair of dependent bivariate observations, (t₀, t₁), estimates are derived to describe the distribution of this random vector. To quantify the effect of screening mammography, statistical inference is made about the mammography parameters that correspond to the marginal distribution of the symptomatic phase duration (T₁). This analysis shows that the hazard ratio of death from breast cancer comparing women with screen-detected tumors to those detected at symptom onset is 0.36 (0.30, 0.42), indicating a benefit among the screen-detected cases.
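For readers unfamiliar with hazard ratios by detection mode, the much-simplified Python sketch below fits an ordinary Cox proportional hazards model to synthetic data; it is not the lead-time-corrected estimator developed in the dissertation, the data are not SEER-Medicare, and the covariate name and all parameter values are assumptions made only for the example.

```python
# Simplified illustration (synthetic data): estimate a hazard ratio for
# breast-cancer death by mode of detection with a plain Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 2000
screen_detected = rng.integers(0, 2, n)              # 1 = screen-detected (assumed covariate)
baseline_hazard = 0.05                               # assumed per-year hazard
true_hr = 0.36                                       # hazard ratio used to generate the data
times = rng.exponential(1.0 / (baseline_hazard * np.where(screen_detected, true_hr, 1.0)))
events = times < 15.0                                # administrative censoring at 15 years
durations = np.minimum(times, 15.0)

df = pd.DataFrame({"screen_detected": screen_detected,
                   "duration": durations,
                   "event": events.astype(int)})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(cph.hazard_ratios_)                            # should recover a value near 0.36
```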