986 results for Threshold models
Abstract:
AIM: To identify factors that potentially influence urethral sensitivity in women. PATIENTS AND METHODS: The current perception threshold was measured by double ring electrodes in the proximal and distal urethra in 120 women. Univariate analysis using Kaplan-Meier models and multivariate analysis applying Cox regressions were performed to identify factors influencing urethral sensitivity in women. RESULTS: In univariate and multivariate analysis, women who had undergone radical pelvic surgery (radical cystectomy n = 12, radical rectal surgery n = 4) showed a significantly (log-rank test P < 0.0001) increased proximal urethral sensory threshold compared with women without prior surgery (hazard ratio (HR) 4.17, 95% confidence interval (CI) 2.04-8.51), with those following vaginal hysterectomy (HR 4.95, 95% CI 2.07-11.85) or abdominal hysterectomy (HR 5.96, 95% CI 2.68-13.23), and with those after other non-pelvic surgery (HR 4.86, 95% CI 2.24-10.52). However, distal urethral sensitivity was unaffected by any form of prior surgery. The other variables assessed, including age, concomitant diseases, urodynamic diagnoses, functional urethral length, and maximum urethral closure pressure at rest, had no influence on urethral sensitivity in either univariate or multivariate analysis. CONCLUSIONS: An increased proximal but unaffected distal urethral sensory threshold after radical pelvic surgery in women suggests that the afferent nerve fibers from the proximal urethra mainly pass through the pelvic plexus, which is prone to damage during radical pelvic surgery, whereas the afferent innervation of the distal urethra is provided by the pudendal nerve. A better understanding of the innervation of the proximal and distal urethra may help to improve surgical procedures, especially nerve-sparing techniques. Neurourol. Urodynam. (c) 2006 Wiley-Liss, Inc.
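A minimal sketch of this style of analysis, in which the sensory threshold plays the role of the "time" variable in a Kaplan-Meier/Cox framework. The toy data, column names, and penalizer setting are assumptions for illustration, not the study's dataset or model specification.

```python
# Sketch: treating the current perception threshold (mA) as the duration
# variable and prior radical pelvic surgery as a covariate. Toy data only.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "threshold_mA": [3.1, 4.0, 7.5, 2.8, 9.2, 3.5, 8.8, 4.4],   # stimulus level at first sensation
    "detected":     [1,   1,   1,   1,   0,   1,   1,   1],     # 0 = no sensation up to max current (censored)
    "radical_pelvic_surgery": [0, 0, 1, 0, 1, 0, 0, 1],
})

kmf = KaplanMeierFitter()
kmf.fit(df["threshold_mA"], event_observed=df["detected"])
print("median sensory threshold:", kmf.median_survival_time_)

cph = CoxPHFitter(penalizer=0.1)        # small penalty: only 8 toy records
cph.fit(df, duration_col="threshold_mA", event_col="detected")
cph.print_summary()                     # hazard ratio for prior surgery
```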
Abstract:
Several methods based on Kriging have recently been proposed for calculating a probability of failure involving costly-to-evaluate functions. A closely related problem is to estimate the set of inputs leading to a response exceeding a given threshold. Now, estimating such a level set—and not solely its volume—and quantifying uncertainties on it are not straightforward. Here we use notions from random set theory to obtain an estimate of the level set, together with a quantification of estimation uncertainty. We give explicit formulae in the Gaussian process set-up and provide a consistency result. We then illustrate how space-filling versus adaptive design strategies may sequentially reduce level set estimation uncertainty.
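The underlying idea can be sketched as follows, assuming a scikit-learn Gaussian process surrogate and a synthetic 1-D test function (neither comes from the paper): the posterior mean and standard deviation give, at every input, the probability that the response exceeds the threshold, and inputs whose exceedance probability is far from 0 or 1 are the uncertain part of the level-set estimate.

```python
# Sketch: estimating the level set {x : f(x) > T} with a GP surrogate.
# The test function, kernel, and threshold are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in for a costly function
T = 0.8                                        # threshold defining the level set

X_train = np.linspace(0, 4, 8).reshape(-1, 1)  # small, "costly" design
y_train = f(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gp.fit(X_train, y_train)

X_grid = np.linspace(0, 4, 400).reshape(-1, 1)
mu, sd = gp.predict(X_grid, return_std=True)

# Exceedance (coverage) probability: P[f(x) > T | data]
p_exceed = norm.sf((T - mu) / np.maximum(sd, 1e-12))

plug_in_set = X_grid[mu > T]                         # plug-in level-set estimate
uncertain = X_grid[np.abs(p_exceed - 0.5) < 0.4]     # where the set is still uncertain
print(f"estimated excursion volume: {p_exceed.mean() * 4:.2f}")
print(f"uncertain region size: {uncertain.size} grid points")
```

Adaptive designs of the kind the paper discusses would add new evaluations where the exceedance probability is most ambiguous, rather than space-filling uniformly.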
Abstract:
We seek to determine the relationship between threshold and suprathreshold perception for position offset and stereoscopic depth perception under conditions that elevate their respective thresholds. Two threshold-elevating conditions were used: (1) increasing the interline gap and (2) dioptric blur. Although increasing the interline gap increases position (Vernier) offset and stereoscopic disparity thresholds substantially, the perception of suprathreshold position offset and stereoscopic depth remains unchanged. Perception of suprathreshold position offset also remains unchanged when the Vernier threshold is elevated by dioptric blur. We show that such normalization of suprathreshold position offset can be attributed to the topographical-map-based encoding of position. On the other hand, dioptric blur increases the stereoscopic disparity thresholds and reduces the perceived suprathreshold stereoscopic depth, which can be accounted for by a disparity-computation model in which the activities of absolute disparity encoders are multiplied by a Gaussian weighting function that is centered on the horopter. Overall, the statement "equal suprathreshold perception occurs in threshold-elevated and unelevated conditions when the stimuli are equally above their corresponding thresholds" describes the results better than the statement "suprathreshold stimuli are perceived as equal when they are equal multiples of their respective threshold values."
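The disparity-computation account can be illustrated with a toy readout in which absolute-disparity encoder activities are multiplied by a Gaussian weight centered on the horopter. The encoder tuning, weighting widths, and the idea of shrinking the weighting parameter as a stand-in for the effect of blur are illustrative assumptions, not the authors' fitted model.

```python
# Toy sketch: perceived depth as a weighted centroid over absolute-disparity
# encoders, with a Gaussian weight centred on the horopter (zero disparity).
# A narrower weight compresses the readout toward the horopter, mimicking
# reduced perceived suprathreshold depth. Parameters are placeholders.
import numpy as np

preferred = np.linspace(-30, 30, 121)    # encoder preferred disparities (arcmin)

def perceived_depth(disparity, tuning_sd=4.0, weight_sd=15.0):
    activity = np.exp(-(disparity - preferred) ** 2 / (2 * tuning_sd ** 2))
    weight = np.exp(-preferred ** 2 / (2 * weight_sd ** 2))     # horopter-centred
    w_act = activity * weight
    return np.sum(w_act * preferred) / np.sum(w_act)            # centroid readout

for d in (5, 10, 20):
    print(d, round(perceived_depth(d), 2), round(perceived_depth(d, weight_sd=8.0), 2))
```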
Abstract:
Seizure freedom in patients suffering from pharmacoresistant epilepsies is still not achieved in 20–30% of all cases. Hence, current therapies need to be improved, based on a more complete understanding of ictogenesis. In this respect, the analysis of functional networks derived from intracranial electroencephalographic (iEEG) data has recently become a standard tool. Functional networks, however, are purely descriptive models and thus are conceptually unable to predict fundamental features of iEEG time-series, e.g., in the context of therapeutic brain stimulation. In this paper we present first steps towards overcoming the limitations of functional network analysis, by showing that its results are implied by a simple predictive model of time-sliced iEEG time-series. More specifically, we learn distinct graphical models (so-called Chow–Liu (CL) trees) as models for the spatial dependencies between iEEG signals. Bayesian inference is then applied to the CL trees, allowing for an analytic derivation/prediction of functional networks, based on thresholding of the absolute-value Pearson correlation coefficient (CC) matrix. Using various measures, the networks obtained in this way are then compared to those derived in the classical way from the empirical CC matrix. In the high-threshold limit we find (a) an excellent agreement between the two networks and (b) key features of periictal networks as they have previously been reported in the literature. Apart from functional networks, both matrices are also compared element-wise, showing that the CL approach leads to a sparse representation, by setting small correlations to values close to zero while preserving the larger ones. Overall, this paper shows the validity of CL trees as simple, spatially predictive models for periictal iEEG data. Moreover, we suggest straightforward generalizations of the CL approach for modeling the temporal features of iEEG signals as well.
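A compact sketch of the two ingredients, on synthetic data rather than iEEG: a Chow-Liu tree learned as the maximum spanning tree over pairwise (Gaussian) mutual information, and a functional network obtained by thresholding the absolute Pearson correlation matrix. The data generator and the threshold value are assumptions.

```python
# Sketch (synthetic multichannel data): Chow-Liu tree vs thresholded |CC| network.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 2000
latent = rng.standard_normal((2, n_samples))             # shared "sources"
mix = rng.standard_normal((n_channels, 2))
X = mix @ latent + 0.5 * rng.standard_normal((n_channels, n_samples))

C = np.corrcoef(X)                                       # empirical CC matrix
absC = np.abs(C)

# Functional network: threshold |CC| (cf. the paper's high-threshold limit)
threshold = 0.8
functional_net = (absC > threshold) & ~np.eye(n_channels, dtype=bool)

# Chow-Liu tree: maximum spanning tree on pairwise Gaussian mutual information
mi = -0.5 * np.log(np.clip(1 - C ** 2, 1e-12, None))
np.fill_diagonal(mi, 0.0)
cl_tree = minimum_spanning_tree(-mi)                     # negate -> maximum spanning tree
edges = np.transpose(np.nonzero(cl_tree.toarray()))
print("Chow-Liu tree edges:", [tuple(int(v) for v in e) for e in edges])
print("thresholded-CC edge count:", int(functional_net.sum() // 2))
```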
Abstract:
We present recent improvements of the modeling of the disruption of strength dominated bodies using the Smooth Particle Hydrodynamics (SPH) technique. The improvements include an updated strength model and a friction model, which are successfully tested by a comparison with laboratory experiments. In the modeling of catastrophic disruptions of asteroids, a comparison between old and new strength models shows no significant deviation in the case of targets which are initially non-porous, fully intact and have a homogeneous structure (such as the targets used in the study by Benz and Asphaug, 1999). However, for many cases (e.g. initially partly or fully damaged targets and rubble-pile structures) we find that it is crucial that friction is taken into account and the material has a pressure dependent shear strength. Our investigations of the catastrophic disruption threshold (27, as a function of target properties and target sizes up to a few 100 km show that a fully damaged target modeled without friction has a Q(D)*:, which is significantly (5-10 times) smaller than in the case where friction is included. When the effect of the energy dissipation due to compaction (pore crushing) is taken into account as well, the targets become even stronger (Q(D)*; is increased by a factor of 2-3). On the other hand, cohesion is found to have an negligible effect at large scales and is only important at scales less than or similar to 1 km. Our results show the relative effects of strength, friction and porosity on the outcome of collisions among small (less than or similar to 1000 km) bodies. These results will be used in a future study to improve existing scaling laws for the outcome of collisions (e.g. Leinhardt and Stewart, 2012). (C) 2014 Elsevier Ltd. All rights reserved.
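A hedged sketch of the kind of pressure-dependent shear strength (friction) prescription commonly used in SPH impact codes: a Lundborg-type curve for intact material and a simple friction law capped by the intact strength for fully damaged material. The functional forms are standard in the impact-modeling literature, but the coefficients below are placeholders, not the paper's values.

```python
# Illustrative pressure-dependent yield (shear) strength with damage.
# All coefficients are placeholder values, not those of the paper.
import numpy as np

Y0, Ym = 1.0e7, 3.5e9       # cohesion and von Mises limit of intact rock (Pa)
mu_i, mu_d = 2.0, 0.8       # internal friction coefficients (intact / damaged)

def yield_intact(P):
    # Lundborg-type curve: cohesive at low P, saturating at Ym at high P
    return Y0 + mu_i * P / (1.0 + mu_i * P / (Ym - Y0))

def yield_damaged(P):
    # Dry-friction law for granular (fully damaged) material, capped by intact strength
    return np.minimum(mu_d * P, yield_intact(P))

def yield_strength(P, damage):
    # Linear interpolation between intact and fully damaged strength
    return (1.0 - damage) * yield_intact(P) + damage * yield_damaged(P)

for P in (0.0, 1e8, 1e9):
    print(f"P={P:.1e} Pa  Y_intact={yield_intact(P):.2e}  Y_damaged={yield_damaged(P):.2e}")
```

Without friction (mu_d = 0) a fully damaged target has essentially no shear strength at any pressure, which is the regime in which the quoted Q*_D values drop by a factor of 5-10.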
Abstract:
Objective. In 2009, the International Expert Committee recommended the use of the HbA1c test for the diagnosis of diabetes. Although it has been recommended for the diagnosis of diabetes, its precise test performance among Mexican Americans is uncertain. A strong "gold standard" would rely on repeated blood glucose measurement on different days, which is the recommended method for diagnosing diabetes in clinical practice. Our objective was to assess the test performance of HbA1c in detecting diabetes and pre-diabetes against repeated fasting blood glucose measurement for the Mexican American population living on the United States-Mexico border. Moreover, we sought a specific and precise HbA1c threshold value for Diabetes Mellitus (DM) and pre-diabetes in this high-risk population, which might assist in better diagnosis and better management of diabetes. Research design and methods. We used the CCHC dataset for our study. In 2004, the Cameron County Hispanic Cohort (CCHC), now numbering 2,574, was established, drawn from randomly selected households on the basis of 2000 Census tract data. The CCHC study randomly selected a subset of people (aged 18-64 years) in CCHC cohort households to determine the influence of SES on diabetes and obesity. Among the participants in Cohort-2000, 67.15% are female; all are Hispanic. Individuals were defined as having diabetes mellitus (fasting plasma glucose [FPG] ≥ 126 mg/dL) or pre-diabetes (100 ≤ FPG < 126 mg/dL). HbA1c test performance was evaluated using receiver operating characteristic (ROC) curves. Moreover, change-point models were used to determine HbA1c thresholds compatible with FPG thresholds for diabetes and pre-diabetes. Results. When FPG was used to detect diabetes, the sensitivity and specificity of HbA1c ≥ 6.5% were 75% and 87%, respectively (area under the curve 0.895). Additionally, when FPG was used to detect pre-diabetes, the sensitivity and specificity of HbA1c ≥ 6.0% (ADA recommended threshold) were 18% and 90%, respectively. The sensitivity and specificity of HbA1c ≥ 5.7% (International Expert Committee recommended threshold) for detecting pre-diabetes were 31% and 78%, respectively. ROC analyses suggest HbA1c is a sound predictor of diabetes mellitus (area under the curve 0.895) but a poorer predictor of pre-diabetes (area under the curve 0.632). Conclusions. Our data support the current recommendations for use of HbA1c in the diagnosis of diabetes for the Mexican American population, as it has shown reasonable sensitivity, specificity and accuracy against repeated FPG measures. However, use of HbA1c may be premature for detecting pre-diabetes in this specific population because of its poor sensitivity relative to FPG. It may be that HbA1c more effectively identifies the individuals who are at risk of developing diabetes. Following these pre-diabetic individuals over the longer term for the detection of incident diabetes may yield more conclusive results.
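A minimal sketch of the kind of test-performance analysis described: sensitivity and specificity of an HbA1c cut-off against an FPG-defined diabetes label, plus the ROC curve and AUC. The arrays are toy values, not CCHC data.

```python
# Sketch: HbA1c test performance against an FPG-based "gold standard". Toy data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

hba1c = np.array([5.2, 5.6, 5.9, 6.1, 6.4, 6.6, 7.0, 7.4, 5.4, 6.8])   # %
fpg   = np.array([ 92, 105, 118, 124, 130, 138, 150, 180,  98, 127])   # mg/dL

diabetes = (fpg >= 126).astype(int)      # FPG-defined diabetes label
test_pos = (hba1c >= 6.5).astype(int)    # HbA1c cut-off under evaluation

tp = np.sum((test_pos == 1) & (diabetes == 1))
fn = np.sum((test_pos == 0) & (diabetes == 1))
tn = np.sum((test_pos == 0) & (diabetes == 0))
fp = np.sum((test_pos == 1) & (diabetes == 0))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
print("AUC:", roc_auc_score(diabetes, hba1c))
fpr, tpr, cutoffs = roc_curve(diabetes, hba1c)   # full ROC for exploring thresholds
```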
Abstract:
Division of labor is a widely studied aspect of colony behavior of social insects. Division of labor models indicate how individuals distribute themselves in order to perform different tasks simultaneously. However, models that study division of labor from a dynamical-systems point of view cannot be found in the literature. In this paper, we define a division of labor model as a discrete-time dynamical system, in order to study the equilibrium points and their properties related to convergence and stability. By making use of this analytical model, an adaptive algorithm based on division of labor can be designed to satisfy dynamic criteria. In this way, we have designed and tested an algorithm that varies the response thresholds in order to modify the dynamic behavior of the system. This behavior modification allows the system to adapt to specific environmental and collective situations, making the algorithm a good candidate for distributed control applications. The variable-threshold algorithm is based on specialization mechanisms. It is able to achieve asymptotically stable behavior of the system in different environments and independently of the number of individuals. The algorithm has been successfully tested under several initial conditions and numbers of individuals.
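For orientation, a discrete-time response-threshold model of division of labor can be sketched as below. This is a generic fixed-threshold variant in the spirit of classical response-threshold models, not the authors' variable-threshold algorithm, and all parameters are illustrative.

```python
# Sketch: fixed response-threshold task allocation as a discrete-time system.
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_steps = 50, 200
theta = rng.uniform(5, 15, n_agents)      # per-agent response thresholds
stimulus, demand, alpha = 10.0, 2.0, 0.1  # task stimulus, demand rate, work rate

active = np.zeros(n_agents, dtype=bool)
for t in range(n_steps):
    # Threshold response: engagement probability rises steeply once the
    # stimulus exceeds an agent's threshold theta.
    p_engage = stimulus ** 2 / (stimulus ** 2 + theta ** 2)
    active = rng.random(n_agents) < p_engage
    # Stimulus grows with demand and is reduced by the active workers.
    stimulus = max(0.0, stimulus + demand - alpha * active.sum())

print("final stimulus:", round(stimulus, 2), "fraction active:", round(active.mean(), 2))
```

A variable-threshold (specialization) scheme of the kind the paper describes would additionally lower theta for agents that perform the task and raise it for those that do not.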
Abstract:
The usual way of modeling variability using threshold voltage shift and drain current amplification is becoming inaccurate as new sources of variability appear in sub-22nm devices. In this work we apply the four-injector approach for variability modeling to the simulation of SRAMs with predictive technology models from the 20nm down to the 7nm node. We show that the SRAMs, designed following the ITRS roadmap, present stability metrics that are at least 20% higher than those obtained with a classical variability modeling approach. Speed estimation is also pessimistic, whereas leakage is underestimated if sub-threshold slope and DIBL mismatch and their correlations with threshold voltage are not considered.
Abstract:
Retinitis pigmentosa (RP) is a group of inherited blinding diseases caused by mutations in multiple genes including RDS. RDS encodes rds/peripherin (rds), a 36-kDa glycoprotein in the rims of rod and cone outer-segment (OS) discs. Rom1 is related to rds with similar membrane topology and the identical distribution in OS. In contrast to RDS, no mutations in ROM1 alone have been associated with retinal disease. However, an unusual digenic form of RP has been described. Affected individuals in several families were doubly heterozygous for a mutation in RDS causing a leucine 185 to proline substitution in rds (L185P) and a null mutation in ROM1. Neither mutation alone caused clinical abnormalities. Here, we generated transgenic/knockout mice that duplicate the amino acid substitutions and predicted levels of rds and rom1 in patients with RDS-mediated digenic and dominant RP. Photoreceptor degeneration in the mouse model of digenic RP was faster than in the wild-type and monogenic controls by histological, electroretinographic, and biochemical analysis. We observed a positive correlation between the rate of photoreceptor loss and the extent of OS disorganization in mice of several genotypes. Photoreceptor degeneration in RDS-mediated RP appears to be caused by a simple deficiency of rds and rom1. The critical threshold for the combined abundance of rds and rom1 is ≈60% of wild type. Below this value, the extent of OS disorganization results in clinically significant photoreceptor degeneration.
Abstract:
In this paper we examine the time T to reach a critical number K0 of infections during an outbreak in an epidemic model with infective and susceptible immigrants. The underlying process X, which was first introduced by Ridler-Rowe (1967), is related to recurrent diseases and appears to be analytically intractable. We present an approximating model inspired by the use of extreme values, and we derive formulae for the Laplace-Stieltjes transform of T and its moments, which are evaluated using an iterative procedure. Numerical examples are presented to illustrate the effects of the contact and removal rates on the expected values of T and the threshold K0, when the initial time instant corresponds to an invasion time. We also study the exact reproduction number Rexact,0 and the population transmission number Rp, which are random versions of the basic reproduction number R0.
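The quantity of interest can also be approached by simulation. The sketch below is a Monte Carlo (Gillespie-style) estimate of the mean time to reach K0 infections in a simple stochastic epidemic with susceptible and infective immigration; it is not the paper's analytic Laplace-Stieltjes approach, and all rates are illustrative assumptions.

```python
# Monte Carlo sketch: first-passage time to K0 infections with immigration.
import numpy as np

rng = np.random.default_rng(2)

def time_to_K0(beta=0.002, gamma=0.5, imm_S=1.0, imm_I=0.2,
               S0=200, I0=1, K0=50, t_max=500.0):
    S, I, t = S0, I0, 0.0
    while I < K0 and t < t_max:
        rates = np.array([beta * S * I,   # infection:  S -> I
                          gamma * I,      # removal:    I leaves
                          imm_S,          # susceptible immigrant arrives
                          imm_I])         # infective immigrant arrives
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=rates / total)
        if event == 0:
            S, I = S - 1, I + 1
        elif event == 1:
            I -= 1
        elif event == 2:
            S += 1
        else:
            I += 1
    return t if I >= K0 else np.nan       # NaN if K0 was never reached

samples = np.array([time_to_K0() for _ in range(500)])
print("P(reach K0):", np.mean(~np.isnan(samples)),
      "E[T | reached]:", round(float(np.nanmean(samples)), 1))
```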
Abstract:
One of the most significant challenges facing the development of linear optics quantum computing (LOQC) is mode mismatch, whereby photon distinguishability is introduced within circuits, undermining quantum interference effects. We examine the effects of mode mismatch on the parity (or fusion) gate, the fundamental building block in several recent LOQC schemes. We derive simple error models for the effects of mode mismatch on its operation, and relate these error models to current fault-tolerance threshold estimates.
Abstract:
A fundamental problem for any visual system with binocular overlap is the combination of information from the two eyes. Electrophysiology shows that binocular integration of luminance contrast occurs early in visual cortex, but a specific systems architecture has not been established for human vision. Here, we address this by performing binocular summation and monocular, binocular, and dichoptic masking experiments for horizontal 1 cycle per degree test and masking gratings. These data reject three previously published proposals, each of which predicts too little binocular summation and insufficient dichoptic facilitation. However, a simple development of one of the rejected models (the twin summation model) and a completely new model (the two-stage model) provide very good fits to the data. Two features common to both models are gently accelerating (almost linear) contrast transduction prior to binocular summation and suppressive ocular interactions that contribute to contrast gain control. With all model parameters fixed, both models correctly predict (1) systematic variation in psychometric slopes, (2) dichoptic contrast matching, and (3) high levels of binocular summation for various levels of binocular pedestal contrast. A review of evidence from elsewhere leads us to favor the two-stage model. © 2006 ARVO.
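A sketch of a two-stage binocular gain-control architecture of the kind described: a gently accelerating monocular stage with interocular suppression, binocular summation, then a second, strongly nonlinear gain-control stage. The exponents and constants below are representative placeholders chosen for illustration, not the paper's fitted parameters.

```python
# Sketch: two-stage binocular contrast gain-control response (contrast in %).
import numpy as np

def two_stage_response(cL, cR, m=1.3, S=1.0, p=8.0, q=6.5, Z=0.08):
    stage1_L = cL ** m / (S + cL + cR)       # each eye suppressed by both eyes' contrast
    stage1_R = cR ** m / (S + cL + cR)
    binsum = stage1_L + stage1_R             # binocular summation
    return binsum ** p / (Z + binsum ** q)   # second, strongly nonlinear stage

# Binocular advantage near threshold: a 0.5% binocular grating produces roughly
# the same internal response as a ~0.8% monocular one with these placeholders.
print(round(two_stage_response(0.5, 0.5), 4), round(two_stage_response(0.8, 0.0), 4))
```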
Abstract:
A multi-scale model of edge coding based on normalized Gaussian derivative filters successfully predicts perceived scale (blur) for a wide variety of edge profiles [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision]. Our model spatially differentiates the luminance profile, half-wave rectifies the 1st derivative, and then differentiates twice more, to give the 3rd derivative of all regions with a positive gradient. This process is implemented by a set of Gaussian derivative filters with a range of scales. Peaks in the inverted normalized 3rd derivative across space and scale indicate the positions and scales of the edges. The edge contrast can be estimated from the height of the peak. The model provides a veridical estimate of the scale and contrast of edges that have a Gaussian integral profile. Therefore, since scale and contrast are independent stimulus parameters, the model predicts that the perceived value of either of these parameters should be unaffected by changes in the other. This prediction was found to be incorrect: reducing the contrast of an edge made it look sharper, and increasing its scale led to a decrease in the perceived contrast. Our model can account for these effects when the simple half-wave rectifier after the 1st derivative is replaced by a smoothed threshold function described by two parameters. For each subject, one pair of parameters provided a satisfactory fit to the data from all the experiments presented here and in the accompanying paper [May, K. A. & Georgeson, M. A. (2007). Added luminance ramp alters perceived edge blur and contrast: A critical test for derivative-based models of edge coding. Vision Research, 47, 1721-1731]. Thus, when we allow for the visual system's insensitivity to very shallow luminance gradients, our multi-scale model can be extended to edge coding over a wide range of contrasts and blurs. © 2007 Elsevier Ltd. All rights reserved.
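The pipeline described (differentiate, half-wave rectify, differentiate twice more at multiple scales, find the peak) can be sketched on a synthetic Gaussian-integral edge as follows. The scale-normalization exponent, scale range, and test edge are illustrative choices rather than the paper's exact implementation.

```python
# Sketch: multi-scale Gaussian-derivative edge coding on a synthetic blurred edge.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

x = np.arange(-256, 256, dtype=float)
true_blur, contrast = 8.0, 0.4
luminance = 0.5 + 0.5 * contrast * erf(x / (np.sqrt(2) * true_blur))  # Gaussian-integral edge

grad = np.gradient(luminance)             # 1st derivative of the luminance profile
grad_plus = np.maximum(grad, 0.0)         # half-wave rectification (positive gradients only)

scales = np.arange(2.0, 20.0, 0.5)
responses = np.array([
    (s ** 2) * gaussian_filter1d(grad_plus, sigma=s, order=2)   # two further derivatives
    for s in scales                                             # with scale normalization
])

# Peak of the inverted, normalized 3rd derivative over space and scale
peak_scale_idx, peak_pos = np.unravel_index(np.argmax(-responses), responses.shape)
print("edge position:", x[peak_pos], "blur-related peak scale:", scales[peak_scale_idx])
```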
Abstract:
In many models of edge analysis in biological vision, the initial stage is a linear 2nd derivative operation. Such models predict that adding a linear luminance ramp to an edge will have no effect on the edge's appearance, since the ramp has no effect on the 2nd derivative. Our experiments did not support this prediction: adding a negative-going ramp to a positive-going edge (or vice-versa) greatly reduced the perceived blur and contrast of the edge. The effects on a fairly sharp edge were accurately predicted by a nonlinear multi-scale model of edge processing [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision], in which a half-wave rectifier comes after the 1st derivative filter. But we also found that the ramp affected perceived blur more profoundly when the edge blur was large, and this greater effect was not predicted by the existing model. The model's fit to these data was much improved when the simple half-wave rectifier was replaced by a threshold-like transducer [May, K. A. & Georgeson, M. A. (2007). Blurred edges look faint, and faint edges look sharp: The effect of a gradient threshold in a multi-scale edge coding model. Vision Research, 47, 1705-1720.]. This modified model correctly predicted that the interaction between ramp gradient and edge scale would be much larger for blur perception than for contrast perception. In our model, the ramp narrows an internal representation of the gradient profile, leading to a reduction in perceived blur. This in turn reduces perceived contrast because estimated blur plays a role in the model's estimation of contrast. Interestingly, the model predicts that analogous effects should occur when the width of the window containing the edge is made narrower. This has already been confirmed for blur perception; here, we further support the model by showing a similar effect for contrast perception. © 2007 Elsevier Ltd. All rights reserved.
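The modification described, replacing the plain half-wave rectifier with a smoothed threshold on the gradient signal, can be illustrated with a generic two-parameter soft-threshold function. The specific functional form and the parameter values below are assumptions for illustration, not the authors' fitted transducer.

```python
# Sketch: a two-parameter smoothed-threshold transducer vs a plain half-wave
# rectifier. Gradients well below t are suppressed, gradients well above t
# pass essentially unchanged; s controls the smoothness of the transition.
import numpy as np

def half_wave(g):
    return np.maximum(g, 0.0)

def smoothed_threshold(g, t=0.01, s=0.005):
    return np.where(g > 0, g / (1.0 + np.exp(-(g - t) / s)), 0.0)

g = np.linspace(0, 0.05, 6)               # shallow-to-steep luminance gradients
print(np.round(half_wave(g), 4))
print(np.round(smoothed_threshold(g), 4))
```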
Abstract:
The initial image-processing stages of visual cortex are well suited to a local (patchwise) analysis of the viewed scene. But the world's structures extend over space as textures and surfaces, suggesting the need for spatial integration. Most models of contrast vision fall shy of this process because (i) the weak area summation at detection threshold is attributed to probability summation (PS) and (ii) there is little or no advantage of area well above threshold. Both of these views are challenged here. First, it is shown that results at threshold are consistent with linear summation of contrast following retinal inhomogeneity, spatial filtering, nonlinear contrast transduction and multiple sources of additive Gaussian noise. We suggest that the suprathreshold loss of the area advantage in previous studies is due to a concomitant increase in suppression from the pedestal. To overcome this confound, a novel stimulus class is designed where: (i) the observer operates on a constant retinal area, (ii) the target area is controlled within this summation field, and (iii) the pedestal is fixed in size. Using this arrangement, substantial summation is found along the entire masking function, including the region of facilitation. Our analysis shows that PS and uncertainty cannot account for the results, and that suprathreshold summation of contrast extends over at least seven target cycles of grating. © 2007 The Royal Society.
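For reference, the contrast between the two summation rules at detection threshold can be made concrete with standard textbook scaling arguments (these are generic results, not the paper's analysis): linear summation over independent noisy channels predicts threshold falling roughly as area to the power -1/2, whereas probability summation under Quick pooling with exponent beta predicts a shallower decline of area to the power -1/beta.

```python
# Back-of-envelope area-summation slopes under two pooling rules (generic results).
import numpy as np

areas = np.array([1, 2, 4, 8, 16], dtype=float)   # area in multiples of one summation unit
beta = 3.5                                        # typical Quick-pooling / psychometric exponent

thr_linear = areas ** -0.5            # linear summation, independent additive noise
thr_prob_sum = areas ** (-1.0 / beta) # probability summation (Quick pooling)
for a, tl, tp in zip(areas, thr_linear, thr_prob_sum):
    print(f"area x{a:4.0f}: linear-sum threshold {tl:.2f}   prob.-sum threshold {tp:.2f}")
```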