77 results for "Threshold numbers"
Abstract:
A criterion is derived for delamination onset in transversely isotropic laminated plates under small mass, high velocity impact. The resulting delamination threshold load is about 21% higher than the corresponding quasi-static threshold load. A closed form approximation for the peak impact load is then used to predict the delamination threshold velocity. The theory is validated for a range of test cases by comparison with 3D finite element simulation using LS-DYNA and a newly developed interface element to model delamination onset and growth. The predicted delamination threshold loads and velocities are in very good agreement with the finite element simulations. Good agreement is also shown in a comparison with published experimental results. In contrast to quasi-static impacts, delamination growth occurs under a rapidly decreasing load. Inclusion of finite thickness effects and a proper description of the contact stiffness are found to be vital for accurate prediction of the delamination threshold velocity.
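The closed-form peak-load approximation mentioned above can be illustrated with the simplest small-mass impact model: for a linear contact/plate stiffness k and impactor mass m, energy balance gives a peak load P = V·sqrt(k·m), which can be inverted at the dynamic delamination threshold load to give a threshold velocity. A minimal sketch of that idea follows; the linear-spring model and all numerical values are illustrative assumptions rather than the paper's data, and the only figure taken from the abstract is the roughly 21% elevation of the dynamic threshold over the quasi-static one.

```python
import math

def peak_impact_load(velocity, k_eff, mass):
    """Peak load for a linear spring-mass impact model:
    energy balance 0.5*m*V**2 = P**2 / (2*k) gives P = V*sqrt(k*m)."""
    return velocity * math.sqrt(k_eff * mass)

def threshold_velocity(p_threshold_dyn, k_eff, mass):
    """Invert the peak-load approximation to find the impact velocity at
    which the dynamic delamination threshold load is first reached."""
    return p_threshold_dyn / math.sqrt(k_eff * mass)

# Placeholder numbers purely for illustration.
P_qs = 1.5e3          # quasi-static delamination threshold load [N] (assumed)
P_dyn = 1.21 * P_qs   # ~21% higher dynamic threshold, per the abstract
k_eff = 2.0e6         # effective contact/bending stiffness [N/m] (assumed)
mass = 0.01           # impactor mass [kg] (assumed, small-mass regime)

print(f"Predicted delamination threshold velocity: "
      f"{threshold_velocity(P_dyn, k_eff, mass):.2f} m/s")
```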
Abstract:
BACKGROUND: To compare the ability of Glaucoma Progression Analysis (GPA) and Threshold Noiseless Trend (TNT) programs to detect visual-field deterioration.
METHODS: Patients with open-angle glaucoma who were followed for a minimum of 2 years and had a minimum of seven reliable visual fields were included. Progression was assessed subjectively by four masked glaucoma experts and compared with the GPA and TNT results. Each case was judged to be stable, deteriorated or suspicious of deterioration.
RESULTS: A total of 56 eyes of 42 patients were followed with a mean of 7.8 (SD 1.0) tests over an average of 5.5 (1.04) years. Interobserver agreement to detect progression was good (mean kappa = 0.57). Progression was detected in 10-19 eyes by the experts, in six by GPA and in 24 by TNT. Using the consensus expert opinion as the gold standard (four clinicians detected progression), the GPA sensitivity and specificity were 75% and 83%, respectively, while the TNT sensitivity and specificity were 100% and 77%, respectively.
CONCLUSION: TNT showed greater concordance with the experts than GPA in the detection of visual-field deterioration. GPA showed a high specificity but lower sensitivity, mainly detecting cases of high focality and pronounced mean defect slopes.
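The agreement and accuracy statistics quoted above (kappa, and sensitivity/specificity against a consensus gold standard) can be computed with standard tools. A minimal sketch, assuming per-eye binary progression calls held in Python lists; the data and variable names are hypothetical and purely for illustration.

```python
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical per-eye binary calls (1 = progression, 0 = stable).
expert_consensus = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]   # gold standard (assumed)
program_calls    = [1, 0, 0, 0, 1, 0, 0, 1, 1, 0]   # program under test (assumed)

# Agreement between two raters, as used for the interobserver kappa.
kappa = cohen_kappa_score(expert_consensus, program_calls)

# Sensitivity and specificity of the program against the consensus.
tn, fp, fn, tp = confusion_matrix(expert_consensus, program_calls).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"kappa={kappa:.2f}  sensitivity={sensitivity:.0%}  specificity={specificity:.0%}")
```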
Abstract:
Purpose: The authors sought to quantify neighboring and distant interpoint correlations of threshold values within the visual field in patients with glaucoma. Methods: Visual fields of patients with confirmed or suspected glaucoma were analyzed (n = 255). One eye per patient was included. Patients were examined using Program 32 of the Octopus 1-2-3. Linear regression analysis between each location and each of the remaining points of the visual field was performed, and the correlation coefficient was calculated. The degree of correlation was categorized as high (r > 0.66), moderate (0.66 ≥ r > 0.33), or low (r ≤ 0.33). The standard error of threshold estimation was calculated. Results: Most locations of the visual field had high and moderate correlations with neighboring points and with distant locations corresponding to the same nerve fiber bundle. Locations of the visual field had low correlations with those of the opposite hemifield, with the exception of locations temporal to the blind spot. The standard error of threshold estimation increased from 0.6 to 0.9 dB with an r reduction of 0.1. Conclusion: Locations of the visual field have the highest interpoint correlation with neighboring points and with distant points in areas corresponding to the distribution of the retinal nerve fiber layer. The quantification of interpoint correlations may be useful in the design and interpretation of visual field tests in patients with glaucoma.
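A minimal sketch of the interpoint correlation analysis described above, assuming threshold sensitivities stored as a (patients × locations) NumPy array; the 0.66 and 0.33 cut-offs are the ones quoted in the abstract, while the array shape and data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 255 patients x 59 test locations (placeholder values, dB).
thresholds = rng.normal(loc=25.0, scale=5.0, size=(255, 59))

# Pairwise Pearson correlations between every pair of locations.
r = np.corrcoef(thresholds, rowvar=False)

def categorize(r_value):
    """Categories used in the abstract: high, moderate, low."""
    if r_value > 0.66:
        return "high"
    if r_value > 0.33:
        return "moderate"
    return "low"

# Example: correlation category between location 0 and location 1.
print(categorize(r[0, 1]))
```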
Abstract:
PURPOSE: To evaluate the sensitivity and specificity of the screening mode of the Humphrey-Welch Allyn frequency-doubling technology (FDT), Octopus tendency-oriented perimetry (TOP), and the Humphrey Swedish Interactive Threshold Algorithm (SITA)-fast (HSF) in patients with glaucoma. DESIGN: A comparative consecutive case series. METHODS: This was a prospective study conducted in the glaucoma unit of an academic department of ophthalmology. One eye of 70 consecutive glaucoma patients and 28 age-matched normal subjects was studied. Eyes were examined with the C-20 program of FDT, G1-TOP, and 24-2 HSF in one visit and in random order. The gold standard for glaucoma was the presence of a typical glaucomatous optic disk appearance on stereoscopic examination, as judged by a glaucoma expert. The sensitivity, specificity, positive and negative predictive values, and receiver operating characteristic (ROC) curves of two algorithms for the FDT screening test, two algorithms for TOP, and three algorithms for HSF, as defined before the start of this study, were evaluated. The time required for each test was also analyzed. RESULTS: Values for the area under the ROC curve ranged from 82.5% to 93.9%. The largest area under the ROC curve (93.9%) was obtained with the FDT criterion defining abnormality as the presence of at least one abnormal location. Mean test time was 1.08 ± 0.28 minutes, 2.31 ± 0.28 minutes, and 4.14 ± 0.57 minutes for the FDT, TOP, and HSF, respectively. The difference in testing time was statistically significant (P < .0001). CONCLUSIONS: The C-20 FDT, G1-TOP, and 24-2 HSF appear to be useful tools to diagnose glaucoma. The C-20 FDT and G1-TOP tests take approximately one quarter and one half, respectively, of the time taken by the 24-2 HSF.
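A minimal sketch of the kind of diagnostic-accuracy evaluation reported above (sensitivity, specificity and area under the ROC curve against an expert-judged gold standard), assuming a per-eye count of abnormal test locations as the screening output; the labels and counts below are hypothetical.

```python
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical data: 1 = glaucoma (per expert disc grading), 0 = normal.
truth = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]
# Number of abnormal test locations per eye on the screening printout (assumed).
abnormal_locations = [4, 1, 0, 2, 6, 0, 3, 1, 5, 0]

# ROC analysis across all possible abnormality cut-offs.
auc = roc_auc_score(truth, abnormal_locations)
fpr, tpr, cutoffs = roc_curve(truth, abnormal_locations)

# Criterion analogous to the best-performing one above: >= 1 abnormal location.
calls = [1 if n >= 1 else 0 for n in abnormal_locations]
sens = sum(c and t for c, t in zip(calls, truth)) / sum(truth)
spec = sum((not c) and (not t) for c, t in zip(calls, truth)) / truth.count(0)

print(f"AUC={auc:.2f}  sensitivity={sens:.0%}  specificity={spec:.0%}")
```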
Abstract:
Mineral prospecting and raising finance for ‘junior’ mining firms has historically been regarded as a speculative activity. For the regulators of securities markets upon which ‘junior’ mining companies seek to raise capital, a perennial problem has been handling not only the indeterminacy of scientific claims, but also the social basis of epistemic practices. This paper examines the production of a system of public warrant and associated knowledge practices intended to enable investors to differentiate between ‘destructive’ and ‘productive’ varieties of financial speculation. It traces the use of the notion of ‘disclosure’ in constructing and legitimizing the ‘juniors’ market in Canada. It argues that though the work of ‘economics’ may be necessary in the construction of markets, it is by no means sufficient. Attention must also be given to the ways in which legal models of ‘the free-market’ can be translated and constantly re-worked across the sites and spaces of regulatory practice, animating the geographies of markets.
Abstract:
We propose a low-complexity closed-loop spatial multiplexing method with limited feedback over multi-input-multi-output (MIMO) fading channels. The transmit adaptation is simply performed by selecting transmit antennas (or substreams) by comparing their signal-to-noise ratios to a given threshold with a fixed nonadaptive constellation and fixed transmit power per substream. We analyze the performance of the proposed system by deriving closed-form expressions for spectral efficiency, average transmit power, and bit error rate (BER). Depending on practical system design constraints, the threshold is chosen to maximize the spectral efficiency (or minimize the average BER) subject to average transmit power and average BER (or spectral efficiency) constraints, respectively. We present numerical and Monte Carlo simulation results that validate our analysis. Compared to open-loop spatial multiplexing and other approaches that select the best antenna subset in spatial multiplexing, the numerical results illustrate that the proposed technique obtains significant power gains for the same BER and spectral efficiency. We also provide numerical results that show improvement over rate-adaptive orthogonal space-time block coding, which requires highly complex constellation adaptation. We analyze the impact of feedback delay using analytical and Monte Carlo approaches. The proposed approach is arguably the simplest possible adaptive spatial multiplexing system from an implementation point of view. However, our approach and analysis can be extended to other systems using multiple constellations and power levels.
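A minimal sketch of the threshold-based substream activation described above, assuming independent Rayleigh-fading substreams, a fixed 4-QAM constellation on each active substream and a Monte Carlo estimate of spectral efficiency; the threshold, average SNR and antenna count are illustrative choices, not the paper's design point.

```python
import numpy as np

rng = np.random.default_rng(1)

n_tx = 4                 # candidate transmit antennas / substreams (assumed)
avg_snr_db = 10.0        # average per-substream SNR (assumed)
threshold_db = 7.0       # activation threshold fed back to the transmitter (assumed)
bits_per_symbol = 2      # fixed 4-QAM constellation per active substream

avg_snr = 10 ** (avg_snr_db / 10)
threshold = 10 ** (threshold_db / 10)

trials = 100_000
# Rayleigh fading: per-substream instantaneous SNR is exponentially distributed.
snr = rng.exponential(scale=avg_snr, size=(trials, n_tx))

# Closed-loop rule: activate only substreams whose SNR exceeds the threshold.
active = snr > threshold

# Spectral efficiency with a fixed constellation on each active substream.
spectral_efficiency = bits_per_symbol * active.sum(axis=1).mean()
print(f"Average spectral efficiency: {spectral_efficiency:.2f} bit/s/Hz")
print(f"Average number of active substreams: {active.sum(axis=1).mean():.2f}")
```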
Abstract:
We employ the time-dependent R-matrix (TDRM) method to calculate anisotropy parameters for positive and negative sidebands of selected harmonics generated by two-color two-photon above-threshold ionization of argon. We consider odd harmonics of an 800-nm field ranging from the 13th to 19th harmonic, overlapped by a fundamental 800-nm IR field. The anisotropy parameters obtained using the TDRM method are compared with those obtained using a second-order perturbation theory with a model potential approach and a soft photon approximation approach. Where available, a comparison is also made to published experimental results. All three theoretical approaches provide similar values for anisotropy parameters. The TDRM approach obtains values that are closest to published experimental values. At high photon energies, the differences between each of the theoretical methods become less significant.
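For reference, the anisotropy parameters discussed above conventionally parameterize the sideband photoelectron angular distribution as a Legendre-polynomial expansion (linearly polarized, collinear fields, with θ measured from the polarization axis); for a two-photon process the expansion extends to β4. A sketch of the standard form:

```latex
% Conventional angular-distribution parameterization; W_tot is the
% angle-integrated sideband yield and P_n are Legendre polynomials.
\frac{\mathrm{d}W}{\mathrm{d}\Omega}
  = \frac{W_{\mathrm{tot}}}{4\pi}
    \left[\, 1 + \beta_2\, P_2(\cos\theta) + \beta_4\, P_4(\cos\theta) \,\right]
```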
Abstract:
Background: There has been increasing interest in the health effects of long working hours, but little empirical evidence to substantiate early case series suggesting an increased mortality risk. The aim of the current study is to quantify the mortality risk associated with long working hours and to see if this varies by employment relations and conditions of occupation.
Methods: A census-based longitudinal study of 414 949 people aged 20-59/64 years, working at least 35 h/week, subdivided into four occupational classes (managerial/professional, intermediate, own account workers, workers in routine occupations), with linkage to death records over the following 8.7 years. Cox proportional hazards models were used to examine all-cause and cause-specific mortality risk (a minimal code sketch follows this abstract).
Results: Overall, 9.4% of the cohort worked 55 or more h/week, but this proportion was greater in senior management and professional occupations and in those who were self-employed. Analysis of 4447 male and 1143 female deaths showed that hours worked were associated with an increased risk of all-cause mortality only for men working 55 or more h/week in routine/semi-routine occupations [adjusted hazard ratio (adjHR) 1.31; 95% confidence interval (CI) 1.11, 1.55] compared with their peers working 35-40 h/week. The equivalent risk of death from cardiovascular disease was adjHR 1.49 (95% CI 1.10, 2.00).
Conclusions: These findings substantiate and add to the earlier studies indicating the deleterious impact of long working hours, but also suggest that the effects are moderated by employment relations or conditions of occupation. The policy implications of these findings are discussed.
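The Methods above fit Cox proportional hazards models to linked census and mortality records. A minimal sketch of such a model using the lifelines package, assuming a flat table with follow-up time, a death indicator and a long-hours flag; the column names and the tiny data set are hypothetical and stand in for the census extract.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort extract (the real study links census records to deaths).
df = pd.DataFrame({
    "followup_years": [8.7, 8.7, 3.2, 8.7, 5.1, 8.7, 7.4, 6.0],
    "died":           [1,   0,   1,   0,   1,   0,   1,   0],
    "long_hours":     [0,   1,   1,   0,   1,   0,   1,   1],  # 55+ h/week flag
    "age":            [45,  52,  58,  39,  61,  47,  55,  42],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")

# Hazard ratios (exp(coef)) with confidence intervals, analogous to the adjHRs reported.
cph.print_summary()
```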
Abstract:
The environmental quality of land can be assessed by calculating relevant threshold values, which differentiate between concentrations of elements resulting from geogenic and diffuse anthropogenic sources and concentrations generated by point sources. A simple process allowing the calculation of these typical threshold values (TTVs) was applied across a region of highly complex geology (Northern Ireland) to six elements of interest: arsenic, chromium, copper, lead, nickel and vanadium. Three methods for identifying domains (areas where a readily identifiable factor can be shown to control the concentration of an element) were used: k-means cluster analysis, boxplots and empirical cumulative distribution functions (ECDF). The ECDF method was most efficient at determining areas of both elevated and reduced concentrations and was used to identify domains in this investigation. Two statistical methods for calculating normal background concentrations (NBCs) and upper limits of geochemical baseline variation (ULBLs), currently used in conjunction with legislative regimes in the UK and Finland respectively, were applied within each domain. The NBC methodology was constructed to run within a specific legislative framework, and its use on this soil geochemical data set was influenced by the presence of skewed distributions and outliers. In contrast, the ULBL methodology was found to calculate more appropriate TTVs that were generally more conservative than the NBCs. TTVs indicate what a "typical" concentration of an element would be within a defined geographical area and should be considered alongside the risk that each element poses in these areas to determine the potential risk to receptors.
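A minimal sketch of the ECDF-based inspection and threshold derivation described above, assuming soil concentrations for one element within one domain held in a NumPy array; the lognormal placeholder data and the simple 95th-percentile cut-off are illustrative only and are not the NBC or ULBL procedures.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical topsoil concentrations for one element in one domain (mg/kg).
conc = rng.lognormal(mean=3.0, sigma=0.5, size=500)

# Empirical cumulative distribution function (ECDF) of the concentrations.
x = np.sort(conc)
ecdf = np.arange(1, len(x) + 1) / len(x)

# Inspect the ECDF for breaks in slope that may separate geochemical domains,
# then derive an indicative upper limit, e.g. a high percentile of the domain data.
upper_limit = np.percentile(conc, 95)   # placeholder cut-off, not the ULBL method
print(f"Indicative typical threshold value: {upper_limit:.1f} mg/kg")
```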
Abstract:
Loss-of-mains protection is an important component of the protection systems of embedded generation. The role of loss-of-mains is to disconnect the embedded generator from the utility grid in the event that the connection to utility-dispatched generation is lost. This is necessary for a number of reasons, including the safety of personnel during fault restoration and the protection of plant against out-of-synchronism reclosure to the mains supply. The incumbent methods of loss-of-mains protection were designed when the installed capacity of embedded generation was low, and known problems with nuisance tripping of the devices were considered acceptable because of the insignificant consequence to system operation. With the dramatic increase in the installed capacity of embedded generation over the last decade, the limitations of current islanding detection methods are no longer acceptable. This study describes a new method of loss-of-mains protection based on phasor measurement unit (PMU) technology, specifically using a low-cost PMU device of the authors' design, developed for distribution network applications. The proposed method addresses the limitations of the incumbent methods, providing a solution that is free of nuisance tripping and has a zero non-detection zone. The system has been tested experimentally and is shown to be practical, feasible and effective. Threshold settings for the new method are recommended based on data acquired from both the Great Britain and Ireland power systems.
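The abstract does not detail the detection logic, so the following is only a generic illustration of a PMU-comparison islanding check, not the authors' method: flag loss of mains when the locally measured frequency deviates from a reference PMU's frequency by more than a threshold for a sustained number of reporting intervals. All thresholds and signals below are assumptions.

```python
import numpy as np

def loss_of_mains_detected(local_freq, remote_freq, threshold_hz=0.1, hold_samples=10):
    """Generic PMU-comparison islanding check (illustrative only): trip when the
    local frequency deviates from a reference PMU's frequency by more than a
    threshold for a sustained number of samples."""
    deviation = np.abs(np.asarray(local_freq) - np.asarray(remote_freq))
    over = deviation > threshold_hz
    # Require the exceedance to persist, to avoid nuisance tripping on transients.
    run = 0
    for flag in over:
        run = run + 1 if flag else 0
        if run >= hold_samples:
            return True
    return False

# Hypothetical 50 Hz measurement streams at the PMU reporting rate.
t = np.arange(100)
remote = 50.0 + 0.01 * np.sin(0.1 * t)       # grid reference frequency
local = remote.copy()
local[60:] += 0.5                            # islanded section drifts after sample 60
print(loss_of_mains_detected(local, remote)) # expected: True
```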