998 results for Testing criteria
Abstract:
Objectives: Previous research conducted in the late 1980s suggested that vehicle impacts following an initial barrier collision increase severe occupant injury risk. Now over 25 years old, those data are no longer representative of currently installed barriers or the present US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine whether current full-scale barrier crash testing criteria provide an indication of secondary collision risk for real-world barrier crashes. Methods: To characterize secondary collisions, 1,363 (596,331 weighted) real-world barrier midsection impacts selected from 13 years (1997-2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS) were analyzed. Scene diagrams and available scene photographs were used to determine roadside- and barrier-specific variables unavailable in NASS/CDS. Binary logistic regression models were developed for second event occurrence and resulting driver injury. To investigate current secondary collision crash test criteria, 24 full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using recommended evaluation criteria from National Cooperative Highway Research Program (NCHRP) Report 350. Results: Secondary collisions were found to occur in approximately two thirds of crashes in which a barrier is the first object struck. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors to secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of 7 compared to cases with no second event. The NCHRP Report 350 exit angle criterion was found to underestimate the risk of secondary collisions in real-world barrier crashes.
Conclusions: Consistent with previous research, collisions following a barrier impact are not infrequent and substantially increase driver injury risk. The results suggest that exit-angle-based crash test criteria alone are not sufficient to predict second collision occurrence in real-world barrier crashes.
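The "factor of 7" result above comes from a binary logistic regression, where an effect on injury odds is the exponential of the fitted coefficient. A minimal sketch of that interpretation follows; only the odds ratio of 7 comes from the abstract, and the baseline odds value is hypothetical.

```python
import math

# In a binary logistic model, the odds ratio for a predictor is exp(beta).
# The abstract reports that a secondary collision multiplies the odds of
# serious driver injury by ~7, i.e. beta = ln(7) for that indicator.
beta_second_event = math.log(7)

def injury_odds(base_odds, second_event):
    # log-odds are additive: logit(p) = ln(base_odds) + beta * x
    return math.exp(math.log(base_odds) + beta_second_event * second_event)

# hypothetical baseline odds of serious injury with no second event
base = 0.02
with_event = injury_odds(base, 1)  # baseline odds scaled by the odds ratio
```

Because log-odds are additive, the ratio `with_event / base` is 7 regardless of the baseline chosen.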
Abstract:
Previous research conducted in the late 1980s suggested that vehicle impacts following an initial barrier collision increase severe occupant injury risk. Now over twenty-five years old, the data used in the previous research are no longer representative of currently installed barriers or the US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine whether full-scale barrier crash testing criteria provide an indication of secondary collision risk for real-world barrier crashes. The analysis included 1,383 (596,331 weighted) real-world barrier midsection impacts selected from thirteen years (1997-2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS). For each suitable case, the scene diagram and available scene photographs were used to determine roadside- and barrier-specific variables not available in NASS/CDS. Binary logistic regression models were developed for second event occurrence and resulting driver injury. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors to secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of seven compared to cases with no second event present. Twenty-four full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using recommended evaluation criteria from NCHRP Report 350. It was found that the NCHRP Report 350 exit angle criterion alone was not sufficient to predict second collision occurrence for real-world barrier crashes.
Abstract:
Aspect-oriented programming (AOP) is a promising technology that supports separation of crosscutting concerns (i.e., functionality that tends to be tangled with, and scattered through, the rest of the system). In AOP, a method-like construct named advice is applied to join points in the system through a special construct named pointcut. This mechanism supports the modularization of crosscutting behavior; however, since the added interactions are not explicit in the source code, it is hard to ensure their correctness. To tackle this problem, this paper presents a rigorous coverage analysis approach to ensure exercising the logic of each advice - statements, branches, and def-use pairs - at each affected join point. To make this analysis possible, a structural model based on Java bytecode - called PointCut-based Def-Use Graph (PCDU) - is proposed, along with three integration testing criteria. Theoretical, empirical, and exploratory studies involving 12 aspect-oriented programs and several fault examples present evidence of the feasibility and effectiveness of the proposed approach. (C) 2010 Elsevier Inc. All rights reserved.
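The advice/pointcut mechanism described above is Java/AspectJ-based; as a rough, language-shifted analogy only, a Python decorator can weave crosscutting behavior (here, call logging) around a function without touching its body. All names below are illustrative, not from the paper.

```python
import functools

call_log = []  # the crosscutting concern: a trace of advised calls

def logging_advice(func):
    # loosely analogous to "around" advice applied at a
    # method-execution join point
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        call_log.append(func.__name__)
        return func(*args, **kwargs)
    return wrapper

@logging_advice            # the decoration plays the role of a pointcut match
def transfer(amount):
    return amount

transfer(10)
```

As in AOP, the logging interaction is not visible inside `transfer` itself, which is exactly why coverage of the advice logic at each affected call site matters.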
Abstract:
The aim of this study was to assess the variation between neuropathologists in the diagnosis of common dementia syndromes when multiple published protocols are applied. Fourteen of 18 Australian neuropathologists participated in diagnosing 20 cases (16 cases of dementia, 4 age-matched controls) using consensus diagnostic methods. Diagnostic criteria, clinical synopses, and slides from multiple brain regions were sent to participants, who were asked for case diagnoses. Diagnostic sensitivity, specificity, predictive value, accuracy, and variability were determined using percentage agreement and kappa statistics. Using CERAD criteria, there was high inter-rater agreement for cases with probable and definite Alzheimer's disease but low agreement for cases with possible Alzheimer's disease. Braak staging and the application of criteria for dementia with Lewy bodies also resulted in high inter-rater agreement. There was poor agreement for the diagnosis of frontotemporal dementia and for identifying small vessel disease. Participants rarely diagnosed more than one disease in any case. To improve efficiency when applying multiple diagnostic criteria, several simplifications were proposed and tested on 5 of the original 20 cases. Inter-rater reliability for the diagnosis of Alzheimer's disease and dementia with Lewy bodies significantly improved. Further development of simple and accurate methods to identify small vessel lesions and diagnose frontotemporal dementia is warranted.
Abstract:
There are as yet no validated criteria for the diagnosis of sensory neuronopathy (SNN). In a preliminary monocenter study, a set of criteria relying on clinical and electrophysiological data showed good sensitivity and specificity for a diagnosis of probable SNN. The aim of this study was to test these criteria in a French multicenter study. 210 patients with sensory neuropathies from 15 francophone reference centers for neuromuscular diseases were included, with an expert diagnosis of non-SNN, SNN, or suspected SNN according to the investigations performed in these centers. The diagnosis was reached independently of the set of criteria to be tested. The expert diagnosis was taken as the reference against which the proposed SNN criteria were tested. The set relied on clinical and electrophysiological data easily obtainable with routine investigations. 9/61 (16.4%) of non-SNN patients, 23/36 (63.9%) of suspected SNN patients, and 102/113 (90.3%) of SNN patients according to the expert diagnosis were classified as SNN by the criteria. The SNN criteria tested against the expert diagnosis in the SNN and non-SNN groups had 90.3% (102/113) sensitivity, 85.2% (52/61) specificity, 91.9% (102/111) positive predictive value, and 82.5% (52/63) negative predictive value. Discordance between the expert diagnosis and the SNN criteria occurred in 20 cases. After analysis of these cases, 11 could be reallocated to a correct diagnosis in accordance with the SNN criteria. The proposed criteria may be useful for the diagnosis of probable SNN in patients with sensory neuropathy. They can be applied with simple clinical and paraclinical investigations.
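The performance figures above follow directly from the 2x2 counts quoted in the abstract; a minimal sketch of the standard definitions:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    # standard 2x2 diagnostic-test measures
    return {
        "sensitivity": tp / (tp + fn),  # true positives / all with disease
        "specificity": tn / (tn + fp),  # true negatives / all without disease
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Counts from the abstract: 102 of 113 expert-diagnosed SNN patients met the
# criteria (true positives), and 52 of 61 non-SNN patients were correctly
# excluded (true negatives).
m = diagnostic_metrics(tp=102, fn=11, tn=52, fp=9)
```

Evaluating `m` reproduces the reported 90.3% sensitivity, 85.2% specificity, 91.9% PPV, and 82.5% NPV.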
Abstract:
Background and Study Aim: Evaluation of sport skills tests can be a very useful tool for coaching practice. The aims of the present paper were: (a) to evaluate the reliability and accuracy of the Specific Physical Fitness Tests (SPFT); (b) to review the results of karate athletes who represent different weight categories and who are at different stages of schooling; (c) to establish grading criteria for physical fitness preparation. Material/Methods: The research was conducted among 219 Kyokushin karate players, whose profiles were presented as x̄ ± SD and whose main characteristics were the following: age 26.8 ± 4.67 (19-39) years, body mass 75.2 ± 8.35 (50-97) kg, and body height 176.4 ± 5.67 (160-196) cm. The BMI amounted to 24.1 ± 2.17 (17.9-29.4) kg/m². All subjects had training experience of 10.5 ± 3.71 (4-20) years, and their degree of proficiency ranged from 4th kyu to 3rd dan. The physical fitness trials proposed by Story (1989) included: hip turning speed, speed punches, flexibility, rapid kicks, agility, and evasion actions. This was supplemented by a test of local strength endurance, composing a battery of the SPFT, which was implemented by the first author between 1991 and 2006. Results: The SPFT is characterized by high reliability and can be used to diagnose physical fitness preparation and monitor individual training results. It accurately discriminates between competitors at different sports levels, is characterized by very high accuracy, correlates with test results for general motor fitness abilities and coordination abilities, and is connected with the somatic build of the athlete. A performance classification table was developed on the basis of our research. Discussion: The results obtained in the SPFT are briefly discussed.
Conclusions: The collected results of our research lead to the following conclusion: the table can be applied not only to assess karate fighters, but also practitioners of taekwondo, kick-boxing, ju-jitsu, hapkido, or other mixed martial arts.
Abstract:
Conventional procedures used to assess the integrity of corroded piping systems with axial defects generally employ simplified failure criteria based upon a plastic collapse failure mechanism incorporating the tensile properties of the pipe material. These methods establish acceptance criteria for defects based on limited experimental data for low strength structural steels, which do not necessarily address specific requirements for the high grade steels currently used. For these cases, failure assessments may be overly conservative or provide significant scatter in their predictions, which leads to unnecessary repair or replacement of in-service pipelines. Motivated by these observations, this study examines the applicability of a stress-based criterion based upon plastic instability analysis to predict the failure pressure of corroded pipelines with axial defects. A central focus is to gain additional insight into the effects of defect geometry and material properties on the attainment of a local limit load to support the development of stress-based burst strength criteria. The work provides an extensive body of results which lend further support to adopting failure criteria for corroded pipelines based upon ligament instability analyses. A verification study conducted on burst testing of large-diameter pipe specimens with different defect lengths shows the effectiveness of a stress-based criterion using local ligament instability in burst pressure predictions, even though the adopted burst criterion exhibits a potential dependence on defect geometry and possibly on the material's strain hardening capacity. Overall, the results presented here suggest that the use of stress-based criteria based upon plastic instability analysis of the defect ligament is a valid engineering tool for integrity assessments of pipelines with axial corroded defects. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Objective: To evaluate patients with type 2 Diabetes Mellitus and painful peripheral neuropathy in order to investigate oral complaints and facial somatosensory findings. Research design and methods: Case-control study; 29 patients (12 women, mean age 57.86 years) with type 2 Diabetes Mellitus and 31 age- and gender-matched controls were evaluated with a standardized protocol for general characteristics, orofacial pain, research diagnostic criteria for temporomandibular disorders, a visual analogue scale and the McGill Pain Questionnaire, and a systematic protocol of quantitative sensory testing for bilateral facial sensitivity in the areas innervated by the trigeminal branches, which included thermal detection with the ThermoSensi 2, tactile evaluation with von Frey filaments, and superficial pain thresholds with a superficial algometer (Micromar). Statistical analysis was performed with Wilcoxon, chi-square, confidence intervals, and Spearman tests (p < 0.05). Results: Orofacial pain was reported by 55.2% of patients, and the most common descriptor was fatigue (50%); 17.2% had burning mouth. Myofascial temporomandibular disorders were diagnosed in 9 (31%) patients. The study group showed higher sensory thresholds of pain at the right maxillary branch (p = 0.017), but sensory differences were not associated with pain (p = 0.608). Glycemia and HbA1c were positively correlated with the quantitative sensory testing results for pain (p < 0.05) and cold (p = 0.044) perception. Higher pain thresholds were correlated with higher glycemia and glycated hemoglobin (p = 0.027 and p = 0.026). Conclusions: There was a high prevalence of orofacial pain, and burning mouth was the most common complaint. The association of loss of pain sensation with higher glycemia and glycated hemoglobin can be of clinical use for the follow-up of DM complications. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Objectives: (1) To establish test performance measures for Transient Evoked Otoacoustic Emission testing of 6-year-old children in a school setting; (2) to investigate whether Transient Evoked Otoacoustic Emission testing provides a more accurate and effective alternative to a pure tone screening plus tympanometry protocol. Methods: Pure tone screening, tympanometry, and transient evoked otoacoustic emission data were collected from 940 subjects (1880 ears) with a mean age of 6.2 years. Subjects were tested in non-sound-treated rooms within 22 schools. Receiver operating characteristic curves, along with specificity, sensitivity, accuracy, and efficiency values, were determined for a variety of transient evoked otoacoustic emission/pure tone screening/tympanometry comparisons. Results: The Transient Evoked Otoacoustic Emission failure rate for the group was 20.3%. The failure rate for pure tone screening was 8.9%, whilst 18.6% of subjects failed a protocol consisting of combined pure tone screening and tympanometry results. Findings from the comparison of overall Transient Evoked Otoacoustic Emission pass/fail with overall pure tone screening pass/fail suggested that use of a modified Rhode Island Hearing Assessment Project criterion would result in a very high probability that a child with a pass result has normal hearing (true negative); however, the hit rate was only moderate. Selection of a signal-to-noise ratio (SNR) criterion set at greater than or equal to 1 dB appeared to provide the best test performance measures for the range of SNR values investigated. Test performance measures generally declined when tympanometry results were included, with the exception of lower false alarm rates and higher positive predictive values. The exclusion of low frequency data from the Transient Evoked Otoacoustic Emission SNR versus pure tone screening analysis resulted in improved performance measures.
Conclusions: The present study has several implications for the clinical implementation of Transient Evoked Otoacoustic Emission screening for entry-level school children. Transient Evoked Otoacoustic Emission pass/fail criteria will require revision. The findings of the current investigation support the possible replacement of pure tone screening with Transient Evoked Otoacoustic Emission testing for 6-year-old children; however, they do not support replacement of the pure tone screening plus tympanometry battery. (C) 2001 Elsevier Science Ireland Ltd. All rights reserved.
Abstract:
The insulin hypoglycemia test (IHT) is widely regarded as the 'gold standard' for dynamic stimulation of the hypothalamic-pituitary-adrenal (HPA) axis. This study aimed to investigate the temporal relationship between a rapid decrease in plasma glucose and the corresponding rise in plasma adrenocorticotropic hormone (ACTH), and to assess the reproducibility of hormone responses to hypoglycemia in normal humans. Ten normal subjects underwent IHTs using an insulin dose of 0.15 U/kg. Of these, eight had a second IHT (IHT2) and three went on to a third test (IHT3). Plasma ACTH and cortisol were measured at 15-min intervals and, additionally, in four IHT2s and the three IHT3s, ACTH was measured at 2.5- or 5-min intervals. Mean glucose nadirs and mean ACTH and cortisol responses were not significantly different between IHT1, IHT2, and IHT3. Combined data from all 21 tests showed that the magnitude of the cortisol responses, but not the ACTH responses, correlated significantly with the depth and duration of hypoglycemia. All subjects achieved glucose concentrations of less than or equal to 1.6 mmol/l before any detectable rise in ACTH occurred. In the seven tests performed with frequent sampling, an ACTH rise never preceded the glucose nadir, but occurred at the nadir or up to 15 min after. On repeat testing, peak ACTH levels varied markedly within individuals, whereas peak cortisol levels were more reproducible (mean coefficient of variation 7%). In conclusion, hypoglycemia of less than or equal to 1.6 mmol/l was sufficient to cause stimulation of the HPA axis in all 21 IHTs conducted in normal subjects. Nonetheless, our data cannot reveal whether higher glucose nadirs would stimulate increased HPA axis activity in all subjects. Overall, the cortisol response to hypoglycemia is more reproducible than the ACTH response, but, in an individual subject, the difference in peak cortisol between two IHTs may exceed 100 nmol/l.
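The coefficient of variation used above to summarize reproducibility is simply the standard deviation divided by the mean. A minimal sketch with hypothetical repeat-test cortisol peaks (the 7% figure in the abstract is the study's own result and is not reproduced here):

```python
import statistics

def coefficient_of_variation(values):
    # CV (%) = sample standard deviation / mean * 100
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# hypothetical peak cortisol values (nmol/l) from repeat IHTs in one subject
peaks = [520, 560, 575]
cv = coefficient_of_variation(peaks)
```

A small CV indicates that repeat tests in the same subject cluster tightly around their mean, which is what "more reproducible" means for the cortisol response.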
Abstract:
Landscape metrics are widely applied in landscape ecology to quantify landscape structure. However, many are poorly tested and require rigorous validation if they are to serve as reliable indicators of habitat loss and fragmentation, such as Montreal Process Indicator 1.1e. We apply landscape ecology theory, supported by exploratory and confirmatory statistical techniques, to empirically test landscape metrics for reporting Montreal Process Indicator 1.1e in continuous dry eucalypt forests of sub-tropical Queensland, Australia. Target biota examined included: the Yellow-bellied Glider (Petaurus australis); the diversity of nectar and sap feeding glider species, including P. australis, the Sugar Glider P. breviceps, the Squirrel Glider P. norfolcensis, and the Feathertail Glider Acrobates pygmaeus; six diurnal forest bird species; total diurnal bird species diversity; and the density of nectar-feeding diurnal bird species. Two scales of influence were considered: the stand scale (2 ha) and a series of radial landscape extents (500 m - 2 km; 78 - 1250 ha) surrounding each fauna transect. For all biota, stand-scale structural and compositional attributes were found to be more influential than landscape metrics. For the Yellow-bellied Glider, the proportion of trace habitats with a residual element of old spotted-gum/ironbark eucalypt trees was a significant landscape metric at the 2 km landscape extent. This is a measure of habitat loss rather than habitat fragmentation. For the diversity of nectar and sap feeding glider species, the proportion of trace habitats with a high coefficient of variation in patch size at the 750 m extent was a significant landscape metric. None of the landscape metrics tested was important for diurnal forest birds. We conclude that no single landscape metric adequately captures the response of the region's forest biota.
This poses a major challenge to regional reporting of Montreal Process Indicator 1.1e, fragmentation of forest types.
Abstract:
The Brazilian National Regulatory Agency for Private Health Insurance and Plans has recently published a technical note defining the criteria for coverage of genetic testing to diagnose hereditary cancer. In this study, we present the case of a patient with a breast lesion and an extensive history of cancer who was referred to a private genetic counseling service. The patient met the criteria for both hereditary breast and colorectal cancer syndrome screening. Her private insurance denied coverage for genetic testing because she lacked a current or previous cancer diagnosis. After she appealed through a lawsuit, the court ruled in her favor and the test was performed using next-generation sequencing. A deletion of MLH1 exon 8 was found. We highlight the importance of offering genetic testing using multigene analysis to patients without a cancer diagnosis.
Abstract:
This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
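The repulsion idea described above can be sketched in one dimension. This is not the authors' implementation: a crude pattern search stands in for Nelder-Mead, and the test function, penalty weight, and starting point are made up. Each root already found adds an erf-shaped penalty to the merit function, pushing the next search toward a different root.

```python
import math

def f(x):
    # illustrative 1-D "system": roots at x = 1 and x = -2
    return (x - 1.0) * (x + 2.0)

def repulsion_merit(x, found_roots, rho=10.0):
    # base merit |f(x)| plus an erf-shaped bump around each known root:
    # the penalty is ~rho at a found root and decays to 0 away from it
    m = abs(f(x))
    for r in found_roots:
        m += rho * (1.0 - math.erf(abs(x - r)))
    return m

def pattern_search(merit, x0, step=0.5, iters=300):
    # crude local minimizer standing in for Nelder-Mead
    x = x0
    for _ in range(iters):
        x = min((x - step, x, x + step), key=merit)
        step *= 0.95
    return x

roots = []
for _ in range(2):  # both searches start from the same point, x = 0
    r = pattern_search(lambda x: repulsion_merit(x, roots), 0.0)
    roots.append(r)
```

The first search finds the nearer root at x = 1; the penalty then inflates the merit around it, so the second search, despite the identical starting point, converges to x = -2.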
Abstract:
ST2 is a biomarker of the interleukin-1 receptor family, and circulating soluble ST2 concentrations are believed to reflect cardiovascular stress and fibrosis. Recent studies have demonstrated soluble ST2 to be a strong predictor of cardiovascular outcomes in both chronic and acute heart failure. It is a new biomarker that meets all required criteria for a useful biomarker. Of note, it adds information to natriuretic peptides (NPs), and some studies have shown it is even superior in terms of risk stratification. Since the introduction of NPs, this has been the most promising biomarker in the field of heart failure and might be particularly useful as a guide to therapy.