982 results for "causality testing in VaRs with bootstrapping"
Abstract:
The basophil activation test (BAT) has become a pervasive test for allergic response through the development of flow cytometry, discovery of activation markers such as CD63 and unique markers identifying basophil granulocytes. Basophil activation test measures basophil response to allergen cross-linking IgE on between 150 and 2000 basophil granulocytes in <0.1 ml fresh blood. Dichotomous activation is assessed as the fraction of reacting basophils. In addition to clinical history, skin prick test, and specific IgE determination, BAT can be a part of the diagnostic evaluation of patients with food-, insect venom-, and drug allergy and chronic urticaria. It may be helpful in determining the clinically relevant allergen. Basophil sensitivity may be used to monitor patients on allergen immunotherapy, anti-IgE treatment or in the natural resolution of allergy. Basophil activation test may use fewer resources and be more reproducible than challenge testing. As it is less stressful for the patient and avoids severe allergic reactions, BAT ought to precede challenge testing. An important next step is to standardize BAT and make it available in diagnostic laboratories. The nature of basophil activation as an ex vivo challenge makes it a multifaceted and promising tool for the allergist. In this EAACI task force position paper, we provide an overview of the practical and technical details as well as the clinical utility of BAT in diagnosis and management of allergic diseases.
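As an illustration of the dichotomous readout described above, the following is a minimal sketch of how the fraction of reacting basophils might be computed from flow cytometry counts. The 15% positivity cut-off, the function name, and the example counts are illustrative assumptions, not values from the abstract.

```python
# Hedged sketch: percentage of activated (CD63+) basophils from gated event
# counts, as in a basophil activation test readout. The 15% cut-off and the
# counts below are illustrative assumptions, not values from the abstract.

def bat_activation(cd63_positive: int, total_basophils: int, cutoff_pct: float = 15.0):
    """Return (%CD63+ basophils, dichotomous positive/negative call)."""
    if not (150 <= total_basophils <= 2000):
        raise ValueError("expected 150-2000 gated basophils per the abstract")
    pct = 100.0 * cd63_positive / total_basophils
    return pct, pct >= cutoff_pct

# Example: 120 activated cells out of 800 gated basophils (hypothetical)
pct, positive = bat_activation(cd63_positive=120, total_basophils=800)
print(f"{pct:.1f}% activated -> {'positive' if positive else 'negative'}")
```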
Abstract:
Due to the lack of exercise testing devices that can be employed in stroke patients with severe disability, the aim of this PhD research was to investigate the clinical feasibility of using a robotics-assisted tilt table (RATT) as a method for cardiopulmonary exercise testing (CPET) and exercise training in stroke patients. For this purpose, the RATT was augmented with force sensors, a visual feedback system and a work rate calculation algorithm. As the RATT had not been used previously for CPET, the first phase of this project focused on a feasibility study in 11 healthy able-bodied subjects. The results demonstrated substantial cardiopulmonary responses, no complications were found, and the method was deemed feasible. The second phase was to analyse validity and test-retest reliability of the primary CPET parameters obtained from the RATT in 18 healthy able-bodied subjects and to compare the outcomes to those obtained from standard exercise testing devices (a cycle ergometer and a treadmill). The results demonstrated that peak oxygen uptake (V'O2peak) and oxygen uptake at the submaximal exercise thresholds on the RATT were approximately 20% lower than on the cycle ergometer and approximately 30% lower than on the treadmill. Very high correlations were found between the RATT and cycle ergometer V'O2peak and between the RATT and treadmill V'O2peak. Test-retest reliability of the CPET parameters obtained from the RATT was similarly high to that of the standard exercise testing devices. These findings suggested that the RATT is a valid and reliable device for CPET and that it has potential to be used in severely impaired patients. Thus, the third phase was to investigate using the RATT for CPET and exercise training in 8 severely disabled stroke patients. The method was technically implementable, well tolerated by the patients, and substantial cardiopulmonary responses were observed. Additionally, all patients could exercise at the recommended training intensity for 10 min bouts. Finally, an investigation of test-retest reliability and four-week changes in cardiopulmonary fitness was carried out in 17 stroke patients with various degrees of disability. Good to excellent test-retest reliability and repeatability were found for the main CPET variables. There was no significant difference in most CPET parameters over four weeks. In conclusion, based on the demonstrated validity, reliability and repeatability, the RATT was found to be a feasible and appropriate alternative exercise testing and training device for patients who have limitations for the use of standard devices.
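For illustration, a minimal sketch of the device comparison described above: mean percentage difference in V'O2peak and the Pearson correlation for paired measurements. The values and names are hypothetical, not the study's data.

```python
# Hedged sketch: comparing peak oxygen uptake (V'O2peak) between devices --
# mean percentage difference and Pearson correlation for paired measurements.
# The numbers below are made-up illustrative values, not data from the study.
import numpy as np
from scipy.stats import pearsonr

vo2_ratt      = np.array([28.1, 31.5, 25.9, 34.2, 29.8])   # ml/kg/min, hypothetical
vo2_ergometer = np.array([35.0, 39.4, 32.8, 42.1, 37.6])   # ml/kg/min, hypothetical

pct_diff = 100.0 * (vo2_ratt - vo2_ergometer) / vo2_ergometer
r, p = pearsonr(vo2_ratt, vo2_ergometer)

print(f"mean difference: {pct_diff.mean():.1f}% (RATT vs cycle ergometer)")
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```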
Abstract:
Diagnosis of primary ciliary dyskinesia (PCD) lacks a "gold standard" test and is therefore based on combinations of tests including nasal nitric oxide (nNO), high-speed video microscopy analysis (HSVMA), genotyping and transmission electron microscopy (TEM). There are few published data on the accuracy of this approach. Using prospectively collected data from 654 consecutive patients referred for PCD diagnostics, we calculated sensitivity and specificity for individual and combination testing strategies. Not all patients underwent all tests. HSVMA had excellent sensitivity and specificity (100% and 93%, respectively). TEM was 100% specific, but 21% of PCD patients had normal ultrastructure. nNO (30 nL·min(-1) cut-off) had good sensitivity and specificity (91% and 96%, respectively). Simultaneous testing using HSVMA and TEM was 100% sensitive and 92% specific. In conclusion, combination testing was found to be a highly accurate approach for diagnosing PCD. HSVMA alone has excellent accuracy, but requires significant expertise, and repeated sampling or cell culture is often needed. TEM alone is specific but misses 21% of cases. nNO (≤30 nL·min(-1)) contributes well to the diagnostic process. In isolation, nNO screening at this cut-off would miss ∼10% of cases, but in combination with HSVMA could reduce unnecessary further testing. Standardisation of testing between centres is a future priority.
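A minimal sketch of the accuracy calculations described above: per-test sensitivity and specificity, plus an "either test positive" parallel combination (assuming independence of the tests, which is a simplification). The counts are illustrative, not the study data.

```python
# Hedged sketch: sensitivity/specificity from test counts, and an "either test
# positive" combination rule for simultaneous HSVMA + TEM testing.
# Counts below are illustrative, not the study data.

def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical per-test counts (PCD-positive and PCD-negative referrals)
hsvma = dict(tp=50, fn=0, tn=93, fp=7)     # ~100% sensitive, ~93% specific
tem   = dict(tp=40, fn=10, tn=100, fp=0)   # misses ~20%, 100% specific

se_h, sp_h = sens_spec(**hsvma)
se_t, sp_t = sens_spec(**tem)

# Parallel combination: call positive if either test is positive. Assuming
# independence (a simplification), combined sensitivity rises while combined
# specificity falls to the product of the individual specificities.
se_comb = 1 - (1 - se_h) * (1 - se_t)
sp_comb = sp_h * sp_t
print(f"HSVMA: Se={se_h:.2f} Sp={sp_h:.2f}; TEM: Se={se_t:.2f} Sp={sp_t:.2f}")
print(f"either-positive combination: Se={se_comb:.2f} Sp={sp_comb:.2f}")
```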
Abstract:
Widespread and continuing use of multiple-choice testing in technical subjects is leading to a mindset amongst students that is antithetical to the actual use of intellect.
Abstract:
Norovirus, a positive-sense single-stranded RNA virus, has been identified as a major etiologic agent in foodborne gastroenteritis and diarrheal diseases. The emergence of this organism as a major non-bacterial cause of such outbreaks is partly due to improved diagnostic tools such as reverse transcription polymerase chain reaction (RT-PCR) that enable its detection. Norovirus accounts for nearly 96% of non-bacterial gastroenteritis outbreaks in the US (1). Travelers' Diarrhea (TD) has remained a constant public health risk for travelers from developed nations for decades, and bacteria such as enterotoxigenic Escherichia coli and enteroaggregative Escherichia coli have been described as the main etiologic agents of TD (2-4). A possible viral contribution to TD has been reported in two studies (5, 6). The current study was designed to determine the prevalence of norovirus in a population of 107 US students with TD acquired in Mexico in 2005 and to compare it to the prevalence of norovirus found in a similar study conducted in 2004. This study involved testing clinical stool specimens from the 107 subjects in 2005 for the presence of norovirus using RT-PCR. The 2004 prevalence used for comparison to the 2005 data was obtained from published data (5). All subjects were recruited as TD subjects in a randomized, double-blinded clinical trial comparing a standard three-day dosing of rifaximin with and without the antimotility drug loperamide. The prevalence of norovirus genogroup I was similar in both years, but genogroup II prevalence differed between the two years (p = 0.003). This finding suggests that the prevalence of norovirus genogroups varies with time even within a specific geographic location. The study emphasizes the need for further systematic epidemiologic studies to determine the molecular epidemiology and prevalence patterns of the different genogroups of this virus, which are essential to the planning and implementation of public health measures to lessen the burden of TD due to norovirus infection among US travelers.
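The year-to-year comparison of genogroup prevalence could be carried out, for example, with Fisher's exact test on a 2x2 table, as sketched below; the counts are illustrative, not the study data.

```python
# Hedged sketch: comparing norovirus genogroup II prevalence between two study
# years with Fisher's exact test on a 2x2 table -- the kind of comparison
# behind a reported p-value. Counts are illustrative, not the study data.
from scipy.stats import fisher_exact

#              GII-positive   GII-negative
table = [[       8,              99        ],   # 2004 cohort (hypothetical)
         [      25,              82        ]]   # 2005 cohort (hypothetical)

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```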
Abstract:
Objective. The purpose of this study was to examine the association between perceived stress and passing the fitness test in a cohort of Department of Defense active duty members. This association has been suggested in numerous articles. Methods. The 2005 DoD Survey of Health Related Behaviors Among Active Duty Military Personnel was used to examine the association between participants’ perceived levels of stress from family and/or work-related sources and the outcome of the respondents’ last required fitness test, taking into account potential confounders of the association. Measures of association were obtained from logistic regression models. Results. Participants who experienced “some” or “a lot” of stress either from work sources (OR 0.69, 95% CI: 0.58-0.87) or from personal/family sources (OR 0.70, 95% CI: 0.57-0.86) were less likely to pass the fitness test than their counterparts who experienced “none” or “a little” stress. Additionally, those who reported “some” or “a lot” of stress either from work sources (OR 0.54, 95% CI: 0.41-0.70) or from personal/family sources (OR 0.54, 95% CI: 0.44-0.67) that interfered with their military duties were also less likely to pass the fitness test. Multivariate adjustment only slightly reduced the unadjusted associations. Conclusions. An association exists between perceived stress levels and the outcome of fitness testing: the higher the level of perceived stress, the less likely the person is to pass the fitness test. Stress-related interventions might be useful in helping military members achieve the level of fitness needed to perform their duties.
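For illustration, the kind of unadjusted odds ratio and Wald 95% confidence interval reported above can be computed from a 2x2 table as sketched below; the counts are hypothetical (chosen only so the point estimate lands near OR 0.69), not the survey data.

```python
# Hedged sketch: an unadjusted odds ratio with a Wald 95% confidence interval
# from a 2x2 table (high stress vs passing the fitness test), the kind of
# estimate reported before multivariate adjustment. Counts are illustrative,
# not the survey data.
import math

# rows: stress level; columns: passed / failed fitness test (hypothetical)
a, b = 1800, 500    # "some"/"a lot" of stress: passed, failed
c, d = 2600, 500    # "none"/"a little" stress: passed, failed

or_hat = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_hat) - 1.96 * se_log_or)
hi = math.exp(math.log(or_hat) + 1.96 * se_log_or)
print(f"OR = {or_hat:.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```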
Abstract:
Multiple Endocrine Neoplasia type 1 (MEN1) is a hereditary cancer syndrome characterized by tumors of the endocrine system. Tumors most commonly develop in the parathyroid glands, the pituitary gland, and the gastroenteropancreatic tract. MEN1 is a highly penetrant condition and age of onset is variable. Most patients are diagnosed in early adulthood; however, rare cases of MEN1 present in early childhood. Expert consensus opinion is that predictive genetic testing should be offered at age 5 years; however, there are no evidence-based studies that clearly establish that predictive genetic testing at this age is beneficial, since most symptoms do not present until later in life. This study was designed to explore attitudes about the most appropriate age for predictive genetic testing among individuals at risk of having a child with MEN1. Participants who had an MEN1 mutation were invited to complete a survey and were asked to invite their spouses to participate as well. The survey included several validated measures designed to assess participants’ attitudes about predictive testing in minors. Fifty-eight affected participants and twenty-two spouses/partners completed the survey. Most participants felt that MEN1 genetic testing was appropriate in healthy minors. Younger participant age and greater knowledge of MEN1 genetics and inheritance predicted endorsement of genetic testing at a younger age. Additionally, participants who saw more positive than negative general outcomes from genetic testing were more likely to favor genetic testing at younger ages. Overall, participants felt genetic testing should be offered at a younger age than for most adult-onset conditions, and most felt the appropriate time for testing was when a child could understand and participate in the testing process. Psychological concerns seemed to be the primary focus of participants who favored later ages for genetic testing, while medical benefits were more commonly cited in favor of younger ages. This exploratory study has implications for counseling patients whose children are at risk of developing MEN1 and illustrates issues that are important to patients and their spouses when considering testing in children.
Abstract:
BACKGROUND. The development of interferon-gamma release assays (IGRA) has introduced powerful tools for diagnosing latent tuberculosis infection (LTBI) and may play a critical role in the future of tuberculosis diagnosis. However, there have been reports of high rates of indeterminate results in young patient populations (0-18 years). This study investigated results of the QuantiFERON-TB Gold In-Tube (QFT-GIT) IGRA in a population of children (0-18 years) at Texas Children's Hospital in association with specimen collection procedures, measured using surrogate variables. METHODS. A retrospective case-control study design was used for this investigation. Cases were defined as having QFT-GIT indeterminate results. Controls were defined as having either positive or negative results (determinates). Patients' admission status, the staff performing specimen collection, and the specific nurse performing specimen collection were used as surrogates for specimen collection procedures. To minimize potential confounding, abstraction of patients' electronic medical records was performed. Abstracted data included patients' medications and evaluation at the time of QFT-GIT specimen collection in addition to their medical history. QFT-GIT-related data were also abstracted. Cases and controls were characterized using chi-squared tests or Fisher's exact tests across categorical variables. Continuous variables were analyzed using one-way ANOVA and t-tests. A multivariate model was constructed by backward stepwise removal of statistically significant variables from univariate analysis. RESULTS. Patient data were abstracted from 182 individuals aged 0-18 years seen from July 2010 to August 2011 at Texas Children's Hospital; 56 cases (indeterminates) and 126 controls (determinates) were enrolled. Cancer was found to be an effect modifier, and subsequent stratification resulted in a cancer patient population too small to analyze (n=13). Subsequent analyses excluded these patients. The exclusion of cancer patients resulted in a population of 169 patients with 49 indeterminates (28.99%) and 120 determinates (71.01%), with mean ages of 9.73 (95% CI: 8.03, 11.43) years and 11.66 (95% CI: 10.75, 12.56) years (p = 0.033), respectively. Median ages of indeterminate and determinate patients were 12.37 and 12.87 years, respectively. Lack of data for our specific nurse surrogate (QFTNurse) resulted in its exclusion from analysis. The final model included only our remaining surrogate variables (QFTStaff and QFTInpatientOutpatient). The staff-collection surrogate (QFTStaff) was modestly but not significantly associated with indeterminate results when nurses collected the specimen (OR = 1.54, 95% CI: 0.51, 4.64, p = 0.439) in the final model. Inpatients had a strong and statistically significant association with indeterminate results (OR = 11.65, 95% CI: 3.89, 34.9, p < 0.001) in the final model. CONCLUSION. Inpatient status was used as a surrogate for nurse-drawn blood specimens, since nurses, unlike phlebotomists, have had little to no training regarding shaking of the tubes during QFT-GIT specimen collection. Collection procedures were also measured by two other surrogates: a medical note stating whether a nurse or a phlebotomist collected the specimen (QFTStaff), and the name and title of the specific nurse if collection was performed by a nurse (QFTNurse). Results indicated that inpatient status was a strong and statistically significant factor for indeterminate results; however, nurse-collected specimens and indeterminate results had no statistically significant association in non-cancer patients. The lack of data denoting the specific nurse performing specimen collection excluded the QFTNurse surrogate from our analysis. The findings suggest that training staff in specimen collection procedures may have little effect on the number of indeterminate results, while inpatient status, and thus possibly illness severity, may be the most important factor for indeterminate results in this population. The lack of congruence between our surrogate measures may imply that our inpatient surrogate gauged illness severity rather than collection procedures as intended. Despite the lack of clear findings, our analysis indicated that more than half of indeterminate results were found in specimens drawn by nurses, and as such staff training may still be worth exploring. Future studies may explore methods of measuring modifiable variables during pre-analytical QFT-GIT procedures that can be discerned and controlled. Identification of such measures may provide insight into ways of lowering indeterminate QFT-GIT rates in children.
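As an illustration of the case-control comparison described in the methods, the sketch below applies a chi-squared test of independence to inpatient/outpatient status versus indeterminate/determinate results and computes a crude odds ratio; the counts are hypothetical, not the chart-review data.

```python
# Hedged sketch: a case-control comparison of a categorical variable
# (inpatient vs outpatient status) between indeterminate and determinate
# QFT-GIT results, using a chi-squared test of independence.
# Counts are illustrative, not the chart-review data.
from scipy.stats import chi2_contingency

#              indeterminate   determinate
table = [[         35,             30      ],   # inpatient  (hypothetical)
         [         14,             90      ]]   # outpatient (hypothetical)

chi2, p, dof, expected = chi2_contingency(table)
crude_or = (35 * 90) / (30 * 14)   # crude odds ratio for inpatient status
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}, crude OR = {crude_or:.1f}")
```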
Abstract:
Self-management is being promoted in cystic fibrosis (CF). However, it has not been well studied. The principal aims of this research were (1) to evaluate psychometric properties of a CF disease status measure, the NIH Clinical Score; (2) to develop and validate a measure of self-management behavior, the SMQ-CF scale; and (3) to examine the relation between self-management and disease status in CF patients over two years. In study 1, NIH Clinical Scores for 200 patients were used. The scale was examined for internal consistency, interrater reliability, and content validity using factor analysis. Cronbach's alpha (.81) and interrater reliability (.90) for the total scale were high. General scale items were less reliable. Factor analysis indicated that most of the variance in disease status is accounted for by Factor 1, which consists of pulmonary disease items. The SMQ-CF measures the performance of CF self-management. Pilot testing was done with 98 CF primary caregivers. Internal consistency reliability, social desirability bias, and content validity using factor analysis were examined. Internal consistency was good (alpha = .95). The social desirability correlation was low (r = .095). The twelve factors identified were consistent with conceptual groupings of behaviors. Around two hundred caregivers from two CF centers were surveyed, and multivariate analysis of variance was used to assess construct validity. Results confirmed expected relations between self-management, patient age, and disease status. Patient age accounted for 50% and disease status for 18% of the variance in the SMQ-CF scale. It was hypothesized that self-management would positively affect future disease status. Data from 199 CF patients (control and education intervention groups) were examined. Models of hypothesized relations were tested using LISREL structural equation modeling. Results indicated that the relations between baseline self-management and Time 1 disease status were not significant. Significant relations were observed in self-management behaviors from Time 1 to Time 2, and patterns of significant relations differed between the two groups. This research has contributed to refinements in the ability to measure self-management behavior and disease status outcomes in cystic fibrosis. In addition, it provides the first steps in exploratory behavioral analysis with regard to self-management in this disease.
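For illustration, Cronbach's alpha, the internal consistency statistic reported above, can be computed as sketched below; the response matrix is random illustrative data (which will yield an alpha near zero), not the study data.

```python
# Hedged sketch: Cronbach's alpha for internal consistency of a questionnaire
# scale, the statistic reported for the NIH Clinical Score and the SMQ-CF.
# The response matrix below is random illustrative data, not the study data;
# uncorrelated random items give an alpha near zero, whereas real, correlated
# scale items would give the high values reported.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(98, 12))   # 98 caregivers x 12 items, hypothetical
print(f"alpha = {cronbach_alpha(responses):.2f}")
```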
Abstract:
This paper explores whether a worker's unwillingness to make his/her HIV-positive status or test-taking experience known to colleagues impedes his/her decision to test for HIV. Analyzing new survey data from employees of a large multinational enterprise in South Africa (2009-2010), this study finds that this unwillingness is negatively associated with test-taking (at the enterprise's on-site clinic) among workers who are extensively networked with close colleagues (i.e., who know their phone numbers). It appears that the expected disutility associated with HIV/AIDS-related stigma deters test uptake. When introducing HIV counseling and testing programs into the corporate sector, providing all workers with an excuse to test in the workplace and/or inducing them to test privately outside the workplace may be effective in encouraging uptake.
Abstract:
The critical conditions for hydrogen embrittlement (HE) risk of high-strength galvanized steel (HSGS) wires and tendons exposed to alkaline concrete pore solutions have been evaluated by means of electrochemical and mechanical testing. There is a relationship between the hydrogen embrittlement risk in HSGS and the duration of the hydrogen evolution process in alkaline media. The galvanized steel suffers anodic dissolution simultaneously with hydrogen evolution, which does not stop until the passivation process is completed. HSGS wires exposed to very highly alkaline media showed HE risk, with loss of mechanical properties, only when long periods of hydrogen evolution took place together with a simultaneous intensive reduction of the galvanized coating.
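One common way to quantify the reported loss of mechanical properties, offered here only as an illustrative assumption rather than the authors' method, is an embrittlement index: the relative drop in a ductility measure for exposed wires versus reference wires, as sketched below with hypothetical values.

```python
# Hedged sketch: an embrittlement index -- the relative drop in a ductility
# measure (e.g. reduction of area or elongation) for wires tested after
# exposure versus reference wires. The values below are illustrative, not
# the study's measurements.

def embrittlement_index(reference: float, exposed: float) -> float:
    """Percentage loss relative to the reference condition."""
    return 100.0 * (reference - exposed) / reference

ra_reference = 42.0   # % reduction of area, unexposed wire (hypothetical)
ra_exposed   = 27.0   # % reduction of area after alkaline exposure (hypothetical)
print(f"embrittlement index = {embrittlement_index(ra_reference, ra_exposed):.1f}%")
```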
Abstract:
The application of rheology to the study of biological systems is a new and very broad field, in which melon remains completely unstudied. The goal of this work is to determine some physical characteristics of this fruit, immediately after harvest and during its conservation in cold storage. Portugal and Spain are the countries most interested in these studies, as they are important producers of melon. The varieties Branco da Leziria and Piel de Sapo were chosen because they are the most popular in both countries. The fruit were studied on the day they were harvested and then conserved in cold storage at the "Instituto del Frio" in Madrid, where they were periodically tested again. Thus, over seven days, the same fruits, together with new fruits, were taken and tested. On the first day of testing 20 fruits were studied, and by the end of the testing period 80 fruits had been used. The results from the non-destructive impact test were very significant, and they may contribute to standardising methods to measure fruit maturity. These results were confirmed by those obtained from compression tests. The results obtained during the impact tests with melon were similar to those obtained previously with other fruits. There is a close relationship between the results of the impact tests and the compression tests. Tests such as impact and compression can be adapted to melon, varieties Piel de Sapo and Branco da Leziria, allowing us to continue further work with this species. The large number of data obtained during performance of the tests allowed us to continue this work and to contribute to standardising methods of measurement and expression of characteristics of a new biological product. During the "Impact damage in fruits and vegetables" workshop, held in Zaragoza in 1990, these matters were included in the priority list.
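For illustration, the close relationship between impact and compression results mentioned above could be summarized with a least-squares line and a correlation coefficient, as sketched below; the values are hypothetical, not the melon data.

```python
# Hedged sketch: relating non-destructive impact readings to compression-test
# firmness with a least-squares line. Values are illustrative, not the melon data.
import numpy as np

impact_firmness      = np.array([1.8, 2.3, 2.9, 3.4, 4.0, 4.6])   # hypothetical index
compression_firmness = np.array([12., 15., 19., 22., 27., 30.])   # N, hypothetical

slope, intercept = np.polyfit(impact_firmness, compression_firmness, 1)
r = np.corrcoef(impact_firmness, compression_firmness)[0, 1]
print(f"compression ~ {slope:.1f} * impact + {intercept:.1f}  (r = {r:.2f})")
```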
Abstract:
Studies of patients with temporal lobe epilepsy provide few descriptions of seizures that arise in the temporopolar and anterior temporobasal brain regions. Based on connectivity, it might be assumed that the semiology of these seizures is similar to that of medial temporal lobe epilepsy. However, accumulating evidence suggests that the anterior temporobasal cortex may play an important role in the language system, which could account for particular features of seizures arising here. We studied the electroclinical features of seizures in patients with circumscribed temporopolar and temporobasal lesions in order to identify specific features that might differentiate them from seizures originating in other temporal areas. Among 172 patients with temporal lobe seizures registered in our epilepsy unit in the last 15 years, 15 (8.7%) had seizures caused by temporopolar or anterior temporobasal lesions (11 left-sided lesions). The main finding of our study is that, in patients with left-sided lesions, aphasia during seizures was the most prominent feature. In addition, while all patients showed normal to high intellectual functioning on standard neuropsychological testing, semantic impairment was found in a subset of 9 patients with left-sided lesions. This case series demonstrates that aphasic seizures without impairment of consciousness can result from small, circumscribed left anterior temporobasal and temporopolar lesions. Thus, the presence of speech manifestations during seizures should prompt detailed assessment of the structural integrity of the basal surface of the temporal lobe in addition to the evaluation of primary language areas.
Abstract:
Over the past few decades, significant scientific progress has influenced clinical allergy practice. The biological standardization of extracts was followed by the massive identification and characterization of new allergens and their progressive use as diagnostic tools, including allergen microarrays that facilitate the simultaneous testing of more than 100 allergen components. Specific diagnosis is the basis of allergy practice and always aims to select the best therapeutic or avoidance intervention. As a consequence, redundant or irrelevant information might add unnecessary cost and complexity to daily clinical practice. A rational use of the different diagnostic alternatives would allow a significant improvement in the diagnosis and treatment of allergic patients, especially those residing in areas of complex pollen exposure.
Abstract:
Several lines of evidence indicate that a modest increase in circulating glucose levels enhances memory. One mechanism underlying glucose effects on memory may be an increase in acetylcholine (ACh) release. The present experiment determined whether enhancement of spontaneous alternation performance by systemic glucose treatment is related to an increase in hippocampal ACh output. Samples of extracellular ACh were assessed at 12-min intervals using in vivo microdialysis with HPLC-EC. Twenty-four minutes after an intraperitoneal injection of saline or glucose (100, 250, or 1000 mg/kg), rats were tested in a four-arm cross maze for spontaneous alternation behavior combined with microdialysis collection. Glucose at 250 mg/kg, but not 100 or 1000 mg/kg, produced an increase in spontaneous alternation scores (69.5%) and ACh output (121.5% versus baseline) compared to alternation scores (44.7%) and ACh output (58.9% versus baseline) of saline controls. The glucose-induced increase in alternation scores and ACh output was not secondary to changes in locomotor activity. Saline and glucose (100-1000 mg/kg) treatment had no effect on hippocampal ACh output when rats remained in the holding chamber. These findings suggest that glucose may enhance memory by directly or indirectly increasing the release of ACh. The results also indicate that hippocampal ACh release is increased in rats performing a spatial task. Moreover, because glucose enhanced ACh output only during behavioral testing, circulating glucose may modulate ACh release only under conditions in which cholinergic cells are activated.
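For illustration, a spontaneous alternation score of the kind reported above can be computed from the sequence of arm entries as sketched below; the scoring rule (four different arms within overlapping sets of four consecutive entries) is a standard convention for a four-arm maze, and the entry sequence is hypothetical.

```python
# Hedged sketch: spontaneous alternation score from a sequence of arm entries
# in a four-arm cross maze -- the percentage of overlapping sets of four
# consecutive entries that visit four different arms. The entry sequence is
# illustrative, not data from the experiment.

def alternation_score(entries: str) -> float:
    """entries: string of arm labels in order of entry, e.g. 'ABCDACBD'."""
    windows = [entries[i:i + 4] for i in range(len(entries) - 3)]
    alternations = sum(1 for w in windows if len(set(w)) == 4)
    return 100.0 * alternations / len(windows)

print(f"{alternation_score('ABCDACBDABDC'):.1f}% alternation")  # hypothetical sequence
```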