875 results for requirement-based testing
Abstract:
The aim of this paper is to present a new class of smoothness testing strategies in the context of hp-adaptive refinements based on continuous Sobolev embeddings. In addition to deriving a modified form of the 1D smoothness indicators introduced in [26], these indicators will be extended and applied to a higher-dimensional framework. A few numerical experiments in the context of the hp-adaptive FEM for a linear elliptic PDE will be performed.
Abstract:
In this paper, we propose a new method for stitching multiple fluoroscopic images taken by a C-arm instrument. We employ an X-ray radiolucent ruler with numbered graduations while acquiring the images, and the image stitching is based on detecting and matching ruler parts in the images to the corresponding parts of a virtual ruler. To achieve this goal, we first detect the regularly spaced graduations and the numbers on the ruler. After graduation labeling, for each image we have the location and the associated number for every graduation on the ruler. Then, we initialize the panoramic X-ray image with the virtual ruler, and we “paste” each image by aligning the detected ruler part in the original image to the corresponding part of the virtual ruler on the panoramic image. Our method is based on ruler matching without requiring the matching of similar feature points in pairwise images; thus, we do not necessarily require overlap between the images. We tested our method on eight different datasets of X-ray images, including long bones and a complete spine. Qualitative and quantitative experiments show that our method achieves good results.
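The pasting step described above amounts to estimating a per-image translation from graduation detections; a minimal sketch in Python, assuming graduations are already detected and labeled (the function name, data layout, and constant ruler spacing are illustrative assumptions, not taken from the paper):

```python
# Sketch of ruler-based placement: each image contains detected
# graduations given as (graduation_number, pixel_x) pairs. A virtual
# ruler places graduation k at position k * spacing_px on the panorama;
# the image's offset is the translation aligning its detections to it.

def image_offset(detections, spacing_px):
    """Estimate the panorama offset of one image.

    detections: list of (number, pixel_x) pairs for graduations
                found in that image.
    spacing_px: pixel distance between consecutive graduations on
                the virtual ruler (assumed constant).
    Returns the least-squares translation in pixels.
    """
    # For each detection, the offset mapping it onto the virtual ruler
    # is (number * spacing_px - pixel_x); averaging over all detections
    # adds robustness to localization noise.
    offsets = [n * spacing_px - x for n, x in detections]
    return sum(offsets) / len(offsets)

# Two images with no overlap can still be placed consistently,
# because both are referenced to the same virtual ruler.
img_a = [(2, 40.0), (3, 140.0)]   # graduations 2 and 3 seen in image A
img_b = [(7, 10.0), (8, 110.0)]   # graduations 7 and 8 seen in image B
print(image_offset(img_a, 100.0))  # 160.0
print(image_offset(img_b, 100.0))  # 690.0
```

This illustrates why pairwise image overlap is unnecessary: every image is registered to the common virtual ruler rather than to its neighbours.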
Abstract:
In retinal surgery, surgeons face difficulties such as indirect visualization of surgical targets, physiological tremor, and lack of tactile feedback, which increase the risk of retinal damage caused by incorrect surgical gestures. In this context, intraocular proximity sensing has the potential to overcome current technical limitations and increase surgical safety. In this paper, we present a system for detecting unintentional collisions between surgical tools and the retina using the visual feedback provided by the ophthalmic stereo microscope. Using stereo images, proximity between surgical tools and the retinal surface can be detected when their relative stereo disparity is small. For this purpose, we developed a system comprising two modules. The first is a module for tracking the surgical tool position in both stereo images. The second is a disparity tracking module for estimating a stereo disparity map of the retinal surface. Both modules were specially tailored to cope with the challenging visualization conditions in retinal surgery. The potential clinical value of the proposed method is demonstrated by extensive testing using a silicone phantom eye and recorded in vivo rabbit data.
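The core proximity test can be sketched in a few lines: objects close in depth have nearly equal stereo disparity, so a small difference between the tracked tool-tip disparity and the retinal disparity map at the tip's location signals an impending collision. A minimal illustration (function names, data layout, and the threshold are assumptions, not the authors' implementation):

```python
# Proximity test via relative stereo disparity: warn when the tool tip's
# disparity is within a small margin of the retina's disparity at the
# same image location.

def collision_warning(tool_disparity, retina_disparity_map, tip_xy,
                      threshold_px=2.0):
    """Return True if the tool tip appears close to the retina.

    tool_disparity:       disparity (pixels) of the tracked tool tip.
    retina_disparity_map: dict mapping (x, y) -> retinal disparity.
    tip_xy:               image coordinates of the tool tip.
    threshold_px:         disparity difference treated as 'proximal'.
    """
    retina_disparity = retina_disparity_map[tip_xy]
    return abs(tool_disparity - retina_disparity) < threshold_px

# Tool tip at (120, 80); the retina there has disparity 14.5 px.
disparity_map = {(120, 80): 14.5}
print(collision_warning(13.8, disparity_map, (120, 80)))  # True  (|0.7| < 2)
print(collision_warning(9.0, disparity_map, (120, 80)))   # False (|5.5| >= 2)
```

In a real system the disparity map would come from dense stereo matching and the threshold would be calibrated to the microscope's geometry.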
Abstract:
OBJECTIVE To systematically review evidence on genetic variants influencing outcomes during warfarin therapy and provide practice recommendations addressing the key questions: (1) Should genetic testing be performed in patients with an indication for warfarin therapy to improve achievement of stable anticoagulation and reduce adverse effects? (2) Are there subgroups of patients who may benefit more from genetic testing than others? (3) How should patients with an indication for warfarin therapy be managed based on their genetic test results? METHODS A systematic literature search was performed for VKORC1 and CYP2C9 and their association with warfarin therapy. Evidence was critically appraised, and clinical practice recommendations were developed based on expert group consensus. RESULTS Testing of VKORC1 (-1639G>A), CYP2C9*2, and CYP2C9*3 should be considered for all patients, including pediatric patients, within the first 2 weeks of therapy or after a bleeding event. Testing for CYP2C9*5, *6, *8, or *11 and CYP4F2 (V433M) is currently not recommended. Testing should also be considered for all patients who are at increased risk of bleeding complications, who consistently show out-of-range international normalized ratios, or who suffer adverse events while receiving warfarin. Genotyping results should be interpreted using a pharmacogenetic dosing algorithm to estimate the required dose. SIGNIFICANCE This review provides the latest update on genetic markers for warfarin therapy and clinical practice recommendations as a basis for informed decision making regarding the use of genotype-guided dosing in patients with an indication for warfarin therapy, and it identifies knowledge gaps to guide future research.
Abstract:
BACKGROUND Recommendations have differed nationally and internationally with respect to the best time to start antiretroviral therapy (ART). We compared effectiveness of three strategies for initiation of ART in high-income countries for HIV-positive individuals who do not have AIDS: immediate initiation, initiation at a CD4 count less than 500 cells per μL, and initiation at a CD4 count less than 350 cells per μL. METHODS We used data from the HIV-CAUSAL Collaboration of cohort studies in Europe and the USA. We included 55 826 individuals aged 18 years or older who were diagnosed with HIV-1 infection between January, 2000, and September, 2013, had not started ART, did not have AIDS, and had CD4 count and HIV-RNA viral load measurements within 6 months of HIV diagnosis. We estimated relative risks of death and of death or AIDS-defining illness, mean survival time, the proportion of individuals in need of ART, and the proportion of individuals with HIV-RNA viral load less than 50 copies per mL, as would have been recorded under each ART initiation strategy after 7 years of HIV diagnosis. We used the parametric g-formula to adjust for baseline and time-varying confounders. FINDINGS Median CD4 count at diagnosis of HIV infection was 376 cells per μL (IQR 222-551). Compared with immediate initiation, the estimated relative risk of death was 1·02 (95% CI 1·01-1·02) when ART was started at a CD4 count less than 500 cells per μL, and 1·06 (1·04-1·08) with initiation at a CD4 count less than 350 cells per μL. Corresponding estimates for death or AIDS-defining illness were 1·06 (1·06-1·07) and 1·20 (1·17-1·23), respectively. Compared with immediate initiation, the mean survival time at 7 years with a strategy of initiation at a CD4 count less than 500 cells per μL was 2 days shorter (95% CI 1-2) and at a CD4 count less than 350 cells per μL was 5 days shorter (4-6). 
7 years after diagnosis of HIV, 100%, 98·7% (95% CI 98·6-98·7), and 92·6% (92·2-92·9) of individuals would have been in need of ART with immediate initiation, initiation at a CD4 count less than 500 cells per μL, and initiation at a CD4 count less than 350 cells per μL, respectively. Corresponding proportions of individuals with HIV-RNA viral load less than 50 copies per mL at 7 years were 87·3% (87·3-88·6), 87·4% (87·4-88·6), and 83·8% (83·6-84·9). INTERPRETATION The benefits of immediate initiation of ART, such as prolonged survival and AIDS-free survival and increased virological suppression, were small in this high-income setting with relatively low CD4 count at HIV diagnosis. The estimated beneficial effect on AIDS is less than in recently reported randomised trials. Increasing rates of HIV testing might be as important as a policy of early initiation of ART. FUNDING National Institutes of Health.
Abstract:
BACKGROUND Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (NG) are the most frequent causes of bacterial sexually transmitted infections (STIs). Management strategies that reduce losses in the clinical pathway from infection to cure might improve STI control and reduce complications resulting from lack of, or inadequate, treatment. OBJECTIVES To assess the effectiveness and safety of home-based specimen collection as part of the management strategy for Chlamydia trachomatis and Neisseria gonorrhoeae infections compared with clinic-based specimen collection in sexually active people. SEARCH METHODS We searched the Cochrane Sexually Transmitted Infections Group Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE and LILACS on 27 May 2015, together with the World Health Organization International Clinical Trials Registry (ICTRP) and ClinicalTrials.gov. We also handsearched conference proceedings, contacted trial authors and reviewed the reference lists of retrieved studies. SELECTION CRITERIA Randomized controlled trials (RCTs) of home-based compared with clinic-based specimen collection in the management of C. trachomatis and N. gonorrhoeae infections. DATA COLLECTION AND ANALYSIS Three review authors independently assessed trials for inclusion, extracted data and assessed risk of bias. We contacted study authors for additional information. We resolved any disagreements through consensus. We used standard methodological procedures recommended by Cochrane. The primary outcome was index case management, defined as the number of participants tested, diagnosed and treated, if test positive. MAIN RESULTS Ten trials involving 10,479 participants were included.
There was inconclusive evidence of an effect on the proportion of participants with index case management (defined as individuals tested, diagnosed and treated for CT or NG, or both) in the group with home-based (45/778, 5.8%) compared with clinic-based (51/788, 6.5%) specimen collection (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.60 to 1.29; 3 trials, I² = 0%, 1566 participants, moderate quality). Harms of home-based specimen collection were not evaluated in any trial. All 10 trials compared the proportions of individuals tested. The results for the proportion of participants completing testing had high heterogeneity (I² = 100%) and were not pooled: the proportions varied widely across the studies, ranging from 30% to 96% in the home group and 6% to 97% in the clinic group (low-quality evidence). The number of participants with a positive test was lower in the home-based specimen collection group (240/2074, 11.6%) than in the clinic-based group (179/967, 18.5%) (RR 0.72, 95% CI 0.61 to 0.86; 9 trials, I² = 0%, 3041 participants, moderate quality). AUTHORS' CONCLUSIONS Home-based specimen collection could result in similar levels of index case management for CT or NG infection when compared with clinic-based specimen collection. Increases in the proportion of individuals tested as a result of home-based, compared with clinic-based, specimen collection are offset by a lower proportion of positive results. The harms of home-based specimen collection compared with clinic-based specimen collection have not been evaluated. Future RCTs assessing the effectiveness of home-based specimen collection should be designed to measure biological outcomes of STI case management, such as the proportion of participants with negative tests for the relevant STI at follow-up.
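The risk ratios reported above can be approximated from the raw counts using the standard log-based confidence interval; a small sketch (this computes a crude, unpooled ratio, so it will differ slightly from the review's pooled meta-analytic estimates):

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Crude risk ratio of group A vs group B with a 95% CI (log method)."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) for two independent proportions.
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Index case management: home 45/778 vs clinic 51/788.
rr, lo, hi = risk_ratio(45, 778, 51, 788)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# -> RR 0.89 (95% CI 0.61 to 1.32), close to the pooled 0.88 (0.60 to 1.29)
```

A CI spanning 1, as here, is what makes the evidence "inconclusive" in the review's wording.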
Abstract:
We examined whether self-esteem and narcissism predict the occurrence of stressful life events (i.e., selection) and whether stressful life events predict change in self-esteem and narcissism (i.e., socialization). The analyses were based on longitudinal data from 2 studies, including samples of 328 young adults (Study 1) and 371 adults (Study 2). The effects of self-esteem and narcissism were controlled for each other and, moreover, for the effects of depression. After conducting the study-level analyses, we meta-analytically aggregated the findings. Self-esteem had a selection effect, suggesting that low self-esteem led to the occurrence of stressful life events; however, this effect became nonsignificant when depression was controlled for. Regardless of whether depression was controlled for or not, narcissism had a selection effect, suggesting that high narcissism led to the occurrence of stressful life events. Moreover, stressful life events had a socialization effect on self-esteem, but not on narcissism, suggesting that the occurrence of stressful life events decreased self-esteem. Analyses of trait–state models indicated that narcissism consisted almost exclusively of perfectly stable trait variance, providing a possible explanation for the absence of socialization effects on narcissism. The findings have significant implications because they suggest that a person’s level of narcissism influences whether stressful life events occur, and that self-esteem is shaped by the occurrence of stressful life events. Moreover, we discuss the possibility that depression mediates the selection effect of low self-esteem on stressful life events.
Abstract:
We present a real-world staff-assignment problem that was reported to us by a provider of an online workforce scheduling software. The problem consists of assigning employees to work shifts subject to a large variety of requirements related to work laws, work shift compatibility, workload balancing, and personal preferences of employees. A target value is given for each requirement, and all possible deviations from these values are associated with acceptance levels. The objective is to minimize the total number of deviations in ascending order of the acceptance levels. We present an exact lexicographic goal programming MILP formulation and an MILP-based heuristic. The heuristic consists of two phases: in the first phase a feasible schedule is built and in the second phase parts of the schedule are iteratively re-optimized by applying an exact MILP model. A major advantage of such MILP-based approaches is the flexibility to account for additional constraints or modified planning objectives, which is important as the requirements may vary depending on the company or planning period. The applicability of the heuristic is demonstrated for a test set derived from real-world data. Our computational results indicate that the heuristic is able to devise optimal solutions to non-trivial problem instances, and outperforms the exact lexicographic goal programming formulation on medium- and large-sized problem instances.
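The lexicographic objective described above — minimizing deviations in ascending order of their acceptance levels — can be illustrated with a toy comparison (names and data are invented for illustration; the paper's actual model is an MILP):

```python
# Lexicographic comparison of schedules: each requirement deviation
# carries an acceptance level (1 = least acceptable). Schedules are
# compared by their vector of deviation counts ordered from the least
# to the most acceptable level, so a single level-1 deviation outweighs
# any number of level-3 deviations.

def objective_vector(deviations, n_levels):
    """deviations: list of acceptance levels, one per deviation."""
    counts = [0] * n_levels
    for level in deviations:
        counts[level - 1] += 1
    return tuple(counts)  # tuples compare lexicographically in Python

# Schedule A: one level-1 deviation; schedule B: five level-3 deviations.
a = objective_vector([1], 3)               # (1, 0, 0)
b = objective_vector([3, 3, 3, 3, 3], 3)   # (0, 0, 5)
print(min(a, b))  # (0, 0, 5) -> schedule B is preferred
```

An exact lexicographic goal program realizes the same ordering by optimizing level 1 first, fixing its optimum as a constraint, then optimizing level 2, and so on.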
Abstract:
OBJECTIVES To assess the presence of within-group comparisons with baseline in a subset of leading dental journals and to explore possible associations with a range of study characteristics including journal and study design. STUDY DESIGN AND SETTING Thirty consecutive issues of five leading dental journals were electronically searched. The conduct and reporting of statistical analyses, with respect to comparisons against baseline or otherwise, were assessed along with the manner in which the results were interpreted. Descriptive statistics were obtained, and chi-square and Fisher's exact tests were undertaken to test the association between trial characteristics and overall study interpretation. RESULTS A total of 184 studies were included, with the highest proportion published in the Journal of Endodontics (n = 84, 46%) and most involving a single center (n = 157, 85%). Overall, 43 studies (23%) presented interpretation of their outcomes based solely on comparisons against baseline. Inappropriate use of baseline testing was found to be less likely in interventional studies (P < 0.001). CONCLUSION Use of comparisons with baseline appears to be common among both observational and interventional research studies in dentistry. Enhanced conduct and reporting of statistical tests are required to ensure that inferences from research studies are appropriate and informative.
Abstract:
BACKGROUND AND OBJECTIVES: The biased interpretation of ambiguous social situations is considered a maintaining factor of Social Anxiety Disorder (SAD). Studies on the modification of interpretation bias have shown promising results in laboratory settings. The present study aims at pilot-testing an Internet-based training that targets interpretation and judgmental bias. METHOD: Thirty-nine individuals meeting diagnostic criteria for SAD participated in an 8-week, unguided program. Participants were presented with ambiguous social situations, were asked to choose between neutral, positive, and negative interpretations, and were required to evaluate the costs of potential negative outcomes. Participants received elaborate automated feedback on their interpretations and judgments. RESULTS: There was a pre- to post-treatment reduction of the targeted cognitive processing biases (d = 0.57-0.77) and of social anxiety symptoms (d = 0.87). Furthermore, results showed changes in depression and general psychopathology (d = 0.47-0.75). Decreases in cognitive biases and symptom changes did not correlate. The results held stable when accounting for drop-outs (26%) and over a 6-week follow-up period. Forty-five percent of the completer sample showed clinically significant change, and almost half of the participants (48%) no longer met diagnostic criteria for SAD. LIMITATIONS: As the study lacks a control group, results lend only preliminary support to the efficacy of the intervention. Furthermore, the mechanism of change remained unclear. CONCLUSION: These initial results suggest a beneficial effect of the program for SAD patients. The treatment proved to be feasible and acceptable. Future research should evaluate the intervention in a randomized-controlled setting.
Abstract:
BACKGROUND: This study focused on the descriptive analysis of cattle movements and farm-level parameters derived from cattle movements, which are considered to be generically suitable for risk-based surveillance systems in Switzerland for diseases where animal movements constitute an important risk pathway. METHODS: A framework was developed to select farms for surveillance based on a risk score summarizing 5 parameters. The proposed framework was validated using data from the bovine viral diarrhoea (BVD) surveillance programme in 2013. RESULTS: A cumulative score was calculated per farm from the following parameters: the maximum monthly ingoing contact chain (in 2012), the average number of animals per incoming movement, use of mixed alpine pastures, and the number of weeks in 2012 in which a farm had movements registered. The final score for the farm depended on the distribution of the parameters. Different cut-offs (50, 90, 95 and 99%) were explored. The final scores ranged between 0 and 5. Validation of the scores against results from the BVD surveillance programme 2013 gave promising results for setting the cut-off for each of the five selected farm-level criteria at the 50th percentile. Restricting testing to farms with a score ≥ 2 would have resulted in the same number of detected BVD-positive farms as testing all farms, i.e., the outcome of the 2013 surveillance programme could have been reached with a smaller survey. CONCLUSIONS: The seasonality and time dependency of the activity of single farms in the networks require a careful assessment of the actual time period included to determine farm-level criteria. However, selecting farms in the sample for risk-based surveillance can be optimized with the proposed scoring system. The system was validated using data from the BVD eradication programme. The proposed method is a promising framework for the selection of farms according to the risk of infection based on animal movements.
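The cumulative scoring could be computed along these lines — a rough sketch with invented data and a simple nearest-rank percentile, assuming one point per parameter above the chosen cut-off (the study's actual parameter definitions and score weighting are more involved):

```python
# Risk score per farm: one point for each movement-based parameter on
# which the farm exceeds the chosen percentile of the distribution over
# all farms, giving a score between 0 and the number of parameters;
# testing is then restricted to farms with score >= 2.

def percentile(values, q):
    """Simple nearest-rank percentile (q in [0, 100])."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, int(round(q / 100 * (len(s) - 1)))))
    return s[k]

def farm_scores(farms, q=50):
    """farms: dict farm_id -> list of parameter values (same order)."""
    n_params = len(next(iter(farms.values())))
    cutoffs = [percentile([v[i] for v in farms.values()], q)
               for i in range(n_params)]
    return {fid: sum(v > c for v, c in zip(vals, cutoffs))
            for fid, vals in farms.items()}

farms = {
    "A": [10, 3, 1, 0, 2],    # low-activity farm
    "B": [80, 9, 1, 1, 30],   # highly connected farm
    "C": [40, 5, 0, 1, 10],
}
scores = farm_scores(farms)
selected = [f for f, s in scores.items() if s >= 2]
print(scores, selected)  # farm B exceeds the median on 3 parameters
```

With real data the cut-offs would be computed over all Swiss farms, and the percentile (50th vs 90th, etc.) tunes the trade-off between survey size and sensitivity.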
Abstract:
Due to the lack of exercise testing devices that can be employed in stroke patients with severe disability, the aim of this PhD research was to investigate the clinical feasibility of using a robotics-assisted tilt table (RATT) as a method for cardiopulmonary exercise testing (CPET) and exercise training in stroke patients. For this purpose, the RATT was augmented with force sensors, a visual feedback system and a work rate calculation algorithm. As the RATT had not been used previously for CPET, the first phase of this project focused on a feasibility study in 11 healthy able-bodied subjects. The results demonstrated substantial cardiopulmonary responses, no complications were found, and the method was deemed feasible. The second phase was to analyse the validity and test-retest reliability of the primary CPET parameters obtained from the RATT in 18 healthy able-bodied subjects and to compare the outcomes to those obtained from standard exercise testing devices (a cycle ergometer and a treadmill). The results demonstrated that peak oxygen uptake (V'O2peak) and oxygen uptake at the submaximal exercise thresholds on the RATT were approximately 20% lower than on the cycle ergometer and approximately 30% lower than on the treadmill. A very high correlation was found between V'O2peak on the RATT and on the cycle ergometer, and between V'O2peak on the RATT and on the treadmill. Test-retest reliability of the CPET parameters obtained from the RATT was similarly high to that of standard exercise testing devices. These findings suggested that the RATT is a valid and reliable device for CPET and that it has potential to be used in severely impaired patients. Thus, the third phase was to investigate using the RATT for CPET and exercise training in 8 severely disabled stroke patients. The method was technically implementable, well tolerated by the patients, and substantial cardiopulmonary responses were observed. Additionally, all patients could exercise at the recommended training intensity for 10 min bouts.
Finally, an investigation of test-retest reliability and four-week changes in cardiopulmonary fitness was carried out in 17 stroke patients with various degrees of disability. Good to excellent test-retest reliability and repeatability were found for the main CPET variables. There was no significant difference in most CPET parameters over four weeks. In conclusion, based on the demonstrated validity, reliability and repeatability, the RATT was found to be a feasible and appropriate alternative exercise testing and training device for patients who have limitations for use of standard devices.
Abstract:
Background: It is as yet unclear whether there are differences between using electronic key feature problems (KFPs) or electronic case-based multiple choice questions (cbMCQ) for the assessment of clinical decision making. Summary of Work: Fifth year medical students were exposed to clerkships which ended with a summative exam. Assessment of knowledge per exam was done by 6-9 KFPs, 9-20 cbMCQ and 9-28 MC questions. Each KFP consisted of a case vignette and three key features (KF) using a “long menu” question format. We sought students’ perceptions of the KFPs and cbMCQs in focus groups (n of students=39). Furthermore, statistical data of 11 exams (n of students=377) concerning the KFPs and (cb)MCQs were compared. Summary of Results: The analysis of the focus groups resulted in four themes reflecting students’ perceptions of KFPs and their comparison with (cb)MCQ: KFPs were perceived as (i) more realistic, (ii) more difficult, and (iii) more motivating for the intense study of clinical reasoning than (cb)MCQ, and (iv) showed an overall good acceptance when some preconditions are taken into account. The statistical analysis revealed that there was no difference in difficulty; however, KFPs showed a higher discrimination and reliability (G-coefficient) even when corrected for testing times. Correlation of the different exam parts was intermediate. Conclusions: Students perceived the KFPs as more motivating for the study of clinical reasoning. Statistically, KFPs showed a higher discrimination and higher reliability than cbMCQs. Take-home messages: Including KFPs with long menu questions in summative clerkship exams seems to offer positive educational effects.
Abstract:
BACKGROUND: Bioluminescence imaging is widely used for cell-based assays and animal imaging studies, both in biomedical research and drug development. Its main advantages include its high-throughput applicability, affordability, high sensitivity, operational simplicity, and quantitative outputs. In malaria research, bioluminescence has been used for drug discovery in vivo and in vitro, exploring host-pathogen interactions, and studying multiple aspects of Plasmodium biology. While the number of fluorescent proteins available for imaging has undergone a great expansion over the last two decades, enabling simultaneous visualization of multiple molecular and cellular events, expansion of available luciferases has lagged behind. The most widely used bioluminescent probe in malaria research is the Photinus pyralis firefly luciferase, followed by the more recently introduced Click-beetle and Renilla luciferases. Ultra-sensitive imaging of Plasmodium at low parasite densities has not been previously achieved. To overcome these challenges, a Plasmodium berghei line expressing the novel ultra-bright luciferase enzyme NanoLuc, called PbNLuc, has been generated and is presented in this work. RESULTS: NanoLuc shows an at least 150 times brighter signal than firefly luciferase in vitro, allowing single parasite detection in mosquito, liver, and sexual and asexual blood stages. As a proof-of-concept, the PbNLuc parasites were used to image parasite development in the mosquito, liver and blood stages of infection, and to specifically explore parasite liver stage egress and the pre-patency period in vivo. CONCLUSIONS: PbNLuc is a suitable parasite line for sensitive imaging of the entire Plasmodium life cycle.
Its sensitivity makes it a promising line to be used as a reference for drug candidate testing, as well as for the characterization of mutant parasites to explore the function of parasite proteins and host-parasite interactions, and for a better understanding of Plasmodium biology. Since the substrate requirements of NanoLuc are different from those of firefly luciferase, dual bioluminescence imaging for the simultaneous characterization of two lines, or of two separate biological processes, is possible, as demonstrated in this work.
Abstract:
OBJECTIVES Primary care physicians (PCPs) should prescribe faecal immunochemical testing (FIT) or colonoscopy for colorectal cancer screening based on their patient's values and preferences. However, there are wide variations between PCPs in the screening method prescribed. The objective was to assess the impact of an educational intervention on PCPs' intent to offer FIT or colonoscopy on an equal basis. DESIGN Survey before and after training seminars, with a parallel comparison through a mailed survey to PCPs not attending the training seminars. SETTING All PCPs in the canton of Vaud, Switzerland. PARTICIPANTS Of 592 eligible PCPs, 133 (22%) attended a seminar and 106 (80%) completed both surveys. 109 (24%) PCPs who did not attend the seminars returned the mailed survey. INTERVENTION A 2-hour interactive seminar targeting PCP knowledge, skills and attitudes regarding offering a choice of colorectal cancer (CRC) screening options. OUTCOME MEASURES The primary outcome was PCP intention of having their patients screened with FIT and colonoscopy in equal proportions (between 40% and 60% each). Secondary outcomes were the perceived role of PCPs in screening decisions (from paternalistic to informed decision-making) and correct answers to a clinical vignette. RESULTS Before the seminars, 8% of PCPs reported that they had equal proportions of their patients screened for CRC by FIT and colonoscopy; after the seminar, 33% foresaw having their patients screened in equal proportions (p<0.001). Among those not attending, there was no change (13% vs 14%, p=0.8). Among those attending, there was no change in their perceived role in screening decisions, while the proportion responding correctly to a clinical vignette increased (from 88% to 99%, p<0.001). CONCLUSIONS An interactive training seminar increased the proportion of physicians with the intention to prescribe FIT and colonoscopy in equal proportions.