958 results for Test Management
Abstract:
We present results of a benchmark test evaluating the resource allocation capabilities of the project management software packages Acos Plus.1 8.2, CA SuperProject 5.0a, CS Project Professional 3.0, MS Project 2000, and Scitor Project Scheduler 8.0.1. The tests are based on 1560 instances of precedence- and resource-constrained project scheduling problems. For different complexity scenarios, we analyze the deviation of the makespan obtained by the software packages from the best feasible makespan known. Among the tested software packages, Acos Plus.1 and Project Scheduler show the best resource allocation performance. Moreover, our numerical analysis reveals a considerable performance gap between the implemented methods and state-of-the-art project scheduling algorithms, especially for large-sized problems. Thus, there is still significant potential for improving solutions to resource allocation problems in practice.
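The core metric in a benchmark of this kind is the relative deviation of each obtained makespan from the best feasible makespan known. A minimal sketch of that calculation is shown below; the package names and makespan values are invented for illustration, not the study's data.

```python
# Hypothetical sketch: summarizing makespan deviations for a benchmark like the one
# described above. Package names and values are illustrative only.
from statistics import mean

def percent_deviation(obtained: int, best_known: int) -> float:
    """Relative deviation of an obtained makespan from the best known one, in percent."""
    return 100.0 * (obtained - best_known) / best_known

# toy data: {package: [(obtained, best_known), ...]} for a handful of instances
results = {
    "PackageA": [(103, 100), (61, 58), (212, 200)],
    "PackageB": [(108, 100), (63, 58), (220, 200)],
}

for package, pairs in results.items():
    avg_dev = mean(percent_deviation(o, b) for o, b in pairs)
    print(f"{package}: average deviation {avg_dev:.2f}%")
```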
Abstract:
The role of Soil Organic Carbon (SOC) in mitigating climate change and in indicating soil quality and ecosystem function has generated research interest in the nature of SOC at the landscape level. The objective of this study was to examine the variation and distribution of SOC under long-term land management at the watershed and plot levels. The study was based on a meta-analysis of three case studies and 128 surface soil samples from Ethiopia. Three sites (Gununo, Anjeni and Maybar) were compared after considering two Land Management Categories (LMC) and three Land Use Types (LUT) in a quasi-experimental design. Shapiro-Wilk tests showed a non-normal distribution of the data (p = 0.002, α = 0.05). The SOC median reflected the effect of long-term land management, with values of 2.29 and 2.38 g kg⁻¹ for the less and better-managed watersheds, respectively. SOC values were 1.7, 2.8 and 2.6 g kg⁻¹ for Crop (CLU), Grass (GLU) and Forest Land Use (FLU), respectively. The rank order of SOC variability was FLU > GLU > CLU. Mann-Whitney U and Kruskal-Wallis tests showed significant differences in the medians and distribution of SOC among the LUT and between soil profiles (p < 0.05, 95% confidence interval, α = 0.05), whereas the difference for LMC was not significant (p > 0.05). The mean and sum ranks of the Mann-Whitney U and Kruskal-Wallis tests also showed differences at the watershed and plot levels. Using SOC as a predictor, cross-validated discriminant analysis correctly classified 46% and 49% of cases for LUT and LMC, respectively. The study shows how landscapes can be categorized using SOC with respect to land management, for use by decision-makers.
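For readers unfamiliar with the nonparametric workflow mentioned above (Shapiro-Wilk, Mann-Whitney U, Kruskal-Wallis), the following sketch applies the same tests to synthetic SOC values with SciPy; the group sizes and distributions are assumptions for illustration, not the study's data.

```python
# Illustrative sketch (not the authors' code): the nonparametric workflow described
# above, applied to synthetic SOC values. Group labels and data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical SOC samples for three land-use types
crop   = rng.gamma(shape=2.0, scale=0.9, size=40)
grass  = rng.gamma(shape=2.5, scale=1.1, size=40)
forest = rng.gamma(shape=2.4, scale=1.1, size=48)

# 1) normality check (Shapiro-Wilk); a small p-value motivates nonparametric tests
print("Shapiro-Wilk p:", stats.shapiro(np.concatenate([crop, grass, forest])).pvalue)

# 2) two-group comparison (e.g., less vs. better managed) with Mann-Whitney U
print("Mann-Whitney U p:", stats.mannwhitneyu(crop, grass, alternative="two-sided").pvalue)

# 3) three-group comparison across land-use types with Kruskal-Wallis
print("Kruskal-Wallis p:", stats.kruskal(crop, grass, forest).pvalue)
```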
Abstract:
OBJECTIVE Hunger strikers who resume nutritional intake may develop a life-threatening refeeding syndrome (RFS). Consequently, hunger strikers represent a core challenge for medical staff. The objective of the study was to test the effectiveness and safety of evidence-based recommendations for the prevention and management of RFS during the refeeding phase. METHODS This was a retrospective, observational analysis of 37 consecutive, unselected cases of prisoners on hunger strike over a 5-y period. The sample consisted of 37 cases representing 33 individual patients. RESULTS In seven cases (18.9%), the hunger strike was continued during the hospital stay; in 16 episodes (43.2%), cessation of the hunger strike occurred immediately after admission to the security ward; and in 14 episodes (37.9%), cessation occurred during the hospital stay. In the refed cases (n = 30), nutritional replenishment occurred orally, and in 25 (83.3%) micronutrient substitution was provided based on the recommendations. Gradual refeeding with fluid restriction occurred over 10 d. Uncomplicated dyselectrolytemia was documented in 12 cases (40%) during the refeeding phase. One case (3.3%) presented with bilateral ankle edema as a clinical manifestation of moderate RFS. Intensive medical treatment was not necessary and none of the patients died. Seven episodes of continued hunger strike were observed over the entire hospital stay without medical complications. CONCLUSIONS Our data suggest that the severity and rate of medical complications during the refeeding phase can be kept to a minimum in a hunger strike population. This study supports the use of the recommendations to optimize risk management and to improve treatment quality and patient safety in this vulnerable population.
Abstract:
BACKGROUND Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (NG) are the most frequent causes of bacterial sexually transmitted infections (STIs). Management strategies that reduce losses in the clinical pathway from infection to cure might improve STI control and reduce complications resulting from lack of, or inadequate, treatment. OBJECTIVES To assess the effectiveness and safety of home-based specimen collection as part of the management strategy for Chlamydia trachomatis and Neisseria gonorrhoeae infections compared with clinic-based specimen collection in sexually active people. SEARCH METHODS We searched the Cochrane Sexually Transmitted Infections Group Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE and LILACS on 27 May 2015, together with the World Health Organization International Clinical Trials Registry (ICTRP) and ClinicalTrials.gov. We also handsearched conference proceedings, contacted trial authors and reviewed the reference lists of retrieved studies. SELECTION CRITERIA Randomized controlled trials (RCTs) of home-based compared with clinic-based specimen collection in the management of C. trachomatis and N. gonorrhoeae infections. DATA COLLECTION AND ANALYSIS Three review authors independently assessed trials for inclusion, extracted data and assessed risk of bias. We contacted study authors for additional information. We resolved any disagreements through consensus. We used standard methodological procedures recommended by Cochrane. The primary outcome was index case management, defined as the number of participants tested, diagnosed and, if test-positive, treated. MAIN RESULTS Ten trials involving 10,479 participants were included. There was inconclusive evidence of an effect on the proportion of participants with index case management (defined as individuals tested, diagnosed and treated for CT or NG, or both) in the group with home-based (45/778, 5.8%) compared with clinic-based (51/788, 6.5%) specimen collection (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.60 to 1.29; 3 trials, I² = 0%, 1566 participants, moderate quality). Harms of home-based specimen collection were not evaluated in any trial. All 10 trials compared the proportions of individuals tested. The results for the proportion of participants completing testing had high heterogeneity (I² = 100%) and were not pooled. We could not combine data from individual studies looking at the number of participants tested because the proportions varied widely across the studies, ranging from 30% to 96% in the home group and 6% to 97% in the clinic group (low-quality evidence). The number of participants with a positive test was lower in the home-based specimen collection group (240/2074, 11.6%) compared with the clinic-based group (179/967, 18.5%) (RR 0.72, 95% CI 0.61 to 0.86; 9 trials, I² = 0%, 3041 participants, moderate quality). AUTHORS' CONCLUSIONS Home-based specimen collection could result in similar levels of index case management for CT or NG infection when compared with clinic-based specimen collection. Increases in the proportion of individuals tested as a result of home-based, compared with clinic-based, specimen collection are offset by a lower proportion of positive results. The harms of home-based specimen collection compared with clinic-based specimen collection have not been evaluated.
Future RCTs to assess the effectiveness of home-based specimen collection should be designed to measure biological outcomes of STI case management, such as proportion of participants with negative tests for the relevant STI at follow-up.
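The headline result above (RR 0.88, 95% CI 0.60 to 1.29) is a pooled risk ratio. As a rough illustration of how such a figure is derived, the sketch below computes an unpooled risk ratio with a Wald log-scale interval from the single set of counts quoted in the abstract; because the review pools several trials, it will not reproduce the meta-analytic estimate exactly.

```python
# Sketch of the standard risk-ratio calculation behind figures like "RR 0.88,
# 95% CI 0.60 to 1.29". Single-table version, shown for illustration only.
from math import exp, log, sqrt

def risk_ratio(events_a: int, n_a: int, events_b: int, n_b: int, z: float = 1.96):
    """Risk ratio of group A vs. group B with a Wald (log-scale) confidence interval."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log_rr = sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo, hi = exp(log(rr) - z * se_log_rr), exp(log(rr) + z * se_log_rr)
    return rr, lo, hi

# counts reported for index case management: home 45/778, clinic 51/788
print(risk_ratio(45, 778, 51, 788))  # roughly (0.89, 0.61, 1.32)
```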
Abstract:
OBJECTIVES Rates of TB/HIV coinfection and multi-drug resistant (MDR)-TB are increasing in Eastern Europe (EE). We aimed to study clinical characteristics, factors associated with MDR-TB, and the predicted activity of empiric anti-TB treatment at the time of TB diagnosis among TB/HIV coinfected patients in EE, Western Europe (WE) and Latin America (LA). DESIGN AND METHODS Between January 1, 2011, and December 31, 2013, 1413 TB/HIV patients (62 clinics in 19 countries in EE, WE, Southern Europe (SE), and LA) were enrolled. RESULTS Significant differences were observed between EE (N = 844), WE (N = 152), SE (N = 164), and LA (N = 253) in the proportion of patients with a definite TB diagnosis (47%, 71%, 72% and 40%, p<0.0001), MDR-TB (40%, 5%, 3% and 15%, p<0.0001), and use of combination antiretroviral therapy (cART) (17%, 40%, 44% and 35%, p<0.0001). Injecting drug use (adjusted OR (aOR) = 2.03, 95% CI 1.00-4.09), prior anti-TB treatment (aOR = 3.42, 95% CI 1.88-6.22), and living in EE (aOR = 7.19, 95% CI 3.28-15.78) were associated with MDR-TB. Among 585 patients with drug susceptibility test (DST) results, the empiric (i.e., without knowledge of the DST results) anti-TB treatment included ≥3 active drugs in 66% of participants in EE, compared with 90-96% in the other regions (p<0.0001). CONCLUSIONS In EE, TB/HIV patients were less likely to receive a definite TB diagnosis, more likely to harbour MDR-TB, and commonly received empiric anti-TB treatment with reduced activity. Improved management of TB/HIV patients in EE requires better access to TB diagnostics including DSTs, empiric anti-TB therapy directed at both susceptible and MDR-TB, and more widespread use of cART.
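The adjusted odds ratios quoted above are typically obtained from a multivariable logistic regression. The sketch below shows that pattern on a synthetic data set; the variable names (idu, prior_tb, region_ee) and effect sizes are assumptions for illustration, not the study's data or code.

```python
# Hedged sketch: how adjusted odds ratios are typically obtained with multivariable
# logistic regression. The data frame here is synthetic; variable names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "idu": rng.integers(0, 2, n),        # injecting drug use
    "prior_tb": rng.integers(0, 2, n),   # prior anti-TB treatment
    "region_ee": rng.integers(0, 2, n),  # living in Eastern Europe
})
# synthetic outcome with effects in the direction reported in the abstract
logit = -2.0 + 0.7 * df.idu + 1.2 * df.prior_tb + 2.0 * df.region_ee
df["mdr"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("mdr ~ idu + prior_tb + region_ee", data=df).fit(disp=False)
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```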
Abstract:
Various software packages for project management include a procedure for resource-constrained scheduling. In several packages, the user can influence this procedure by selecting a priority rule. However, the resource-allocation methods that are implemented in the procedures are proprietary information; therefore, the question of how the priority-rule selection impacts the performance of the procedures arises. We experimentally evaluate the resource-allocation methods of eight recent software packages using the 600 instances of the PSPLIB J120 test set. The results of our analysis indicate that applying the default rule tends to outperform a randomly selected rule, whereas applying two randomly selected rules tends to outperform the default rule. Applying a small set of more than two rules further improves the project durations considerably. However, a large number of rules must be applied to obtain the best possible project durations.
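The priority-rule procedures discussed above are usually variants of a schedule-generation scheme: activities are scheduled one at a time, with the rule deciding which eligible activity goes next. A minimal serial scheme for a toy single-resource instance is sketched below; the instance and the shortest-processing-time rule are illustrative assumptions, not the packages' proprietary methods or a PSPLIB instance.

```python
# Minimal serial schedule-generation scheme driven by a priority rule.
# Assumes a single renewable resource and that every demand fits within capacity.
def serial_sgs(durations, demands, preds, capacity, rule):
    """At each step schedule the eligible activity preferred by `rule` (lower = better)."""
    horizon = sum(durations.values())
    usage = [0] * horizon                 # resource usage per time period
    start = {}
    while len(start) < len(durations):
        eligible = [a for a in durations
                    if a not in start and all(p in start for p in preds[a])]
        act = min(eligible, key=rule)
        t = max((start[p] + durations[p] for p in preds[act]), default=0)
        while any(usage[t + k] + demands[act] > capacity for k in range(durations[act])):
            t += 1                         # shift right until resource-feasible
        start[act] = t
        for k in range(durations[act]):
            usage[t + k] += demands[act]
    return start, max(start[a] + durations[a] for a in start)

durations = {"A": 3, "B": 2, "C": 2, "D": 4}
demands   = {"A": 2, "B": 2, "C": 1, "D": 2}
preds     = {"A": [], "B": [], "C": ["A"], "D": ["B"]}
print(serial_sgs(durations, demands, preds, capacity=3,
                 rule=lambda a: durations[a]))   # shortest-processing-time rule
```

Swapping the `rule` argument (e.g. most total successors, latest finish time) is the analogue of selecting a different priority rule in the packages, which is why applying several rules and keeping the best schedule tends to shorten project durations.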
Abstract:
BACKGROUND AND OBJECTIVES The distinction of oral lichenoid reactions from oral lichen planus may be difficult in a clinical setting. Our aims were to ascertain the utility of patch testing in confirming the association of oral lichenoid reactions with dental restorations and to identify the benefits of replacing restorations, primarily those made of amalgam. METHODS Patients seen in an oral medicine unit over a 10-year period who were diagnosed with oral lichenoid reactions, with oral lichen planus resistant to treatment, or with atypical lichenoid features were included in this study. All had undergone skin patch testing. Histopathology reports, blinded to patch test results, were scrutinized. Patch-test-positive subjects were advised to have their restorations replaced. All were followed up for at least 3 months thereafter to determine disease resolution. RESULTS Among 115 patients, 67.8% reacted positively to a dental material and nearly a quarter to mercury or amalgam. No correlation was found between pathology and skin patch testing results (P = 0.44). A total of 87 patients were followed up in clinic, and among the 26 patch-test-positive patients who had their amalgam fillings replaced, moderate to complete resolution was noted in 81%. CONCLUSIONS Skin patch testing is a valuable tool to confirm clinically suspected oral lichenoid reactions. Pathology diagnoses of oral lichenoid reactions did not correlate with patch test results. Prospective studies are needed to ascertain whether a clinically suspected oral lichenoid reaction with a positive patch test result resolves after the replacement of amalgam fillings.
Abstract:
Structural characteristics of social networks have been recognized as important factors of effective natural resource governance. However, network analyses of natural resource governance most often remain static, even though governance is an inherently dynamic process. In this article, we investigate the evolution of a social network of organizational actors involved in the governance of natural resources in a regional nature park project in Switzerland. We ask how the maturation of a governance network affects bonding social capital and centralization in the network. Applying separable temporal exponential random graph modeling (STERGM), we test two hypotheses based on the risk hypothesis by Berardo and Scholz (2010) in a longitudinal setting. Results show that network dynamics clearly follow the expected trend toward generating bonding social capital but do not imply a shift toward less hierarchical and more decentralized structures over time. We investigate how these structural processes may contribute to network effectiveness over time.
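As a purely descriptive companion to the STERGM analysis described above (not a replacement for it), the sketch below computes two simple indicators on hypothetical network snapshots: transitivity as a rough proxy for bonding social capital, and Freeman degree centralization. The edge lists are invented.

```python
# Descriptive sketch (not STERGM): triadic closure as a proxy for bonding social
# capital, plus Freeman degree centralization, on two hypothetical snapshots.
import networkx as nx

def degree_centralization(g: nx.Graph) -> float:
    """Freeman degree centralization of an undirected graph."""
    n = g.number_of_nodes()
    degs = [d for _, d in g.degree()]
    return sum(max(degs) - d for d in degs) / ((n - 1) * (n - 2))

t0 = nx.Graph([(1, 2), (1, 3), (1, 4), (1, 5), (2, 3)])                      # early, star-like
t1 = nx.Graph([(1, 2), (1, 3), (1, 4), (1, 5), (2, 3), (3, 4), (4, 5), (2, 5)])  # later, more closure

for label, g in [("t0", t0), ("t1", t1)]:
    print(label, "transitivity:", round(nx.transitivity(g), 2),
          "centralization:", round(degree_centralization(g), 2))
```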
Abstract:
The purpose of this research and development project was to develop a method, a design, and a prototype for gathering, managing, and presenting data about occupational injuries. State-of-the-art systems analysis and design methodologies were applied to the long-standing problem in the field of occupational safety and health of processing workplace injury data into information for safety and health program management, as well as for preliminary research about accident etiologies. A top-down planning and bottom-up implementation approach was used to design an occupational injury management information system. A description of a managerial control system and of a comprehensive system to integrate safety and health program management was provided. The project showed that current management information systems (MIS) theory and methods can be applied successfully to the problems of employee injury surveillance and control program performance evaluation. The model developed in the first section was applied at The University of Texas Health Science Center at Houston (UTHSCH). The system in current use at the UTHSCH was described and evaluated, and a prototype was developed for the UTHSCH. The prototype incorporated procedures for collecting, storing, and retrieving records of injuries, along with the procedures necessary to prepare reports, analyses, and graphics for management in the Health Science Center. Examples of reports, analyses, and graphics presenting UTHSCH and computer-generated data were included. It was concluded that a pilot test of this MIS should be implemented and evaluated at the UTHSCH and in other settings. Further research and development efforts for the total safety and health management information system, control systems, component systems, and variable selection should be pursued. Finally, integration of the safety and health program MIS into the comprehensive or executive MIS was recommended.
Abstract:
A census of 925 U.S. colleges and universities offering master's and doctoral degrees was conducted in order to study the number of elements of an environmental management system, as defined by ISO 14001, possessed by small, medium and large institutions. A 30% response rate was achieved, with 273 responses included in the final data analysis. Overall, the number of ISO 14001 elements implemented among the 273 institutions ranged from 0 to 16, with a median of 12. There was no significant association between the number of elements implemented and the size of the institution (p = 0.18; Kruskal-Wallis test) or the USEPA region (p = 0.12; Kruskal-Wallis test). The proportion of U.S. colleges and universities that reported having implemented a structured, comprehensive environmental management system, defined as answering yes to all 16 elements, was 10% (95% C.I. 6.6%–14.1%); however, 38% (95% C.I. 32.0%–43.8%) reported that they had implemented a structured, comprehensive environmental management system, while 30.0% (95% C.I. 24.7%–35.9%) were planning to implement one within the next five years. Stratified analyses were performed by institution size, Carnegie Classification and job title. The Osnabruck model, and another under development by the South Carolina Sustainable Universities Initiative, are the only two environmental management system models that have been proposed specifically for colleges and universities, although several guides are now available. The Environmental Management System Implementation Model for U.S. Colleges and Universities developed here is an adaptation of the ISO 14001 standard and USEPA recommendations, tailored to U.S. colleges and universities to streamline the implementation process. By using this implementation model created for the U.S. research and academic setting, it is hoped that these highly specialized institutions will have a clearer and more cost-effective path toward the implementation of an EMS and greater compliance with local, state and federal environmental legislation.
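The interval estimates quoted above (e.g. 10%, 95% C.I. 6.6%–14.1%) are confidence intervals for proportions. The sketch below shows a Wilson interval for an assumed count of 27 yes-answers out of 273 respondents; the raw count is not reported in the abstract, so the numbers are illustrative and only roughly match the published interval.

```python
# Sketch of the kind of calculation behind figures such as "10% (95% C.I. 6.6%-14.1%)":
# a Wilson confidence interval for a proportion. The count 27/273 is an assumption.
from math import sqrt

def wilson_ci(count: int, n: int, z: float = 1.96):
    p = count / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(27, 273)
print(f"{27/273:.1%} (95% CI {lo:.1%} to {hi:.1%})")  # roughly 6.9% to 14.0%
```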
Abstract:
Sexually transmitted infections (STIs) are a major public health problem, and controlling their spread is a priority. According to the World Health Organization (WHO), 340 million new cases of treatable STIs occur yearly around the world among 15-49 year olds (1). Infection with STIs can lead to several complications such as pelvic inflammatory disease (PID), cervical cancer, infertility, ectopic pregnancy, and even death (1). Additionally, STIs and associated complications are among the top disease types for which healthcare is sought in developing nations (1), and according to the UNAIDS report, there is a strong connection between STIs and the sexual spread of HIV infection (2). In fact, it is estimated that the presence of an untreated STI can increase the likelihood of contracting and spreading HIV by a factor of up to 10 (2). In addition, developing countries have fewer resources and lack inexpensive and precise diagnostic laboratory tests for STIs, thereby exacerbating the problem. Thus, the WHO recommends syndromic management of STIs for delivering care where lab testing is scarce or unattainable (1). This approach uses a simple algorithm to help healthcare workers recognize symptoms and signs so as to provide treatment for the likely cause of the syndrome. Furthermore, according to the WHO, syndromic management offers immediate and valid treatment compared with clinical diagnosis, and for some syndromes it is also more cost-effective than laboratory testing (1). Even though the vaginal discharge syndrome has been shown to have low specificity for gonorrhea and Chlamydia and can lead to overtreatment (1), this is the recommended way to manage STIs in developing nations. Thus, the purpose of this paper is to specifically address the following questions: is syndromic management working to lower the STI burden in developing nations? How effective is it, and should it still be recommended? To answer these questions, a systematic literature review was conducted to evaluate the current effectiveness of syndromic management in developing nations. This review examined articles published over the past 5 years that compared syndromic management to laboratory testing and reported sensitivity, specificity, and positive predictive value data. Focusing mainly on the vaginal discharge, urethral discharge, and genital ulcer algorithms, it was seen that although syndromic management is more effective in diagnosing and treating urethral and genital ulcer syndromes in men, there remains an urgent need to revise the WHO recommendations for managing STIs in developing nations. Current studies have continued to show decreased specificity, sensitivity and positive predictive values for the vaginal discharge syndrome, and high rates of asymptomatic infections and healthcare workers neglecting to follow guidelines limit the usefulness of syndromic management. Furthermore, although advocated as cost-effective by the WHO, it incurs a cost from treating uninfected people. Rather than improving this system, it is recommended that the development of better and less expensive point-of-care and rapid diagnostic test kits be the focus of STI diagnosis and treatment in developing nations.
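The algorithm-versus-laboratory comparisons discussed above rest on sensitivity, specificity, and positive predictive value computed from a 2x2 table. A small helper illustrating those definitions is sketched below; the counts are invented and chosen only to show how low specificity and PPV drive overtreatment.

```python
# Illustrative helper (not from the paper): sensitivity, specificity, and positive
# predictive value derived from a 2x2 table comparing a syndromic algorithm with a
# laboratory reference standard. The counts below are hypothetical.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int):
    return {
        "sensitivity": tp / (tp + fn),   # infected patients flagged by the algorithm
        "specificity": tn / (tn + fp),   # uninfected patients correctly not flagged
        "ppv": tp / (tp + fp),           # flagged patients who are truly infected
    }

# hypothetical vaginal-discharge algorithm vs. lab testing
print(diagnostic_metrics(tp=30, fp=170, fn=20, tn=280))
```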
Abstract:
Although the processes involved in rational patient targeting may be obvious for certain services, for others both the appropriate sub-populations to receive services and the procedures to be used for their identification may be unclear. This project was designed to address several research questions that arise in the attempt to deliver appropriate services to specific populations. The related difficulties are particularly evident for interventions about which findings regarding effectiveness are conflicting. When an intervention clearly is not beneficial (or is dangerous) to a large, diverse population, consensus regarding withholding the intervention from dissemination can easily be reached. When findings are ambiguous, however, such conclusions may be impossible. When the characteristics of patients likely to benefit from an intervention are not obvious, and when the intervention is not significantly invasive or dangerous, the strategy proposed herein may be used to identify specific characteristics of sub-populations that may benefit from the intervention. The identification of these populations may be used both to further inform decisions regarding distribution of the intervention and to plan its implementation by identifying specific target populations for service delivery. This project explores a method for identifying such sub-populations through the use of related datasets generated from clinical trials conducted to test the effectiveness of an intervention. The method is specified in detail and tested using the example intervention of case management for outpatient treatment of populations with chronic mental illness. These analyses were applied in order to identify any characteristics that distinguish specific sub-populations more likely to benefit from case management services, despite conflicting findings regarding its effectiveness for the aggregate population, as reported in the body of related research. In addition to a limited set of characteristics associated with benefit, however, the findings generated a larger set of characteristics of patients likely to experience greater improvement without the intervention.
Abstract:
Next to leisure, sport, and household activities, the most common activity resulting in medically consulted injuries and poisonings in the United States is work, with an estimated 4 million workplace-related episodes reported in 2008 (U.S. Department of Health and Human Services, 2009). To address the risks inherent to various occupations, risk management programs are typically put in place that include worker training, engineering controls, and personal protective equipment. Recent studies have shown that such interventions alone are insufficient to adequately manage workplace risks, and that the climate in which the workers and safety program exist (known as the "safety climate") is an equally important consideration. The organizational safety climate is so important that many studies have focused on developing means of measuring it in various work settings. While safety climate studies have been reported for several industrial settings, published studies assessing safety climate in the university work setting are largely absent. Universities are particularly unique workplaces because of the potential exposure to a diversity of agents representing both acute and chronic risks. Universities are also unique because readily detectable health and safety outcomes are relatively rare. The ability to measure safety climate in a work setting with rarely observed systemic outcome measures could serve as a powerful means of evaluating safety risk management programs. The goal of this research study was the development of a survey tool to measure safety climate specifically in the university work setting. The use of a standardized tool also allows for comparisons among universities throughout the United States. A specific study objective, quantitatively assessing safety climate at five universities across the United States, was accomplished: 971 participants at the five universities completed an online questionnaire to measure the safety climate. The average safety climate score across the five universities was 3.92 on a scale of 1 to 5, with 5 indicating very high perceptions of safety at these universities. The two lowest overall dimensions of university safety climate were "acknowledgement of safety performance" and "department and supervisor's safety commitment". The results underscore how the perception of safety climate is significantly influenced at the local level. A second study objective, evaluating the reliability and validity of the safety climate questionnaire, was also accomplished. A third objective fulfilled was to provide executive summaries of the questionnaire results to the participating universities' health & safety professionals and to collect feedback on their usefulness, relevance and perceived accuracy. Overall, the professionals found the survey and results to be very useful, relevant and accurate. Finally, the safety climate questionnaire will be offered to other universities for benchmarking purposes at the annual meeting of a nationally recognized university health and safety organization. The ultimate goal of the project, the creation of a standardized tool that can be used for measuring safety climate in the university work setting and that facilitates meaningful comparisons among institutions, was accomplished.
Abstract:
Background: the effects of interventions that focus on improving eating habits, increasing physical activity, and reducing sedentary behaviors on weight status, body mass index (BMI) percentile and BMI z-scores in youths have not been well documented. This study aimed to determine the short- and long-term effects of a 2-week residential weight management summer camp program for youths on weight, BMI, BMI percentile, and BMI z-score. Methods: A sample of 73 obese, multiethnic, 10-14 year old youths (mean age 11.9 ± 1.4 years) attended a weight management camp called Kamp K'aana for two weeks and completed a 12-month follow-up on height and weight. As part of Kamp K'aana, participants received a series of nutrition, physical activity and behavioral lessons and were on an 1800 kcal per day meal plan. Anthropometric measurements of height and weight were taken to calculate participants' BMI percentiles and z-scores. Paired t-tests, chi-square tests and ANCOVA, adjusting for age, gender, and ethnicity, were used to assess changes in body weight, BMI, BMI percentile and BMI z-score from pre-camp to two weeks post-camp and to 12 months post-camp. Results: Significant reductions in body weight of 3.6 ± 1.4 (P = 0.0000), BMI of 1.4 ± 0.54 (P = 0.0000), BMI percentile of 0.45 ± 0.06 (P = 0.0000), and BMI z-score of 0.1 ± 0.06 (P = 0.0000) were observed at the end of the camp. Significant reductions in BMI z-scores (P < 0.001) and BMI percentile (P < 0.001) were observed at the 12-month reunion compared with pre-camp and two-weeks post-camp data. There was a significant increase in weight and BMI (P = 0.0000) at the 12-month reunion compared with pre- and post-camp measurements. Conclusion: Kamp K'aana has consistently shown short-term reductions in weight, BMI, BMI percentile, and BMI z-score. Results from the analysis of long-term data suggest that this intervention had beneficial effects on body composition in an ethnically diverse population of obese children. Further research that includes a control group, a larger sample size, and a cost analysis should be conducted.
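The pre/post comparisons reported above are paired analyses. The sketch below illustrates the paired t-test step on simulated BMI z-scores with roughly the reported mean change; the numbers are simulated and do not use or reproduce the study's data.

```python
# Hedged sketch of a pre/post comparison like the one described above: a paired
# t-test on synthetic BMI z-scores measured before and two weeks after camp.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 73
pre_z = rng.normal(2.3, 0.3, n)               # assumed baseline BMI z-scores
post_z = pre_z - rng.normal(0.1, 0.06, n)     # mean drop of about 0.1, as reported

t, p = stats.ttest_rel(pre_z, post_z)
print(f"mean change {np.mean(pre_z - post_z):.2f}, paired t-test p = {p:.2g}")
```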
Abstract:
Producers using a two-year rotation of corn and soybean often apply fertilizer on a biennial basis, spreading the recommended amounts of phosphorus and potassium for both crops prior to corn establishment. This approach minimizes application costs and is in accordance with university fertility recommendations, which have found a low probability of a yield response to fertilizer when soils test at the medium/optimum level or above. However, the field trials on which these state recommendations were based are often several decades old. Increases in average corn and soybean yields and the associated increases in crop nutrient removal rates have called into question the validity of these recommendations for current production environments. This study investigated the response of soil test levels and grain yield to annual and biennial fertilizer applications made at 1x and 2x rates of current university fertilizer recommendations.