956 results for management of knowledge organization
Abstract:
Enophthalmos is a relatively frequent and often misdiagnosed clinical sign in orbital diseases. Knowledge of the different etiologies of enophthalmos and its adequate management are important because, in some cases, it may be the first sign of a life-threatening disease. This article provides a comprehensive review of the pathophysiology, evaluation, and management of enophthalmos. The main etiologies, such as trauma, chronic maxillary atelectasis (silent sinus syndrome), breast cancer metastasis, and orbital varix, will be discussed. Its objective is to enable the reader to recognize, assess, and treat the spectrum of disorders causing enophthalmos.
Abstract:
Rapid diagnostic tests (RDT) are sometimes recommended to improve the home-based management of malaria. The accuracy of an RDT for the detection of clinical malaria and the presence of malarial parasites has recently been evaluated in a high-transmission area of southern Mali. During the same study, the cost-effectiveness of a 'test-and-treat' strategy for the home-based management of malaria (based on an artemisinin-combination therapy, ACT) was compared with that of a 'treat-all' strategy. Overall, 301 patients, of all ages, each of whom had been considered a presumptive case of uncomplicated malaria by a village health worker, were checked with a commercial RDT (Paracheck-Pf). The sensitivity, specificity, and positive and negative predictive values of this test, compared with the results of microscopy and two different definitions of clinical malaria, were then determined. The RDT was found to be 82.9% sensitive (with a 95% confidence interval of 78.0%-87.1%) and 78.9% (63.9%-89.7%) specific compared with the detection of parasites by microscopy. In the detection of clinical malaria, it was 95.2% (91.3%-97.6%) sensitive and 57.4% (48.2%-66.2%) specific compared with a general practitioner's diagnosis of the disease, and 100.0% (94.5%-100.0%) sensitive but only 30.2% (24.8%-36.2%) specific when compared against the fulfillment of the World Health Organization's (2003) research criteria for uncomplicated malaria. Among children aged 0-5 years, the cost of the 'test-and-treat' strategy, per episode, was about twice that of the 'treat-all' (U.S.$1.0 vs. U.S.$0.5). In older subjects, however, the two strategies were equally costly (approximately U.S.$2/episode). In conclusion, for children aged 0-5 years in a high-transmission area of sub-Saharan Africa, use of the RDT was not cost-effective compared with the presumptive treatment of malaria with an ACT. In older patients, use of the RDT did not reduce costs.
The question remains whether either of the strategies investigated can be made affordable for the affected population.
Abstract:
Physiology and current knowledge about gestational diabetes, which led to the adoption of new diagnostic criteria and blood glucose target levels during pregnancy by the Swiss Society for Endocrinology and Diabetes, are reviewed. The 6th International Workshop Conference on Gestational Diabetes Mellitus in Pasadena (2008) defined new diagnostic criteria based on the results of the HAPO trial. These criteria were presented during the ADA congress in New Orleans in 2009. According to the new criteria there is no need for screening; instead, all pregnant women have to be tested with a 75 g oral glucose tolerance test between the 24th and 28th week of pregnancy. The new diagnostic values are very similar to those previously adopted by the ADA, with the exception that only one out of three values has to be elevated in order to make the diagnosis of gestational diabetes. Due to this important difference it is very likely that gestational diabetes will be diagnosed more frequently in the future. The diagnostic criteria are: fasting plasma glucose ≥ 5.1 mmol/l, 1-hour value ≥ 10.0 mmol/l, or 2-hour value ≥ 8.5 mmol/l. Based on current knowledge and randomized trials, it is much more difficult to define glucose target levels during pregnancy. This difficulty has led to many different recommendations issued by diabetes societies. The Swiss Society for Endocrinology and Diabetes follows the arguments of the International Diabetes Federation (IDF) that self-monitoring of blood glucose lacks precision and that there are very few randomized trials. Therefore, the target levels have to be easy to remember and may differ slightly depending on whether they are expressed in mmol/l or mg/dl. The Swiss Society for Endocrinology and Diabetes adopts the tentative target values of the IDF, with fasting plasma glucose < 5.3 mmol/l and 1- and 2-hour postprandial (after the end of the meal) values of < 8.0 and < 7.0 mmol/l, respectively.
The last part of these recommendations deals with the therapeutic options during pregnancy (nutrition, physical exercise and pharmaceutical treatment). If the target values are not met despite lifestyle changes, approximately 25% of patients have to be treated pharmacologically. Insulin therapy is still the preferred treatment option, but metformin (and, as an exception, glibenclamide) can be used if there are major hurdles to the initiation of insulin therapy.
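The one-abnormal-value diagnostic rule above can be written as a short check, using the stated OGTT thresholds (an illustration only, not clinical software):

```python
# Sketch of the diagnostic rule stated above: gestational diabetes is
# diagnosed if at least ONE of the three 75 g OGTT values reaches its
# threshold (values in mmol/l; illustration only, not clinical software).
THRESHOLDS = {"fasting": 5.1, "1h": 10.0, "2h": 8.5}

def gdm_diagnosed(fasting, one_hour, two_hour):
    """True if any of the three OGTT values is at or above its threshold."""
    values = {"fasting": fasting, "1h": one_hour, "2h": two_hour}
    return any(values[k] >= THRESHOLDS[k] for k in THRESHOLDS)

print(gdm_diagnosed(5.2, 8.9, 7.1))   # fasting alone is elevated -> True
print(gdm_diagnosed(4.8, 9.9, 8.4))   # all three below threshold -> False
```

This "any one of three" logic is exactly what makes the new criteria more sensitive than rules requiring two or more elevated values.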
Abstract:
Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing environmental conditions and numbers of users, application performance might suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for controlling the complexity of scaling applications composed of multiple services, using mechanisms based on the fulfillment of SLAs. We present how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators to optimize the allocation of services belonging to distributed applications. We validate our models using experiments and simulations involving a distributed enterprise information system. We show how discovering correlations between application performance indicators can serve as a basis for creating refined service level objectives, which can then be used for scaling the application and improving its overall performance under similar conditions.
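As a rough illustration of the idea (our own sketch with hypothetical data, not the system described above): if an easily monitored indicator correlates strongly with the SLA metric, a threshold on that indicator can serve as a refined service level objective that drives scaling decisions.

```python
# Hypothetical sketch: derive a scaling trigger from a correlation between
# a cheap indicator (queue length) and the SLA metric (response time).
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monitoring samples: queue length vs. response time (ms).
queue = [2, 4, 6, 9, 12, 15]
resp  = [80, 95, 130, 180, 240, 310]

r = pearson(queue, resp)
if r > 0.9:  # strong correlation: queue length can proxy the SLA metric
    # largest queue length whose observed response time still met the SLO
    slo_ms = 200
    threshold = max(q for q, t in zip(queue, resp) if t <= slo_ms)
    print(f"refined SLO: scale out when queue length > {threshold} (r={r:.2f})")
```

The design point is that the derived threshold is cheap to evaluate continuously, whereas the SLA metric itself may only be observable after a violation has already occurred.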
Abstract:
BACKGROUND Enterococci are an important cause of central venous catheter (CVC)-associated bloodstream infections (CA-BSI). It is unclear whether CVC removal is necessary to successfully manage enterococcal CA-BSI. METHODS A 12-month retrospective cohort study of adults with enterococcal CA-BSI was conducted at a tertiary care hospital; clinical, microbiological and outcome data were collected. RESULTS A total of 111 patients had an enterococcal CA-BSI. The median age was 58.2 years (range 21 to 94 years). There were 45 (40.5%) infections caused by Enterococcus faecalis (among which 10 [22%] were vancomycin resistant), 61 (55%) by Enterococcus faecium (57 [93%] vancomycin resistant) and five (4.5%) by other Enterococcus species. Patients were treated with linezolid (n=51 [46%]), vancomycin (n=37 [33%]), daptomycin (n=11 [10%]), ampicillin (n=2 [2%]) or quinupristin/dalfopristin (n=2 [2%]); seven (6%) patients did not receive adequate enterococcal treatment. Additionally, 24 (22%) patients received adjunctive gentamicin treatment. The CVC was retained in 29 (26.1%) patients. Patients whose CVCs were removed showed lower rates of in-hospital mortality (15 [18.3%] versus 11 [37.9%]; P=0.03), but similar rates of recurrent bacteremia (nine [11.0%] versus two [7.0%]; P=0.7) and a similar post-BSI length of hospital stay (median [range]) (11.1 [1.7 to 63.1] days versus 9.3 [1.9 to 31.8] days; P=0.3). Catheter retention was an independent predictor of mortality (OR 3.34 [95% CI 1.21 to 9.26]). CONCLUSIONS To the authors' knowledge, the present article describes the largest enterococcal CA-BSI series to date. Mortality was increased among patients whose catheter was retained. Additional prospective studies are necessary to determine the optimal management of enterococcal CA-BSI.
Abstract:
Various applications for the purposes of event detection, localization, and monitoring can benefit from the use of wireless sensor networks (WSNs). Wireless sensor networks are generally easy to deploy, have a flexible topology, and can support a diversity of tasks thanks to the large variety of sensors that can be attached to the wireless sensor nodes. To guarantee the efficient operation of such a heterogeneous wireless sensor network during its lifetime, appropriate management is necessary. Typically, there are three management tasks, namely monitoring, (re)configuration, and code updating. On the one hand, status information, such as battery state and node connectivity, of both the wireless sensor network and the sensor nodes has to be monitored. On the other hand, sensor nodes have to be (re)configured, e.g., by setting the sensing interval. Most importantly, new applications have to be deployed and bug fixes have to be applied during the network lifetime. All management tasks have to be performed in a reliable, time- and energy-efficient manner. The ability to disseminate data from one sender to multiple receivers in a reliable, time- and energy-efficient manner is critical for the execution of the management tasks, especially for code updating. Using multicast communication in wireless sensor networks is an efficient way to handle such a traffic pattern. Due to the nature of code updates, a multicast protocol has to support bulky traffic and end-to-end reliability. Further, the limited resources of wireless sensor nodes demand an energy-efficient operation of the multicast protocol. Current data dissemination schemes do not fulfil all of the above requirements. In order to close the gap, we designed the Sensor Node Overlay Multicast (SNOMC) protocol to support a reliable, time-efficient and energy-efficient dissemination of data from one sender node to multiple receiver nodes.
In contrast to other multicast transport protocols, which do not support reliability mechanisms, SNOMC supports end-to-end reliability using a NACK-based reliability mechanism. The mechanism is simple and easy to implement and can significantly reduce the number of transmissions. It is complemented by a data acknowledgement after successful reception of all data fragments by the receiver nodes. In SNOMC, three different caching strategies are integrated for an efficient handling of the necessary retransmissions, namely caching on each intermediate node, caching on branching nodes, or caching only on the sender node. Moreover, an option was included to pro-actively request missing fragments. SNOMC was evaluated both in the OMNeT++ simulator and in our in-house real-world testbed, and compared to a number of common data dissemination protocols, such as Flooding, MPR, TinyCubus, PSFQ, and both UDP and TCP. The results showed that SNOMC outperforms the selected protocols in terms of transmission time, number of transmitted packets, and energy consumption. Moreover, we showed that SNOMC performs well with different underlying MAC protocols, which support different levels of reliability and energy efficiency. Thus, SNOMC can offer a robust, high-performing solution for the efficient distribution of code updates and management information in a wireless sensor network. To address the three management tasks, in this thesis we developed the Management Architecture for Wireless Sensor Networks (MARWIS). MARWIS is specifically designed for the management of heterogeneous wireless sensor networks. A distinguishing feature of its design is the use of wireless mesh nodes as a backbone, which enables diverse communication platforms and the offloading of functionality from the sensor nodes to the mesh nodes. This hierarchical architecture allows for efficient operation of the management tasks, due to the organisation of the sensor nodes into small sub-networks, each managed by a mesh node.
Furthermore, we developed an intuitive graphical user interface, which allows non-expert users to easily perform management tasks in the network. In contrast to other management frameworks, such as Mate, MANNA, and TinyCubus, or code dissemination protocols, such as Impala, Trickle, and Deluge, MARWIS offers an integrated solution for the monitoring, configuration and code updating of sensor nodes. Integration of SNOMC into MARWIS further increases the efficiency of the management tasks. To our knowledge, our approach is the first to offer a combination of a management architecture with an efficient overlay multicast transport protocol. This combination of SNOMC and MARWIS supports the reliable, time- and energy-efficient operation of a heterogeneous wireless sensor network.
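The receiver side of a NACK-based end-to-end reliability mechanism, as described above, can be sketched roughly as follows (a simplified illustration of the general technique, not the SNOMC implementation):

```python
# Simplified sketch (our illustration, not SNOMC source code) of NACK-based
# end-to-end reliability: the receiver tracks which fragments of a code-update
# image arrived and requests only the missing ones; an empty NACK list means
# the image is complete and a single data acknowledgement can be sent.
class FragmentReceiver:
    def __init__(self, total_fragments):
        self.total = total_fragments
        self.received = set()

    def on_fragment(self, seq):
        """Record arrival of fragment number `seq` (duplicates are harmless)."""
        self.received.add(seq)

    def missing(self):
        """Fragment numbers to put into a NACK; empty means send the data ACK."""
        return sorted(set(range(self.total)) - self.received)

rx = FragmentReceiver(total_fragments=8)
for seq in (0, 1, 2, 4, 5, 7):   # fragments 3 and 6 were lost in transit
    rx.on_fragment(seq)
print("NACK:", rx.missing())
```

Because only the missing fragments are reported, the sender (or a caching intermediate node, under the strategies listed above) retransmits a minimal set, which is what keeps the transmission count low.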
Abstract:
The new Swiss Chronic Obstructive Pulmonary Disease (COPD) guidelines are based on a previous version, which was published 10 years ago. The Swiss Respiratory Society felt the need to update the previous document in the light of new knowledge about, and novel therapeutic developments for, this prevalent and important disease. The recommendations and statements are based on the available literature, on other national guidelines and, in particular, on the GOLD (Global Initiative for Chronic Obstructive Lung Disease) report. Our aim is to advise pulmonary physicians, general practitioners and other health care workers on the early detection and diagnosis of COPD, its prevention, best symptomatic control, and the avoidance of complications and deterioration.
Abstract:
Introduction: As the population in the United States continues to age, more attention in primary practice settings is now devoted to managing the care of the elderly. The occurrence of elder abuse is a growing problem, and many professionals in primary care may be ill prepared, in knowledge or resources, to identify and manage it. [See PDF for complete abstract]
Abstract:
Despite major improvements in diagnostics and interventional therapies, cardiovascular diseases remain a major health care and socio-economic burden both in western and developing countries, where this burden is increasing in close correlation with economic growth. Health authorities and the general population have started to recognize that the fight against these diseases can only be won if we increase our investment in lifestyle interventions and prevention. There is overwhelming evidence of the efficacy of secondary prevention initiatives, including cardiac rehabilitation, in terms of reduction in morbidity and mortality. However, secondary prevention is still too poorly implemented in clinical practice, often only in selected populations and over a limited period of time. The development of systematic and fully comprehensive preventive programmes, integrated into the organization of national health systems, is warranted. Furthermore, systematic monitoring of the process of delivery and of outcomes is a necessity. Cardiology and secondary prevention, including cardiac rehabilitation, have evolved almost independently of each other, and although each makes a unique contribution, it is now time to join forces under the banner of preventive cardiology and create a comprehensive model that optimizes long-term outcomes for patients and reduces the future burden on health care services. These are the aims of the Cardiac Rehabilitation Section of the European Association for Cardiovascular Prevention & Rehabilitation in promoting secondary preventive cardiology in clinical practice.
Abstract:
Cloud computing enables provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, the objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges in maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system (CMS), increasing the efficiency of allocating resources as well as that of scaling the infrastructure. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method for discovering relations between the performance indicators of services belonging to distributed applications and then using these relations for building scaling rules that a CMS can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs, based on given SLA performance constraints. All the presented research was implemented and tested using enterprise distributed applications.
Abstract:
In the aftermath of the 2008 crisis, scholars have begun to revise their conceptions of how market participants interact. While the traditional “rationalist optic” posits market participants who are able to process decision-relevant information and thereby transform uncertainty into quantifiable risks, the increasingly popular “sociological optic” stresses the role of uncertainty in expectation formation and that of social conventions in creating confidence in markets. Applications of the sociological optic to concrete regulatory problems are still limited. By subjecting both optics to the same regulatory problem—the role of credit rating agencies (CRAs) and their ratings in capital markets—this paper provides insights into whether the sociological optic offers advice for tackling concrete regulatory problems and discusses its potential for complementing the rationalist optic. The empirical application suggests that the sociological optic is able not only to improve our understanding of the role of CRAs and their ratings, but also to provide solutions complementary to those posited by the rationalist optic.
Abstract:
BACKGROUND Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (NG) are the most frequent causes of bacterial sexually transmitted infections (STIs). Management strategies that reduce losses in the clinical pathway from infection to cure might improve STI control and reduce complications resulting from lack of, or inadequate, treatment. OBJECTIVES To assess the effectiveness and safety of home-based specimen collection as part of the management strategy for Chlamydia trachomatis and Neisseria gonorrhoeae infections, compared with clinic-based specimen collection, in sexually active people. SEARCH METHODS We searched the Cochrane Sexually Transmitted Infections Group Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE and LILACS on 27 May 2015, together with the World Health Organization International Clinical Trials Registry Platform (ICTRP) and ClinicalTrials.gov. We also handsearched conference proceedings, contacted trial authors and reviewed the reference lists of retrieved studies. SELECTION CRITERIA Randomized controlled trials (RCTs) of home-based compared with clinic-based specimen collection in the management of C. trachomatis and N. gonorrhoeae infections. DATA COLLECTION AND ANALYSIS Three review authors independently assessed trials for inclusion, extracted data and assessed risk of bias. We contacted study authors for additional information. We resolved any disagreements through consensus. We used the standard methodological procedures recommended by Cochrane. The primary outcome was index case management, defined as the number of participants tested, diagnosed and treated, if test positive. MAIN RESULTS Ten trials involving 10,479 participants were included.
There was inconclusive evidence of an effect on the proportion of participants with index case management (defined as individuals tested, diagnosed and treated for CT or NG, or both) in the group with home-based (45/778, 5.8%) compared with clinic-based (51/788, 6.5%) specimen collection (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.60 to 1.29; 3 trials, I² = 0%, 1566 participants, moderate quality). Harms of home-based specimen collection were not evaluated in any trial. All 10 trials compared the proportions of individuals tested. The results for the proportion of participants completing testing had high heterogeneity (I² = 100%) and were not pooled. We could not combine data from individual studies looking at the number of participants tested because the proportions varied widely across the studies, ranging from 30% to 96% in the home group and 6% to 97% in the clinic group (low-quality evidence). The number of participants with a positive test was lower in the home-based specimen collection group (240/2074, 11.6%) than in the clinic-based group (179/967, 18.5%) (RR 0.72, 95% CI 0.61 to 0.86; 9 trials, I² = 0%, 3041 participants, moderate quality). AUTHORS' CONCLUSIONS Home-based specimen collection could result in similar levels of index case management for CT or NG infection when compared with clinic-based specimen collection. Increases in the proportion of individuals tested as a result of home-based, compared with clinic-based, specimen collection are offset by a lower proportion of positive results. The harms of home-based specimen collection compared with clinic-based specimen collection have not been evaluated. Future RCTs to assess the effectiveness of home-based specimen collection should be designed to measure biological outcomes of STI case management, such as the proportion of participants with negative tests for the relevant STI at follow-up.
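For readers unfamiliar with the statistics, the unpooled risk ratio implied by the raw counts quoted above (45/778 home vs. 51/788 clinic) can be checked with a few lines of arithmetic. The review's pooled Mantel-Haenszel estimate (RR 0.88, 0.60 to 1.29) differs slightly because it combines the three trials before summing:

```python
# Worked sketch: unpooled risk ratio and its 95% CI from the summed counts
# quoted above. The review's pooled estimate differs slightly because it is
# a Mantel-Haenszel combination of three separate trials.
import math

def risk_ratio(a, n1, b, n2, z=1.96):
    """RR of event a/n1 vs b/n2 with a log-scale normal-approximation CI."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)   # standard error of log(RR)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

rr, lo, hi = risk_ratio(45, 778, 51, 788)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Since the interval straddles 1, the counts alone cannot distinguish the two strategies on this outcome, which is what "inconclusive evidence of an effect" means here.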
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. Application owners are interested in finding the optimum amount of computing and network resources to use to ensure that the performance requirements of all their applications are met. They are also interested in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized, while maximizing the performance of tenants' applications.
Motivated by the complexities associated with the management and scaling of distributed applications while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on the dynamic sizing (scaling) of virtual infrastructures composed of virtual machines (VMs) running application services. We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
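As an illustration of the kind of SLA-driven VM-scaling decision described above, the following sketch (our own, with an assumed per-VM capacity and utilisation ceiling, not an algorithm from the thesis) computes the smallest VM count that keeps a service tier within an SLA-defined utilisation bound as the external load varies:

```python
# Minimal sketch of SLA-driven horizontal scaling (assumed parameters, not
# the thesis algorithms): given a per-VM service capacity and an SLA-defined
# utilisation ceiling, compute how many VMs a service tier needs for the
# current workload, scaling out before the guarantee is violated.
import math

def vms_needed(requests_per_s, per_vm_capacity, max_utilisation=0.7):
    """Smallest VM count keeping each VM below the SLA utilisation ceiling."""
    return math.ceil(requests_per_s / (per_vm_capacity * max_utilisation))

# Hypothetical workload trace: scaling decisions as the external load varies.
for load in (120, 350, 900):
    print(load, "req/s ->", vms_needed(load, per_vm_capacity=200), "VMs")
```

A reactive controller would apply this rule to the observed load; the predictive variants discussed in the thesis instead feed a forecast of the load into the same sizing decision, so capacity is in place before the demand arrives.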
Abstract:
OBJECTIVES Rates of TB/HIV coinfection and multi-drug resistant (MDR)-TB are increasing in Eastern Europe (EE). We aimed to study clinical characteristics, factors associated with MDR-TB and the predicted activity of empiric anti-TB treatment at the time of TB diagnosis among TB/HIV coinfected patients in EE, Western Europe (WE) and Latin America (LA). DESIGN AND METHODS Between January 1, 2011, and December 31, 2013, 1413 TB/HIV patients (62 clinics in 19 countries in EE, WE, Southern Europe (SE), and LA) were enrolled. RESULTS Significant differences were observed between EE (N = 844), WE (N = 152), SE (N = 164), and LA (N = 253) in the proportion of patients with a definite TB diagnosis (47%, 71%, 72% and 40%, p<0.0001), MDR-TB (40%, 5%, 3% and 15%, p<0.0001), and use of combination antiretroviral therapy (cART) (17%, 40%, 44% and 35%, p<0.0001). Injecting drug use (adjusted OR (aOR) = 2.03 (95% CI 1.00-4.09)), prior anti-TB treatment (aOR = 3.42 (1.88-6.22)), and living in EE (aOR = 7.19 (3.28-15.78)) were associated with MDR-TB. Among 585 patients with drug susceptibility test (DST) results, the empiric (i.e. without knowledge of the DST results) anti-TB treatment included ≥3 active drugs in 66% of participants in EE compared with 90-96% in the other regions (p<0.0001). CONCLUSIONS In EE, TB/HIV patients were less likely to receive a definite TB diagnosis, more likely to harbour MDR-TB and commonly received empiric anti-TB treatment with reduced activity. Improved management of TB/HIV patients in EE requires better access to TB diagnostics, including DSTs, empiric anti-TB therapy directed at both susceptible and MDR-TB, and more widespread use of cART.