885 results for Methods for Multi-criteria Evaluation


Relevance:

40.00%

Publisher:

Abstract:

Dissertation for obtaining the degree of Master in Electrical and Computer Engineering

Relevance:

40.00%

Publisher:

Abstract:

Dissertation for obtaining the degree of Master in Chemical and Biochemical Engineering

Relevance:

40.00%

Publisher:

Abstract:

Introduction: Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods: III/IV instars and pupae at a 9:1 ratio were placed in three types of containers, each with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results: The five-sweep netting technique was more suitable for drums and water tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions: Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.
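The study relates counts obtained by a sweeping method to the known number of immatures via linear regression. A minimal ordinary least-squares sketch, using entirely hypothetical numbers (not the study's data):

```python
# Toy ordinary least-squares fit relating sweep counts to the known number
# of immatures seeded per container. All numbers are hypothetical.

def ols_fit(x, y):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical: larvae recovered per five-sweep sample vs. true count seeded.
sweeps = [12, 25, 40, 55, 70]
seeded = [100, 200, 300, 400, 500]
slope, intercept = ols_fit(sweeps, seeded)
print(f"estimated count = {slope:.2f} * sweep_count + {intercept:.2f}")
```

The fitted slope acts as a calibration factor: a field technician multiplies the sweep count by it to estimate the container's total immature population.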

Relevance:

40.00%

Publisher:

Abstract:

Abstract: INTRODUCTION: The Montenegro skin test (MST) has good clinical applicability and low cost for the diagnosis of American tegumentary leishmaniasis (ATL). However, no studies have validated the reference value (5 mm) typically used to discriminate positive and negative results. We investigated MST results and evaluated its performance using different cut-off points. METHODS: The results of laboratory tests for 4,256 patients with suspected ATL were analyzed, and 1,182 individuals were found to fulfill the established criteria. Two groups were formed. The positive cutaneous leishmaniasis (PCL) group included patients with skin lesions and positive direct search for parasites (DS) results. The negative cutaneous leishmaniasis (NCL) group included patients with skin lesions with evolution up to 2 months, negative DS results, and negative indirect immunofluorescence assay results who were residents of urban areas that were reported to be probable sites of infection at domiciles and peridomiciles. RESULTS: The PCL and NCL groups included 769 and 413 individuals, respectively. The mean ± standard deviation MST in the PCL group was 12.62 ± 5.91 mm [95% confidence interval (CI): 12.20-13.04], and that in the NCL group was 1.43 ± 2.17 mm (95% CI: 1.23-1.63). Receiver-operating characteristic curve analysis indicated 97.4% sensitivity and 93.9% specificity for a cut-off of 5 mm, and 95.8% sensitivity and 97.1% specificity for a cut-off of 6 mm. CONCLUSIONS: Either 5 mm or 6 mm could be used as the cut-off value for diagnosing ATL, as both values had high sensitivity and specificity.
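The cut-off evaluation above boils down to computing sensitivity and specificity for a candidate induration threshold. A sketch of that computation, using hypothetical induration measurements rather than the study's patient data:

```python
# Sensitivity/specificity of the rule "induration >= cutoff means positive",
# as evaluated for each candidate MST cut-off. Measurements are hypothetical.

def sens_spec(positives, negatives, cutoff):
    """Return (sensitivity, specificity) for a given cut-off in mm."""
    tp = sum(1 for v in positives if v >= cutoff)   # true positives
    tn = sum(1 for v in negatives if v < cutoff)    # true negatives
    return tp / len(positives), tn / len(negatives)

pcl = [14, 9, 5, 12, 18, 7, 11]   # hypothetical PCL (disease) indurations, mm
ncl = [0, 2, 1, 6, 0, 3, 1]       # hypothetical NCL (no disease) indurations, mm

for cutoff in (5, 6):
    se, sp = sens_spec(pcl, ncl, cutoff)
    print(f"cut-off {cutoff} mm: sensitivity={se:.2f}, specificity={sp:.2f}")
```

Sweeping the cut-off over all observed values and plotting sensitivity against 1 - specificity yields the ROC curve the authors analyzed.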

Relevance:

40.00%

Publisher:

Abstract:

Abstract: INTRODUCTION: The dengue classification proposed by the World Health Organization (WHO) in 2009 is considered more sensitive than the classification proposed by the WHO in 1997. However, no study has assessed the ability of the WHO 2009 classification to identify dengue deaths among autopsied individuals suspected of having dengue. In the present study, we evaluated the ability of the WHO 2009 classification to identify dengue deaths among autopsied individuals suspected of having dengue in Northeast Brazil, where the disease is endemic. METHODS: This retrospective study included 121 autopsied individuals suspected of having dengue in Northeast Brazil during the epidemics of 2011 and 2012. All the autopsied individuals included in this study were confirmed to have dengue based on the findings of laboratory examinations. RESULTS: The median age of the autopsied individuals was 34 years (range, 1 month to 93 years), and 54.5% of the individuals were males. According to the WHO 1997 classification, 9.1% (11/121) of the cases were classified as dengue hemorrhagic fever (DHF) and 3.3% (4/121) as dengue shock syndrome. The remaining 87.6% (106/121) of the cases were classified as dengue with complications. According to the 2009 classification, 100% (121/121) of the cases were classified as severe dengue. The absence of plasma leakage (58.5%) and platelet counts <100,000/mm3 (47.2%) were the most frequent reasons for the inability to classify cases as DHF. CONCLUSIONS: The WHO 2009 classification is more sensitive than the WHO 1997 classification for identifying dengue deaths among autopsied individuals suspected of having dengue.

Relevance:

40.00%

Publisher:

Abstract:

Traffic Engineering (TE) approaches are increasingly important in network management, allowing an optimized configuration and resource allocation. In link-state routing, the task of setting appropriate weights to the links is both an important and a challenging optimization task. A number of different approaches have been put forward towards this aim, including the successful use of Evolutionary Algorithms (EAs). In this context, this work addresses the evaluation of three distinct EAs, one single-objective and two multi-objective EAs, in two tasks related to weight setting optimization towards optimal intra-domain routing, knowing the network topology and aggregated traffic demands and seeking to minimize network congestion. In both tasks, the optimization considers scenarios where there is a dynamic alteration in the state of the system, the first considering changes in the traffic demand matrices and the latter considering the possibility of link failures. The methods thus need to simultaneously optimize for both conditions, the normal and the altered one, following a preventive TE approach towards robust configurations. Since this can be formulated as a bi-objective function, the use of multi-objective EAs, such as SPEA2 and NSGA-II, came naturally; these were compared to a single-objective EA. The results show a remarkable behavior of NSGA-II in all proposed tasks, scaling well to harder instances, and thus presenting itself as the most promising option for TE in these scenarios.
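At the heart of the bi-objective formulation is Pareto dominance: a weight setting survives only if no other setting is at least as good on both congestion objectives (normal state and altered state) and strictly better on one. A minimal sketch of that dominance test, with hypothetical objective values:

```python
# Pareto-dominance filter underlying multi-objective EAs such as NSGA-II
# and SPEA2. Objective pairs below are hypothetical
# (normal-state congestion, altered-state congestion), both minimized.

def dominates(a, b):
    """True if solution a dominates b (minimization on every objective)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of objective tuples."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

candidates = [(1.2, 3.0), (1.5, 2.0), (2.0, 2.5), (1.1, 3.5), (1.5, 2.2)]
print(pareto_front(candidates))
```

NSGA-II builds on exactly this test, ranking the population into successive non-dominated fronts and using crowding distance to keep the fronts well spread.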

Relevance:

40.00%

Publisher:

Abstract:

The use of genome-scale metabolic models has been rapidly increasing in fields such as metabolic engineering. An important part of a metabolic model is the biomass equation, since this reaction will ultimately determine the predictive capacity of the model in terms of essentiality and flux distributions. Thus, in order to obtain a reliable metabolic model, the biomass precursors and their coefficients must be as precise as possible. Ideally, determination of the biomass composition would be performed experimentally, but when no experimental data are available this is established by approximation to closely related organisms. Computational methods, however, can extract some information from the genome, such as amino acid and nucleotide compositions. The main objectives of this study were to compare the biomass composition of several organisms and to evaluate how biomass precursor coefficients affected the predictability of several genome-scale metabolic models, by comparing predictions with experimental data from the literature. For that, the biomass macromolecular composition was experimentally determined and the amino acid composition was both experimentally and computationally estimated for several organisms. Sensitivity analysis studies were also performed with the Escherichia coli iAF1260 metabolic model concerning specific growth rates and flux distributions. The results obtained suggest that the macromolecular composition is conserved among related organisms. In contrast, experimental data for amino acid composition show no clear similarities among related organisms. It was also observed that the impact of the macromolecular composition on specific growth rates and flux distributions is larger than the impact of the amino acid composition, even when data from closely related organisms are used.
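A biomass precursor coefficient is conventionally expressed in mmol per gram of dry weight, derived from a measured weight fraction and the precursor's molecular weight. A sketch of that unit conversion, with a hypothetical composition value (not from any specific organism in the study):

```python
# Converting a measured macromolecular weight fraction into a biomass
# equation coefficient (mmol per gram dry weight). Numbers are hypothetical.

def coefficient_mmol_per_gdw(weight_fraction, mw_g_per_mol):
    """mmol of precursor needed to synthesize 1 gDW of biomass."""
    return weight_fraction * 1000.0 / mw_g_per_mol

# Hypothetical: alanine contributes 5% of biomass dry weight;
# alanyl residue MW is ~71.08 g/mol (free MW minus water lost in the bond).
coeff = coefficient_mmol_per_gdw(0.05, 71.08)
print(f"alanine coefficient: {coeff:.3f} mmol/gDW")
```

Errors in these coefficients propagate directly into the biomass reaction stoichiometry, which is why the study's sensitivity analysis varied them and measured the effect on predicted growth rates and flux distributions.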

Relevance:

40.00%

Publisher:

Abstract:

Under the framework of constraint-based modeling, genome-scale metabolic models (GSMMs) have been used for several tasks, such as metabolic engineering and phenotype prediction. More recently, their application in health-related research has spanned drug discovery, biomarker identification and host-pathogen interactions, targeting diseases such as cancer, Alzheimer's disease, obesity and diabetes. In recent years, the development of novel techniques for genome sequencing and other high-throughput methods, together with advances in Bioinformatics, has allowed the reconstruction of GSMMs for human cells. Considering the diversity of cell types and tissues present in the human body, it is imperative to develop tissue-specific metabolic models. Methods to automatically generate these models, based on generic human metabolic models and a plethora of omics data, have been proposed. However, their results have not yet been adequately and critically evaluated and compared. This work presents a survey of the most important tissue- or cell-type-specific metabolic model reconstruction methods, which use literature, transcriptomics, proteomics and metabolomics data, together with a global template model. As a case study, we analyzed the consistency between several omics data sources and reconstructed distinct metabolic models of hepatocytes using different methods and data sources as inputs. The results show that omics data sources overlap poorly and, in some cases, are even contradictory. Additionally, the hepatocyte metabolic models generated are in many cases unable to perform metabolic functions known to be present in liver tissue. We conclude that reliable methods for a priori omics data integration are required to support the reconstruction of complex models of human cells.
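The consistency check between omics sources can be framed as set overlap between the genes (or reactions) each source declares active. A minimal sketch using the Jaccard index, with hypothetical placeholder gene identifiers:

```python
# Pairwise Jaccard overlap between the gene sets supported by different
# omics sources, as a consistency measure. Gene IDs are hypothetical.

def jaccard(a, b):
    """Jaccard similarity of two sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b)

transcriptomics = {"G1", "G2", "G3", "G4"}
proteomics      = {"G2", "G3", "G5"}
metabolomics_genes = {"G6", "G7"}   # genes inferred from detected metabolites

print(f"transcriptomics vs proteomics: {jaccard(transcriptomics, proteomics):.2f}")
print(f"proteomics vs metabolomics:    {jaccard(proteomics, metabolomics_genes):.2f}")
```

A low Jaccard score between two sources is exactly the "poor overlap" the abstract reports: reconstruction methods fed different inputs will keep different reactions and thus produce divergent tissue models.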

Relevance:

40.00%

Publisher:

Abstract:

In recent years, a large number of multimedia materials for language learning have been published, most of which are CD-ROMs designed as self-study courses. With these materials, learners can work independently without the guidance of a teacher, and for this reason it has been claimed that they promote and facilitate autonomous learning. This relationship, however, is not certain, as Phil Benson and Peter Voller (1997:10) have rightly pointed out: (…) Such claims are often dubious, however, because of the limited range of options and roles offered to the learner. Nevertheless, technologies of education in the broadest sense can be considered to be either more or less supportive of autonomy. The question is what kind of criteria do we apply in evaluating them? In this article we present a joint study that defines the criteria that can be used to evaluate multimedia materials with respect to how well they enable autonomous learning. These criteria form the basis of a questionnaire that was used to evaluate a selection of CD-ROMs intended for self-directed language learning. The structure of this article is as follows: an introduction to the study; the criteria used to create the questionnaire; the overall results of the evaluation; and the conclusions drawn and their relevance for multimedia instructional design.

Relevance:

40.00%

Publisher:

Abstract:

STUDY DESIGN: Prospective, controlled, observational outcome study using clinical, radiographic, and patient/physician-based questionnaire data, with patient outcomes at 12 months follow-up. OBJECTIVE: To validate appropriateness criteria for low back surgery. SUMMARY OF BACKGROUND DATA: Most surgical treatment failures are attributed to poor patient selection, but no widely accepted consensus exists on detailed indications for appropriate surgery. METHODS: Appropriateness criteria for low back surgery have been developed by a multispecialty panel using the RAND appropriateness method. Based on panel criteria, a prospective study compared outcomes of patients appropriately and inappropriately treated at a single institution with 12 months follow-up assessment. Included were patients with low back pain and/or sciatica referred to the neurosurgical department. Information about symptoms, neurologic signs, the health-related quality of life (SF-36), disability status (Roland-Morris), and pain intensity (VAS) was assessed at baseline, at 6 months, and at 12 months follow-up. The appropriateness criteria were administered prospectively to each clinical situation and outside of the clinical setting, with the surgeon and patients blinded to the results of the panel decision. The patients were further stratified into 2 groups: appropriate treatment group (ATG) and inappropriate treatment group (ITG). RESULTS: Overall, 398 patients completed all forms at 12 months. Treatment was considered appropriate for 365 participants and inappropriate for 33 participants. The mean improvement in the SF-36 physical component score at 12 months was significantly higher in the ATG (mean: 12.3 points) than in the ITG (mean: 6.8 points) (P = 0.01), as well as the mean improvement in the SF-36 mental component score (ATG mean: 5.0 points; ITG mean: -0.5 points) (P = 0.02). 
Improvement was also significantly higher in the ATG for the mean VAS back pain score (ATG mean: 2.3 points; ITG mean: 0.8 points; P = 0.02) and the Roland-Morris disability score (ATG mean: 7.7 points; ITG mean: 4.2 points; P = 0.004). The ATG also had a higher improvement in mean VAS for sciatica (4.0 points) than the ITG (2.8 points), but the difference was not significant (P = 0.08). The SF-36 General Health score declined in both groups after 12 months; however, the decline was worse in the ITG (mean decline: 8.2 points) than in the ATG (mean decline: 1.2 points) (P = 0.04). Overall, in comparison to ITG patients, ATG patients had significantly higher improvement at 12 months, both statistically and clinically. CONCLUSION: In comparison to previously reported literature, our study is the first to assess the utility of appropriateness criteria for low back surgery at 1-year follow-up with multiple outcome dimensions. Our results confirm the hypothesis that application of appropriateness criteria can significantly improve patient outcomes.

Relevance:

40.00%

Publisher:

Abstract:

Matrix effects, which represent an important issue in liquid chromatography coupled to mass spectrometry or tandem mass spectrometry detection, should be closely assessed during method development. In the case of quantitative analysis, the use of stable isotope-labelled internal standard with physico-chemical properties and ionization behaviour similar to the analyte is recommended. In this paper, an example of the choice of a co-eluting deuterated internal standard to compensate for short-term and long-term matrix effect in the case of chiral (R,S)-methadone plasma quantification is reported. The method was fully validated over a concentration range of 5-800 ng/mL for each methadone enantiomer with satisfactory relative bias (-1.0 to 1.0%), repeatability (0.9-4.9%) and intermediate precision (1.4-12.0%). From the results obtained during validation, a control chart process during 52 series of routine analysis was established using both intermediate precision standard deviation and FDA acceptance criteria. The results of routine quality control samples were generally included in the +/-15% variability around the target value and mainly in the two standard deviation interval illustrating the long-term stability of the method. The intermediate precision variability estimated in method validation was found to be coherent with the routine use of the method. During this period, 257 trough concentration and 54 peak concentration plasma samples of patients undergoing (R,S)-methadone treatment were successfully analysed for routine therapeutic drug monitoring.
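The routine QC monitoring described above applies two acceptance rules to each control result: the FDA ±15% window around the nominal value and the ±2 SD interval derived from the intermediate precision estimated in validation. A sketch of those checks, with hypothetical target and SD values:

```python
# Control-chart acceptance rules for routine QC samples: FDA ±15% around
# the target, and ±2 SD from the validation intermediate precision.
# Target and SD values below are hypothetical.

def qc_flags(measured, target, sd):
    """Return (within_fda_15pct, within_2sd) for one QC result."""
    within_fda = abs(measured - target) <= 0.15 * target
    within_2sd = abs(measured - target) <= 2 * sd
    return within_fda, within_2sd

target_ng_ml = 200.0   # hypothetical QC target concentration, ng/mL
sd_ng_ml = 10.0        # hypothetical intermediate-precision SD, ng/mL

for result in (195.0, 225.0, 235.0):
    print(result, qc_flags(result, target_ng_ml, sd_ng_ml))
```

Plotting successive QC results against these two bands over the 52 routine series is what the abstract calls the control chart process; points drifting outside the 2 SD band before breaching the ±15% limit give an early warning of method instability.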

Relevance:

40.00%

Publisher:

Abstract:

The aim of this study is to perform a thorough comparison of quantitative susceptibility mapping (QSM) techniques and their dependence on the assumptions made. The compared methodologies were: two iterative single-orientation methodologies minimizing the l2 and l1-TV norms of the prior knowledge of the edges of the object, one over-determined multiple-orientation method (COSMOS), and a newly proposed modulated closed-form solution (MCF). The performance of these methods was compared using a numerical phantom and in-vivo high-resolution (0.65 mm isotropic) brain data acquired at 7T using a new coil combination method. For all QSM methods, the relevant regularization and prior-knowledge parameters were systematically changed in order to evaluate the optimal reconstruction in the presence and absence of a ground truth. Additionally, the QSM contrast was compared to conventional gradient recalled echo (GRE) magnitude and R2* maps obtained from the same dataset. The QSM reconstruction results of the single-orientation methods show comparable performance. The MCF method has the highest correlation (corrMCF = 0.95, r(2)MCF = 0.97) with the state-of-the-art method (COSMOS), with the additional advantage of an extremely fast computation time. The l-curve method gave the visually most satisfactory balance between reduction of streaking artifacts and over-regularization, with the latter being overemphasized when using the COSMOS susceptibility maps as ground truth. R2* and susceptibility maps, when calculated from the same datasets, although based on distinct features of the data, have a comparable ability to distinguish deep gray matter structures.
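The agreement figures quoted above (corr and r²) come down to a Pearson correlation between two reconstructed susceptibility maps over their voxel values. A sketch of that computation, with hypothetical voxel values standing in for flattened maps:

```python
# Pearson correlation between two susceptibility maps (e.g., a candidate
# reconstruction vs. COSMOS), computed over flattened voxel values.
# The voxel values below are hypothetical.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

cosmos = [0.01, 0.05, -0.02, 0.08, 0.00]   # hypothetical susceptibilities, ppm
mcf    = [0.02, 0.05, -0.01, 0.07, 0.01]
r = pearson(cosmos, mcf)
print(f"corr = {r:.3f}, r^2 = {r * r:.3f}")
```

Squaring the coefficient gives the r² value reported alongside the correlation in the abstract.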

Relevance:

40.00%

Publisher:

Abstract:

We introduce and analyze two new semi-discrete numerical methods for the multi-dimensional Vlasov-Poisson system. The schemes are constructed by combining a discontinuous Galerkin approximation to the Vlasov equation with a mixed finite element method for the Poisson problem. We show optimal error estimates in the case of smooth compactly supported initial data. We also propose a scheme that preserves the total energy of the system.
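For reference, the Vlasov-Poisson system being discretized couples a transport equation for the phase-space density f(x, v, t) to a Poisson equation for the self-consistent field; up to sign and normalization conventions it can be written as:

```latex
\begin{align}
  \partial_t f + \mathbf{v}\cdot\nabla_{\mathbf{x}} f
             + \mathbf{E}\cdot\nabla_{\mathbf{v}} f &= 0, \\
  -\Delta \Phi = \rho, \qquad
  \rho(\mathbf{x},t) = \int f(\mathbf{x},\mathbf{v},t)\,\mathrm{d}\mathbf{v},
  \qquad \mathbf{E} &= -\nabla \Phi.
\end{align}
```

The discontinuous Galerkin scheme discretizes the first (transport) equation in phase space, while the mixed finite element method solves the Poisson problem for (E, Φ) at each time level.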

Relevance:

40.00%

Publisher:

Abstract:

OBJECTIVE: To evaluate the antitumor activity and safety profile of plitidepsin administered as a 1-h weekly intravenous (i.v.) infusion of 3.2 mg/m(2) to patients with small cell lung cancer (SCLC) who relapsed or progressed after one line of chemotherapy. PATIENTS AND METHODS: This was a multicenter, open-label, single-arm, exploratory, phase II clinical trial. Treatment lasted until disease progression, unacceptable toxicity, patient refusal or treatment delay for >2 weeks. Objective response rate (primary efficacy endpoint) was evaluated according to the response evaluation criteria in solid tumors (RECIST). The rate of stable disease (SD) lasting for at least 6 months and time-to-event variables were secondary efficacy endpoints. Toxicity was assessed using National Cancer Institute Common Toxicity Criteria (NCI-CTC) version 2.0. RESULTS: Twenty pretreated SCLC patients (median age, 60 years) with extensive (n=13) or limited-stage disease (n=7) received a total of 24 treatment cycles (median, one cycle per patient; range, 1-2). Objective tumor responses were not observed, and only one of the 17 evaluable patients had SD. With a median follow-up of 11.8 months, the median progression-free survival and the median overall survival were 1.3 months and 4.8 months, respectively. The most troubling or common toxicities were fatigue, muscle weakness, lymphopenia, anemia (no patients showed neutropenia), and asymptomatic, non-cumulative increases in transaminase and alkaline phosphatase levels. CONCLUSION: This clinical trial shows that a 1-h weekly i.v. infusion of plitidepsin (3.2 mg/m(2)) was generally well tolerated, other than fatigue and muscle weakness, in patients with pretreated SCLC. One patient died due to multi-organ failure. The absence of antitumor activity found here precludes further studies of this plitidepsin schedule as second-line single-agent treatment of SCLC.