839 results for Analytical Criteria
Abstract:
Dissertation submitted for the degree of Master in Industrial Engineering and Management
Abstract:
Geographic information systems make it possible to analyze, produce, and edit geographic information. However, these systems fall short in the analysis and support of complex spatial problems. Therefore, when a spatial problem such as land use management requires a multi-criteria perspective, multi-criteria decision analysis is embedded into spatial decision support systems. The analytic hierarchy process is one of many multi-criteria decision analysis methods that can be used to support these complex problems. Using its capabilities, we develop a spatial decision support system to assist land use management. Land use management can encompass a broad spectrum of spatial decision problems. The developed decision support system had to accept various formats and types of data as input, in raster or vector format, the vector data being of polygon, line, or point type. The system was designed to perform its analysis for the Zambezi River Valley in Mozambique, the study area. The possible solutions for the emerging problems had to cover the entire region, which required the system to process large sets of data and constantly adjust to the needs of new problems. The developed decision support system is able to process thousands of alternatives using the analytic hierarchy process and produce an output suitability map for the problems faced.
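As a worked illustration of the weighting step of the analytic hierarchy process mentioned above, the sketch below derives criterion weights from a pairwise comparison matrix and checks their consistency. The criteria and judgement values are hypothetical, not taken from the dissertation.

```python
# A minimal AHP weighting sketch (hypothetical criteria and judgements).
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale; rows/columns are illustrative
# criteria for a land-use suitability problem: slope, land cover, distance
# to river.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# The principal eigenvector of A gives the criterion weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1), RI = 0.58 for n = 3.
n = len(A)
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
print("weights:", w.round(3), "CR:", round(cr, 3))

# A suitability map is then a weighted overlay of the normalized criterion
# rasters, e.g. suitability = sum_i w[i] * raster_i, evaluated per cell.
```

Judgements with a consistency ratio above roughly 0.1 are conventionally revised before the weights are used.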
Abstract:
Matrix effects, which represent an important issue in liquid chromatography coupled to mass spectrometry or tandem mass spectrometry detection, should be closely assessed during method development. For quantitative analysis, the use of a stable isotope-labelled internal standard with physico-chemical properties and ionization behaviour similar to the analyte is recommended. In this paper, an example of the choice of a co-eluting deuterated internal standard to compensate for short-term and long-term matrix effects in chiral (R,S)-methadone plasma quantification is reported. The method was fully validated over a concentration range of 5-800 ng/mL for each methadone enantiomer, with satisfactory relative bias (-1.0 to 1.0%), repeatability (0.9-4.9%) and intermediate precision (1.4-12.0%). From the results obtained during validation, a control chart process covering 52 series of routine analyses was established using both the intermediate precision standard deviation and FDA acceptance criteria. The results of routine quality control samples generally fell within the ±15% variability around the target value, and mainly within the two-standard-deviation interval, illustrating the long-term stability of the method. The intermediate precision variability estimated in method validation was found to be consistent with the routine use of the method. During this period, 257 trough-concentration and 54 peak-concentration plasma samples from patients undergoing (R,S)-methadone treatment were successfully analysed for routine therapeutic drug monitoring.
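A minimal sketch of the dual acceptance logic described above: each routine QC result is checked against both the FDA ±15% window and a two-standard-deviation control interval derived from intermediate precision. The target concentration and CV below are illustrative placeholders, not values from the paper.

```python
# Control-chart check for a QC sample (illustrative values, not the paper's).
target = 400.0   # nominal QC concentration, ng/mL (hypothetical)
ip_cv = 0.05     # intermediate precision expressed as a CV (hypothetical)

sd = ip_cv * target
fda_low, fda_high = 0.85 * target, 1.15 * target        # FDA +/-15% window
two_sd_low, two_sd_high = target - 2 * sd, target + 2 * sd

def qc_status(measured: float) -> str:
    """Classify a measured QC concentration against both criteria."""
    if not (fda_low <= measured <= fda_high):
        return "reject (outside +/-15% of target)"
    if not (two_sd_low <= measured <= two_sd_high):
        return "warning (outside 2 SD control limits)"
    return "in control"

print(qc_status(438.0))  # -> "warning (outside 2 SD control limits)"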
Abstract:
Background: Distinguishing postmortem gas accumulation in the body due to natural decomposition from other phenomena such as gas embolism can prove difficult using Multi-Detector Computed Tomography (MDCT) alone. The Radiological Alteration Index (RAI) was created with the intention of identifying bodies undergoing putrefaction based on the quantity of gas detected within the body. The flaw in this approach is the inability to determine with certainty that putrefaction is the origin of gas volumes in cases of moderate alteration. The aim of the current study is to identify the percentage compositions of O2, N2 and CO2 and the presence of gases such as H2 and H2S within these sampling sites in order to resolve this complication. Materials and methods: All cases investigated in our University Center of Legal Medicine undergo a Post-Mortem Computed Tomography (PMCT) scan before external examination or autopsy as a routine investigation. In the obtained images, areas of gas were characterized as 0, I, II or III based on the amount of gas present, according to the RAI (1). The criteria for these characterizations depended on the site of the gas; for example, the thoracic and abdominal cavities were graded as I (1-3 cm gas), II (3-5 cm gas) and III (>5 cm gas). Cases showing gaseous sites of grade II or III were selected for this study. Sampling was performed under CT guidance to target the regions to be punctured. Luer-lock PTFE syringes equipped with a three-way valve and needles were used to sample the gas directly (2). Gaseous samples were then analysed using gas chromatography coupled to a thermal conductivity detector (GC-TCD). The components present in the samples were expressed as a percentage of the overall gas present. Results: Up to now, we have investigated more than 40 cases using our standardized procedure for the sampling and analysis of gas. O2, N2 and CO2 were present in most samples. The following distributions were found to correlate with gas origins: putrefaction → O2 = 1-5%, CO2 > 15%, N2 = 10-70%, variable presence of H2/H2S/CH4; gas embolism/scuba diving accidents → varying percentages of O2 and N2, CO2 > 20%; trauma → small percentage of O2, CO2 < 15%, N2 > 65%. H2 and H2S indicated levels of putrefaction, along with methane, which can also gauge environmental conditions or conditions of body storage/burial. Many cases showing large RAI values (advanced alteration) revealed a radiological diagnosis in concordance with the interpretation of the gas composition. However, in certain cases (gas embolism, scuba divers) radiological interpretation was not possible and only chemical gas analysis led to the correct diagnosis, meaning that it provided complementary information to the radiological diagnosis. Conclusion: Investigation of postmortem gases is a useful tool to determine the origin of gas generation, which can aid the diagnosis of the cause of death. Gas levels can provide information on the stage of putrefaction and help to establish essential medico-legal diagnoses such as vital gas embolism.
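A minimal rule-of-thumb sketch of the compositional thresholds reported above, written as a toy classifier. The numeric thresholds are quoted from the abstract; the function itself is an illustrative simplification, not the authors' diagnostic procedure, and since the categories overlap, the order of the checks matters.

```python
# Toy classifier for postmortem gas origin based on GC-TCD percentages
# (thresholds from the abstract; simplification for illustration only).
def classify_gas(o2: float, co2: float, n2: float) -> str:
    if 1 <= o2 <= 5 and co2 > 15 and 10 <= n2 <= 70:
        return "putrefaction (H2/H2S/CH4 may also be present)"
    if co2 > 20:
        return "gas embolism / scuba diving accident"
    if co2 < 15 and n2 > 65:
        return "trauma"
    return "indeterminate; interpret together with radiology"

print(classify_gas(o2=2.0, co2=25.0, n2=60.0))  # -> putrefaction pattern
```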
Abstract:
BACKGROUND: A software-based tool (Optem) has been developed to automate the recommendations of the Canadian Multiple Sclerosis Working Group for optimizing MS treatment, in order to avoid subjective interpretation. METHODS: Treatment Optimization Recommendations (TORs) were applied to our database of patients treated with IFN beta1a IM. Patient data were assessed during year 1 for disease activity, and patients were assigned to two groups according to TOR: "change treatment" (CH) and "no change treatment" (NCH). These assessments were then compared to observed clinical outcomes for disease activity over the following years. RESULTS: We have data on 55 patients. The "change treatment" status was assigned to 22 patients, and "no change treatment" to 33 patients. The estimated sensitivity and specificity according to last-visit status were 73.9% and 84.4%. During the following years, the relapse rate was always higher in the "change treatment" group than in the "no change treatment" group (5 y: CH 0.7, NCH 0.07, p < 0.001; 12 m - last visit: CH 0.536, NCH 0.34). We obtained the same results with the EDSS (4 y: CH 3.53, NCH 2.55; annual progression rate in 12 m - last visit: CH 0.29, NCH 0.13). CONCLUSION: Applying TOR at the first year of therapy allowed accurate prediction of continued disease activity in relapses and disability progression.
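A minimal sketch of how the reported sensitivity and specificity could arise from a 2x2 table of TOR assignment versus observed disease activity. The cell counts below are a reconstruction chosen to be consistent with the reported group sizes (22 CH, 33 NCH) and percentages; they are not quoted from the study.

```python
# Sensitivity/specificity from a 2x2 table (reconstructed counts, not the
# study's data): rows = TOR assignment, columns = observed activity.
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """tp: CH & active, fn: NCH & active, tn: NCH & stable, fp: CH & stable."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec(tp=17, fn=6, tn=27, fp=5)  # 17+5=22 CH, 6+27=33 NCH
print(f"sensitivity={sens:.1%} specificity={spec:.1%}")  # 73.9% / 84.4%
```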
Abstract:
BACKGROUND. Total knee (TKR) and hip (THR) replacement (arthroplasty) are effective surgical procedures that relieve pain, improve patients' quality of life and increase functional capacity. Studies on variations in medical practice usually find the indications for performing these procedures to be highly variable, because surgeons appear to follow different criteria when recommending surgery to patients with different severity levels. We therefore proposed a study to evaluate inter-hospital variability in arthroplasty indication. METHODS. The pre-surgical condition of the 1603 patients included was compared in terms of their personal characteristics, clinical situation and self-perceived health status. Patients were asked to complete two health-related quality of life questionnaires: the generic SF-12 (Short Form) and the specific WOMAC (Western Ontario and McMaster Universities) scale. The type of patient undergoing primary arthroplasty was similar across the 15 hospitals evaluated. The variability in baseline WOMAC score between hospitals in THR and TKR indication was described by range, mean and standard deviation (SD), mean and standard deviation weighted by the number of procedures at each hospital, high/low ratio or extremal quotient (EQ5-95), variation coefficient (CV5-95) and weighted variation coefficient (WCV5-95) for the 5-95 percentile range. The variability in subjective and objective signs was evaluated using median, range and WCV5-95. The appropriateness of the procedures performed was assessed using a specific threshold proposed by Quintana et al for pain and functional capacity. RESULTS. The variability expressed as WCV5-95 was very low, between 0.05 and 0.11, for all three dimensions of the WOMAC scale for both types of procedure in all participating hospitals. The variability in the physical and mental SF-12 components was also very low for both types of procedure (0.08 and 0.07 for hip and 0.03 and 0.07 for knee surgery patients). However, moderate-to-high variability was detected in subjective and objective signs. Of all the surgeries performed, approximately a quarter could be considered inappropriate. CONCLUSIONS. Greater inter-hospital variability was observed for objective than for subjective signs for both procedures, suggesting that the differences in the clinical criteria followed by surgeons when indicating arthroplasty are the main factors responsible for the variation in surgery rates.
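A minimal sketch of the procedure-weighted summary statistics named above: a weighted mean and SD of hospital-level baseline WOMAC scores and the resulting weighted variation coefficient (WCV). The 5-95 percentile trimming used in the study is omitted for brevity, and the data are illustrative.

```python
# Procedure-weighted mean, SD, and weighted variation coefficient (WCV)
# across hospitals (illustrative data, not the study's).
import numpy as np

scores = np.array([42.0, 47.5, 39.8, 45.1, 50.2])  # mean baseline WOMAC per hospital
n_proc = np.array([120, 80, 150, 60, 95])          # procedures per hospital

w = n_proc / n_proc.sum()                      # hospital weights
mean_w = np.sum(w * scores)                    # weighted mean
sd_w = np.sqrt(np.sum(w * (scores - mean_w) ** 2))  # weighted SD
wcv = sd_w / mean_w                            # weighted variation coefficient
print(f"weighted mean={mean_w:.1f}  weighted SD={sd_w:.2f}  WCV={wcv:.3f}")
```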
Abstract:
The safe and responsible development of engineered nanomaterials (ENMs), nanotechnology-based materials and products, together with the definition of regulatory measures and the implementation of "nano" legislation in Europe, requires a widely supported scientific basis and sufficient high-quality data upon which to base decisions. At the very core of such a scientific basis is a general agreement on key issues related to the risk assessment of ENMs, encompassing the key parameters to characterise ENMs, appropriate methods of analysis and the best approach to express the effect of ENMs in widely accepted dose-response toxicity tests. The following major conclusions were drawn. Due to the high batch-to-batch variability in the characteristics of commercially available and, to a lesser degree, laboratory-made ENMs, it is not possible to make general statements regarding the toxicity resulting from exposure to ENMs. 1) Concomitant with using the OECD priority list of ENMs, other criteria for the selection of ENMs, such as relevance for mechanistic (scientific) studies or risk-assessment-based studies, widespread availability (and thus high expected volumes of use) or consumer concern (route of consumer exposure depending on application), could be helpful. The OECD priority list focuses on the validity of OECD tests; therefore, source material will be first in scope for testing. However, for risk assessment it is much more relevant to have toxicity data for the material as present in the products/matrices to which humans and the environment are exposed. 2) For most, if not all, characteristics of ENMs, standardized analytical methods, though not necessarily validated, are available. Generally, these methods are only able to determine one single characteristic, and some of them can be rather expensive. In practice, it is currently not feasible to fully characterise ENMs. Many techniques available to measure the same nanomaterial characteristic produce contrasting results (e.g. reported sizes of ENMs). It was recommended that at least two complementary techniques be employed to determine a metric of ENMs. The first great challenge is to prioritise metrics which are relevant in the assessment of biological dose-response relations and to develop analytical methods for characterising ENMs in biological matrices. It was generally agreed that one metric is not sufficient to fully describe ENMs. 3) Characterisation of ENMs in biological matrices starts with sample preparation. It was concluded that there is currently no standard approach/protocol for sample preparation to control agglomeration/aggregation and (re)dispersion. It was recommended that harmonisation be initiated and that exchange of protocols should take place. The precise methods used to disperse ENMs should be specifically, yet succinctly, described within the experimental section of a publication. 4) ENMs need to be characterised in the matrix as it is presented to the test system (in vitro/in vivo). 5) Alternative approaches (e.g. biological or in silico systems) for the characterisation of ENMs are simply not possible with current knowledge. Contributors: Iseult Lynch, Hans Marvin, Kenneth Dawson, Markus Berges, Diane Braguer, Hugh J. Byrne, Alan Casey, Gordon Chambers, Martin Clift, Giuliano Elia, Teresa F. Fernandes, Lise Fjellsbø, Peter Hatto, Lucienne Juillerat, Christoph Klein, Wolfgang Kreyling, Carmen Nickel, and Vicki Stone.
Abstract:
The fight against doping in sports has been governed since 1999 by the World Anti-Doping Agency (WADA), an independent institution behind the implementation of the World Anti-Doping Code (Code). The intent of the Code is to protect clean athletes through the harmonization of anti-doping programs at the international level, with special attention to the detection, deterrence and prevention of doping (1). A new version of the Code came into force on January 1st, 2015, introducing, among other improvements, longer periods of sanctioning for athletes (up to four years) and measures to strengthen the role of anti-doping investigations and intelligence. To ensure optimal harmonization, five International Standards covering different technical aspects of the Code are also currently in force: the List of Prohibited Substances and Methods (List), Testing and Investigations, Laboratories, Therapeutic Use Exemptions (TUE), and Protection of Privacy and Personal Information. Adherence to these standards is mandatory for all anti-doping stakeholders to be compliant with the Code. Among these documents, the eighth version of the International Standard for Laboratories (ISL), which also came into effect on January 1st, 2015, includes regulations for WADA and ISO/IEC 17025 accreditations and their application to urine and blood sample analysis by anti-doping laboratories (2). Specific requirements are also described in several Technical Documents or Guidelines in which various topics are highlighted, such as the identification criteria for gas chromatography (GC) and liquid chromatography (LC) coupled to mass spectrometry (MS) techniques (IDCR), measurements and reporting of endogenous anabolic androgenic steroids (EAAS), and analytical requirements for the Athlete Biological Passport (ABP).
Abstract:
The adoption of a proper traceability system is being incorporated into meat production practices as a method of gaining consumer confidence. The various partners operating in the meat production chain can be considered a social network, and they share the common goal of generating a communication process that can ensure each characteristic of the product, including safety. This study aimed to select the most appropriate meat traceability system "from farm to fork" that could be applied to Brazilian beef and pork production for international trade. The research was done in three steps. The first used the analytical hierarchy process (AHP) to select the best on-farm livestock traceability system. In the second step, the actors in the meat production chain were identified in order to build a framework, and the role of each in the network was defined. In the third step, the traceability system was selected. Results indicated that with an electronic traceability system it is possible to achieve better connections between the links in the chain and to provide the means for managing uncertainties by creating structures that facilitate a more efficient information flow.
Abstract:
The purpose of this thesis is twofold. The first and major part is devoted to the sensitivity analysis of various discrete optimization problems, while the second part addresses methods for calculating measures of solution stability and for solving multicriteria discrete optimization problems. Among the numerous approaches to the stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or of a set of optimal solutions) of the considered problems. Within the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. The quality of a problem solution can also be described in terms of robustness analysis. In this work, the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players in which the initial coefficients (costs) of the linear payoff functions are subject to perturbations. The investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for calculating the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable to calculating the stability radii of NP-hard problems. The last chapter of the thesis focuses on deriving innovative methods, based on an interactive optimization approach, for solving multicriteria combinatorial optimization problems. The key idea of the proposed approach is to utilize a parameterized achievement scalarizing function for solution calculation and to direct the interactive procedure by changing the weighting coefficients of this function. To illustrate the introduced ideas, a decision-making process is simulated for a three-objective median location problem. The concepts, models, and ideas collected and analyzed in this thesis create a good and relevant foundation for developing more complicated and integrated models of postoptimal analysis and for solving the most computationally challenging problems related to it.
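For context, a common parameterized achievement scalarizing function of the kind referred to above is the augmented Wierzbicki form, sketched below; the thesis may use a different parameterization, so this is a plausible illustration rather than its exact definition.

```latex
% Augmented (Wierzbicki-type) achievement scalarizing function for
% objectives f_1,...,f_k, reference point z^{ref}, weights \lambda_i > 0,
% and a small augmentation coefficient \rho > 0. The interactive procedure
% steers the search by adjusting the weights \lambda_i.
s\bigl(f(x),\lambda,z^{\mathrm{ref}}\bigr)
  = \max_{i=1,\dots,k} \lambda_i \bigl(f_i(x) - z^{\mathrm{ref}}_i\bigr)
  + \rho \sum_{i=1}^{k} \lambda_i \bigl(f_i(x) - z^{\mathrm{ref}}_i\bigr)
```

Minimizing this function over the feasible set yields a (weakly) Pareto-optimal solution for any choice of the weights, which is what makes it suitable for driving an interactive procedure.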
Abstract:
The growing number of companies operating internationally requires the development of global leaders to put strategies into practice. Although this development process is important for the corporate world, many future executives are graduates of business schools, which are closely linked to the business world and therefore play an important role in the process. This research examines whether the European "Master in Management" programs ranked by the Financial Times in 2010 select those candidates who are most suitable for global leadership development. To this end, three previous meta-studies are synthesized to produce a ranked competency profile of a global leader. Information on the admission criteria of the master's programs is then collected and compared with this profile. The results show that six competencies are measured by more than half of the programs: English proficiency, analytical ability (logical and quantitative reasoning), communication skills, global business knowledge, determination to achieve (motivation), and interpersonal skills. Furthermore, the operational skills required of global leaders are not significant in the admission process, and the focus is on analytical skills. Comparison of the results with the previously developed comprehensive profile indicates that a significant number of programs may underestimate the importance of personal skills and traits for the development of global leaders.
Abstract:
DANTAS, Rodrigo Assis Neves; NÓBREGA, Walkíria Gomes da; MORAIS FILHO, Luiz Alves; MACÊDO, Eurides Araújo Bezerra de; FONSECA, Patrícia de Cássia Bezerra; ENDERS, Bertha Cruz; MENEZES, Rejane Maria Paiva de; TORRES, Gilson de Vasconcelos. Paradigms in health care and its relationship to the nursing theories: an analytical test. Revista de Enfermagem UFPE on line, v. 4, n. 2, p. 16-24, Apr./Jun. 2010. Available at: <http://www.ufpe.br/revistaenfermagem/index.php/revista>.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The responses of many real-world problems can only be evaluated subject to noise. To make an efficient optimization of these problems possible, intelligent optimization strategies that successfully cope with noisy evaluations are required. In this article, a comprehensive review of existing kriging-based methods for the optimization of noisy functions is provided. In summary, ten methods for choosing the sequential samples are described using a unified formalism. They are compared on analytical benchmark problems in which the usual assumption of homoscedastic Gaussian noise made in the underlying models is met. Different problem configurations (noise level, maximum number of observations, initial number of observations) and setups (covariance functions, budget, initial sample size) are considered. It is found that the choices of the initial sample size and the covariance function are not critical. The choice of the method, however, can result in significant differences in performance. In particular, the three most intuitive criteria are found to be poor alternatives. Although no criterion is found to be consistently more efficient than the others, two specialized methods appear more robust on average.
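A minimal sketch of the modeling assumption shared by the reviewed methods: kriging (Gaussian-process regression) of noisy evaluations, where homoscedastic Gaussian noise appears as a constant "nugget" term on the covariance diagonal. The test function, kernel, and hyperparameters below are illustrative and fixed rather than estimated.

```python
# Kriging of a noisy 1-D function with a homoscedastic nugget term
# (illustrative setup; real methods would estimate the hyperparameters).
import numpy as np

def sq_exp(x1, x2, length=0.3, var=1.0):
    """Squared-exponential covariance between two 1-D point sets."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 20)
y = np.sin(6 * X) + rng.normal(0, 0.2, X.size)  # noisy evaluations
noise_var = 0.2 ** 2                            # homoscedastic noise variance

# Nugget on the diagonal encodes the observation noise.
K = sq_exp(X, X) + noise_var * np.eye(X.size)
Xs = np.linspace(0, 1, 5)                       # prediction points
Ks = sq_exp(Xs, X)

alpha = np.linalg.solve(K, y)
mu = Ks @ alpha                                 # posterior mean of f
cov = sq_exp(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
sd = np.sqrt(np.clip(np.diag(cov), 0, None))    # posterior SD of f
print(mu.round(3), sd.round(3))
```

The sequential-sampling criteria compared in the review differ mainly in how they trade off this posterior mean against the posterior uncertainty when choosing the next evaluation point.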
Abstract:
The fuzzy analytical network process (FANP) is introduced as a potential multi-criteria decision-making (MCDM) method for improving digital marketing management. Today's information overload makes digital marketing optimization, which is needed to continuously improve one's business, increasingly difficult. The proposed FANP framework is a method for enhancing the interaction between customers and marketers (i.e., the involved stakeholders) and thus for reducing the challenges of big data. The presented implementation takes the fuzziness of reality into account in order to manage the constant interaction and continuous development of communication between marketers and customers on the Web. Using this FANP framework, marketers are able to better meet the varying requirements of their customers. To improve the understanding of the implementation, advanced visualization methods (e.g., wireframes) are used.
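A minimal sketch of one building block commonly used in FANP implementations: triangular fuzzy pairwise judgements aggregated with Buckley's geometric-mean method and defuzzified to crisp weights. The criteria and judgement values are hypothetical, and the full network structure of the paper (interdependencies, supermatrix) is not shown.

```python
# Fuzzy weighting step with triangular fuzzy numbers (l, m, u), using
# Buckley's geometric-mean method (illustrative criteria and judgements).
import numpy as np

# Pairwise judgements for three hypothetical marketing criteria:
# reach vs. engagement vs. conversion. Entries are (l, m, u) triples;
# the lower triangle holds the component-wise reciprocals.
F = np.array([
    [[1, 1, 1],       [2, 3, 4],       [4, 5, 6]],
    [[1/4, 1/3, 1/2], [1, 1, 1],       [2, 3, 4]],
    [[1/6, 1/5, 1/4], [1/4, 1/3, 1/2], [1, 1, 1]],
])

# Fuzzy geometric mean of each row, computed component-wise over (l, m, u).
g = np.prod(F, axis=1) ** (1 / F.shape[0])
# Fuzzy weights: l and u swap in the reciprocal of the column sums.
s = g.sum(axis=0)
w_fuzzy = g / s[::-1]
# Centroid defuzzification, then normalize to crisp weights.
w = w_fuzzy.mean(axis=1)
w /= w.sum()
print(w.round(3))
```

These crisp weights would then enter the ANP supermatrix, where the network's interdependencies between criteria and alternatives are resolved.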