932 results for case-based design
Abstract:
As a continuing effort to establish structure-activity relationships (SARs) within the series of angiotensin II antagonists (sartans), a pharmacophoric model was built using novel TOPP 3D descriptors. Statistical values were satisfactory (PC4: r² = 0.96, q² (5 random groups) = 0.84; SDEP = 0.26) and encouraged the synthesis and subsequent biological evaluation of a series of new pyrrolidine derivatives. SAR analysis, together with combined 3D quantitative SAR and high-throughput virtual screening, showed that the newly synthesized 1-acyl-N-(biphenyl-4-ylmethyl)pyrrolidine-2-carboxamides may represent an interesting starting point for the design of new antihypertensive agents. In particular, biological tests performed on CHO-hAT₁ cells stably expressing the human AT₁ receptor showed that the length of the acyl chain is crucial for receptor interaction and that the valeric chain is optimal.
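As a hedged illustration of the cross-validation statistics quoted above (not the authors' code; the descriptor matrix, the activity values, and the reading of "PC4" as a four-component latent-variable model are all assumptions), q² over 5 random groups and SDEP could be computed along these lines:

```python
# Sketch: q^2 from 5-random-group cross-validation and SDEP.
# X and y are synthetic placeholders for 3D descriptors and activities.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 120))   # hypothetical descriptor matrix
y = rng.normal(size=40)          # hypothetical activity values

model = PLSRegression(n_components=4)                 # "PC4" assumed = 4 components
cv = KFold(n_splits=5, shuffle=True, random_state=0)  # 5 random groups
y_cv = cross_val_predict(model, X, y, cv=cv).ravel()

press = np.sum((y - y_cv) ** 2)    # predictive residual sum of squares
tss = np.sum((y - y.mean()) ** 2)  # total sum of squares around the mean
q2 = 1.0 - press / tss
sdep = np.sqrt(press / len(y))     # standard deviation of the error of prediction
print(f"q2 = {q2:.2f}, SDEP = {sdep:.2f}")
```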
Abstract:
Case-Based Reasoning is a methodology for problem solving based on past experiences. It tries to solve a new problem by retrieving and adapting previously known solutions of similar problems. However, retrieved solutions generally require adaptation before they can be applied to new contexts. One of the major challenges in Case-Based Reasoning is the development of an efficient methodology for case adaptation. The most widely used form of adaptation employs hand-coded adaptation rules, which demands a significant knowledge acquisition and engineering effort. An alternative that overcomes the difficulties associated with acquiring adaptation knowledge is the use of hybrid approaches and automatic learning algorithms to acquire the knowledge used for adaptation. We investigate hybrid approaches for case adaptation that employ Machine Learning algorithms. The investigated approaches automatically learn adaptation knowledge from a case base and apply it to adapt retrieved solutions. To verify the potential of the proposed approaches, they are experimentally compared with individual Machine Learning techniques. The results indicate that these approaches acquire case adaptation knowledge efficiently: the combination of the Instance-Based Learning and Inductive Learning paradigms, together with a data set of adaptation patterns, yields adaptations of the retrieved solutions with high predictive accuracy.
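A minimal sketch, under assumed data structures and synthetic data (not the thesis's implementation), of the hybrid scheme described: instance-based retrieval of the nearest case, plus an inductively learned model trained on a data set of adaptation patterns (differences between neighbouring cases) that corrects the retrieved solution:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 5))                 # hypothetical case descriptions
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5])   # hypothetical solutions

# Build the adaptation pattern data set: pair each case with its nearest
# neighbour; features = difference between problems, target = difference
# between solutions.
nn = NearestNeighbors(n_neighbors=2).fit(X)
_, idx = nn.kneighbors(X)
neigh = idx[:, 1]                              # idx[:, 0] is the case itself
adapt_X = X - X[neigh]
adapt_y = y - y[neigh]

# Inductive learning of adaptation knowledge from the patterns.
adapter = DecisionTreeRegressor(max_depth=5).fit(adapt_X, adapt_y)

def solve(query):
    """Retrieve the most similar case and adapt its solution."""
    _, i = nn.kneighbors(query.reshape(1, -1), n_neighbors=1)
    base = i[0, 0]
    correction = adapter.predict((query - X[base]).reshape(1, -1))[0]
    return y[base] + correction

print(solve(rng.uniform(size=5)))
```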
Predictive models for chronic renal disease using decision trees, naïve Bayes and case-based methods
Abstract:
Data mining can be used in the healthcare industry to “mine” clinical data and discover hidden information for intelligent and effective decision making. The discovery of hidden patterns and relationships often goes unexploited, yet advanced data mining techniques can remedy this. This thesis deals with the Intelligent Prediction of Chronic Renal Disease (IPCRD). The data cover blood tests, urine tests, and external symptoms used to predict chronic renal disease. Data from the database are first imported into Weka (3.6), and the Chi-Square method is used for feature selection. After normalizing the data, three classifiers were applied and the efficiency of the output was evaluated: Decision Tree, Naïve Bayes, and the K-Nearest Neighbour algorithm. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. The efficiency of the Decision Tree and KNN was almost the same, but Naïve Bayes showed a comparative edge over the others. Further, sensitivity and specificity tests are used as statistical measures to examine the performance of the binary classification. Sensitivity (also called recall rate in some fields) measures the proportion of actual positives that are correctly identified, while specificity measures the proportion of negatives that are correctly identified. The CRISP-DM methodology is applied to build the mining models. It consists of six major phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.
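A minimal sketch of that pipeline, in Python with scikit-learn on synthetic stand-in data rather than the thesis's Weka 3.6 workflow (all parameters and the data set are illustrative): chi-square feature selection, the three classifiers, and sensitivity/specificity scoring:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for the clinical data (blood, urine, symptoms).
X, y = make_classification(n_samples=400, n_features=24, random_state=0)
X = MinMaxScaler().fit_transform(X)            # chi2 needs non-negative inputs
X = SelectKBest(chi2, k=10).fit_transform(X, y)  # chi-square feature selection
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("Decision Tree", DecisionTreeClassifier()),
                  ("Naive Bayes", GaussianNB()),
                  ("k-NN", KNeighborsClassifier())]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    sensitivity = tp / (tp + fn)               # recall on actual positives
    specificity = tn / (tn + fp)               # recall on actual negatives
    print(f"{name}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```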
Abstract:
The purpose of this study was to identify whether the activity modeling framework supports problem analysis and provides a traceable and tangible connection from problem identification up to solution modeling. The validation of the methodology relied on a real problem from a Portuguese teaching syndicate (ASPE) regarding course development and management. The study was carried out with a view to producing a complete tutorial on how to apply the activity modeling framework to a real-world problem. Within each step of activity modeling, we provided a summary of the relevant elements required to perform it, pointed out some improvements, and applied it to ASPE's real problem. It was found that activity modeling enables well-structured problem analysis and provides a guiding thread between problem and solution modeling. It was concluded that activity-based task modeling is key to shortening the gap between problem and solution. The results revealed that the solution obtained using the activity modeling framework solved our customer's core concerns and allowed them to enhance the quality of their course development and management. The principal conclusion was that activity modeling is a properly defined methodology that supports software engineers in problem analysis, keeping a traceable guide between problem and solution.
Abstract:
Based on the genetic analysis of the genome of the phytopathogen Xylella fastidiosa, five media with defined composition were developed, and the growth abilities of this fastidious prokaryote were evaluated in liquid media and on solid plates. All media had a common salt composition and included the same amounts of glucose and vitamins but differed in their amino acid content. XDM1 medium contained the amino acids threonine, serine, glycine, alanine, aspartic acid and glutamic acid, for which complete degradation pathways occur in X. fastidiosa; XDM2 included serine and methionine, amino acids for which biosynthetic enzymes are absent, plus asparagine and glutamine, which are abundant in the xylem sap; XDM3 had the same composition as XDM2 but with asparagine replaced by aspartic acid, due to the presence of a complete degradation pathway for aspartic acid; XDM4 was a minimal medium with glutamine as the sole nitrogen source; XDM5 had the same composition as XDM4, plus methionine. The liquid and solidified XDM2 and XDM3 media were the most effective for the growth of X. fastidiosa. This work opens up the possibility of the in silico design of bacterial defined media once a genome is sequenced. (C) 2002 Federation of European Microbiological Societies. Published by Elsevier B.V. All rights reserved.
Abstract:
Relaxed conditions for the stability of nonlinear continuous-time systems given by fuzzy models are presented. A theoretical analysis shows that the proposed method provides better or at least the same results as the methods presented in the literature. Digital simulations exemplify this fact. This result is also used for the design of fuzzy regulators. The nonlinear systems are represented by the fuzzy models proposed by Takagi and Sugeno. The stability analysis and the design of controllers are described by LMIs (Linear Matrix Inequalities), which can be solved efficiently using convex programming techniques.
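The paper's relaxed conditions are not reproduced here, but as a baseline, the classical Takagi-Sugeno stability LMI (a common Lyapunov matrix P shared by all local models) can be posed and solved with convex programming. A minimal sketch, assuming two hypothetical local models and using CVXPY (neither is from the paper):

```python
# Classical sufficient condition: the origin is globally asymptotically
# stable if there exists P > 0 with Ai' P + P Ai < 0 for every rule i.
import cvxpy as cp
import numpy as np

A1 = np.array([[-2.0, 1.0], [0.0, -1.0]])   # hypothetical rule-1 dynamics
A2 = np.array([[-1.5, 0.5], [-0.3, -2.0]])  # hypothetical rule-2 dynamics

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2)]        # P positive definite
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(2))

prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve(solver=cp.SCS)
print(prob.status, P.value)  # 'optimal' means a common Lyapunov matrix exists
```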
Abstract:
Making diagnoses in oral pathology is often difficult and confusing in dental practice, especially for the less-experienced dental student. One of the most promising areas in bioinformatics is computer-aided diagnosis, where a computer system is capable of imitating human reasoning ability and providing diagnoses with an accuracy approaching that of expert professionals. This type of system could be an alternative tool for helping dental students overcome the difficulties of the oral pathology learning process, allowing students to define the variables and information important to improving decision-making performance. However, no current open data management system has been integrated with an artificial intelligence system in a user-friendly environment. Such a system could also be used as an educational tool to help students perform diagnoses. The aim of the present study was to develop and test an open case-based decision-support system. Methods: An open decision-support system based on Bayes' theorem connected to a relational database was developed using the C++ programming language. The software was tested in the computerisation of a surgical pathology service and in simulating the diagnosis of 43 known cases of oral bone disease. The simulation was performed after the system was initially filled with data from 401 cases of oral bone disease. Results: The system allowed the authors to construct and manage a pathology database and to simulate diagnoses using the variables from the database. Conclusion: Combining a relational database and an open decision-support system in the same user-friendly environment proved effective in simulating diagnoses based on information from an updated database.
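A minimal sketch of the kind of Bayes'-theorem inference such a system performs, written in Python rather than the authors' C++; the diseases, findings, prevalences, and frequencies are illustrative placeholders, not data from the 401-case database:

```python
# Posterior probability of each disease given observed findings, from
# priors and per-disease finding frequencies (naive independence assumed).
priors = {"fibrous dysplasia": 0.30, "ossifying fibroma": 0.25, "osteosarcoma": 0.45}
likelihood = {  # P(finding present | disease), hypothetical values
    "fibrous dysplasia": {"painless swelling": 0.8, "ground-glass image": 0.9},
    "ossifying fibroma": {"painless swelling": 0.7, "ground-glass image": 0.3},
    "osteosarcoma":      {"painless swelling": 0.2, "ground-glass image": 0.1},
}

def posterior(findings):
    """Bayes' theorem with naive combination of independent findings."""
    scores = {}
    for disease, prior in priors.items():
        p = prior
        for f in findings:
            p *= likelihood[disease].get(f, 0.5)  # 0.5 when frequency unknown
        scores[disease] = p
    total = sum(scores.values())
    return {d: s / total for d, s in scores.items()}

print(posterior(["painless swelling", "ground-glass image"]))
```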
Abstract:
In some applications of case-based systems, the attributes available for indexing are better described as linguistic variables than given numerical treatment. In these applications, the concept of the fuzzy hypercube can be applied to give a geometrical interpretation of similarities among cases. This paper presents an approach that uses the geometrical properties of the fuzzy hypercube space to index and retrieve cases.
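A minimal sketch of the idea (the particular similarity measure and all membership values are illustrative assumptions, not necessarily the measure the paper derives): each case is a point in the fuzzy hypercube [0,1]^n, and retrieval ranks cases by a geometric similarity inside it:

```python
import numpy as np

def fuzzy_similarity(a, b):
    """Fuzzy Jaccard ratio M(A AND B) / M(A OR B) for points in [0,1]^n."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

case_base = {
    "case-1": np.array([0.9, 0.2, 0.7]),  # hypothetical membership degrees of
    "case-2": np.array([0.4, 0.8, 0.6]),  # linguistic attributes such as
    "case-3": np.array([0.1, 0.9, 0.3]),  # 'high temperature', 'low load', ...
}

def retrieve(query, k=1):
    """Return the k cases geometrically closest to the query point."""
    ranked = sorted(case_base.items(),
                    key=lambda kv: fuzzy_similarity(query, kv[1]),
                    reverse=True)
    return ranked[:k]

print(retrieve(np.array([0.8, 0.3, 0.6])))
```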
Abstract:
A ligand-based drug design study was performed on acetaminophen regioisomers as analgesic candidates, employing quantum chemical calculations at the DFT/B3LYP level of theory with the 6-31G* basis set. Several molecular descriptors were used, such as the highest occupied molecular orbital, ionization potential, O-H bond dissociation energies, and spin densities, which might be related to quenching the reactivity of the tyrosyl radical to give N-acetyl-p-benzosemiquinone-imine through initial electron withdrawal or hydrogen atom abstraction. Based on this in silico work, the most promising molecule, orthobenzamol, was synthesized and tested. The results expected from the theoretical prediction were confirmed in vivo using mouse models of nociception, such as the writhing, paw licking, and hot plate tests. All biological results suggested an antinociceptive activity mediated by opioid receptors. Furthermore, at 90 and 120 min, this new compound had an effect comparable to morphine, the standard drug for this test. Finally, the pharmacophore model is discussed according to the electronic properties derived from the quantum chemistry calculations.
Abstract:
In deterministic optimization, the uncertainties of the structural system (e.g. dimensions, model, material, loads) are not explicitly taken into account. Hence, the resulting optimal solutions may lead to reduced reliability levels. The objective of reliability-based design optimization (RBDO) is to optimize structures while guaranteeing that a minimum level of reliability, chosen a priori by the designer, is maintained. Since reliability analysis using the First Order Reliability Method (FORM) is an optimization procedure itself, RBDO (in its classical version) is a double-loop strategy: the reliability analysis (inner loop) and the structural optimization (outer loop). The coupling of these two loops leads to very high computational costs. To reduce the computational burden of RBDO based on FORM, several authors have proposed decoupling the structural optimization and the reliability analysis. These procedures may be divided into two groups: (i) serial single-loop methods and (ii) unilevel methods. The basic idea of serial single-loop methods is to decouple the two loops and solve them sequentially until some convergence criterion is met. Unilevel methods, on the other hand, employ different strategies to obtain a single optimization loop that solves the RBDO problem. This paper presents a review of such RBDO strategies. A comparison of the performance (computational cost) of the main strategies is presented for several variants of two benchmark problems from the literature and for a structure modeled using the finite element method.
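A minimal sketch of the classical double-loop structure on a toy problem (the limit state, cost function, and target reliability index are invented for illustration, and the random variables are already assumed to live in standard normal space; the paper's benchmarks are not reproduced):

```python
# Outer loop: minimize cost(d) subject to beta(d) >= beta_target.
# Inner loop (FORM): beta = distance from the origin to the limit state
# g(d, u) = 0 in standard normal u-space.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

BETA_TARGET = 3.0

def limit_state(d, u):
    # g <= 0 is failure; d are design variables, u standard normal variables
    return d[0] * d[1] - (4.0 + 0.5 * u[0] + 0.3 * u[1])

def form_beta(d):
    """Inner loop: beta = min ||u|| subject to g(d, u) = 0."""
    res = minimize(lambda u: u @ u, x0=np.zeros(2),
                   constraints={"type": "eq",
                                "fun": lambda u: limit_state(d, u)})
    return np.sqrt(res.fun)

cost = lambda d: d[0] + d[1]
# Finite-difference gradients over an inner solve: fine for a toy sketch,
# and exactly the nested cost that decoupling strategies try to avoid.
outer = minimize(cost, x0=np.array([3.0, 3.0]),
                 bounds=[(0.5, 10.0)] * 2,
                 constraints=NonlinearConstraint(form_beta, BETA_TARGET, np.inf))
print(outer.x, form_beta(outer.x))
```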
Abstract:
The main goal of this thesis is to facilitate the development of industrial automated systems by applying formal methods to ensure system reliability. A new formulation of the distributed diagnosability problem in terms of Discrete Event Systems theory and the automata framework is presented, which is then used to enforce the desired property of the system rather than just verify it. This approach tackles the state explosion problem with modeling patterns and new algorithms aimed at verifying the diagnosability property in the context of the distributed diagnosability problem. The concepts are validated with a newly developed software tool.
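The thesis's distributed formulation and enforcement algorithms are not reproduced here, but a minimal sketch of the standard building block, the diagnoser automaton, may help fix ideas. The plant, its events, and the single unobservable fault event 'f' are invented for illustration, and the indeterminate-cycle test that actually decides diagnosability is omitted:

```python
# Diagnoser states are sets of (plant state, label) pairs, label N (normal)
# or F (the fault has occurred); transitions fire on observable events.
transitions = {          # hypothetical plant: state -> {event: next state}
    0: {"a": 1, "f": 2},  # 'f' is the unobservable fault event
    1: {"b": 0},
    2: {"a": 3},
    3: {"b": 2},
}
OBSERVABLE = {"a", "b"}

def unobservable_reach(pairs):
    """Close a set of (state, label) pairs under unobservable events."""
    stack, seen = list(pairs), set(pairs)
    while stack:
        s, lab = stack.pop()
        for e, t in transitions.get(s, {}).items():
            if e not in OBSERVABLE:
                p = (t, "F" if e == "f" else lab)
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
    return frozenset(seen)

def diagnoser(initial=0):
    """Subset construction over (state, label) pairs."""
    start = unobservable_reach({(initial, "N")})
    states, todo, delta = {start}, [start], {}
    while todo:
        q = todo.pop()
        for e in OBSERVABLE:
            moved = {(transitions[s][e], lab) for s, lab in q
                     if e in transitions.get(s, {})}
            if not moved:
                continue
            nq = unobservable_reach(moved)
            delta[(q, e)] = nq
            if nq not in states:
                states.add(nq)
                todo.append(nq)
    return states, delta

states, delta = diagnoser()
for (q, e), nq in delta.items():
    print(sorted(q), f"--{e}->", sorted(nq))
```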
Abstract:
In this paper we present a new population-based method for the design of bone fixation plates. Standard pre-contoured plates are designed based on the mean shape of a certain population. We propose a computational process to design implants that reduces the amount of required intra-operative shaping, and thus the mechanical stresses applied to the plate. A bending and torsion model was used to measure and minimize the necessary intra-operative deformation. The method was applied and validated on a population of 200 femurs that was further augmented with a statistical shape model. The results showed a substantial reduction in the bending and torsion needed to shape the new design onto any bone in the population, compared with standard mean-based plates.
Abstract:
In the past few decades, integrated circuits have become a major part of everyday life. Every circuit that is created needs to be tested for faults so that faulty circuits are not sent to end users. The creation of these tests is time-consuming, costly, and difficult to perform on larger circuits. This research presents a novel method for fault detection and test pattern reduction in integrated circuitry under test. By leveraging the FPGA's reconfigurability and parallel processing capabilities, a speed-up in fault detection can be achieved over previous computer simulation techniques. This work presents the following contributions to the field of stuck-at fault detection. First, a new method for inserting faults into a circuit netlist: given any circuit netlist, our tool can insert multiplexers at the correct internal nodes to aid in fault emulation on reconfigurable hardware. Second, a parallel method of fault emulation: the benefit of the FPGA is not only its ability to implement any circuit, but also its ability to process data in parallel, which this research utilizes to create a more efficient emulation method that implements numerous copies of the same circuit in the FPGA. Third, a new method to organize the most effective faults: most methods for determining the minimum number of inputs that cover the most faults require sophisticated software programs that use heuristics, whereas by utilizing hardware this research is able to process data faster and use a simpler method for minimizing inputs.
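A minimal software sketch of the first two contributions, as a pure-Python simulation standing in for the FPGA emulation (the netlist, gate set, and fault list are illustrative): a fault-injection override plays the role of the inserted multiplexer, and each stuck-at fault is an independent circuit copy that would run in parallel on hardware:

```python
from itertools import product

# Hypothetical netlist in topological order: node -> (gate, fan-in nodes);
# primary inputs are a, b, c.
netlist = {
    "n1": ("AND", ["a", "b"]),
    "n2": ("OR",  ["n1", "c"]),
    "out": ("NOT", ["n2"]),
}
GATES = {"AND": lambda v: all(v), "OR": lambda v: any(v), "NOT": lambda v: not v[0]}

def evaluate(inputs, fault=None):
    """Evaluate the netlist; fault = (node, stuck_value) acts as the mux
    that overrides the node's value with a constant when enabled."""
    values = dict(inputs)
    for node, (gate, ins) in netlist.items():
        v = GATES[gate]([values[i] for i in ins])
        if fault and fault[0] == node:
            v = fault[1]                 # mux selects the stuck-at value
        values[node] = v
    return values["out"]

# One stuck-at-0 and one stuck-at-1 fault per internal node.
faults = [(n, s) for n in netlist for s in (0, 1)]
for pattern in product([0, 1], repeat=3):
    inputs = dict(zip("abc", pattern))
    good = evaluate(inputs)
    detected = [f for f in faults if evaluate(inputs, f) != good]
    print(pattern, "detects", detected)
```

The printed pattern-to-fault coverage table is exactly the information the third contribution compacts when selecting a minimal set of test patterns.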