20 results for Automated algorithms

in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo


Relevance: 30.00%

Abstract:

Among the ongoing attempts to enhance cognitive performance, an emerging and still underexplored avenue is offered by hemoencephalographic neurofeedback (HEG). This paper presents three related advances in HEG neurofeedback for cognitive enhancement: (a) a new HEG protocol for cognitive enhancement; (b) the results of independent measures of biological efficacy (EEG brain maps) extracted in three phases during a one-year follow-up case study; and (c) the results of the first controlled clinical trial of HEG, designed to assess the efficacy of the technique for cognitive enhancement in an adult, neurologically intact population. The new protocol was developed in a software environment that organizes digital signal-processing algorithms in a flowchart format. Brain maps were produced from 10 brain recordings. The clinical trial used a working memory test as its independent measure of achievement. The main conclusion of this study is that the technique appears to be clinically promising, and approaches to cognitive performance from a metabolic viewpoint should be explored further. However, it is particularly important to note that, to our knowledge, this is the world's first controlled clinical study on the matter, and it is still too early for a definitive evaluation of the technique.

Relevance: 20.00%

Abstract:

The design of a network is the solution to several engineering and science problems. Many network design problems are known to be NP-hard, and population-based metaheuristics such as evolutionary algorithms (EAs) have been widely investigated for them. Such optimization methods simultaneously generate a large number of potential solutions to explore the search space in breadth and, consequently, to avoid local optima. Obtaining a potential solution usually involves the construction and maintenance of several spanning trees or, more generally, spanning forests. To explore the search space efficiently, special data structures have been developed that provide operations for manipulating a population of spanning trees. For a tree with n nodes, the most efficient data structures in the literature require O(n) time to generate a new spanning tree that modifies an existing one and to store the new solution. We propose a new data structure, called the node-depth-degree representation (NDDR), and demonstrate that with this encoding, generating a new spanning forest requires O(√n) average time. Experiments with an EA based on the NDDR applied to large-scale instances of the degree-constrained minimum spanning tree problem show that the implementation adds only small constants and lower-order terms to the theoretical bound.
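
For context, the node-depth idea underlying the NDDR can be sketched compactly: a tree is stored as an array of (node, depth) pairs from a depth-first traversal, so any subtree occupies a contiguous slice that can be moved cheaply between forests. The sketch below is illustrative only; the names and example tree are ours, not the paper's, and the NDDR itself additionally tracks node degrees.

```python
# Illustrative node-depth encoding (the NDDR adds degree information on top).
def node_depth_array(adj, root):
    """Encode a tree (adjacency dict) as (node, depth) pairs from a DFS."""
    encoding, stack, seen = [], [(root, 0)], {root}
    while stack:
        node, depth = stack.pop()
        encoding.append((node, depth))
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, depth + 1))
    return encoding

# Example: a 5-node tree rooted at node 0.
tree = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1], 4: [2]}
print(node_depth_array(tree, 0))  # [(0, 0), (2, 1), (4, 2), (1, 1), (3, 2)]
```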

Relevance: 20.00%

Abstract:

There are several variants of the widely used Fuzzy C-Means (FCM) algorithm that support clustering data distributed across different sites; these methods have been studied under different names, such as collaborative and parallel fuzzy clustering. In this study, we augment two FCM-based clustering algorithms used to cluster distributed data by deriving constructive ways of determining their essential parameters (including the number of clusters) and by forming a set of systematically structured guidelines, such as selecting the specific algorithm depending on the nature of the data environment and the assumptions made about the number of clusters. A thorough complexity analysis covering space, time, and communication aspects is reported. A series of detailed numeric experiments illustrates the main ideas discussed in the study.
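
As a reference point, the centralized FCM that the distributed variants extend (by exchanging partition information between sites) can be sketched as follows. This is the conventional algorithm with the usual fuzzifier `m`, not the augmented methods of the study.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    """Centralized Fuzzy C-Means; returns cluster centers and memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # random fuzzy partition
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))              # standard membership update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Usage: centers, U = fcm(np.random.rand(200, 2), c=3)
```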

Relevance: 20.00%

Abstract:

This paper presents a survey of evolutionary algorithms designed for decision-tree induction. Most of the paper focuses on approaches that evolve decision trees as an alternative heuristic to the traditional top-down divide-and-conquer approach. Additionally, we present some alternative methods that use evolutionary algorithms to improve particular components of decision-tree classifiers. The paper's original contributions are the following. First, it provides an up-to-date overview that is fully focused on evolutionary algorithms and decision trees, without concentrating on any specific evolutionary approach. Second, it provides a taxonomy that covers both works that evolve decision trees and works that design decision-tree components with evolutionary algorithms. Finally, it provides a number of references describing applications of evolutionary algorithms for decision-tree induction in different domains. At the end of the paper, we address important issues and open questions that can be the subject of future research.
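
To make the contrast with top-down induction concrete, the following toy sketch evolves whole trees directly rather than growing them greedily. The tuple representation and the crude replace-the-worst-half operator are illustrative choices of ours, not any particular method from the survey; features are assumed scaled to [0, 1].

```python
import random

def random_tree(n_feat, depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.randint(0, 1)                     # leaf: class label
    return (random.randrange(n_feat), random.random(),  # (feature, threshold,
            random_tree(n_feat, depth - 1),             #  left, right)
            random_tree(n_feat, depth - 1))

def predict(tree, x):
    while isinstance(tree, tuple):
        f, t, left, right = tree
        tree = left if x[f] <= t else right
    return tree

def evolve(X, y, n_feat, pop=50, gens=30):
    fitness = lambda tr: sum(predict(tr, x) == c for x, c in zip(X, y))
    population = [random_tree(n_feat) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        # Keep the best half; refill with fresh random trees (a crude global
        # mutation -- real methods use crossover and subtree mutation).
        population = population[: pop // 2] + \
                     [random_tree(n_feat) for _ in range(pop - pop // 2)]
    return max(population, key=fitness)
```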

Relevance: 20.00%

Abstract:

Background: This paper addresses the prediction of the free energy of binding of a drug candidate with the enzyme InhA, associated with Mycobacterium tuberculosis. This problem arises in rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to predict the free energy of binding correctly, but also to provide a comprehensible model that can be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design-related applications, especially considering that decision trees are simple to understand, interpret, and validate. Several decision-tree induction algorithms are available for general use, but each has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision-tree accuracy, comprehensibility, and biological relevance. Results: The empirical analysis indicates that our method can automatically generate decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm in both accuracy and comprehensibility. In addition, we provide the biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this bioinformatics application. Conclusions: We conclude that automatically designing a decision-tree algorithm tailored to molecular docking data is a promising alternative for predicting the free energy of binding of a drug candidate with a flexible receptor.
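
The core idea, choosing induction-algorithm components per data set by validated search, can be roughly illustrated as below. The paper evolves full algorithms; here a plain grid over a few scikit-learn tree components merely stands in for that search, and all parameter choices are illustrative.

```python
from itertools import product
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def design_tree_inducer(X, y):
    """Pick decision-tree components for this data set by cross-validation."""
    best, best_score = None, -1.0
    for criterion, depth, min_leaf in product(
            ["gini", "entropy"], [3, 5, None], [1, 5, 20]):
        clf = DecisionTreeClassifier(criterion=criterion, max_depth=depth,
                                     min_samples_leaf=min_leaf, random_state=0)
        score = cross_val_score(clf, X, y, cv=5).mean()
        if score > best_score:
            best, best_score = clf, score
    return best, best_score
```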

Relevance: 20.00%

Abstract:

Traditional abduction imposes as a precondition the restriction that the background information may not entail the goal data. In first-order logic this precondition is, in general, undecidable. To avoid this problem, we present a first-order cut-based abduction method with KE-tableaux as its underlying inference system. This inference system allows for the automation of non-analytic proofs in a tableau setting, which permits a generalization of traditional abduction that avoids the undecidable precondition problem. After demonstrating the correctness of the method, we show how it can be dynamically iterated in a process that leads to the construction of non-analytic first-order proofs and, in some terminating cases, to refutations as well.

Relevance: 20.00%

Abstract:

A sensitive, selective, and reproducible in-tube solid-phase microextraction and liquid chromatography (in-tube SPME/LC-UV) method for the determination of lidocaine and its metabolite monoethylglycinexylidide (MEGX) in human plasma has been developed, validated, and applied to a pharmacokinetic study in pregnant women with gestational diabetes mellitus (GDM) subjected to epidural anesthesia. Important factors in the optimization of in-tube SPME performance are discussed, including the draw/eject sample volume, the number of draw/eject cycles, the draw/eject flow rate, sample pH, and the influence of plasma proteins. The limits of quantification of the in-tube SPME/LC method were 50 ng/mL for both lidocaine and its metabolite. The interday and intraday precision had coefficients of variation lower than 8%, and accuracy ranged from 95% to 117%. The response of the method was linear over a dynamic range of 50 to 5000 ng/mL, with correlation coefficients higher than 0.9976. The developed in-tube SPME/LC method was successfully used to analyze lidocaine and its metabolite in plasma samples from pregnant women with GDM subjected to epidural anesthesia for the pharmacokinetic study.

Relevance: 20.00%

Abstract:

Diffuse large B-cell lymphoma can be subclassified into at least two molecular subgroups by gene expression profiling: germinal center B-cell-like and activated B-cell-like diffuse large B-cell lymphoma. Several immunohistological algorithms have been proposed as surrogates for gene expression profiling at the level of protein expression, but their reliability has been controversial. Furthermore, in all reported algorithms, the proportion of cases classified as germinal center B-cell by immunohistochemistry is lower than the proportion of germinal center B-cell cases defined by gene expression profiling. We analyzed 424 cases of nodal diffuse large B-cell lymphoma with the panel of markers included in the three previously described algorithms: Hans, Choi, and Tally. To test whether the sensitivity of detecting germinal center B-cell cases could be improved, the germinal center B-cell marker HGAL/GCET2 was added to all three algorithms. Our results show that the inclusion of HGAL/GCET2 significantly increased the detection of germinal center B-cell cases in all three algorithms (P<0.001). The proportions of germinal center B-cell cases in the original algorithms were 27%, 34%, and 19% for Hans, Choi, and Tally, respectively; with the inclusion of HGAL/GCET2, these frequencies increased to 38%, 48%, and 35%. Therefore, HGAL/GCET2 protein expression may serve as a marker for germinal center B-cell type diffuse large B-cell lymphoma, and consideration should be given to including HGAL/GCET2 analysis in algorithms to better predict the cell of origin. These findings warrant further validation against gene expression profiles and clinical/therapeutic data. Modern Pathology (2012) 25, 1439-1445; doi: 10.1038/modpathol.2012.119; published online 29 June 2012
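
For readers unfamiliar with these classifiers, the Hans algorithm is a simple decision rule over marker positivity, and the modification studied here adds HGAL/GCET2 as a further germinal center marker. A sketch follows; the classic Hans rule is well established, but the exact placement of HGAL/GCET2 within the modified algorithms is our assumption for illustration.

```python
def hans(cd10, bcl6, mum1):
    """Classic Hans algorithm: 'GCB' vs 'non-GCB' from marker positivity."""
    if cd10:
        return "GCB"
    if bcl6 and not mum1:
        return "GCB"
    return "non-GCB"

def hans_with_hgal(cd10, bcl6, mum1, hgal):
    # Assumed modification: HGAL/GCET2 positivity also assigns GCB.
    return "GCB" if hgal else hans(cd10, bcl6, mum1)
```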

Relevance: 20.00%

Abstract:

Purpose: Automated weaning modes are available in some mechanical ventilators, but no studies have compared them to date. We compared the performance of 3 automated modes under standard and challenging situations. Methods: We used a lung simulator to compare 3 automated modes, adaptive support ventilation (ASV), mandatory rate ventilation (MRV), and Smartcare, in 6 situations: weaning success, weaning failure, weaning success with extreme anxiety, weaning success with Cheyne-Stokes respiration, weaning success with irregular breathing, and weaning failure with ineffective efforts. Results: The 3 modes correctly recognized the situations of weaning success and failure, even when anxiety or irregular breathing was present, but incorrectly recognized weaning success with Cheyne-Stokes respiration. MRV incorrectly recognized weaning failure with ineffective efforts. Time to pressure support (PS) stabilization was shorter for ASV (1-2 minutes for all situations) and MRV (1-7 minutes) than for Smartcare (8-78 minutes). ASV had higher rates of PS oscillations per 5 minutes (4-15) compared with Smartcare (0-1) and MRV (0-12), except when extreme anxiety was present. Conclusions: Smartcare, ASV, and MRV were equally able to recognize weaning success and failure, despite the presence of anxiety or irregular breathing, but performed incorrectly in the presence of Cheyne-Stokes respiration. PS behavior over time differs among the modes, with ASV showing larger and more frequent PS oscillations. Clinical studies are needed to confirm our results.

Relevance: 20.00%

Abstract:

This work applied genetic algorithms (GA) and particle swarm optimization (PSO) to cash balance management using the Miller-Orr model, a stochastic model that does not define a single ideal cash balance but rather an oscillation range between a lower bound, an ideal balance, and an upper bound. This paper proposes applying GA and PSO to minimize the total cost of cash maintenance by determining the lower-bound parameter of the Miller-Orr model, using assumptions presented in the literature. Computational experiments were carried out to develop and validate the models. The results indicate that both GA and PSO are applicable to determining the cash level from the lower bound, with the best results obtained by the PSO model, which had not previously been applied to this type of problem.
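
A rough sketch of the approach: given a lower bound L, the Miller-Orr return point and upper bound follow from the standard formulas, and a PSO searches over L. The cost function below is a hypothetical stand-in for the paper's total cost of cash maintenance, which we do not reproduce; all numeric parameters are illustrative.

```python
import random

def miller_orr(L, F, sigma2, k):
    """Standard Miller-Orr bounds given lower bound L, transfer cost F,
    daily cash-flow variance sigma2, and daily opportunity cost k."""
    Z = L + (3 * F * sigma2 / (4 * k)) ** (1 / 3)   # return point
    H = 3 * Z - 2 * L                               # upper bound
    return Z, H

def total_cost(L, F=10.0, sigma2=1e4, k=0.0004):
    # Hypothetical stand-in for the paper's total cost of cash maintenance.
    Z, H = miller_orr(L, F, sigma2, k)
    avg_balance = (4 * Z - L) / 3                   # Miller-Orr average balance
    return k * avg_balance + F * sigma2 / max(Z, 1e-9) ** 2

def pso(cost, lo=0.0, hi=1e5, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal single-dimension particle swarm optimization."""
    x = [random.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pbest = x[:]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n):
            v[i] = (w * v[i]
                    + c1 * random.random() * (pbest[i] - x[i])
                    + c2 * random.random() * (gbest - x[i]))
            x[i] = min(max(x[i] + v[i], lo), hi)
            if cost(x[i]) < cost(pbest[i]):
                pbest[i] = x[i]
        gbest = min(pbest, key=cost)
    return gbest

print(pso(total_cost))  # lower bound L minimizing the stand-in cost
```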

Relevance: 20.00%

Abstract:

The solution of structural reliability problems by the first-order reliability method requires optimization algorithms to find the smallest distance between the limit state surface and the origin of the standard Gaussian space. The Hasofer-Lind-Rackwitz-Fiessler (HLRF) algorithm, developed specifically for this purpose, has been shown to be efficient but not robust, as it fails to converge for a significant number of problems. On the other hand, recent developments in general (augmented Lagrangian) optimization techniques have not been tested in application to structural reliability problems. In the present article, three new optimization algorithms for structural reliability analysis are presented. One algorithm is based on the HLRF but uses a new differentiable merit function with Wolfe conditions to select the step length in the line search; it is shown that, under certain assumptions, the proposed algorithm generates a sequence that converges to the local minimizer of the problem. Two new augmented Lagrangian methods are also presented, which use quadratic penalties to solve nonlinear problems with equality constraints. The performance and robustness of the new algorithms are compared to the classic augmented Lagrangian method, to HLRF, and to the improved HLRF (iHLRF) algorithm in the solution of 25 benchmark problems from the literature. The new HLRF-based algorithm is shown to be more robust than HLRF or iHLRF and as efficient as iHLRF. The two augmented Lagrangian methods proposed herein are shown to be more robust and more efficient than the classical augmented Lagrangian method.
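
For reference, the classic HLRF iteration that these algorithms build on solves min ||u|| subject to g(u) = 0 by repeatedly projecting onto the linearized limit state. A minimal sketch with an illustrative linear limit state (the paper's merit-function and augmented Lagrangian variants are not reproduced here):

```python
import numpy as np

def hlrf(g, grad_g, u0, tol=1e-8, max_iter=100):
    """Classic HLRF iteration toward the design point in standard space."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gu, dg = g(u), grad_g(u)
        u_new = (dg @ u - gu) * dg / (dg @ dg)   # project onto linearized g=0
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u

# Illustrative linear limit state g(u) = 3 - u1 - u2.
g = lambda u: 3.0 - u[0] - u[1]
grad = lambda u: np.array([-1.0, -1.0])
u_star = hlrf(g, grad, [0.0, 0.0])
beta = np.linalg.norm(u_star)        # reliability index
print(u_star, beta)                  # -> [1.5, 1.5], beta ~= 2.121
```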

Relevance: 20.00%

Abstract:

Recent experimental evidence has suggested a neuromodulatory deficit in Alzheimer's disease (AD). In this paper, we present a new electroencephalogram (EEG) based metric to quantitatively characterize neuromodulatory activity. More specifically, the short-term EEG amplitude modulation rate-of-change (i.e., modulation frequency) is computed for five EEG subband signals. To test the performance of the proposed metric, a classification task was performed on a database of 32 participants partitioned into three groups of approximately equal size: healthy controls, patients diagnosed with mild AD, and patients with moderate-to-severe AD. To gauge the benefits of the proposed metric, performance results were compared with those obtained using EEG spectral peak parameters, which were recently shown to outperform other conventional EEG measures. Using a simple feature selection algorithm based on area-under-the-curve maximization and a support vector machine classifier, the proposed parameters yielded accuracy gains, relative to spectral peak parameters, of 21.3% when discriminating between the three groups and of 50% when the mild and moderate-to-severe groups were merged into one. The preliminary findings reported herein provide promising insights that automated tools may be developed to assist physicians in very early diagnosis of AD, as well as a tool for researchers to automatically characterize cross-frequency interactions and their changes with disease.
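
One plausible way to compute such an amplitude-modulation spectrum for an EEG subband is to band-pass filter, take the Hilbert envelope, and then analyze the envelope's own spectrum. The band edges and parameters below are illustrative, and the paper's exact rate-of-change metric may differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, welch

def modulation_spectrum(x, fs, band=(8.0, 13.0)):      # e.g. alpha band
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    subband = filtfilt(b, a, x)                        # subband signal
    envelope = np.abs(hilbert(subband))                # amplitude modulation
    return welch(envelope - envelope.mean(), fs=fs, nperseg=4 * int(fs))

fs = 256.0
t = np.arange(0, 30, 1 / fs)
# Synthetic test: a 10 Hz rhythm whose amplitude is modulated at 0.5 Hz.
x = (1 + 0.5 * np.sin(2 * np.pi * 0.5 * t)) * np.sin(2 * np.pi * 10 * t)
freqs, psd = modulation_spectrum(x, fs)
print(freqs[np.argmax(psd)])                           # ~0.5 Hz peak
```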

Relevance: 20.00%

Abstract:

Objective: To review the clinical characteristics of patients with neuromyelitis optica (NMO) and to compare their visual outcomes with those of patients with optic neuritis (ON) and multiple sclerosis (MS). Methods: Thirty-three patients with NMO and 30 patients with MS underwent neuro-ophthalmic evaluation, including automated perimetry. Visual function in the two groups was compared overall and specifically for eyes after a single episode of ON. Results: Visual function and average visual field (VF) mean deviation were significantly worse in the eyes of patients with NMO. After a single episode of ON, the VF was normal in only 2 of 36 eyes of patients with NMO, compared to 17 of 35 eyes of patients with MS (P < 0.001). The statistical analysis indicated that, after a single episode of ON, the odds ratio for having NMO was 6.0 (confidence interval [CI]: 1.6-21.9) when the VF mean deviation was worse than -20.0 dB, while the odds ratio for having MS was 16.0 (CI: 3.6-68.7) when it was better than -3.0 dB. Conclusion: Visual outcome was significantly worse in NMO than in MS. After a single episode of ON, suspicion of NMO should be raised in the presence of a severe residual VF deficit on automated perimetry and lowered in the case of complete VF recovery.

Relevance: 20.00%

Abstract:

Purpose: To evaluate the relationship between glaucomatous structural damage assessed by the Cirrus Spectral Domain OCT (SDOCT) and functional loss as measured by standard automated perimetry (SAP). Methods: Four hundred twenty-two eyes (78 healthy, 210 glaucoma suspects, 134 glaucomatous) of 250 patients were recruited from the longitudinal Diagnostic Innovations in Glaucoma Study and from the African Descent and Glaucoma Evaluation Study. All eyes underwent testing with the Cirrus SDOCT and SAP within a 6-month period. The relationship between parapapillary retinal nerve fiber layer (RNFL) thickness sectors and corresponding topographic SAP locations was evaluated using locally weighted scatterplot smoothing and regression analysis. SAP sensitivity values were evaluated on both linear and logarithmic scales. We also tested the fit of a model (Hood) of the structure-function relationship in glaucoma. Results: Structure was significantly related to function for all but the nasal thickness sector. The relationship was strongest for superotemporal RNFL thickness and inferonasal sensitivity (R(2) = 0.314, P < 0.001). The Hood model fitted the data relatively well, with 88% of eyes inside the 95% confidence interval predicted by the model. Conclusions: RNFL thinning measured by the Cirrus SDOCT was associated with corresponding visual field loss in glaucoma.