864 results for "Feature selection algorithm"
Abstract:
The objective of this thesis work is to propose an algorithm to detect faces in a digital image with a complex background. A great deal of work has already been done in the area of face detection, but a drawback of some face detection algorithms is their inability to detect faces with closed eyes or an open mouth. Facial features therefore form an important basis for detection. The current thesis work focuses on the detection of faces based on facial objects. The procedure is composed of three phases: a segmentation phase, a filtering phase and a localization phase. In the segmentation phase, the algorithm uses color segmentation to isolate human skin based on its chrominance properties. In the filtering phase, Minkowski-addition-based object removal (morphological operations) is used to remove non-skin regions. In the last phase, image processing and computer vision methods are used to find facial components within the skin regions. This method is effective at detecting face regions with closed eyes, an open mouth or a half-profile face. The experimental results demonstrated a detection accuracy of around 85.4% and a detection speed faster than that of the neural network method and other techniques.
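As a rough illustration of the first two phases, a chrominance-based skin mask followed by a morphological opening might be sketched as follows. The YCbCr conversion coefficients and the Cb/Cr threshold ranges are illustrative assumptions, not the thesis's actual values, and real implementations would use a larger structuring element:

```python
import numpy as np

def skin_mask(rgb, cr_range=(133, 173), cb_range=(77, 127)):
    """Classify pixels as skin by thresholding the Cb/Cr chrominance
    channels of the YCbCr colour space (ranges are illustrative)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cr_range[0] <= cr) & (cr <= cr_range[1]) &
            (cb_range[0] <= cb) & (cb <= cb_range[1]))

def dilate(mask):
    """Binary dilation with a 3x3 square structuring element
    (Minkowski addition of the mask with the element)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]; out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]; out[:, :-1] |= mask[:, 1:]
    out[1:, 1:] |= mask[:-1, :-1]; out[:-1, :-1] |= mask[1:, 1:]
    out[1:, :-1] |= mask[:-1, 1:]; out[:-1, 1:] |= mask[1:, :-1]
    return out

def erode(mask):
    """Binary erosion, expressed as the complement of the dilated complement."""
    return ~dilate(~mask)

def open_mask(mask):
    """Opening (erosion then dilation) removes small non-skin specks."""
    return dilate(erode(mask))
```

An opening removes isolated false positives while preserving large connected skin regions, which is why it is a natural filter before searching for facial components.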
Abstract:
We consider methods for estimating causal effects of treatment in the situation where the individuals in the treatment and control groups are self-selected, i.e., the selection mechanism is not randomized. In this case, a simple comparison of treated and control outcomes will not generally yield valid estimates of causal effects. The propensity score method is frequently used for the evaluation of treatment effects. However, this method is based on some strong assumptions that are not directly testable. In this paper, we present an alternative modeling approach to drawing causal inference, using a shared random-effects model, together with a computational algorithm for likelihood-based inference with such a model. With small numerical studies and a real data analysis, we show that our approach not only gives more efficient estimates but is also less sensitive to the model misspecifications we consider than the existing methods.
Abstract:
Applying optimization algorithms to PDE models of groundwater remediation can greatly reduce remediation cost. However, groundwater remediation analysis requires a computationally expensive simulation, so effective parallel optimization could greatly reduce the computational expense. The optimization algorithm used in this research is the Parallel Stochastic Radial Basis Function method. It is designed for global optimization of computationally expensive functions with multiple local optima and does not require derivatives. In each iteration of the algorithm, an RBF surrogate is updated with all the evaluated points in order to approximate the expensive function. The new RBF surface is then used to generate the next set of points, which are distributed to multiple processors for evaluation. The criteria for selecting the next function evaluation points are the estimated function value and the distance from all known points. Algorithms created for serial computing are not necessarily efficient in parallel, so the Parallel Stochastic RBF method is a different algorithm from its serial ancestor. The method was applied to two Groundwater Superfund Remediation sites, the Umatilla Chemical Depot and the former Blaine Naval Ammunition Depot. In this study, the formulation adopted treats pumping rates as decision variables in order to remove a plume of contaminated groundwater. Groundwater flow and contaminant transport are simulated with MODFLOW-MT3DMS. For both problems, computation takes a large amount of CPU time; the Blaine problem in particular requires nearly fifty minutes of simulation for a single set of decision variables. Thus, an efficient algorithm and powerful computing resources are essential in both cases. The results are discussed in terms of parallel computing metrics, i.e., speedup and efficiency. We find that with up to 24 parallel processors, the results of the Parallel Stochastic RBF algorithm are excellent, with speedup efficiencies close to or exceeding 100%.
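The per-iteration logic described above (fit an RBF to all evaluated points, then score candidate points by predicted value and by distance to the already-evaluated points) might be sketched as follows. The cubic kernel, the omission of a polynomial tail, and the fixed convex scoring weight are simplifying assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def rbf_fit(X, y):
    """Fit a cubic RBF interpolant s(z) = sum_i w_i * ||z - x_i||^3
    to the evaluated points (tiny ridge added for numerical safety)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.solve(d ** 3 + 1e-10 * np.eye(len(X)), y)
    return lambda Z: (np.linalg.norm(Z[:, None, :] - X[None, :, :],
                                     axis=-1) ** 3) @ w

def select_batch(X, y, candidates, n_workers, weight=0.5):
    """Score candidates by a convex combination of (normalised) predicted
    value and distance to the nearest evaluated point; return the
    n_workers best for simultaneous evaluation on parallel processors."""
    s = rbf_fit(X, y)
    pred = s(candidates)
    dist = np.linalg.norm(candidates[:, None, :] - X[None, :, :],
                          axis=-1).min(axis=1)
    vp = (pred - pred.min()) / (np.ptp(pred) + 1e-12)       # low value is good
    vd = 1 - (dist - dist.min()) / (np.ptp(dist) + 1e-12)   # large distance is good
    score = weight * vp + (1 - weight) * vd
    chosen = np.argsort(score)[:n_workers]
    return candidates[chosen]
```

Selecting a whole batch per iteration, rather than a single point, is what makes the stochastic RBF method usable on multiple processors at once.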
Abstract:
This paper presents a method for the automatic identification of dust devil tracks in MOC NA and HiRISE images of Mars. The method is based on Mathematical Morphology and is able to successfully process those images despite their differences in spatial resolution and scene size. A dataset of 200 images from the surface of Mars, representative of the diversity of those track features, was used for developing, testing and evaluating the method, comparing its outputs with manually produced reference images. The analysis showed a mean accuracy of about 92%. We also give examples of how to use the results to obtain information about dust devils, namely mean width, main direction of movement and coverage per scene. (c) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The main goal of Regression Testing (RT) is to reuse the test suite of the latest version of a software system in its current version, in order to maximize the value of the tests already developed and to ensure that old features continue to work after new changes. Even with reuse, it is common that not all tests need to be executed again. For this reason, the use of Regression Test Selection (RTS) techniques is encouraged; these aim to select, from all tests, only those that reveal faults, which reduces costs and makes this an interesting practice for testing teams. Several recent research works evaluate the quality of the selections performed by RTS techniques, identifying which one presents the best results as measured by metrics such as inclusion and precision. RTS techniques should search the System Under Test (SUT) for tests that reveal faults. However, because this problem has no viable general solution, they instead search for tests that reveal changes, where faults may occur. Nevertheless, these changes may modify the execution flow of the program itself, so that some tests no longer exercise the same code. In this context, this dissertation investigates whether changes performed in a SUT affect the quality of the test selection performed by an RTS technique and, if so, which characteristics of the changes cause errors that lead the RTS technique to include or exclude tests wrongly. For this purpose, a tool was developed in Java to automate the measurement of the inclusion and precision averages achieved by a regression test selection technique for a particular type of change. To validate this tool, an empirical study was conducted to evaluate the RTS technique Pythia, based on textual differencing, on a large web information system, analyzing the types of tasks performed to evolve the SUT.
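The two metrics mentioned can be stated compactly. The following is a generic sketch of the definitions commonly used in the RTS literature (inclusion: how many of the fault-revealing tests were selected; precision: how many of the selected tests reveal faults), not the dissertation's Java implementation:

```python
def inclusion(selected, fault_revealing):
    """Fraction of the fault-revealing tests that the technique selected.
    Conventionally 1.0 when there are no fault-revealing tests."""
    fr = set(fault_revealing)
    return len(fr & set(selected)) / len(fr) if fr else 1.0

def precision(selected, fault_revealing):
    """Fraction of the selected tests that actually reveal faults.
    Conventionally 1.0 when nothing was selected."""
    sel = set(selected)
    return len(sel & set(fault_revealing)) / len(sel) if sel else 1.0
```

A safe technique maximises inclusion (it misses no fault-revealing test), while a precise one avoids selecting tests that could not reveal a fault; the trade-off between the two is what the evaluations described above measure.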
Abstract:
This paper introduces an improved tabu-based vector optimization algorithm for the multiobjective optimal design of electromagnetic devices. The improvements include a division of the entire search process, a new method for fitness assignment, a novel scheme for the generation and selection of neighborhood solutions, and so forth. Numerical results on a mathematical function and an engineering multiobjective design problem demonstrate that the proposed method can produce virtually the exact Pareto front, in both the parameter and objective spaces, even though it uses only about 70% of the iterations required by its ancestor.
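As background for "virtually the exact Pareto front": the front is the set of non-dominated solutions, which for a minimisation problem can be extracted with the following generic filter (a textbook definition, not the paper's algorithm):

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated points of a minimisation problem:
    p dominates q if p <= q in every objective and p < q in at least one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(pts) if j != i)
        if not dominated:
            keep.append(i)
    return pts[keep]
```

This quadratic-time filter is fine for the archive sizes a tabu search typically maintains; faster sorting schemes exist for large populations.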
Abstract:
An algorithm is presented that finds the optimal long-term transmission expansion plan for all cases studied, including relatively large and complex networks. Knowledge of optimal plans is becoming more important in the emerging competitive environment, in which correct economic signals have to be sent to all participants. The paper presents a new specialised branch-and-bound algorithm for transmission network expansion planning. Optimality is obtained at a cost, however: the use of a transportation model for representing the transmission network, in which only Kirchhoff's current law is taken into account (the second law being relaxed). The expansion problem then becomes an integer linear program (ILP), which is solved by the proposed branch-and-bound method without any further approximations. To control combinatorial explosion, the branch-and-bound algorithm is specialised using specific knowledge about the problem, both for the selection of candidate problems and for the selection of the next variable for branching. Special constraints are also used to reduce the gap between the optimal integer solution (ILP) and the solution obtained by relaxing the integrality constraints (LP). Tests have been performed with small, medium and large networks available in the literature.
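A generic branch-and-bound over an LP relaxation, in the spirit described, might look like the sketch below. The capacity-style model (integer numbers of lines meeting a demand) is an illustrative stand-in, far simpler than the paper's specialised algorithm and its transportation model:

```python
import math
from scipy.optimize import linprog

def bb_expansion(cost, cap, demand, nmax):
    """Minimise cost @ n subject to cap @ n >= demand, 0 <= n_i <= nmax_i,
    n integer, by branch-and-bound on the LP relaxation (HiGHS solver)."""
    best_val, best_sol = math.inf, None
    stack = [list(zip([0] * len(cost), nmax))]   # one bounds list per node
    while stack:
        bounds = stack.pop()
        res = linprog(cost, A_ub=[[-c for c in cap]], b_ub=[-demand],
                      bounds=bounds)
        if not res.success or res.fun >= best_val:
            continue                              # infeasible or dominated node
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:                              # integral: new incumbent
            best_val = res.fun
            best_sol = [int(round(v)) for v in res.x]
            continue
        i = frac[0]                               # branch on a fractional variable
        lo, hi = bounds[i]
        down, up = list(bounds), list(bounds)
        down[i] = (lo, math.floor(res.x[i]))
        up[i] = (math.ceil(res.x[i]), hi)
        stack += [down, up]
    return best_val, best_sol
```

Pruning nodes whose relaxation bound already exceeds the incumbent is what keeps the combinatorial explosion in check; the paper's specialised candidate and branching rules refine exactly those two choices.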
Abstract:
Background: The increasing number of genomic sequences of bacteria makes it possible to select unique SNPs of a particular strain/species at the whole-genome level and thus to design specific primers based on those SNPs. The high similarity of genomic sequences among phylogenetically related bacteria requires the identification of the few loci in the genome that can serve as unique markers for strain differentiation. PrimerSNP attempts to identify reliable strain-specific markers, on which specific primers are designed for pathogen detection. Results: PrimerSNP is an online tool to design primers based on strain-specific SNPs for multiple strains/species of microorganisms at the whole-genome level. The allele-specific primers can distinguish query sequences of one strain from other homologous sequences in a standard PCR reaction. Additionally, PrimerSNP provides a feature for designing common primers that can amplify all the homologous sequences of multiple strains/species of microorganisms. PrimerSNP is freely available at http://cropdisease.ars.usda.gov/~primer. Conclusion: PrimerSNP is a high-throughput specific-primer generation tool for the differentiation of phylogenetically related strains/species. Experimental validation showed that the software had a successful prediction rate of 80.4-100% for strain-specific primer design.
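The core idea behind an allele-specific primer is to place the primer's 3' terminal base exactly on the discriminating SNP, so that extension fails on the non-target allele. A minimal sketch of that placement rule follows; the function and its parameters are illustrative, not PrimerSNP's actual interface, and a real design would also check melting temperature, GC content and secondary structure:

```python
def allele_specific_primer(target, other, length=20):
    """Return a forward primer of the given length whose 3' terminal base
    lies on the first SNP distinguishing `target` from `other`
    (the 3'-end mismatch blocks extension on the non-target allele)."""
    for i, (a, b) in enumerate(zip(target, other)):
        if a != b and i + 1 >= length:
            return target[i + 1 - length:i + 1]
    return None   # no usable SNP far enough from the 5' end
```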
Abstract:
The main purpose of this work is the development of computational tools to assist the on-line automatic detection of burn in the surface grinding process. Most of the parameters currently employed in burn recognition (DPO, FKS, DPKS, DIFP, among others) do not incorporate routines for the automatic selection of the grinding passes, therefore requiring the user's intervention to choose the active region. Several methods were employed for pass extraction; those with the best results are presented in this article. Tests carried out on a surface-grinding machine have shown the success of the algorithms developed for pass extraction. Copyright © 2007 by ABCM.
Abstract:
The risk for venous thromboembolism (VTE) in medical patients is high, but risk assessment is rarely performed because there is not yet a good method to identify candidates for prophylaxis. Purpose: To perform a systematic review of VTE risk factors (RFs) in hospitalized medical patients and to generate recommendations (RECs) for prophylaxis that can be implemented in practice. Data sources: A multidisciplinary group of experts from 12 Brazilian medical societies searched MEDLINE, Cochrane, and LILACS. Study selection: Two experts independently classified the evidence for each RF by its scientific quality in a standardized manner. A risk-assessment algorithm was created based on the results of the review. Data synthesis: Several VTE RFs have enough evidence to support RECs for prophylaxis in hospitalized medical patients (eg, increasing age, heart failure, and stroke). Other factors are considered adjuncts of risk (eg, varices, obesity, and infections). According to the algorithm, hospitalized medical patients ≥40 years old with decreased mobility and ≥1 RF should receive chemoprophylaxis with heparin, provided they have no contraindications. High prophylactic doses of unfractionated heparin or low-molecular-weight heparin must be administered and maintained for 6-14 days. Conclusions: A multidisciplinary group generated evidence-based RECs and an easy-to-use algorithm to facilitate VTE prophylaxis in medical patients. © 2007 Rocha et al, publisher and licensee Dove Medical Press Ltd.
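The decision rule stated in the abstract reduces to a simple predicate. The sketch below encodes only what the abstract says (the function name and argument types are illustrative; the published algorithm itself defines which conditions count as RFs and contraindications):

```python
def recommend_chemoprophylaxis(age, decreased_mobility, n_risk_factors,
                               contraindication=False):
    """Hospitalized medical patients >= 40 years old, with decreased
    mobility and at least one risk factor, receive heparin prophylaxis
    unless a contraindication is present."""
    return (age >= 40 and decreased_mobility
            and n_risk_factors >= 1 and not contraindication)
```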
Abstract:
This paper presents an algorithm to solve the network transmission system expansion planning problem using the DC model, which is a mixed non-linear integer programming problem. The major feature of this work is the use of a branch-and-bound (B&B) algorithm to directly solve mixed non-linear integer problems. An efficient interior point method is used to solve the non-linear programming problem at each node of the B&B tree. Tests with several known systems are presented to illustrate the performance of the proposed method. ©2007 IEEE.
Abstract:
This paper presents an approach to structural health monitoring (SHM) using adaptive filters. The experimental signals from different structural conditions, provided by piezoelectric actuators/sensors bonded to the test structure, are modeled by a discrete-time recursive least squares (RLS) filter. The biggest advantage of using an RLS filter is the possibility of performing an online SHM procedure, since the identification is also valid for non-stationary linear systems. An online damage-sensitive index is computed from the autoregressive (AR) portion of the coefficients, normalized by the square root of the sum of their squares. The proposed method is then applied in a laboratory test involving an aeronautical panel coupled with piezoelectric sensors/actuators (PZTs) in different positions. A hypothesis test employing the t-test is used to reach the damage decision. The proposed algorithm was able to identify and localize the damage simulated in the structure. The results show the applicability and the drawbacks of the method, and the paper concludes with suggestions to improve it. ©2010 Society for Experimental Mechanics Inc.
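A minimal sketch of the two ingredients named above, RLS identification of AR coefficients and the normalised-coefficient index, follows. The forgetting factor, initialisation and model order are illustrative choices, not the paper's experimental settings:

```python
import numpy as np

def rls_ar(signal, p, lam=0.999, delta=100.0):
    """Identify AR(p) coefficients of a signal with the recursive
    least-squares algorithm (forgetting factor lam), suitable for
    online use on slowly varying systems."""
    theta = np.zeros(p)            # AR coefficient estimates
    P = delta * np.eye(p)          # inverse-correlation matrix
    for n in range(p, len(signal)):
        phi = signal[n - p:n][::-1]            # regressor of past samples
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        e = signal[n] - phi @ theta            # a priori prediction error
        theta = theta + k * e
        P = (P - np.outer(k, phi @ P)) / lam
    return theta

def damage_index(theta):
    """AR coefficients normalised by the square root of the sum of
    their squares, as used for the damage-sensitive feature."""
    return theta / np.sqrt(np.sum(theta ** 2))
```

Because the index is a unit vector of AR coefficients, changes in structural condition show up as changes in its direction, which the t-test can then flag.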
Abstract:
Body size is directly related to the productive and reproductive performance of beef cattle raised under free-range conditions. In an attempt to better plan selection criteria, avoiding extremes in body size, this study estimated the heritabilities and genetic correlations of yearling hip height (YH) and mature hip height (MH) with selection indices obtained at weaning (WI) and yearling (YI) and with mature weight (MW). Data from 102,373 Nelore animals born between 1984 and 2010, belonging to 263 farms that participate in genetic evaluation programmes of beef cattle conducted in Brazil and Paraguay, were used. The (co)variance components and genetic parameters were estimated by Bayesian inference in a multi-trait analysis using an animal model. The mean heritabilities for YH, MH and MW were 0.56 ± 0.06, 0.47 ± 0.02 and 0.42 ± 0.02, respectively. The genetic correlation of YH with WI (0.13 ± 0.01) and YI (0.11 ± 0.01) was practically zero, whereas a higher correlation was observed with MW (0.22 ± 0.03). Positive genetic correlations of medium magnitude were estimated between MH and WI and YI (0.23 ± 0.01 and 0.43 ± 0.02, respectively). On the other hand, a high genetic correlation (0.68 ± 0.03) was observed between the indicator traits of mature body size (MH and MW). Considering the top 20% of sires (896 sires) in terms of breeding values for the yearling index, the rank correlation between the sires' breeding values for MH and MW was 0.62. In general, the results indicate that selection based on WI and YI should not lead to important changes in YH. However, an undesired correlated response in mature cow height is expected, particularly when selection is performed using YI. Therefore, changes in the body structure of Nelore females can be obtained when MH and MW are used as selection criteria for cows. © 2012 Institute of Plant Genetics, Polish Academy of Sciences, Poznan.
Abstract:
This article presents an application of a method for the simultaneous spectrophotometric determination of divalent copper, manganese and zinc ions to the analysis of a polyvitamin/polymineral drug. The method uses 4-(2-pyridylazo)resorcinol (PAR), multivariate calibration and variable selection techniques, and was optimized using the successive projections algorithm (SPA) and the genetic algorithm (GA) to choose the most informative wavelengths for the analysis. With these techniques, it was possible to build calibration models by multiple linear regression (MLR-SPA and MLR-GA). The results were compared with principal component regression (PCR) and partial least squares (PLS) models. The root mean square error of prediction (RMSEP) shows that the models have similar performance in predicting the concentrations of the three analytes in the drug. However, the MLR models are simpler, since they require a much smaller number of wavelengths, and are easier to interpret than those based on latent variables.
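Once the informative wavelengths are chosen, the MLR calibration and the RMSEP figure of merit are straightforward. A minimal sketch on synthetic data (illustrative, not the article's actual spectra or wavelength sets) is:

```python
import numpy as np

def mlr_fit(A, C):
    """Least-squares multiple linear regression from absorbances at the
    selected wavelengths (columns of A) to analyte concentrations C."""
    X = np.column_stack([np.ones(len(A)), A])   # intercept term
    B, *_ = np.linalg.lstsq(X, C, rcond=None)
    return B

def mlr_predict(B, A):
    """Predict concentrations for new absorbance rows A."""
    return np.column_stack([np.ones(len(A)), A]) @ B

def rmsep(C_true, C_pred):
    """Root mean square error of prediction, per analyte, used to
    compare the calibration models."""
    return np.sqrt(np.mean((C_true - C_pred) ** 2, axis=0))
```

The appeal of MLR over PCR/PLS noted in the abstract is visible here: the model is a plain coefficient matrix over a handful of wavelengths, with no latent variables to interpret.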