931 results for many-objective problems
Abstract:
Many hearing problems go unnoticed by parents and teachers, which compromises children's learning, especially in the school environment. Hearing screening programs can therefore be used to detect and subsequently diagnose schoolchildren, so that the impact of possible auditory sequelae on the child's school performance can be prevented or minimized. Today we can rely on programs that allow better follow-up of populations in need of preventive and curative care, and hearing is a very important aspect that can be assessed when such programs are put into practice. The National Program for the Reorientation of Professional Training in Health (Pró-Saúde), which aimed to reorient professional training, sought to integrate teaching and service and to promote primary care through a comprehensive approach to the health-disease process. External settings can be used by university students and professors to put into practice actions that humanize health care and make it comprehensive, by articulating preventive and curative, individual and collective health actions and services. The school is considered one of the environments in which this work can be carried out. The School Health Program (PSE) opens up the school environment with the purpose of contributing to the integral education of students in the public basic education network through prevention, promotion and health care actions. This retrospective cross-sectional study had as its main objective to characterize the audiological profile of students from a public school in the municipality of Bauru, SP, relying on the integration of health and education professionals in the school environment, on the basis of the programs mentioned above. Hearing screening was performed with the following procedures: immittance testing, visual inspection of the external acoustic meatus, distortion-product otoacoustic emissions, and pure-tone threshold audiometry. Of the 652 students assessed, aged 10 to 18 years, the great majority (97.1%) presented normal hearing. Some temporary hearing alteration was found in 2.9% of this population, with the exception of a single participant, who had sensorineural hearing loss. Although most of the children and adolescents had normal hearing, what most underscores the importance of this work is the need for hearing screening in school settings and, above all, the follow-up of this age group, since studies on it are scarce. Even though the few hearing alterations found were transitory, it is precisely these that interfere with good school performance, among other factors.
Abstract:
Kozlov & Maz'ya (1989, Algebra Anal., 1, 144–170) proposed an alternating iterative method for solving Cauchy problems for general strongly elliptic and formally self-adjoint systems. However, in many applied problems, operators appear that do not satisfy these requirements, e.g. Helmholtz-type operators. Therefore, in this study, an alternating procedure for solving Cauchy problems for self-adjoint non-coercive elliptic operators of second order is presented. A convergence proof of this procedure is given.
Abstract:
Many classical as well as modern optimization techniques exist. One such modern method, belonging to the field of swarm intelligence, is ant colony optimization. This relatively new optimization concept involves the use of artificial ants and is inspired by the way real ants search for food. In this thesis, a novel ant colony optimization technique for continuous domains was developed. The goal was to provide improvements in computing time and robustness when compared to other optimization algorithms. Optimization function spaces can have extreme topologies and are therefore difficult to optimize. The proposed method effectively searched the domain and solved difficult single-objective optimization problems. The developed algorithm was run on numerous classic test cases for both single- and multi-objective problems. The results demonstrate that the method is robust and stable, and that the number of objective function evaluations is comparable to other optimization algorithms.
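The abstract does not detail the algorithm, so the sketch below illustrates one common way of carrying ant colony optimization over to continuous domains: an archive-based scheme with Gaussian sampling in the spirit of ACO_R. It is not necessarily the variant developed in the thesis, and the parameters (archive_size, n_ants, q, xi) and the sphere objective in the usage line are illustrative assumptions.

```python
import numpy as np

def continuous_aco(f, dim, bounds, archive_size=10, n_ants=20, q=0.1, xi=0.85,
                   iters=200, seed=0):
    """Archive-based continuous ACO sketch (ACO_R-style); illustrative only."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # Initialize the solution archive with random points and evaluate them.
    archive = rng.uniform(lo, hi, size=(archive_size, dim))
    fitness = np.apply_along_axis(f, 1, archive)
    for _ in range(iters):
        order = np.argsort(fitness)
        archive, fitness = archive[order], fitness[order]
        # Rank-based weights: better-ranked solutions guide more ants.
        ranks = np.arange(1, archive_size + 1)
        w = np.exp(-(ranks - 1) ** 2 / (2 * (q * archive_size) ** 2))
        p = w / w.sum()
        new_sols = np.empty((n_ants, dim))
        for a in range(n_ants):
            k = rng.choice(archive_size, p=p)          # choose a guiding solution
            # Per-dimension standard deviation from the spread of the archive.
            sigma = xi * np.mean(np.abs(archive - archive[k]), axis=0)
            new_sols[a] = np.clip(rng.normal(archive[k], sigma + 1e-12), lo, hi)
        new_fit = np.apply_along_axis(f, 1, new_sols)
        # Keep the best archive_size solutions overall.
        pool = np.vstack([archive, new_sols])
        pool_fit = np.concatenate([fitness, new_fit])
        best = np.argsort(pool_fit)[:archive_size]
        archive, fitness = pool[best], pool_fit[best]
    return archive[0], fitness[0]

# Usage: minimize a simple sphere function in 5 dimensions.
best_x, best_f = continuous_aco(lambda x: float(np.sum(x ** 2)), dim=5, bounds=(-5.0, 5.0))
```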
Abstract:
Multi-objective problems may have many optimal solutions, which together form the Pareto optimal set. A class of heuristic algorithms for those problems, in this work called optimizers, produces approximations of this optimal set. The approximation set kept by the optimizer may be limited or unlimited. The benefit of using an unlimited archive is the guarantee that all nondominated solutions generated during the process will be saved. However, due to the large number of solutions that can be generated, keeping such an archive and frequently comparing new solutions to the stored ones may demand a high computational cost. The alternative is to use a limited archive. The problem that emerges from this situation is the need to discard nondominated solutions when the archive is full. Some techniques have been proposed to handle this problem, but investigations show that none of them can reliably prevent the deterioration of the archives. This work investigates a technique to be used together with ideas previously proposed in the literature to deal with limited archives. The technique consists of keeping discarded solutions in a secondary archive and periodically recycling these solutions, bringing them back into the optimization. Three recycling methods are presented. In order to verify whether these ideas are capable of improving the archive content during the optimization, they were implemented together with other techniques from the literature. A computational experiment with the NSGA-II, SPEA2, PAES, MOEA/D and NSGA-III algorithms, applied to many classes of problems, is presented. The potential and the difficulties of the proposed techniques are evaluated based on statistical tests.
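As a rough illustration of the recycling idea described above (keep discarded nondominated solutions in a secondary archive and periodically bring them back into the search), here is a minimal sketch. The dominance test, the random-discard policy and the recycling rule are simplifying assumptions; the three recycling methods of the thesis are not reproduced.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

class RecyclingArchive:
    def __init__(self, capacity):
        self.capacity = capacity
        self.main = []       # bounded archive of nondominated objective vectors
        self.recycled = []   # secondary archive of discarded solutions

    def add(self, sol):
        if any(dominates(kept, sol) for kept in self.main):
            return
        # Move newly dominated members of the main archive to the secondary one.
        dominated = [kept for kept in self.main if dominates(sol, kept)]
        self.main = [kept for kept in self.main if not dominates(sol, kept)]
        self.recycled.extend(dominated)
        self.main.append(sol)
        if len(self.main) > self.capacity:
            # Simplified discard policy: drop a random member (real optimizers
            # use crowding distance, hypervolume contribution, etc.).
            victim = self.main.pop(random.randrange(len(self.main)))
            self.recycled.append(victim)

    def recycle(self, k):
        """Bring up to k stored solutions back into the optimization."""
        random.shuffle(self.recycled)
        back, self.recycled = self.recycled[:k], self.recycled[k:]
        return back
```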
Abstract:
In today's society, IT companies often have a hard time estimating changed requirements. This undermines clients' confidence and is one of the main reasons why the process needs to be improved. The goal of this study was to find out which problems regarding this issue are most common in IT companies that work with agile software development. By analyzing one IT company through a SWOT analysis and a Pareto analysis, the most common problems were identified. The SWOT analysis was built from interviews with selected employees, to gain a better understanding of the problems the company is facing. The Pareto analysis was based on a survey sent out to many different employees in order to prioritize the problems; the survey was distributed widely to obtain more objective input. The study showed that many different problems needed attention. The most important ones were that communication with the client regarding requirements needed to be improved, that better internal communication between departments needed to be established, that a method to quickly adapt to and estimate requirement changes needed to be implemented, and finally that a method was needed for deciding which key employees should attend the planning of the program backlog. These problems were then studied through interviews with other IT companies and through a literature study. The conclusions drawn were that the client needs to be involved and kept up to date throughout the whole project. Changed requirements need to be continuously monitored, communicated and acted upon. High standards need to be set early with the client in order to obtain as clear a picture of the requirements as possible. Many different parties need to take part in the planning of the program backlog before the project starts. The client needs to be aware that requirement changes will arise and that the first estimate therefore may not be definitive. As long as the client is kept up to date and involved throughout the whole project, and problems are detected and addressed early, changing requirements should not be a major problem. This is, after all, the purpose of being agile.
Abstract:
The release of nitrogen compounds into water bodies can cause many environmental problems, so treating wastewater, such as sewage, in order to remove not only organic matter but also nitrogen has been studied for a few decades. In this context, the objective of this study was to evaluate the performance of a continuous-flow structured-bed reactor with recirculation in removing organic matter and nitrogen from wastewater under different intermittent aeration (AI) cycles, and to evaluate the influence of these cycles on the development of nitrifying bacteria (ammonia-oxidizing bacteria, BOA, and nitrite-oxidizing bacteria, BON) and denitrifying bacteria (DESN), both attached to the support material (MS) and in suspension (effluent, EF, and sludge, LD). The reactor had a working volume of 9.4 L. Polyurethane foam, cut and fixed on PVC rods, was used as support material. Three combinations of aeration (AE) and non-aeration (AN) periods were tested: Phase 1 (4 h AE / 2 h AN), Phase 2 (2 h AE / 1 h AN) and Phase 3 (2 h AE / 2 h AN). In all phases the hydraulic retention time was kept at 16 h and the effluent was recirculated at a rate of 3 times the inflow. The following parameters were analyzed: pH, total alkalinity, temperature, chemical oxygen demand (COD), biochemical oxygen demand (BOD), total Kjeldahl nitrogen (NKT), ammonia (N-NH4+), nitrite (N-NO2-) and nitrate (N-NO3-). The concentrations of BOA, BON and DESN were determined as the most probable number per gram of volatile suspended solids (NMP.gSSV-1). In Phase 1, the removal percentages of NKT, N-NH4+ and total nitrogen (NT) were 76±10%, 70±21% and 67±10%, respectively. In Phase 2 they were 80±15% (NKT), 86±15% (N-NH4+) and 68±9% (NT), and in Phase 3, 58±20%, 72±28% and 41±6% for NKT, N-NH4+ and NT, respectively. The denitrification efficiency in Phase 3 was over 70%, indicating that simultaneous nitrification and denitrification (NDS) occurred in the reactor. Total COD removal percentages were 88±4% in Phase 1, 94±7% in Phase 2 and 90±11% in Phase 3. Multivariate ANOVA applied to the NMP.gSSV-1 data indicated a significant difference (F: 20.2, p < 0.01) between the concentrations of the organisms analyzed under the different AI cycles, but the differences in NMP.gSSV-1 depended not on isolated factors but on which medium, phase and group were being analyzed. From the results it is concluded that the system is efficient in terms of nitrogen and organic matter removal, and that the phase with the highest availability of dissolved oxygen (DO) and the highest C/N ratio (Phase 2) was the one that yielded the lowest effluent concentrations of organic matter and N-NH4+. There was a significant difference between the concentrations (NMP.100mL-1) of the organisms analyzed (BOA, BON and DESN), but this difference did not depend on isolated factors alone, but rather on which medium (MS, EF or LD), phase (1, 2 or 3) and group (BOA, BON and DESN) was being considered.
Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Geociências, 2015.
Abstract:
Diabetic Retinopathy (DR) is a complication of diabetes that can lead to blindness if not readily discovered. Automated screening algorithms have the potential to improve identification of patients who need further medical attention. However, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions, such as microaneurysms, cotton-wool spots and hard exudates. BoVW makes it possible to bypass the need for pre- and post-processing of the retinographic images, as well as the need for specific ad hoc techniques for the identification of each type of lesion. An extensive evaluation of the BoVW model was performed using three large retinal image datasets (DR1, DR2 and Messidor) with different resolutions, collected by different healthcare personnel. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to use different algorithms for each lesion, reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor, and mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification achieved an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions), applying a cross-dataset validation protocol. To assess the accuracy for detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2 ± 2.0%, outperforming current methods. These results indicate that, for retinal image classification tasks in clinical practice, BoVW equals and, in some instances, surpasses results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors.
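For readers unfamiliar with the mid-level coding step mentioned above, the sketch below shows a generic bag-of-visual-words encoding: a k-means codebook learned from local descriptors, soft assignment, and max pooling into an image-level feature. It is a simplified stand-in that assumes precomputed SURF-like descriptors; it does not reproduce the paper's semi-soft coding scheme or its maximum-margin classifier.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptor_sets, n_words=300, seed=0):
    """Learn a visual vocabulary from local descriptors pooled over training images."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(all_desc)

def encode_max_pooling(descriptors, codebook, sigma=1.0):
    """Soft-assign each local descriptor to the visual words, then max-pool per word."""
    d = codebook.transform(descriptors)                 # distances, shape (n_desc, n_words)
    soft = np.exp(-(d ** 2) / (2 * sigma ** 2))         # soft assignment weights
    soft /= soft.sum(axis=1, keepdims=True)
    return soft.max(axis=0)                             # max pooling -> image-level feature

# Usage with hypothetical precomputed descriptors (e.g., 64-D SURF-like vectors per image):
rng = np.random.default_rng(0)
train_desc = [rng.normal(size=(200, 64)) for _ in range(10)]   # placeholder data
codebook = build_codebook(train_desc, n_words=50)
image_feature = encode_max_pooling(train_desc[0], codebook)
```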
Abstract:
Background: Neutrophils are the most abundant leukocytes in peripheral blood and represent one of the most important elements of innate immunity. Recent subcellular proteomic studies have focused on the identification of human neutrophil proteins in various subcellular membrane and granular fractions. Although there are relatively few studies dealing with the analysis of the total extract of human neutrophils, many biological problems, such as the role of chemokines, adhesion molecules, and other activating inputs involved in neutrophil responses and signaling, can be approached on the basis of the identification of the total cellular proteins. Results: Using gel-LC-MS/MS, 251 total cellular proteins were identified from resting human neutrophils. This is more than ten times the number of proteins identified in an initial proteome analysis of human neutrophils and almost five times the number of proteins identified in the first 2-DE map of extracts of rat polymorphonuclear leukocytes. Most of the proteins identified in the present study are well known, but some of them, such as neutrophil-secreted proteins and centaurin beta-1, a cytoplasmic protein involved in the regulation of NF-kappa B activity, are described here for the first time. Conclusion: The present report provides new information about the protein content of human neutrophils. Importantly, our study resulted in the discovery of a series of proteins not previously reported to be associated with human neutrophils. These data are relevant to the investigation of comparative pathological states and models for novel classes of pharmaceutical drugs that could be useful in the treatment of inflammatory disorders in which neutrophils participate.
Abstract:
Direct and simultaneous observation of root growth and plant water uptake is difficult because soils are opaque. X-ray imaging techniques such as projection radiography or computed tomography (CT) offer a partial alternative to such limitations. Nevertheless, there is a trade-off between resolution, a large field of view and three-dimensionality: with the current state of the technology, it is possible to have any two. In this study, we used X-ray transmission through thin-slab systems to monitor the transient saturation fields that develop around roots as plants grow. Although restricted to two dimensions, this approach offers a large field of view together with high spatial and dynamic resolution. To illustrate the potential of this technology, we grew peas in 1 cm thick containers filled with soil and imaged them at regular intervals. The dynamics of both the root growth and the water content field that developed around the roots could be conveniently monitored. Compared to other techniques such as X-ray CT, our system is relatively inexpensive and easy to implement. It can potentially be applied to study many agronomic problems, such as issues related to the impact of soil constraints (physical, chemical or biological) on root development.
Abstract:
Concrete quality can be controlled through the flow behavior of the cement paste, which is related to the dispersion of the cement particles. One of the greatest advances in concrete technology has been the development of admixtures. One of these types of admixtures, superplasticizers (SP), makes it possible to obtain better dispersion of the cement particles, producing pastes with high fluidity. With the development of high-strength and high-performance concretes, superplasticizers have become indispensable. Superplasticizers are adsorbed onto the cement particles, and this adsorption depends on the clinker composition of the cement and on the type of SP used. With the widespread use of water-reducing admixtures, several cement/admixture compatibility problems have arisen. This investigation, dedicated to superplasticizers as strong water reducers, aimed to study which properties could influence their compatibility/robustness with cement. It also sought to gain experience with the analytical techniques used to characterize admixtures. One type of cement and two types of superplasticizers (poly(ether carboxylates) and poly(naphthalene sulfonates)) available on the Portuguese market were used. Keeping the same water/cement (w/c) ratio, the aim was to determine the chemical nature, degree of functionalization, counter-ion content and type, and sulfate/sulfonate content of the admixture, as well as the behavior of the superplasticizers in the cement pastes, in order to identify compatibility indicators between cements and superplasticizers. It was found that the chemical nature, the degree of functionalization and the amount of superplasticizer consumed influence the pastes. On the superplasticizer side, the compatibility indicators appear to be related to the length of the ether side chain and to the CO2R/CO2- ratio. Changing the moment at which the admixture is added influences cement/admixture compatibility, being beneficial for the poly(ether carboxylates) and detrimental for the poly(naphthalene sulfonate).
Abstract:
Constrained nonlinear optimization problems can be solved using penalty or barrier functions. This strategy, based on solving unconstrained problems derived from the original one, has proven effective, particularly when used with direct search methods. An alternative for solving such problems is the filter method. The filter method, introduced by Fletcher and Leyffer in 2002, has been widely used to solve problems of the type mentioned above. These methods use a strategy different from barrier or penalty functions: the latter define a new function that combines the objective function and the constraints, while filter methods treat the optimization problem as a bi-objective problem that minimizes the objective function and a function that aggregates the constraint violations. Motivated by the work of Audet and Dennis in 2004, which used the filter method with derivative-free algorithms, the authors developed works in which other direct search methods were used, combining their potential with the filter method. More recently, a new variant of these methods was presented, in which some alternative constraint aggregation functions for the construction of filters were proposed. This paper presents a variant of the filter method, more robust than the previous ones, implemented with a safeguard procedure in which the values of the objective function and of the constraints are interlinked and not treated completely independently.
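To make the bi-objective view concrete: a filter stores pairs (h, f), where h aggregates the constraint violation and f is the objective value, and a trial point is accepted only if no stored pair is at least as good in both components. The sketch below is a minimal, generic filter acceptance test under that definition; the squared-violation aggregation and the absence of envelope or margin terms are simplifying assumptions, and the paper's safeguard procedure is not reproduced.

```python
def violation(x, constraints):
    """Aggregate constraint violation h(x) for constraints written as g_i(x) <= 0."""
    return sum(max(0.0, g(x)) ** 2 for g in constraints)

class Filter:
    def __init__(self):
        self.entries = []   # list of accepted (h, f) pairs

    def acceptable(self, h, f):
        # A trial point is rejected if some filter entry is at least as good
        # in both constraint violation and objective value.
        return not any(he <= h and fe <= f for he, fe in self.entries)

    def add(self, h, f):
        if self.acceptable(h, f):
            # Drop entries dominated by the new pair, then store it.
            self.entries = [(he, fe) for he, fe in self.entries
                            if not (h <= he and f <= fe)]
            self.entries.append((h, f))
            return True
        return False

# Usage: accept or reject trial points produced by a direct search step.
f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2
constraints = [lambda x: x[0] + x[1] - 2]      # encodes x0 + x1 <= 2
filt = Filter()
for x in [(0.0, 0.0), (1.5, 1.5), (0.5, 1.5)]:
    filt.add(violation(x, constraints), f(x))
```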
Abstract:
In the last twenty years, genetic algorithms (GAs) have been applied in a plethora of fields, such as control, system identification, robotics, planning and scheduling, image processing, and pattern and speech recognition (Bäck et al., 1997). In robotics, the problems of trajectory planning, collision avoidance and manipulator structure design considering a single criterion have been solved using several techniques (Alander, 2003). Most engineering applications, however, require the optimization of several criteria simultaneously. Often the problems are complex, include discrete and continuous variables, and there is no prior knowledge about the search space. These kinds of problems are considerably more complex, since they consider multiple design criteria simultaneously within the optimization procedure. This is known as multi-criteria (or multi-objective) optimization, which has been addressed successfully through GAs (Deb, 2001). The overall aim of multi-criteria evolutionary algorithms is to achieve a set of non-dominated optimal solutions known as the Pareto front. At the end of the optimization procedure, instead of a single optimal (or near-optimal) solution, the decision maker can select a solution from the Pareto front. Some of the key issues in multi-criteria GAs are: i) the number of objectives, ii) obtaining a Pareto front as wide as possible, and iii) achieving a uniformly spread Pareto front. Indeed, multi-objective techniques using GAs have been gaining relevance as a research area. In 1989, Goldberg suggested the use of a GA to solve multi-objective problems, and since then other researchers have developed new methods, such as the multi-objective genetic algorithm (MOGA) (Fonseca & Fleming, 1995), the non-dominated sorting genetic algorithm (NSGA) (Deb, 2001), and the niched Pareto genetic algorithm (NPGA) (Horn et al., 1994), among several other variants (Coello, 1998). In this work the trajectory planning problem considers: i) robots with 2 and 3 degrees of freedom (dof), ii) the inclusion of obstacles in the workspace, and iii) up to five criteria used to qualify the evolving trajectory, namely the joint traveling distance, joint velocity, end-effector Cartesian distance, end-effector Cartesian velocity and energy involved. These criteria are used to minimize the joint and end-effector traveled distance, the trajectory ripple and the energy required by the manipulator to reach the destination point. Bearing these ideas in mind, the paper addresses the planning of robot trajectories, meaning the development of an algorithm to find a continuous motion that takes the manipulator from a given starting configuration to a desired end position without colliding with any obstacle in the workspace. The chapter is organized as follows. Section 2 describes trajectory planning and several approaches proposed in the literature. Section 3 formulates the problem, namely the representation adopted to solve the trajectory planning problem and the objectives considered in the optimization. Section 4 studies the algorithm's convergence. Section 5 studies a 2R manipulator (i.e., a robot with two rotational joints/links) when the trajectory optimization considers two and five objectives. Sections 6 and 7 present the results for the 3R redundant manipulator with five goals and for other complementary experiments, respectively. Finally, Section 8 draws the main conclusions.
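As a small illustration of the Pareto-front notion this introduction relies on, the sketch below extracts the non-dominated set from a population evaluated on several trajectory criteria (for example joint traveled distance, trajectory ripple and energy). The cost values and the brute-force dominance check are illustrative assumptions, not the chapter's GA.

```python
import numpy as np

def pareto_front(costs):
    """Return indices of non-dominated rows of a (population x criteria) cost matrix
    (minimization). Brute-force check, fine for small populations."""
    n = costs.shape[0]
    nondominated = []
    for i in range(n):
        dominated = any(
            np.all(costs[j] <= costs[i]) and np.any(costs[j] < costs[i])
            for j in range(n) if j != i
        )
        if not dominated:
            nondominated.append(i)
    return nondominated

# Hypothetical population: columns = joint traveled distance, trajectory ripple, energy.
costs = np.array([[3.1, 0.20, 12.0],
                  [2.8, 0.35, 11.5],
                  [3.2, 0.25, 12.5],
                  [3.5, 0.15, 12.5]])
print(pareto_front(costs))   # indices of candidate trajectories on the Pareto front
```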
Abstract:
Many learning problems require handling high dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality, and need to address it in order to be effective. Examples of these types of data include the bag-of-words representation in text classification problems and gene expression data for tumor detection/classification. Usually, among the high number of features characterizing the instances, many may be irrelevant (or even detrimental) for the learning tasks. It is thus clear that there is a need for adequate techniques for feature representation, reduction, and selection, to improve both the classification accuracy and the memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium and high-dimensional datasets. The experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.
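As a rough illustration of combining unsupervised feature discretization with feature selection on a high-dimensional, small-sample dataset, the following sketch chains an equal-frequency discretizer with a simple filter-based selector. The specific discretizer, the mutual-information ranking and all parameter values are illustrative assumptions, not the techniques proposed in the paper.

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Placeholder high-dimensional, low-sample data (e.g., gene-expression-like).
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2000))          # 80 instances, 2000 features
y = rng.integers(0, 2, size=80)          # binary labels

# 1) Unsupervised discretization: quantile (equal-frequency) bins, labels not used.
disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile")
X_disc = disc.fit_transform(X)

# 2) Feature selection on the discretized representation.
selector = SelectKBest(score_func=mutual_info_classif, k=100)
X_reduced = selector.fit_transform(X_disc, y)
print(X_reduced.shape)                   # (80, 100)
```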