70 results for "Algoritmos experimentais"


Relevance:

20.00%

Publisher:

Abstract:

The selection of an aircraft wing profile is highly important because of its influence on the aircraft's performance, affecting operating costs (fuel consumption and flight level, for example) and flight safety (the response in critical conditions). The aim of this study was to examine the aerodynamic parameters of several types of wing profile, based on wind tunnel testing, in order to determine the aerodynamic efficiency of each one. Four profiles were compared, chosen from considerations about the characteristics of the model aircraft. The first has a common configuration, widely used in laboratory classes as a sort of aerodynamic standard: a symmetric profile. The second is of the concave-convex type; the third is also concave-convex, but with a different implementation from the second; and the fourth is a plano-convex profile. Three different profile categories are thus covered, and the main points relevant to their use are presented. The experiment was performed in an open-circuit wind tunnel, in which the pressure distribution over the surface of each profile was analyzed. With the drag polar of each wing profile, and from the theoretical basis of this work, the aerodynamic characteristics can be related to the expected performance of the experimental aircraft, creating a selection model with guaranteed aerodynamic performance. It is believed that the methodology used in this dissertation validates the results, providing a reliable experimental alternative for aerodynamic testing of planform models.
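
As an illustration of how a drag polar relates to aerodynamic efficiency, the sketch below evaluates the lift-to-drag ratio of a parabolic drag polar; the coefficients C_D0 and k are hypothetical values, not results from this dissertation.

```python
import numpy as np

# Illustrative sketch (not the dissertation's data): relating a parabolic
# drag polar C_D = C_D0 + k * C_L^2 to aerodynamic efficiency E = C_L / C_D.
C_D0 = 0.02   # hypothetical zero-lift drag coefficient
k = 0.045     # hypothetical induced-drag factor

C_L = np.linspace(0.1, 1.5, 200)          # range of lift coefficients
C_D = C_D0 + k * C_L**2                   # drag polar
efficiency = C_L / C_D                    # lift-to-drag ratio

i_best = np.argmax(efficiency)
print(f"Best L/D = {efficiency[i_best]:.1f} at C_L = {C_L[i_best]:.2f}")
# For a parabolic polar, (L/D)_max = 1 / (2 * sqrt(k * C_D0)) at C_L = sqrt(C_D0 / k).
```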

Relevance:

20.00%

Publisher:

Abstract:

Nonionic surfactants in aqueous solution have the property of separating into two phases: a dilute phase, with a low surfactant concentration, and a surfactant-rich phase called the coacervate. The application of this kind of surfactant in extraction processes from aqueous solutions has been increasing over time, which implies the need for knowledge of the thermodynamic properties of these surfactants. In this study, the cloud points of polyethoxylated surfactants from the nonylphenol polyethoxylated family (ethoxylation degrees 9.5, 10, 11, 12 and 13), the octylphenol polyethoxylated family (10 and 11) and polyethoxylated lauryl alcohol (6, 7, 8 and 9) were determined, varying the degree of ethoxylation. The cloud point was determined by observing the turbidity of the solution under a heating ramp of 0.1 °C/min; for the pressure studies, a high-pressure cell (maximum 300 bar) was used. The experimental data for the studied surfactants were fitted with the Flory-Huggins, UNIQUAC and NRTL models to describe the cloud-point curves, and the influence of NaCl concentration and pressure on the cloud point was studied. The latter parameter is important for oil recovery processes, in which surfactant solutions are used at high pressures, while the effect of NaCl allows cloud points to be obtained at temperatures closer to room temperature, making it possible to operate processes without temperature control. The numerical method used to adjust the parameters was Levenberg-Marquardt. For the Flory-Huggins model, the adjusted parameters were the enthalpy of mixing, the entropy of mixing and the aggregation number. For the UNIQUAC and NRTL models, the interaction parameters aij were adjusted using a quadratic dependence on temperature. The parameters obtained gave a good fit to the experimental data (RMSD < 0.3%). The results showed that both the ethoxylation degree and the pressure increase the cloud point, whereas NaCl decreases it.
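
A minimal sketch of the parameter-fitting step, assuming a simple empirical cloud-point curve and synthetic data; the function `model`, its quadratic form and the data points are illustrative stand-ins for the Flory-Huggins/UNIQUAC/NRTL models actually fitted in the work, but the optimizer is the Levenberg-Marquardt method cited in the abstract (SciPy's `least_squares` with `method="lm"`).

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical cloud-point data: surfactant mass fraction x and cloud point T (degrees C).
x_data = np.array([0.01, 0.02, 0.05, 0.10, 0.15])
T_data = np.array([62.0, 60.5, 59.0, 58.4, 58.9])

def model(params, x):
    # Simple empirical curve T(x) = a + b*x + c*x**2 standing in for the
    # thermodynamic models (Flory-Huggins, UNIQUAC, NRTL) used in the work.
    a, b, c = params
    return a + b * x + c * x**2

def residuals(params):
    return model(params, x_data) - T_data

# method='lm' selects the Levenberg-Marquardt algorithm mentioned in the abstract.
fit = least_squares(residuals, x0=[60.0, -20.0, 100.0], method="lm")
rmsd = np.sqrt(np.mean(fit.fun**2))
print("fitted parameters:", fit.x, "RMSD:", rmsd)
```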

Relevance:

20.00%

Publisher:

Abstract:

Recent research has revealed that most Biology teachers believe that experimental activities, used as a teaching resource, would be the solution for improving the Biology teaching-learning process. There are, however, studies indicating that such practical lessons are not very effective at building scientific knowledge. It is also reported that, despite the enthusiasm on the teachers' part, such classes are rarely taught in high school. Several studies point to pedagogical difficulties, as well as the lack of the minimal laboratory infrastructure, as causes of the low frequency of experimental activities. Poor teacher preparation for planning and conducting the classes, the large number of students per class and the lack of financial incentives for teachers are other reasons to be taken into account, to which can be added difficulties of an epistemological nature, that is, an unfavorable view of experimental activities on the part of the teacher. Our study aimed to clarify whether this scenario is generalized in high schools throughout the state of Rio Grande do Norte, Brazil. A sample of twenty teaching institutions was investigated, divided into two groups: the first group consists of five schools of the IFRN (Instituto Federal de Educação, Ciência e Tecnologia do Rio Grande do Norte), two in Natal and three in the countryside; the second group is represented by fifteen state schools in the Natal metropolitan area. The objectives of the research were to characterize the schools with respect to laboratory facilities, to identify the difficulties reported by teachers when carrying out experimental classes, and to become familiar with the teachers' conceptions of experimental Biology classes. To this end, a questionnaire with multiple-choice and essay questions was used as the data-collection instrument, together with a semi-structured interview recorded with a voice recorder. The data analysis and the in loco observation allowed the conclusion that the federal schools do present better facilities for experimental activities than the state schools. Another finding is that teachers at federal schools have more time available for planning experiments, are better paid and have access to career development, which leads to better salaries. All these advantages, however, do not translate into a significantly higher frequency of experiments when compared with state school teachers. Teachers from both federal and state schools pointed to infrastructure problems, such as the availability of reagents, equipment and consumables, as the main obstacle to performing experiments in Biology classes. This leads us to conclude that there may be other problems not covered by the questionnaire, such as a poor ability to plan and execute experimental activities. As for conceptions about experimental activities, an empiricist-inductivist view of science, possibly inherited during their academic training, was verified in most of the interviewees, and this view is reflected in the way they plan and carry out experiments with students.

Relevance:

20.00%

Publisher:

Abstract:

Practical experimental work is a key objective in science education and, in particular, in the context of Biology teaching. Considering the influence of textbooks on teachers' professional practice, this study aimed to characterize the guidance that this kind of material provides for practical experimental work and for the use of measurement in such activities. To this end, the eight Biology textbook collections approved in the 2012 PNLD were analyzed. The research is descriptive and interpretative in nature, and content analysis (BARDIN, 2002) was chosen as the method for data collection and analysis. The analysis of the practical experimental activities sought to characterize their epistemological nature, the implicit conception of science, their typology, the conceptual Biology content involved, how many of them require measurement, and how measurement is used in this context according to the general measurement procedure proposed by Núñez and Silva (2008). The practical experimental activities suggested a conceptual teaching epistemology, mostly of a rationalist character, and were dominated by activities of the practical-exercise type; the need to measure is present in only a minority of them, and the general measurement procedure is used partially and implicitly in most of the practical experimental activities. Therefore, in this study we propose the development of an analysis guide for the practical experimental activities proposed in Biology textbooks.

Relevance:

20.00%

Publisher:

Abstract:

In this work, we study and compare two percolation algorithms, one developed by Elias and the other by Newman and Ziff, using theoretical tools of algorithm complexity analysis as well as an experimental comparison. The work is divided into three chapters. The first presents the definitions and theorems needed for a more formal mathematical study of percolation. The second presents the techniques used to estimate the complexity of the algorithms, namely worst-case, best-case and average-case analysis. We use worst-case analysis to estimate the complexity of both algorithms and thus compare them. The last chapter discusses several characteristics of each algorithm and, through the theoretical complexity estimates and a comparison of the execution times of the most important part of each one, compares these two important algorithms for simulating percolation.
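
For reference, a minimal sketch of the union-find core of the Newman-Ziff approach for 2D site percolation on an L x L square lattice; variable names and the lattice choice are illustrative, and the convolution step of the full Newman-Ziff method is omitted.

```python
import random

def newman_ziff_site(L, seed=0):
    """Adds sites in random order, merging clusters with union-find, and
    returns the size of the largest cluster after each addition."""
    n = L * L
    parent = list(range(n))
    size = [1] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        if size[ra] < size[rb]:
            ra, rb = rb, ra                 # union by size
        parent[rb] = ra
        size[ra] += size[rb]

    order = list(range(n))
    random.Random(seed).shuffle(order)

    occupied = set()
    max_size = 0
    largest = []
    for s in order:
        occupied.add(s)
        x, y = s % L, s // L
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < L and 0 <= ny < L and ny * L + nx in occupied:
                union(s, ny * L + nx)
        max_size = max(max_size, size[find(s)])
        largest.append(max_size)
    return largest
```

Calling `newman_ziff_site(64)` gives the largest-cluster size after each site addition, from which the percolation curve can be traced; the near-constant amortized cost of each union/find operation is what makes this approach efficient.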

Relevance:

20.00%

Publisher:

Abstract:

Clustering data is a very important task in data mining, image processing and pattern recognition problems. One of the most popular clustering algorithms is Fuzzy C-Means (FCM). This thesis proposes a new way of calculating the cluster centers in the FCM procedure, called ckMeans, and applies it to some variants of FCM, in particular those that use other distances. The goal of this change is to reduce the number of iterations and the processing time of these algorithms without affecting the quality of the partition, or even to improve the number of correct classifications in some cases. We also developed an algorithm based on ckMeans to handle interval data, considering interval membership degrees. This algorithm allows the representation of the data without converting interval data into point data, as happens in other extensions of FCM that deal with interval data. In order to validate the proposed methodologies, a comparison was made between the clusterings produced by the ckMeans, K-Means and FCM algorithms (since the center calculation proposed in this work is similar to that of K-Means), considering three different distances and several well-known databases. In addition, the results of interval ckMeans were compared with those of other clustering algorithms when applied to an interval database containing the minimum and maximum monthly temperatures of a given year for 37 cities distributed across the continents.
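
The sketch below shows the classic FCM updates that ckMeans modifies; it is a baseline implementation with illustrative parameter names, and the ckMeans center formula itself is not reproduced here since it is specific to the thesis.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Classic FCM (the baseline that ckMeans modifies). X: (n, d) data, c: number
    of clusters, m: fuzzifier. Returns the centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point

    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]          # weighted means
        # Squared Euclidean distance from every point to every center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        # Standard FCM membership update (normalized inverse-distance weights).
        inv = (1.0 / d2) ** (1.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```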

Relevance:

20.00%

Publisher:

Abstract:

Although individual supervised Machine Learning (ML) techniques, also known as classifiers or classification algorithms, usually provide solutions that are considered efficient, experimental results obtained with large pattern sets, and/or with data containing a significant amount of irrelevant or incomplete features, show a decrease in the precision of these techniques. In other words, such techniques cannot recognize patterns efficiently in complex problems. With the intention of improving the performance and efficiency of these ML techniques, the idea arose of making several ML algorithms work jointly, giving origin to the term Multi-Classifier System (MCS). An MCS has as components different ML algorithms, called base classifiers, and combines the results obtained by these algorithms to reach the final result. For an MCS to perform better than its base classifiers, the results obtained by each base classifier must present a certain diversity, that is, a difference between the results obtained by each classifier composing the system; it makes no sense to have an MCS whose base classifiers give identical answers to the same patterns. Although MCSs present better results than individual systems, there is always a search for further improvement. Aiming at this improvement, at more consistent results and at greater diversity among the classifiers of an MCS, methodologies characterized by the use of weights, or confidence values, have recently been investigated. These weights can describe the importance that a certain classifier had when assigning each pattern to a given class, and they are used, together with the outputs of the classifiers, during the recognition (use) phase of the MCS. There are different ways of calculating these weights, which can be divided into two categories: static weights and dynamic weights. Weights of the first category do not change during the classification process, whereas weights of the second category are modified during classification. In this work, an analysis is carried out to verify whether the use of weights, both static and dynamic, can increase the performance of MCSs in comparison with individual systems. Moreover, the diversity obtained by the MCSs is analyzed in order to verify whether there is some relation between the use of weights in MCSs and different levels of diversity.
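
A minimal sketch of an MCS combined with static weights; the base classifiers and the choice of validation accuracy as the weight are illustrative assumptions, not necessarily the weighting schemes analyzed in the dissertation.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Heterogeneous base classifiers (illustrative choice).
bases = [DecisionTreeClassifier(random_state=0), GaussianNB(), KNeighborsClassifier(3)]
for clf in bases:
    clf.fit(X_tr, y_tr)

# Static weights: validation accuracy of each base classifier (one simple option).
weights = np.array([clf.score(X_val, y_val) for clf in bases])

def weighted_vote(X_new, n_classes=3):
    votes = np.zeros((len(X_new), n_classes))
    for w, clf in zip(weights, bases):
        votes[np.arange(len(X_new)), clf.predict(X_new)] += w   # weight each vote
    return votes.argmax(axis=1)

print("ensemble accuracy:", (weighted_vote(X_te) == y_te).mean())
```

Dynamic weighting, by contrast, would recompute the weights per test pattern (for example, from the local competence of each base classifier), which is the other category described above.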

Relevance:

20.00%

Publisher:

Abstract:

The Nelson-Oppen combination method allows several decision procedures, each designed for a specific theory, to be combined to reason about more comprehensive theories, through the principle of equality propagation. Theorem provers based on this model benefit from its modular character and can evolve more easily and incrementally. Difference logic is a sub-theory of linear arithmetic. It consists of constraints of the form x − y ≤ c, where x and y are variables and c is a constant. Difference logic is very common in many problems, such as digital circuits, scheduling and timed systems, and is predominant in several other cases. Difference logic can also be modeled using graph theory, which allows several well-known and efficient graph algorithms to be used. A decision procedure for difference logic must be able to reason over thousands of constraints. Its main goal is to determine whether a set of difference-logic constraints is satisfiable (i.e., whether the variables can assume values that make the set consistent) or not. In addition, to work in a Nelson-Oppen combination framework, the decision procedure needs other functionalities, such as the generation of variable equalities, proofs of inconsistency, premises, etc. This work presents a decision procedure for the theory of difference logic within an architecture based on the Nelson-Oppen combination method. The work was carried out by integrating the procedure into the haRVey prover, where its operation could be observed. Implementation details and experimental tests are reported.
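
A standard way to decide difference logic (not specific to this work) is to build the constraint graph, with an edge y -> x of weight c for each constraint x − y ≤ c, and check for negative-weight cycles with Bellman-Ford; the sketch below illustrates this reduction with hypothetical constraints.

```python
def difference_logic_sat(constraints):
    """constraints: list of (x, y, c) meaning x - y <= c.
    Returns (True, model) if satisfiable, (False, None) if a negative cycle exists.
    Reduction: edge y -> x with weight c; satisfiable iff there is no negative cycle."""
    variables = {v for x, y, _ in constraints for v in (x, y)}
    # Implicit virtual source with 0-weight edges to every variable.
    dist = {v: 0 for v in variables}

    for _ in range(len(variables)):
        changed = False
        for x, y, c in constraints:
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
                changed = True
        if not changed:
            return True, dist          # the distances give a satisfying assignment
    # One extra relaxation round: any further improvement means a negative cycle.
    for x, y, c in constraints:
        if dist[y] + c < dist[x]:
            return False, None
    return True, dist

# Example: {x - y <= 2, y - z <= -3, z - x <= 1} is satisfiable;
# tightening the last constraint to z - x <= 0 makes it unsatisfiable.
print(difference_logic_sat([("x", "y", 2), ("y", "z", -3), ("z", "x", 1)]))
```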

Relevance:

20.00%

Publisher:

Abstract:

The use of clustering methods for the discovery of cancer subtypes has drawn a great deal of attention in the scientific community. While bioinformaticians have proposed new clustering methods that take advantage of characteristics of the gene expression data, the medical community has a preference for using classic clustering methods. There have been no studies thus far performing a large-scale evaluation of different clustering methods in this context. This work presents the first large-scale analysis of seven different clustering methods and four proximity measures for the analysis of 35 cancer gene expression data sets. Results reveal that the finite mixture of Gaussians, followed closely by k-means, exhibited the best performance in terms of recovering the true structure of the data sets. These methods also exhibited, on average, the smallest difference between the actual number of classes in the data sets and the best number of clusters as indicated by our validation criteria. Furthermore, hierarchical methods, which have been widely used by the medical community, exhibited a poorer recovery performance than that of the other methods evaluated. Moreover, as a stable basis for the assessment and comparison of different clustering methods for cancer gene expression data, this study provides a common group of data sets (benchmark data sets) to be shared among researchers and used for comparisons with new methods.
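
A small illustrative comparison in the spirit of the study, using synthetic data rather than the 35 cancer gene expression data sets: a finite mixture of Gaussians and k-means are evaluated by how well they recover the known classes (adjusted Rand index).

```python
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Synthetic stand-in for a gene expression matrix: 150 samples, 50 features, 3 classes.
X, y_true = make_blobs(n_samples=150, n_features=50, centers=3, random_state=0)

gmm_labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)
km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# The adjusted Rand index measures how well each partition recovers the true classes.
print("GMM ARI:    ", adjusted_rand_score(y_true, gmm_labels))
print("k-means ARI:", adjusted_rand_score(y_true, km_labels))
```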

Relevance:

20.00%

Publisher:

Abstract:

A 3D binary image is considered well-composed if, and only if, the union of the faces shared by the foreground and background voxels of the image is a surface in R3. Well-composed images have some desirable topological properties, which allow us to simplify and optimize algorithms that are widely used in computer graphics, computer vision and image processing. These advantages have fostered the development of algorithms to repair two-dimensional (2D) and three-dimensional (3D) images that are not well-composed. These algorithms are known as repairing algorithms. In this dissertation, we propose two repairing algorithms, one randomized and one deterministic. Both algorithms are capable of making topological repairs in 3D binary images, producing well-composed images similar to the original ones. The key idea behind both algorithms is to iteratively change the assigned color of some points in the input image from 0 (background) to 1 (foreground) until the image becomes well-composed. The points whose colors are changed by the algorithms are chosen according to their values in the fuzzy connectivity map resulting from the image segmentation process. The use of the fuzzy connectivity map ensures that the subset of points chosen by the algorithm at any given iteration is the one with the least affinity with the background among all possible choices.
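
A schematic sketch of the repair loop described above; `is_well_composed` and `candidate_points` are placeholders for the dissertation's geometric tests, and only the selection rule based on the fuzzy connectivity map follows the abstract.

```python
def repair_well_composed(image, fuzzy_map, is_well_composed, candidate_points):
    """Schematic of the repair loop (deterministic flavor). `image` is a 3D binary
    array, `fuzzy_map` the fuzzy connectivity map from segmentation; the two
    callables stand in for the dissertation's tests for critical configurations."""
    img = image.copy()
    while not is_well_composed(img):
        candidates = candidate_points(img)      # background points near a violation
        if not candidates:
            break
        # Flip the candidate with the highest fuzzy connectivity, i.e. the point
        # with the least affinity with the background, as the abstract states.
        best = max(candidates, key=lambda p: fuzzy_map[p])
        img[best] = 1                           # background (0) -> foreground (1)
    return img
```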

Relevance:

20.00%

Publisher:

Abstract:

Classifier ensembles are systems composed of a set of individual classifiers and a combination module, which is responsible for providing the final output of the system. In the design of these systems, diversity is considered one of the main aspects to be taken into account, since there is no gain in combining identical classification methods. The ideal situation is a set of individual classifiers with uncorrelated errors; in other words, the individual classifiers should be diverse among themselves. One way of increasing diversity is to provide different datasets (patterns and/or attributes) to the individual classifiers. Diversity is increased because the individual classifiers perform the same task (classification of the same input patterns) but are built using different subsets of patterns and/or attributes. Most papers using feature selection for ensembles address homogeneous ensemble structures, i.e., ensembles composed of only one type of classifier. In this investigation, two genetic algorithm approaches (single- and multi-objective) are used to guide the distribution of the features among the classifiers in the context of homogeneous and heterogeneous ensembles. The experiments are divided into two phases that use a filter approach to feature selection guided by the genetic algorithm.
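
A minimal single-objective sketch of a genetic algorithm distributing features among base classifiers; the fitness used here (ensemble validation accuracy, a wrapper-style criterion) is an illustrative stand-in for the filter criterion adopted in the investigation, and all names and parameters are hypothetical.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
n_features, n_classifiers = X.shape[1], 3
rng = np.random.default_rng(0)

def fitness(chrom):
    """chrom[i] = index of the classifier that receives feature i."""
    preds = []
    for k in range(n_classifiers):
        cols = np.where(chrom == k)[0]
        if cols.size == 0:
            return 0.0                       # every classifier must get some features
        clf = DecisionTreeClassifier(random_state=0).fit(X_tr[:, cols], y_tr)
        preds.append(clf.predict(X_val[:, cols]))
    vote = (np.mean(preds, axis=0) > 0.5).astype(int)   # majority vote (binary labels)
    return (vote == y_val).mean()

# A tiny generational GA: tournament selection, uniform crossover, point mutation.
pop = rng.integers(0, n_classifiers, size=(20, n_features))
for gen in range(15):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[[max(rng.choice(len(pop), 3), key=lambda i: scores[i]) for _ in range(20)]]
    mask = rng.random((20, n_features)) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))   # uniform crossover
    mutate = rng.random(children.shape) < 0.02
    children[mutate] = rng.integers(0, n_classifiers, mutate.sum())
    pop = children
print("best fitness:", max(fitness(c) for c in pop))
```

A multi-objective variant would evaluate each chromosome on more than one criterion (for example, accuracy and diversity) and keep a Pareto front instead of a single best individual.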

Relevance:

20.00%

Publisher:

Abstract:

The course of Algorithms and Programming is a real obstacle for many students in computing degrees. Students who are not familiar with the new ways of thinking the course requires, and who lack certain skills it demands, encounter difficulties that sometimes result in failure and dropout. Faced with this problem, a survey of the difficulties experienced by students was conducted as a way to understand the problem and to guide solutions that try to solve or ease those difficulties. This work describes a methodology to be applied in the classroom based on David Ausubel's theory of Meaningful Learning. In addition to this theory, a tool developed at UFRN, named Takkou, was used with the intent of better motivating students in algorithms classes and of exercising logical reasoning. Finally, a comparative evaluation of the suggested methodology against the traditional methodology was carried out, and the results are discussed.

Relevance:

20.00%

Publisher:

Abstract:

In the development of synthetic agents for education, there is still doubt about what behavior can, in fact, be considered plausible for this type of agent, what behavior can be considered effective in the transmission of knowledge by the agent, and what the role of emotions is in this process. This work has an investigative nature, attempting to discover which aspects are important for such believable behavior and for the practical development of a chatterbot acting as a virtual tutor within the context of learning algorithms. In this study, we cover the basics of agents, Intelligent Tutoring Systems, bots and chatterbots, and how these systems need their behavior to convey credibility. Models of emotion, personality and humor for computational agents are also covered, as well as previous studies by other researchers in the area. After that, the prototype is detailed, along with the research conducted, a summary of the results achieved, the architectural model of the system, the computational view and a macro view of the implemented features.

Relevance:

20.00%

Publisher:

Abstract:

Nonogram is a logic puzzle whose associated decision problem is NP-complete. It has applications in pattern recognition and data compression problems, among others. The puzzle consists in determining an assignment of colors to pixels distributed in an N x M matrix that satisfies row and column constraints. A Nonogram is encoded by a vector whose elements specify the number of pixels in each row and column of a figure, without specifying their coordinates. This work presents exact and heuristic approaches to solve Nonograms. Depth-first search was one of the chosen exact approaches because it is a typical example of a brute-force search algorithm that is easy to implement. Another exact approach implemented was based on the Las Vegas algorithm, with which we intend to investigate whether the randomness introduced by the Las Vegas-based algorithm is an advantage over depth-first search. The Nonogram is also transformed into a Constraint Satisfaction Problem. Three heuristic approaches are proposed: a Tabu Search and two memetic algorithms. A new way of computing the objective function is also proposed. The approaches are applied to 234 instances, ranging in size from 5 x 5 to 100 x 100 and including both logical and random Nonograms.
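
As an illustration of the exact approaches, the sketch below enumerates all fillings of a single line that satisfy its clue, the core subroutine on which a depth-first Nonogram solver can branch; the function name and the example are illustrative, not taken from the dissertation.

```python
def line_options(clues, length):
    """All 0/1 fillings of a line of `length` cells whose blocks of 1s match `clues`.
    E.g. clues=(2, 1), length=5 -> [1,1,0,1,0], [1,1,0,0,1], [0,1,1,0,1]."""
    if not clues:
        return [[0] * length]
    block, rest = clues[0], clues[1:]
    # Minimum room needed by the remaining blocks (each separated by one blank cell).
    rest_min = sum(rest) + len(rest)
    options = []
    for start in range(length - rest_min - block + 1):
        head = [0] * start + [1] * block
        if rest:                               # mandatory blank between blocks
            head.append(0)
        for tail in line_options(rest, length - len(head)):
            options.append(head + tail)
    return options

# A depth-first solver can assign rows from these options and prune on the column clues.
for row in line_options((2, 1), 5):
    print(row)
```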