59 results for "Solução de problemas - Métodos" (Problem solving - Methods)


Relevance: 40.00%

Abstract:

The literary critic Terry Eagleton gained notoriety in academic circles when he achieved intellectual recognition for his bestselling book Literary Theory: An Introduction. In this book, the English author boldly proposes the end of literature and of literary criticism. Years earlier, however, in Criticism and Ideology (1976), Eagleton had proposed a scientific system for the analysis of literary texts, which seemed less radical, in both theory and method, than his later theoretical proposal. Based on this, the objective of this dissertation is to present the English literary critic's initial method, explaining the reasons that led him to abandon his initial project, that of developing a method of analysis of the literary text from a Marxist scientific perspective, and to propose, in the following years, in his most famous book and others, a revolutionary vision that would go beyond textual analysis and make literary texts perform a practical intervention in society. Finally, we explain what his idea of revolutionary criticism would be.

Relevance: 40.00%

Abstract:

In general, an inverse problem consists in finding the value of an element x in a suitable vector space, given a vector y that measures it in some sense. When we discretize the problem, it usually boils down to solving an equation system f(x) = y, where f : U ⊂ R^m → R^n represents the step function on some domain U of the appropriate R^m. As a general rule, we arrive at an ill-posed problem. The resolution of inverse problems has been widely researched over the last decades, because many problems in science and industry consist in determining unknowns that we try to know by observing their effects through certain indirect measurements. The general subject of this dissertation is the choice of the Tikhonov regularization parameter for an ill-conditioned linear problem, as discussed in Chapter 1, focusing on the three most popular methods in the current literature of the area. Our more specific focus consists in the simulations reported in Chapter 2, which aim to compare the performance of the three methods in the recovery of images measured with the Radon transform and perturbed by the addition of i.i.d. Gaussian noise. We chose a difference operator as the regularizer of the problem. The contribution we try to make mainly consists in the discussion of the numerical simulations we executed, as exposed in Chapter 2. We understand that the significance of this dissertation lies much more in the questions it raises than in saying something definitive about the subject: partly because it is based on numerical experiments with no new mathematical results associated with them, and partly because the experiments were made with a single operator. On the other hand, we obtained some observations on the simulations performed that seemed interesting to us, considering the literature of the area. In particular, we highlight the observations we summarize, in the conclusion of this work, about the different vocations of methods such as GCV and the L-curve, and also about the tendency of the optimal parameters observed with the L-curve method to group themselves in a small gap, strongly correlated with the behavior of the generalized singular value decomposition curve of the operators involved, under reasonably broad regularity conditions on the images to be recovered.
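
As a minimal illustration of this setup (a sketch under assumed data: the Vandermonde test operator, noise level, and parameter grid are placeholders, not the dissertation's experiments with the Radon transform), Tikhonov regularization with a first-order difference regularizer and a crude L-curve scan can be written as:

```python
import numpy as np

def tikhonov_solve(A, y, lam, L):
    """Solve min ||A x - y||^2 + lam^2 ||L x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ y)

def first_difference(n):
    """First-order difference operator, one common choice of regularizer."""
    L = np.zeros((n - 1, n))
    for i in range(n - 1):
        L[i, i], L[i, i + 1] = -1.0, 1.0
    return L

# Toy ill-conditioned problem with i.i.d. Gaussian noise.
rng = np.random.default_rng(0)
n = 50
A = np.vander(np.linspace(0, 1, n), n, increasing=True)   # severely ill-conditioned
x_true = np.sin(np.linspace(0, np.pi, n))
y = A @ x_true + 1e-3 * rng.standard_normal(n)
L = first_difference(n)

# Crude L-curve scan: residual norm versus seminorm of the regularized solution.
for lam in np.logspace(-8, 0, 9):
    x = tikhonov_solve(A, y, lam, L)
    print(f"lam={lam:.0e}  residual={np.linalg.norm(A @ x - y):.3e}  "
          f"seminorm={np.linalg.norm(L @ x):.3e}")
```

Plotting the residual against the seminorm on log-log axes produces the characteristic L shape; the corner of that curve is the L-curve's candidate for the regularization parameter.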

Relevance: 40.00%

Abstract:

The waste and the volume of produced water generated by petroleum extraction and production activities have been a major challenge for oil companies with respect to environmental compliance, due to their toxicity. The discard or reuse of this effluent containing organic compounds such as BTEX (benzene, toluene, ethylbenzene, and xylenes) can cause serious environmental and human-health problems. Thus, the objective of this work was to study the performance of two processes (separately and in sequence) on a synthetic effluent for the removal of benzene, toluene, and xylene (volatile hydrocarbons present in produced water), through electrochemical treatment using a Ti/Pt electrode and an ion-exchange resin used in the adsorption process. The synthetic BTX solution was prepared with concentrations of 22.8 mg L-1, 9.7 mg L-1, and 9.0 mg L-1, respectively, in 0.1 mol L-1 Na2SO4. The experiments were carried out in batch mode with 0.3 L of solution at 25 °C. The electrochemical oxidation process was performed with a Ti/Pt electrode at different current densities (j = 10, 20, and 30 mA cm-2). In the adsorption process, an ion-exchange resin (Purolite MB 478) was used, with different masses (2.5, 5, and 10 g). To evaluate the two techniques in the sequential treatment, the current density was fixed at 10 mA cm-2 and the resin mass at 2.5 g. UV-VIS spectrophotometry, chemical oxygen demand (COD) analysis, and gas chromatography with photoionization (PID) and flame ionization (FID) detectors confirmed the high efficiency of organic-compound removal after treatment. It was found that the electrochemical process (separate and sequential) is more efficient than adsorption, reaching COD removal values exceeding 70%, as confirmed by the study of cyclic voltammetry and polarization curves, whereas adsorption alone did not exceed 25.8% COD removal, due to the limited resin interactions. However, the sequential process (electrochemical oxidation followed by adsorption) proved to be a suitable, efficient, and cost-effective alternative for the treatment of petrochemical effluents.

Relevance: 30.00%

Abstract:

Optimization techniques known as metaheuristics have achieved success in the resolution of many problems classified as NP-hard. These methods use non-deterministic approaches that reach very good solutions but do not guarantee the determination of the global optimum. Beyond the inherent difficulties related to the complexity that characterizes optimization problems, metaheuristics still face the exploration/exploitation dilemma, which consists of choosing between a greedy search and a wider exploration of the solution space. One way to guide such algorithms while searching for better solutions is to supply them with more knowledge of the problem through an intelligent agent, able to recognize promising regions and to identify when the direction of the search should be diversified. Accordingly, this work proposes the use of a Reinforcement Learning technique, the Q-learning algorithm, as an exploration/exploitation strategy for the GRASP (Greedy Randomized Adaptive Search Procedure) and Genetic Algorithm metaheuristics. The GRASP metaheuristic uses Q-learning instead of the traditional greedy-random algorithm in its construction phase. This replacement aims to improve the quality of the initial solutions used in the local search phase of GRASP, and it also provides the metaheuristic with an adaptive memory mechanism that allows the reuse of good previous decisions and avoids the repetition of bad ones. In the Genetic Algorithm, the Q-learning algorithm was used to generate an initial population of high fitness; after a given number of generations, whenever the diversity rate of the population falls below a certain limit L, it is also applied to supply one of the parents used in the genetic crossover operator. Another significant change in the hybrid genetic algorithm is the proposal of a mutually interactive cooperation process between the genetic operators and the Q-learning algorithm. In this interactive/cooperative process, the Q-learning algorithm receives an additional update of the matrix of Q-values based on the current best solution of the Genetic Algorithm. The computational experiments presented in this thesis compare the results obtained with traditional versions of the GRASP metaheuristic and the Genetic Algorithm against those obtained using the proposed hybrid methods. Both algorithms were applied successfully to the symmetric Traveling Salesman Problem, which was modeled as a Markov decision process.
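
To make the constructive use of Q-learning concrete, here is a minimal sketch for the symmetric TSP; the reward shaping, epsilon-greedy policy, and hyperparameters are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def q_learning_construction(dist, episodes=500, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Learn Q-values for constructing TSP tours with an epsilon-greedy
    policy; the reward is the negative edge length, so shorter edges are
    reinforced over the episodes."""
    rng = np.random.default_rng(seed)
    n = len(dist)
    Q = np.zeros((n, n))
    for _ in range(episodes):
        start = int(rng.integers(n))
        tour, current = [start], start
        while len(tour) < n:
            unvisited = [c for c in range(n) if c not in tour]
            if rng.random() < eps:                       # explore
                nxt = int(rng.choice(unvisited))
            else:                                        # exploit
                nxt = max(unvisited, key=lambda c: Q[current, c])
            remaining = [c for c in unvisited if c != nxt]
            best_next = max((Q[nxt, c] for c in remaining), default=0.0)
            reward = -dist[current][nxt]
            Q[current, nxt] += alpha * (reward + gamma * best_next - Q[current, nxt])
            tour.append(nxt)
            current = nxt
        # learn the closing edge back to the start as well
        Q[current, start] += alpha * (-dist[current][start] - Q[current, start])
    return Q

# Toy usage: 5 cities on a line.
dist = [[abs(i - j) for j in range(5)] for i in range(5)]
print(np.round(q_learning_construction(dist), 2))
```

A GRASP-style construction phase could then bias the choice of the next city by these Q-values instead of using a purely greedy-random criterion, which is the kind of replacement the abstract describes.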

Relevance: 30.00%

Abstract:

In industrial informatics, several attempts have been made to develop notations and semantics for classifying and describing different kinds of system behavior, particularly in the modeling phase. Such attempts provide the infrastructure to solve real engineering problems and to build practical systems that aim mainly to increase the productivity, quality, and safety of the process. Despite the many studies that have attempted to develop friendly methods for industrial controller programming, controllers are still programmed by conventional trial-and-error methods and, in practice, there is little written documentation on these systems. The ideal solution would be a computational environment that allows industrial engineers to implement the system using a high-level language that follows international standards. Accordingly, this work proposes a methodology for plant and control modeling of discrete event systems that include sequential, parallel, and timed operations, using a formalism based on Statecharts, called Basic Statechart (BSC). The methodology also permits automatic procedures to validate and implement these systems. To validate the methodology, we present two case studies with typical examples from the manufacturing sector. The first shows a sequential control for a tagged machine, used to illustrate dependencies between the devices of the plant. In the second example, we discuss more than one strategy for controlling a manufacturing cell. The model with no control has 72 states (distinct configurations); the model with sequential control generated 20 different states, acting in only 8 distinct configurations; and the model with parallel control generated 210 different states, acting in only 26 distinct configurations, therefore a control strategy less restrictive than the previous one. Lastly, we present an example to highlight the modular characteristic of our methodology, which is very important for the maintenance of applications. In this example, the sensors that identify pieces in the plant were removed, so that changes in the control model are needed to transmit the information from the input-buffer sensor to the other positions of the cell.
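
For flavor, a minimal flat state-machine sketch in Python follows; the machine, its states, and its events are illustrative stand-ins, not the BSC formalism or the case-study plant itself:

```python
from dataclasses import dataclass, field

@dataclass
class Machine:
    """Flat event-driven model of a single plant device: each (state, event)
    pair maps to a next state; unmatched events leave the state unchanged."""
    state: str = "idle"
    transitions: dict = field(default_factory=lambda: {
        ("idle", "piece_arrived"): "loading",
        ("loading", "clamped"): "tagging",
        ("tagging", "done"): "unloading",
        ("unloading", "removed"): "idle",
    })

    def fire(self, event: str) -> str:
        """Take the transition enabled for (state, event), if any."""
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

m = Machine()
for ev in ["piece_arrived", "clamped", "done", "removed"]:
    print(ev, "->", m.fire(ev))
```

A statechart formalism adds hierarchy, parallelism, and timing on top of this flat picture, which is precisely what lets the composed cell models above stay tractable.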

Relevance: 30.00%

Abstract:

Deaf people face serious difficulties in accessing information. Support for sign languages is rarely addressed in Information and Communication Technologies (ICT). Furthermore, the scientific literature lacks work on machine translation for sign languages in real-time, open-domain scenarios such as TV. To minimize these problems, in this work we propose a solution for the automatic generation of Brazilian Sign Language (LIBRAS) video tracks for captioned digital multimedia content. These tracks are generated by a real-time machine translation strategy that translates a Brazilian Portuguese subtitle stream (e.g., a movie subtitle or a closed caption stream). Furthermore, the proposed solution is open-domain and has a set of mechanisms that exploit human computation to generate and maintain its linguistic constructions. Implementations of the proposed solution were developed for digital TV, Web, and Digital Cinema platforms, and a set of experiments with deaf users was carried out to evaluate the main aspects of the solution. The results showed that the proposed solution is efficient, able to generate and embed LIBRAS tracks in real-time scenarios, and a practical and feasible alternative for reducing the barriers deaf people face in accessing information, especially when human interpreters are not available.
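
A highly simplified sketch of this kind of subtitle-to-gloss pipeline follows; the gloss dictionary, the fingerspelling fallback, and the video paths are hypothetical stand-ins, not the solution's actual translation strategy or linguistic resources:

```python
from queue import Queue

# Stand-in lexicon: words map to LIBRAS glosses; function words map to None
# and are dropped; unknown words fall back to fingerspelling.
GLOSS_DICT = {"o": None, "filme": "FILME", "começa": "COMEÇAR", "agora": "AGORA"}

def translate_to_glosses(subtitle: str) -> list[str]:
    """Rule-based stand-in for the real-time translation step."""
    glosses = []
    for word in subtitle.lower().split():
        gloss = GLOSS_DICT.get(word, f"FINGERSPELL({word.upper()})")
        if gloss is not None:
            glosses.append(gloss)
    return glosses

# Pre-rendered sign video segments are queued for playback as glosses arrive.
playback: Queue[str] = Queue()
for line in ["O filme começa agora"]:
    for gloss in translate_to_glosses(line):
        playback.put(f"videos/{gloss}.mp4")   # hypothetical segment path

while not playback.empty():
    print("play:", playback.get())
```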

Relevance: 30.00%

Abstract:

Conventional methods for solving the nonlinear blind source separation problem generally impose a series of restrictions to obtain the solution, often leading to imperfect separation of the original sources and high computational cost. In this work, we propose an alternative measure of independence based on information theory and use artificial intelligence tools to solve blind source separation problems, first linear and later nonlinear. In the linear model, we apply genetic algorithms with Rényi's negentropy as the independence measure to find a separation matrix from linear mixtures of waveform, audio, and image signals. A comparison is made with two types of Independent Component Analysis algorithms widespread in the literature. Subsequently, we use the same independence measure as the cost function in the genetic algorithm to recover source signals mixed by nonlinear functions, using a radial basis function artificial neural network. Genetic algorithms are powerful global search tools and are therefore well suited to blind source separation problems. Tests and analyses are carried out through computer simulations.
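
As a toy illustration of the overall idea, the sketch below evolves a 2x2 unmixing matrix with a rudimentary genetic algorithm. Note that it uses kurtosis as a cheap non-Gaussianity proxy in place of the Rényi negentropy estimator employed in the work, and the population size, selection scheme, and mutation rate are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2000)
S = np.vstack([np.sign(np.sin(13 * np.pi * t)),   # square-wave source
               np.sin(7 * np.pi * t)])            # sine-wave source
A = np.array([[1.0, 0.6], [0.4, 1.0]])            # unknown mixing matrix
X = A @ S                                         # observed mixtures

def fitness(w_flat):
    """Sum of |kurtosis| of the unmixed outputs: a non-Gaussianity proxy."""
    Y = w_flat.reshape(2, 2) @ X
    Y = (Y - Y.mean(axis=1, keepdims=True)) / (Y.std(axis=1, keepdims=True) + 1e-12)
    return np.abs((Y**4).mean(axis=1) - 3.0).sum()

pop = rng.standard_normal((60, 4))
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]       # truncation selection
    pop = parents[rng.integers(20, size=60)] + 0.1 * rng.standard_normal((60, 4))
    pop[0] = parents[-1]                          # elitism: keep the best

best = pop[0].reshape(2, 2)
print("correlation of recovered signals with the true sources:")
print(np.corrcoef(np.vstack([best @ X, S]))[:2, 2:].round(2))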

Relevance: 30.00%

Abstract:

Nowadays, classifying proteins into structural classes, which concerns the inference of patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason is that the function of a protein is intrinsically related to its spatial conformation; however, such conformations are very difficult to obtain experimentally in the laboratory. Thus, this problem has drawn the attention of many researchers in Bioinformatics. Considering the great difference between the number of protein sequences already known and the number of three-dimensional structures determined experimentally, the demand for automated techniques for structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential to deal with this problem. In this work, ML techniques are used in the recognition of protein structural classes: Decision Trees, k-Nearest Neighbor, Naive Bayes, Support Vector Machines, and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these individual classifiers, homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking, and StackingC) multi-classifier systems are used. Moreover, since the protein database used in this work presents the problem of imbalanced classes, techniques for artificial class balancing (Random Undersampling, Tomek Links, CNN, NCL, and OSS) are used to minimize this problem. To evaluate the ML methods, a cross-validation procedure is applied, in which the accuracy of the classifiers is measured by the mean classification error rate on independent test sets. These means are compared, two by two, by hypothesis tests, to evaluate whether there is a statistically significant difference between them. Among the individual classifiers, the Support Vector Machine presented the best accuracy. The multi-classifier systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as the meta-classifier. The Voting method, despite its simplicity, proved adequate for the problem presented in this work. The class-balancing techniques, on the other hand, did not produce a significant improvement in the global classification error; nevertheless, they did improve the classification error for the minority class, and in this context the NCL technique proved the most appropriate.
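
A minimal sketch of this evaluation protocol, using scikit-learn with a synthetic imbalanced dataset standing in for the protein database (classifier settings are defaults, not those tuned in the work):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic 3-class imbalanced dataset as a stand-in for the protein data.
X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                           n_informative=8, weights=[0.7, 0.2, 0.1],
                           random_state=42)

models = {
    "DecisionTree": DecisionTreeClassifier(random_state=42),
    "kNN": KNeighborsClassifier(),
    "NaiveBayes": GaussianNB(),
    "SVM": SVC(),
    "Boosting(DT)": AdaBoostClassifier(
        DecisionTreeClassifier(max_depth=3), random_state=42),
}

# Compare classifiers by cross-validated mean error rate, as in the protocol.
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10)
    print(f"{name:14s} mean error = {1 - acc.mean():.3f} (+/- {acc.std():.3f})")
```

Pairwise hypothesis tests (e.g., a paired t-test over the fold errors) would then decide whether any observed difference between two classifiers is statistically significant.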

Relevance: 30.00%

Abstract:

This paper presents an evaluative study of the effects of using a machine learning technique on the main features of a self-organizing, multiobjective genetic algorithm (GA). A typical GA can be seen as a search technique usually applied to problems of non-polynomial complexity. Originally, these algorithms were designed to seek acceptable solutions to problems where the global optimum is inaccessible or difficult to obtain. At first, GAs considered only one evaluation function and single-objective optimization. Today, however, implementations that consider several optimization objectives simultaneously (multiobjective algorithms) are common, as are those allowing many components of the algorithm to change dynamically (self-organizing algorithms). Combinations of GAs with machine learning techniques to improve their performance and usability are also common. In this work, a GA with a machine learning technique was analyzed and applied to an antenna design problem. We used a variant of the bicubic interpolation technique, called 2D Spline, as the machine learning technique to estimate the behavior of a dynamic fitness function, based on knowledge obtained from a set of laboratory experiments. This fitness function, also called the evaluation function, is responsible for determining the fitness degree of a candidate solution (individual) relative to others in the same population. The algorithm can be applied in many areas, including telecommunications, such as the design of antennas and frequency selective surfaces. In this particular work, the algorithm was developed to optimize the design of a microstrip antenna, commonly used in wireless communication systems, for Ultra-Wideband (UWB) applications. The algorithm optimized two antenna geometry variables, the length (Ls) and width (Ws) of a slit in the ground plane, with respect to three objectives: radiated signal bandwidth, return loss, and central frequency deviation. These two dimensions (Ws and Ls) are used as variables in three different interpolation functions, one Spline for each optimization objective, which are composed into a multiobjective, aggregate fitness function. The final result proposed by the algorithm was compared with the result of a simulation program and with the measured result of a physical prototype of the antenna built in the laboratory. The algorithm was analyzed with respect to its degree of success regarding four important characteristics of a self-organizing multiobjective GA: performance, flexibility, scalability, and accuracy. At the end of the study, an increase in execution time was observed in comparison to a common GA, due to the time required by the machine learning process. On the plus side, we noticed an appreciable gain in flexibility and accuracy of results, and a promising path that indicates directions for extending the algorithm to optimization problems with n variables.
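
A small sketch of the surrogate-fitness idea, assuming synthetic placeholder surfaces in place of the laboratory measurements: one 2D spline is fitted per objective over a grid of (Ws, Ls) samples, and the splines are combined into an aggregate fitness that a GA can evaluate cheaply (the aggregation weights are illustrative):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Grids of slit width (Ws) and slit length (Ls) samples, in millimetres.
ws = np.linspace(1.0, 5.0, 9)
ls = np.linspace(2.0, 10.0, 9)
W, L = np.meshgrid(ws, ls, indexing="ij")

# Placeholder objective surfaces standing in for measured data.
bandwidth   = 3.0 + 0.5 * np.sin(W) * np.cos(L / 2.0)
return_loss = -10.0 - 2.0 * np.exp(-((W - 3.0)**2 + (L - 6.0)**2) / 4.0)
freq_dev    = np.abs(W - 3.2) + np.abs(L - 6.1)

splines = [RectBivariateSpline(ws, ls, z)
           for z in (bandwidth, return_loss, freq_dev)]
weights = (1.0, -1.0, -1.0)          # illustrative aggregation weights

def aggregate_fitness(w, l):
    """Cheap surrogate the GA evaluates instead of a full EM simulation."""
    return sum(c * s(w, l)[0, 0] for c, s in zip(weights, splines))

print(aggregate_fitness(3.1, 6.0))
```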

Relevance: 30.00%

Abstract:

This work presents an optimization technique, based on structural topology optimization methods (TOM), designed to solve 3D thermoelasticity problems. The presented approach is based on the adjoint method of sensitivity analysis in a unified design framework and is intended for loosely coupled thermomechanical problems. The technique makes use of analytical expressions for the sensitivities, enabling a reduction in computational cost through the use of a coupled-field adjoint equation, defined in terms of the temperature and displacement fields. The TOM used is based on the material approach. Thus, so that the domain is composed of a continuous distribution of material, enabling the use of classical nonlinear programming models for the optimization problem, the microstructure is treated as a porous medium and its constitutive equation is a function only of the homogenized relative density of the material. In this approach, the actual properties of materials with intermediate densities are penalized through an artificial microstructure model based on SIMP (Solid Isotropic Material with Penalization). To circumvent checkerboard problems and to reduce the dependence of the final optimal layout on the initial mesh, both caused by numerical instability, restrictions on the components of the gradient of the relative densities were applied. The optimization problem is solved by the augmented Lagrangian method, with the solution obtained by the Galerkin finite element method, using the Tetra4 finite element for the approximation. This element interpolates the relative density as well as the displacement and temperature components. As for the definition of the problem, the heat load is assumed to be in steady state, i.e., the effects of heat conduction and convection do not vary with time, and the mechanical load is assumed to be static and distributed.
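
For reference, a minimal statement of the SIMP penalization mentioned above, in its standard textbook form (the thesis's exact formulation, including its thermal counterpart, is not reproduced here):

```latex
% SIMP: effective properties at intermediate densities are penalized so the
% optimizer is driven toward solid (rho = 1) or void (rho = rho_min) material.
E(\rho) = \rho^{p} E_{0}, \qquad
k(\rho) = \rho^{p} k_{0}, \qquad
0 < \rho_{\min} \le \rho \le 1, \quad p > 1
```

With the usual choice p = 3, an element at intermediate density rho = 0.5 contributes only 12.5% of the solid stiffness, which makes intermediate densities uneconomical for the optimizer.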

Relevance: 30.00%

Abstract:

Problem-based teaching has been investigated in the didactics of the natural sciences as an important means of developing the learning of scientific knowledge and the formation of basic competencies. Given the importance of textbooks for science teaching, and in order to verify the problem-based teaching approach in chemistry textbooks, an investigation was carried out on the works approved in PNLD 2012, based on the Content Analysis method. The content on atomic structure was analyzed, taking as theoretical framework the perspective of problem-based teaching, grounded in historical and dialectical materialism. Methodologically, the research has a qualitative character. The results of the content analysis corroborated the initial study questions related to explanation focused on problems, which allowed us to propose a Didactic Unit based on problem-based methods for teaching atomic models through problem-based exposition, heuristic conversation, and partial search, as a way of bringing students closer to the nature of the natural sciences and contributing to the development of positive attitudes in the learning of chemistry.

Relevance: 30.00%

Abstract:

In this work we elaborate a spline-based method for the solution of initial value problems involving ordinary differential equations, with emphasis on linear equations. The method can be seen as an alternative to traditional solvers such as Runge-Kutta, and it avoids root calculations in the linear time-invariant case. The method is then applied to a central problem of control theory, namely the step response problem for linear ODEs with possibly varying coefficients, where root calculations do not apply. We implemented an efficient algorithm that uses exclusively matrix-vector operations. The working interval (up to the settling time) was determined through a calculation of the least stable mode using a modified power method. Several variants of the method were compared by simulation. For general linear problems with a fine grid, the proposed method compares favorably with the Euler method. In the time-invariant case, where the alternative is root calculation, we have indications that the proposed method is competitive for equations of sufficiently high order.
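
One ingredient mentioned above can be sketched directly: estimating the least stable mode by power iteration on the discrete propagator exp(Ah). This is a plain power iteration; the modified variant used in the work is not reproduced, and the test system is a placeholder:

```python
import numpy as np
from scipy.linalg import expm

# Placeholder stable system x' = A x with eigenvalues -1 and -2: the least
# stable mode is -1, so the 2% settling time should come out near 4 s.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
h = 0.01
P = expm(A * h)                      # discrete propagator over one step

v = np.ones(A.shape[0])
mu = 1.0
for _ in range(5000):                # plain power iteration on P
    w = P @ v
    mu = np.linalg.norm(w)
    v = w / mu
lam = np.log(mu) / h                 # real part of the dominant mode of A
settling_time = 4.0 / abs(lam)       # usual 2% settling-time rule of thumb
print(f"least stable mode ~ {lam:.3f}, settling time ~ {settling_time:.2f} s")
```

The dominant eigenvalue of exp(Ah) corresponds to the eigenvalue of A with the largest real part, so its logarithm divided by h recovers the slowest decay rate, and with it the working interval.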

Relevance: 30.00%

Abstract:

The intensification of fear in the city, and of the spaces controlled by this feeling, has contributed to growing socio-spatial inequality and to the rapid growth of the protection market. Residential condominiums emerge as a supposed solution to the problem. This housing typology, expanding worldwide, is seen, especially by the urban middle class, as an enabler of quality of life and safety. In Brazil, especially in large cities, the quest for quality of life is directly connected with the desire for security, translated through the control of space (high walls, gates, entrance halls, security cameras) and of the people who use it. This thesis investigates how the different categories of inhabitants of an area predominantly occupied by vertical residential condominiums perceive the socio-spatial dimension and the socio-urban space determined by this type of development. It especially considers the issue of urban insecurity, based on the assumption that, although advertised and sold by marketing as "safe places", synonymous with well-being and supportive of "community life", living in these condominiums may even inhibit social relationships, contributing to socio-spatial isolation and a consequent weakening of social ties. The research follows the assumptions of Environmental Psychology for the comprehension of person-environment studies, emphasizing the use of different methods (desk research, observation, and group interviews using the focus group technique with photographic resources), as well as a focus on current problems of the urban scene and the knowledge accumulated in Social Psychology.

Relevance: 30.00%

Abstract:

This research aims to identify the learning problems of newly enrolled undergraduate students, to interpret the nature and causes of these problems, and to offer subsidies to overcome these difficulties, enabling a meaningful learning through which students give meaning to what they learn. The theme Chemical Bonds, covering the forces between atoms that form molecules, compound ions, and ionic crystalline structures, was chosen as the object of this work, as it is one of the most important subjects of Chemistry. The research used a questionnaire with five open questions, answered by 147 students from the early semesters of Chemistry degree programs at the Universidade Federal do Rio Grande do Norte. The answers revealed uncertainty on the part of the students, both conceptual and representational, with superficial justifications that always resorted to the octet rule to describe models of chemical bonds. The results suggest that these students had inadequate training in high school and that the entrance examinations applied flexible criteria, less demanding in terms of knowledge. These observations led to the conclusion that, for future change, high schools and the early semesters of university courses need to adopt contextualized pedagogical approaches and apply strategies to overcome the superficial, memorization-based teaching of Chemical Bonds, which has probably also affected the teaching of other chemistry subjects.

Relevance: 30.00%

Abstract:

Soil contamination by pesticides is an environmental problem that needs to be monitored and avoided. However, the lack of fast, accurate, and low-cost analytical methods for detecting residual pesticides in complex matrices, such as soil, is a problem still unresolved, and it needs to be solved before we are able to assess the quality of environmental samples. The intensive use of pesticides has increased since the 1960s because of the dependence on their use, causing biological imbalances and promoting resistance and the resurgence of high populations of pests and pathogens, which has contributed to the appearance of new pests that were previously under natural control. Developing analytical methods able to quantify pesticide residues in complex environments is still a challenge for many laboratories. The integration of two analytical methods, one ecotoxicological and one chemical, demonstrates the potential for environmental analysis of methamidophos. The aim of this study was to evaluate an ecotoxicological method as an analytical screening for methamidophos in soil and to confirm the analyte concentration in the samples by the LC-MS/MS chemical method. Two soils were tested, one clayey and one sandy; in both, the contact kinetics of methamidophos followed a pseudo-second-order model. The clayey soil showed higher adsorption of methamidophos and followed the Freundlich model, while the sandy soil followed the Langmuir model. The LC-MS/MS chemical method was validated satisfactorily, with adequate linearity, range, precision, accuracy, and sensitivity. In chronic ecotoxicological tests with C. dubia, the NOEC was 4.93 and 3.24 ng L-1 of methamidophos for elutriate assays of the sandy and clayey soils, respectively. The ecotoxicological method was more sensitive than LC-MS/MS for the detection of methamidophos in the clayey and sandy soils. However, by decreasing the concentration of the analytical standard for methamidophos and adjusting the validation conditions, the chemical method reaches a limit of quantification (LOQ) in ng L-1, consistent with what the ecotoxicological test provides. The methods described can be used as an analytical tool for methamidophos in soil, with the ecotoxicological analysis serving as a screening step and LC-MS/MS as confirmatory analysis of the analyte molecule, fulfilling the objectives of this work.
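
For reference, the standard textbook forms of the kinetic and isotherm models named above (general forms only; the parameters fitted in the thesis are not reproduced here):

```latex
% Pseudo-second-order kinetics: q_t is the sorbed amount at time t,
% q_e the amount at equilibrium, k_2 the rate constant.
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}

% Freundlich isotherm (empirical, heterogeneous surfaces):
q_e = K_F \, C_e^{1/n}

% Langmuir isotherm (monolayer adsorption with capacity q_max):
q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e}
```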