41 results for "Uso do tempo" (time use)

at the Universidade Federal do Rio Grande do Norte (UFRN)


Relevance: 60.00%

Publisher:

Abstract:

This work used literature to discuss the topic of food, specifically proper bourgeois cuisine. As a corpus, we used one of Eça de Queiroz's works, The City and the Mountains. The theoretical references were Claude Lévi-Strauss's concept of a universal culinary and Jean-Claude Fischler's concept of a specific culinary, which understands food as a cultural system comprising the representations, beliefs, and practices of a specific group. After an initial reading of the novel and the construction of a file containing general information about the work, the categories designed for preparing the analysis material were: work, characters, food, intellectuals, and geographies. We found the culinary to be an epicenter for understanding the culture of a specific group, in this case the bourgeoisie. We proposed a quaternary model for systematizing it: this bourgeois cuisine highlights technique; has affection for what is rare and/or expensive, yet consumes it with temperance; establishes a new relationship with the use of time; and, finally, opens the ritual of frequenting restaurants and cafés. The exercise of thinking about bourgeois cuisine through literature suggests that art may help increase the comprehensive capabilities of nutritionists, professionals who deal with a complex object in their practice: food.

Relevance: 40.00%

Publisher:

Abstract:

During salt production, the first salt crystals formed are disposed of as industrial waste. This waste consists basically of gypsum, composed of calcium sulfate dihydrate (CaSO4·2H2O) and known locally as "carago cru" or "malacacheta". After being submitted to calcination to produce plaster (CaSO4·0.5H2O), it can be applied in the cement industry. This work aims to optimize the time and temperature of the calcination of the gypsum (carago) in order to obtain beta plaster that meets civil construction standards. The experiments involved the chemical and mineralogical characterization of the gypsum (carago) from the crystallizers, and of the plaster produced by the salt industry located in Mossoró, using the following techniques: X-ray diffraction (XRD), X-ray fluorescence (XRF), thermogravimetric analysis (TG/DTG), and scanning electron microscopy (SEM) with EDS. To optimize the time and temperature of the calcination process, a three-level factorial design was used, with response surfaces built from compressive strength tests and setting time, according to standard NBR-13207 (Plasters for civil construction), together with X-ray diffraction of the beta plasters (carago) obtained by calcination. The STATISTICA 7.0 software was used to fit the experimental data to a statistical model.
The calcination of the gypsum (carago) was optimized over the temperature range from 120 °C to 160 °C and times from 90 to 210 minutes, in an oven at atmospheric pressure. It was found that increasing the temperature to 160 °C and the calcination time to 210 minutes yielded compressive strength values above 10 MPa, which conform to the required standard (> 8.40 MPa), and X-ray diffractograms showing the predominance of the beta hemihydrate phase. The result is a good-quality beta plaster that complies with the standards in force, giving this by-product of the salt industry employability in civil construction.
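The three-level factorial design used above can be sketched as follows. This is a minimal illustration, not the thesis data: a coded 3² design over temperature and time with synthetic compressive-strength responses, fitted to a quadratic response-surface model with numpy.

```python
import numpy as np

# Hypothetical 3^2 factorial design: temperature (120-160 °C) and time
# (90-210 min), coded to -1, 0, +1. The responses are illustrative
# compressive strengths (MPa), not the thesis measurements.
levels = [-1.0, 0.0, 1.0]
X_raw = np.array([(T_, t_) for T_ in levels for t_ in levels])
y = np.array([7.1, 7.9, 8.6, 8.0, 9.0, 9.8, 8.9, 10.1, 11.2])

# Quadratic response-surface model:
# y = b0 + b1*T + b2*t + b3*T*t + b4*T^2 + b5*t^2
T, t = X_raw[:, 0], X_raw[:, 1]
A = np.column_stack([np.ones_like(T), T, t, T * t, T**2, t**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
best = X_raw[np.argmax(pred)]
print(best)  # the (+1, +1) corner, i.e. 160 °C and 210 min, maximizes strength
```

With responses that grow in both factors, the fitted surface peaks at the highest coded levels, mirroring the conclusion that 160 °C and 210 minutes gave the best compressive strength.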

Relevance: 30.00%

Publisher:

Abstract:

This study aimed to examine how students perceive the factors that may influence them to attend a training course offered in the distance virtual learning environment (VLE) of the National School of Public Administration (ENAP). The theoretical basis was the Unified Theory of Acceptance and Use of Technology (UTAUT), the result of the integration of eight previous models that aimed to explain the same phenomenon (acceptance/use of information technology). The research approach was both quantitative and qualitative. To achieve the study objectives, five semi-structured interviews were conducted and an online questionnaire (web survey) was applied to a valid sample of 101 public employees spread throughout the country. The quantitative data were analyzed using structural equation modeling (SEM), by the Partial Least Squares Path Modeling (PLS-PM) method; the qualitative data were examined through thematic content analysis. Among the results, it was found that, in the context of the public service, the belief a public employee holds in the use of a VLE as a way to improve job performance (performance expectancy) was determinant for the intention to use it, which in turn influenced actual use. It was also confirmed that, under voluntary use of technology, the general opinion of the student's social circle (social influence) has no effect on the intention to use the VLE. Effort expectancy and facilitating conditions were not directly related to intended use and use, respectively.
However, it emerged from the students' statements that the opinions of their coworkers, the ease of handling the VLE, the flexibility of time and place of the distance learning program, and the presence of a tutor are important to their intention to take a distance learning course. Given these results, it is expected that the managers of ENAP's distance learning program will direct their efforts toward reducing the causes of non-use among those unwilling to adopt e-learning voluntarily, and toward enhancing the potential of distance learning for those who are already users.
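The structural chain the study estimates (performance expectancy → intention to use → use) can be illustrated with a minimal sketch. This is not PLS-PM as applied in the thesis, only two standardized ordinary-least-squares paths on synthetic data; the variable scales and coefficients are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 101  # same sample size as the survey; the data are entirely synthetic

# Observed proxies on a 1-7 Likert-like scale
performance_expectancy = rng.uniform(1, 7, n)
intention = 0.6 * performance_expectancy + rng.normal(0, 1, n)
use = 0.5 * intention + rng.normal(0, 1, n)

def path_coef(x, y):
    """Standardized slope of y on x (one structural path)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.polyfit(x, y, 1)[0])

b1 = path_coef(performance_expectancy, intention)  # PE -> intention
b2 = path_coef(intention, use)                     # intention -> use
print(round(b1, 2), round(b2, 2))  # both clearly positive
```

A real PLS-PM analysis would estimate latent variables from multiple indicators and bootstrap the path significance; the sketch only shows the direction of the two paths the study found determinant.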

Relevance: 30.00%

Publisher:

Abstract:

The development and study of low-cost detectors sensitive to flammable and toxic gases is a crucial technological challenge for bringing marketable versions to the general market. Solid-state sensors are attractive for commercial purposes because of their strength and lifetime, since they are not consumed in the reaction with the gas. In parallel, synthesis techniques that are more viable on an industrial scale are more attractive for producing commercial products. In this context, ceramics with spinel structure were obtained by microwave-assisted combustion for application in flammable fuel gas detectors. Additionally, alternative organic reducers were employed to study their influence on the synthesis process and the resulting differences in performance and properties of the powders obtained. The organic reducers were characterized by thermogravimetry (TG) and derivative thermogravimetry (DTG). After synthesis, the samples were heat-treated and characterized by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), specific surface area analysis by the BET method, and scanning electron microscopy (SEM) with EDS. Phase quantification and structural parameters were obtained through the Rietveld method. The methodology was effective in obtaining Ni-Mn mixed oxides. The fuels influenced the formation of the spinel phase and the morphology of the samples; however, in samples calcined at 950 °C only the spinel phase is present in the material, regardless of the organic reducer. Therefore, differences in performance are expected in technological applications when samples with the same phase but different morphologies are tested.

Relevance: 30.00%

Publisher:

Abstract:

Oil wells subjected to cyclic steam injection present important challenges for the development of well cementing systems, mainly due to tensile stresses caused by thermal gradients during the well's useful life. Cement sheath failures in wells using conventional high-compressive-strength systems have led to the use of cement systems that are more flexible and/or ductile, with emphasis on Portland cement systems with latex addition. Recent research has presented geopolymeric systems as alternatives. These cementing systems are based on the alkaline activation of amorphous aluminosilicates, such as metakaolin or fly ash, and display advantageous properties such as high compressive strength, fast setting, and thermal stability. Basic geopolymeric formulations that meet basic oil-industry specifications, such as rheology, compressive strength, and thickening time, can be found in the literature. In this work, new geopolymeric formulations were developed, based on metakaolin, potassium silicate, potassium hydroxide, silica fume, and mineral fiber, using the state of the art in chemical composition, mixture modeling, and additivation to optimize the properties most relevant to oil well cementing. Starting from the molar ratios considered ideal in the literature (SiO2/Al2O3 = 3.8 and K2O/Al2O3 = 1.0), a study of dry mixtures was performed, based on the compressible packing model, resulting in an optimal volume of 6% for the added solid material. This material (silica fume and mineral fiber) works both as an additional silica source (in the case of silica fume) and as mechanical reinforcement, especially in the case of the mineral fiber, which increased the tensile strength. The first triaxial mechanical study of this class of materials was performed. For comparison, a mechanical study of conventional latex-based cementing systems was also carried out.
Regardless of the differences in failure mode (brittle for geopolymers, ductile for latex-based systems), the superior uniaxial compressive strength (37 MPa for the geopolymeric slurry P5 versus 18 MPa for the conventional slurry P2), similar triaxial behavior (friction angle of 21° for both P5 and P2), and lower stiffness (5.1 GPa in the elastic region for P5 versus 6.8 GPa for P2) of the geopolymeric systems allowed them to withstand a similar amount of mechanical energy (155 kJ/m³ for P5 versus 208 kJ/m³ for P2). Notably, the geopolymers work in the elastic regime, without the microcracking present in the latex-based systems; therefore, the geopolymers studied in this work must be designed for application in the elastic region to avoid brittle failure. Finally, the tensile strength of geopolymers is originally poor (1.3 MPa for the geopolymeric slurry P3) because of their brittle structure. However, after additivation with mineral fiber, the tensile strength became equivalent to that of the latex-based systems (2.3 MPa for P5 versus 2.1 MPa for P2). The technical viability of the conventional and proposed formulations was evaluated over the whole well life, including the stresses due to cyclic steam injection, using finite-element simulation software. It was verified that the conventional slurries are viable up to 400 °F (204 °C) and the geopolymeric slurries are viable above 500 °F (260 °C).
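The mechanical energy figures quoted above (kJ/m³) are energy densities, i.e. areas under the stress-strain curve. A minimal sketch of that calculation, assuming a purely linear-elastic curve with a stiffness close to the geopolymer's 5.1 GPa and a hypothetical failure strain (the numbers are illustrative, not the thesis measurements):

```python
import numpy as np

E = 5.1e9                               # Young's modulus, Pa (elastic region)
strain = np.linspace(0.0, 0.008, 100)   # dimensionless, up to brittle failure
stress = E * strain                     # Pa, linear-elastic response

# Energy density = area under the stress-strain curve (trapezoidal rule)
energy_density = float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))
print(round(energy_density / 1e3, 1), "kJ/m^3")  # 163.2 kJ/m^3
```

For a linear curve this reduces to ½·E·ε², which is why a lower-stiffness material can absorb a comparable amount of energy while staying in the elastic regime.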

Relevance: 30.00%

Publisher:

Abstract:

With the advances in medicine, the life expectancy of the world population has grown considerably in recent decades. Studies have been performed in order to maintain quality of life through the development of new drugs and new surgical procedures. Biomaterials are one example of research aimed at improving quality of life, and their use ranges from the reconstruction of tissues and organs affected by disease or other types of failure to drug delivery systems able to prolong a drug's residence in the body and increase its bioavailability. Biopolymers are a class of biomaterials widely targeted by researchers, since they have properties that are ideal for biomedical applications, such as high biocompatibility and biodegradability. Poly(lactic acid) (PLA) is a biopolymer used as a biomaterial, and its monomer, lactic acid, is eliminated through the Krebs cycle (citric acid cycle). PLA can be synthesized through various routes; however, direct polycondensation is cheaper because it involves few polymerization steps. In this work, we used design of experiments (DOE) to produce PLAs of different molecular weights by the direct polycondensation of lactic acid, with characteristics suitable for use in drug delivery systems (DDS). Through the experimental design, it was noted that the esterification time is the most important stage of direct polycondensation for obtaining a higher molecular weight. The Fourier transform infrared (FTIR) spectra obtained were equivalent to those of PLAs reported in the literature. Differential scanning calorimetry (DSC) results showed that all the PLAs produced are semicrystalline, with glass transition temperatures (Tg) between 36 and 48 °C and melting temperatures (Tm) ranging from 117 to 130 °C. The PLA molecular weights, characterized by size exclusion chromatography (SEC), varied from 1,000 to 11,000 g/mol. The PLAs obtained showed a fibrous morphology, as characterized by scanning electron microscopy (SEM).
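The kind of main-effects reading a factorial design provides can be sketched as follows. The 2³ design, factor names, and molecular-weight responses below are hypothetical, arranged so that esterification time dominates, as the study found.

```python
import numpy as np
from itertools import product

# Hypothetical 2^3 factorial design for direct polycondensation:
# esterification time, temperature and catalyst amount at coded levels -1/+1.
# The molecular weights (g/mol) are synthetic, not the thesis data.
factors = ["esterification_time", "temperature", "catalyst"]
design = np.array(list(product([-1, 1], repeat=3)))
mw = np.array([1500, 2000, 1800, 2400, 8500, 9800, 9100, 11000])

# Main effect of each factor: mean response at +1 minus mean response at -1
effects = {f: mw[design[:, i] == 1].mean() - mw[design[:, i] == -1].mean()
           for i, f in enumerate(factors)}
dominant = max(effects, key=effects.get)
print(dominant)  # esterification_time
```

Comparing main effects in this way is how a DOE identifies which processing variable to prioritize when targeting a higher molecular weight.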

Relevance: 30.00%

Publisher:

Abstract:

The aim of this study was to investigate the relationship between free time for leisure and the body composition of ninth-grade students (N = 228) from towns in the Midwest region of Santa Catarina. We used the Santa Catarina Adolescent Behavior Questionnaire (COMPAC) to assess lifestyle, considering as active those schoolchildren who accumulated 300 or more minutes of moderate or vigorous physical activity (MVPA) during the week. A cutoff of 2 or more hours per day was used to characterize excessive time spent on TV, computer, and video games. To analyze body composition, two criteria were used: the body mass index (BMI) and the sum of skinfolds (SSF). A proportion of 67.3% of the girls and 68.7% of the boys were active, and more than 98% of the students reported excessive time on TV, computer, and video games. By the skinfold criterion, most of the boys showed high or low body composition levels, while more than half of the girls were classified at higher levels. As for BMI, most boys and girls were not overweight. A significant difference was found when comparing the total reported minutes of weekly MVPA between groups according to the skinfold criterion, and according to BMI for girls, but not for BMI in boys. It is concluded that students with a higher accumulation of MVPA minutes showed better body composition indicators, but no significant difference was found when active and inactive groups were compared according to the criteria used.

Relevance: 30.00%

Publisher:

Abstract:

This thesis describes and analyzes various processes established and practiced by two groups around the sociocultural objective (action) of measuring and marking time, mobilizing socio-historical practices such as the use of the gnomon of the sundial and the reading and interpretation of the celestial movements of constellations, in cultural contexts such as indigenous and fishing communities in the state of Pará, Brazil. The purpose of the study was to describe and analyze the mobilization of such socio-historical practices in the development of matrices for teaching concepts and skills related to geometric angles, similar triangles, symmetry, and proportionality in the training of mathematics teachers. The record of the entire history of the investigation into the socio-historical practices and the formative action was based on the epistemological assumptions of ethnomathematics education proposed by Vergani (2000, 2007) and Ubiratan D'Ambrosio (1986, 1993, 1996, 2001, 2004), and on Alan Bishop's conceptions of mathematical enculturation. At the end of the study, I present my views on the contributions of these so-called sociocultural and historical practices to school mathematics, in giving meaning to concept formation and to the teaching of students, especially the implications of the ethnomathematics education proposed by Vergani (2000) for the training of future mathematics teachers.
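The gnomon practice mentioned above rests on the proportionality of similar triangles: at the same moment, the height-to-shadow ratio is the same for every vertical object. A minimal worked example, with hypothetical measurements:

```python
# Similar-triangles reasoning behind the gnomon: at the same moment, the
# gnomon and a nearby object cast shadows proportional to their heights.
# All numbers are hypothetical, for illustration only.
gnomon_height = 1.0   # meters
gnomon_shadow = 0.8   # meters
tree_shadow = 4.0     # meters

# height / shadow is the same ratio for both triangles
tree_height = gnomon_height * tree_shadow / gnomon_shadow
print(tree_height)  # 5.0 meters
```

This is precisely the proportionality and similar-triangles content the thesis proposes to carry from the cultural practice into teacher training.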

Relevance: 30.00%

Publisher:

Abstract:

Immersed in the present time, humorous drawings, through their capacity to represent, suggest, and communicate an idea, are present at school and in the classroom. Characterized by the use of comic, satirical, and ironic elements, in addition to their persuasive nature, these drawings enable the reader to make a critical reading of the social and political events of our society. As a visual language, structured in verbal and iconic forms, and through the analogical character of their representation, humorous drawings constitute an excellent pedagogical resource. Nevertheless, they long went unnoticed by the school, and only recently have they become an object of investigation by historians. In this sense, we set out in this study to analyze the use of these drawings by history teachers in the public schools named Centros Paraibanos de Educação Solidária (CEPES), in João Pessoa, capital of the state of Paraíba, in order to apprehend and discuss the way these teachers make use of such drawings in their pedagogical practice. Through the teachers' actions, conceived as "arts of doing" in Certeau's sense, and through the identification of uses that can be characterized as tactics, we tried to perceive how the relationship between humor and history unfolds in the classroom. The systematization, categorization, and narration of the pedagogical practices observed were carried out through the analysis of questionnaires and interviews applied to teachers and students, as well as through classroom observation. Our research was grounded in the theories of Roger Chartier and Michel de Certeau, whose concepts of representation and appropriation, of uses and tactics, helped us understand how the subjects embedded in the daily life of the classroom appropriated the imagistic dimension through humor.
From the concepts of use and appropriation, we identified, in actions and speech, the way the humorous drawings are worked on by the teachers. Conceived as visual records that relate social, political, and economic questions, these drawings thereby identify the adversities of the present in the social world.

Relevance: 30.00%

Publisher:

Abstract:

The last decade, characterized by the vertiginous growth of the worldwide computer network, brought radical changes in the use of information and communication. Internet use in the business world has been widely studied; however, there is little research on the academic use of this technology, especially in institutions of technological education. In this context, this research analyzed Internet use in a Brazilian institution of technological education, the Centro Federal de Educação Tecnológica do Rio Grande do Norte (CEFET/RN), examining the pattern of use of these information technology (IT) tools and, at the same time, studying the determinant factors of this use. To reach the proposed objectives, a survey was carried out, with data collected through a questionnaire applied to 150 teachers, who answered a set of closed and scaled questions. The quantitative data were qualitatively analyzed, arriving at some significant results related to the pattern of use and the factors that influence the use of these Internet technologies: age range, level of exposure to computers, area of academic training, area of knowledge in which the teacher works, and academic degree all exert significant influence on the academic use of the Internet among the professors.

Relevance: 30.00%

Publisher:

Abstract:

The usual programs for load flow calculation were, in general, developed with the simulation of electric energy transmission, subtransmission, and distribution systems in mind. However, the mathematical methods and algorithms used by these formulations were mostly based only on the characteristics of transmission systems, which were the main concern of engineers and researchers. Yet the physical characteristics of those systems are quite different from those of distribution systems. In transmission systems, the voltage levels are high and the lines are generally very long. These aspects cause the capacitive and inductive effects that appear in the system to have a considerable influence on the values of the quantities of interest, which is why they should be taken into consideration. Also, in transmission systems the loads have a macro nature, for example cities, neighborhoods, or big industries. These loads are generally practically balanced, which reduces the need for a three-phase methodology in load flow calculation. Distribution systems, on the other hand, present different characteristics: the voltage levels are low in comparison with transmission levels, which almost annuls the capacitive effects of the lines. The loads, in this case, are transformers whose secondaries feed small consumers, often single-phase ones, so that the probability of finding an unbalanced circuit is high. Thus, the use of three-phase methodologies takes on an important dimension. Besides, equipment such as voltage regulators, which simultaneously use the concepts of phase and line voltage in their operation, needs a three-phase methodology in order to allow the simulation of its real behavior. For these reasons, a method for three-phase load flow calculation was initially developed in the scope of this work, in order to simulate the steady-state behavior of distribution systems.
To achieve this goal, the Power Summation Algorithm was used as the basis for developing the three-phase method. This algorithm has already been widely tested and approved by researchers and engineers in the simulation of radial electric energy distribution systems, mainly with a single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between the phases; the earth effect is considered through the Carson reduction. It is important to point out that, although the loads are normally connected to the transformers' secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used, allowing the simulation of various types of configuration according to their real operation. Finally, the representation of switches with current measurement at various points of the feeder was considered. The loads are adjusted during the iterative process in order to match the current in each switch, converging to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow, with the objective of supporting further optimization processes. These parameters are found by calculating the partial derivatives of one variable with respect to another, in general voltages, losses, and reactive powers. After describing the calculation of the sensitivity parameters, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study refers to the reduction of technical losses in a medium-voltage feeder through the installation of capacitor banks; the second refers to the correction of the voltage profile through the installation of capacitor banks or voltage regulators.
In the case of loss reduction, the objective function considered is the sum of the losses in all parts of the system. For the correction of the voltage profile, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of the application of the described methods to some feeders are presented, in order to give insight into their performance and accuracy.
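The backward/forward sweep idea behind power-summation load flow can be sketched as follows. This is a deliberately simplified single-phase version on a synthetic three-bus radial feeder; it neglects losses in the backward sweep and omits the three-phase coupling, regulators, and switch-current matching of the thesis.

```python
# Minimal single-phase backward/forward sweep on a radial feeder.
# Impedances and loads are synthetic, for illustration only.
V_source = complex(13800 / 3**0.5, 0)          # substation phase voltage, V
z = [complex(0.5, 1.0), complex(0.4, 0.8)]     # line impedances, ohms
s_load = [complex(300e3, 100e3), complex(200e3, 80e3)]  # bus loads, VA

V = [V_source, V_source, V_source]             # flat start
for _ in range(20):
    # Backward sweep: branch currents from the latest voltage estimates,
    # accumulated from the feeder end toward the source.
    I2 = (s_load[1] / V[2]).conjugate()        # current into bus 2
    I1 = (s_load[0] / V[1]).conjugate() + I2   # current into bus 1 + downstream
    # Forward sweep: update voltages from the source down.
    V[1] = V[0] - z[0] * I1
    V[2] = V[1] - z[1] * I2

print(round(abs(V[2]), 1))  # voltage magnitude at the feeder end, volts
```

The full three-phase formulation replaces the scalar impedances with 3x3 phase-impedance matrices (Carson reduction) and sweeps all three phases at once, which is what captures the unbalanced behavior discussed above.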

Relevance: 30.00%

Publisher:

Abstract:

This thesis proposes the specification and performance analysis of a real-time communication mechanism for the IEEE 802.11/11e standards, called Group Sequential Communication (GSC). The GSC performs better than the HCCA mechanism when dealing with small data packets, by adopting decentralized medium access control with a publish/subscribe communication scheme. The main objective of the thesis is to reduce the HCCA overhead of the Polling, ACK, and QoS Null frames exchanged between the Hybrid Coordinator and the polled stations. The GSC eliminates the polling scheme used by the HCCA scheduling algorithm through a Virtual Token Passing procedure among the members of the real-time group, to which high-priority, sequential access to the communication medium is granted. To improve the reliability of the proposed mechanism over a noisy channel, an error recovery scheme called the second chance algorithm is presented. This scheme is based on a block acknowledgment strategy that makes it possible to retransmit missing real-time messages. Thus, the GSC mechanism sustains real-time traffic across many IEEE 802.11/11e devices, with optimized bandwidth usage and minimal delay variation for the data packets in the wireless network. To validate the communication scheme, the GSC and HCCA mechanisms were implemented in network simulation software developed in C/C++, and their performance results were compared. The experiments show the efficiency of the GSC mechanism, especially in industrial communication scenarios.
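The Virtual Token Passing procedure can be sketched as a fixed transmission sequence over the real-time group, with no explicit polling frames. The station names are hypothetical and the sketch ignores the radio medium, timing, and the second chance algorithm; it only shows the sequential, poll-free access order.

```python
from collections import deque

# Real-time group in its fixed virtual-ring order (hypothetical station IDs)
group = deque(["sta1", "sta2", "sta3"])
log = []

def pass_token(rounds):
    """Each step, the head of the virtual ring transmits, then the token
    implicitly moves to the next member (deque rotation). No Polling, ACK,
    or QoS Null frames are exchanged to grant access."""
    for _ in range(rounds * len(group)):
        holder = group[0]
        log.append(holder)   # holder transmits its real-time message
        group.rotate(-1)     # virtual token passes to the next station

pass_token(2)
print(log)  # ['sta1', 'sta2', 'sta3', 'sta1', 'sta2', 'sta3']
```

Because every member knows the sequence, medium access is granted implicitly, which is the source of the overhead reduction relative to HCCA's per-station polling.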

Relevance: 30.00%

Publisher:

Abstract:

The predictive control technique has gained, in recent years, a greater number of adherents because of the easy adjustment of its parameters, the extension of its concepts to multi-input/multi-output (MIMO) systems, the fact that nonlinear process models can be linearized around an operating point for direct use in the controller and, mainly, because it is the only methodology that can take into consideration, during the controller design, the limitations of the control signals and of the process output. The time-varying weighting generalized predictive control (TGPC), studied in this work, is one more alternative among the several existing predictive controllers. It is a modification of generalized predictive control (GPC) that uses a reference model, calculated according to design parameters previously established by the designer, and a new criterion function which, when minimized, yields the best parameters for the controller. The genetic algorithm technique is used to minimize the proposed criterion function, and the robustness of the TGPC is demonstrated through performance, stability, and robustness criteria. To compare the results achieved with the TGPC controller, GPC and proportional-integral-derivative (PID) controllers are used, with all the techniques applied to stable, unstable, and non-minimum-phase plants. The simulated examples are carried out with MATLAB. It is verified that the modifications implemented in the TGPC demonstrate the efficiency of this algorithm.
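The role of the genetic algorithm here, minimizing a criterion function to obtain controller parameters, can be sketched as follows. The quadratic cost and the two gains are toy stand-ins for the thesis's TGPC criterion, which is not reproduced.

```python
import random

random.seed(1)

# Toy criterion with optimum at gains (2.0, 0.5); a real TGPC criterion
# would penalize tracking error and control effort over a horizon.
def cost(gains):
    kp, ki = gains
    return (kp - 2.0) ** 2 + (ki - 0.5) ** 2

def evolve(pop_size=30, generations=60):
    pop = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]          # selection: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]        # crossover
            child = [g + random.gauss(0, 0.1) for g in child]  # mutation
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
print([round(g, 1) for g in best])  # near [2.0, 0.5]
```

Since the elite half is carried over unchanged, the best criterion value never worsens between generations, which is the property that makes the GA a reliable minimizer for a controller criterion like this.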

Relevance: 30.00%

Publisher:

Abstract:

In this work, Markov chains are the tool used for the modeling and convergence analysis of the genetic algorithm, both in its standard version and in the other versions the genetic algorithm admits. In addition, we intend to compare the performance of the standard version with the fuzzy version, believing that the latter gives the genetic algorithm a great ability to find a global optimum, a property of global optimization algorithms. The choice of this algorithm is due to the fact that it has become, over the past thirty years, one of the most important tools for solving optimization problems. It is effective in finding good-quality solutions, and a good-quality solution becomes acceptable given that there may be no algorithm able to obtain the optimal solution for many of these problems. However, the algorithm's behavior depends not only on how the problem is represented and how the operators are defined, but also on its configuration, ranging from the standard version, in which the parameters are kept fixed, to versions with variable parameters. Therefore, to achieve good performance with this algorithm it is necessary to have an adequate criterion for the choice of its parameters, especially the mutation rate, the crossover rate, and the population size. It is important to remember that, in implementations where the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain results in a homogeneous chain; when the parameters are allowed to vary during the execution, the Markov chain that models the algorithm becomes non-homogeneous. Therefore, in an attempt to improve the algorithm's performance, some studies have tried to set the parameters through strategies that capture the intrinsic characteristics of the problem.
These characteristics are extracted from the current state of the execution, in order to identify and preserve patterns related to good-quality solutions while discarding low-quality ones. Strategies for feature extraction can use either precise techniques or fuzzy techniques, the latter implemented through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm, both in its standard version and in the others. To evaluate the performance of the non-homogeneous algorithm, tests are applied comparing the standard genetic algorithm with the fuzzy genetic algorithm, whose mutation rate is adjusted by a fuzzy controller. For this purpose, optimization problems whose number of solutions varies exponentially with the number of variables are chosen.
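A genetic algorithm whose mutation rate changes during the run, which is what makes its Markov-chain model non-homogeneous, can be sketched as follows. The diversity-based rule below is a crude stand-in for the fuzzy controller, and OneMax is a toy problem; neither comes from the thesis.

```python
import random

random.seed(3)

TARGET = 20  # maximize the number of ones in a 20-bit string (OneMax)

def fitness(ind):
    return sum(ind)

def diversity(pop):
    """Fraction of bit positions on which the population still disagrees."""
    cols = list(zip(*pop))
    return sum(0 < sum(c) < len(pop) for c in cols) / TARGET

pop = [[random.randint(0, 1) for _ in range(TARGET)] for _ in range(20)]
for _ in range(200):
    # Adaptive rule (fuzzy-controller stand-in): raise the mutation rate when
    # diversity is low, lower it when diversity is high. Because the rate
    # depends on the current state, the transition matrix varies over time.
    rate = 0.2 if diversity(pop) < 0.3 else 0.05
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                           # elitist selection
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, TARGET)
        child = a[:cut] + b[cut:]                # one-point crossover
        child = [1 - g if random.random() < rate else g for g in child]
        children.append(child)
    pop = parents + children

print(max(fitness(ind) for ind in pop))  # best OneMax score found
```

With fixed `rate` the chain would be homogeneous; letting `rate` depend on the execution state is exactly the non-homogeneous case the convergence analysis addresses.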