25 results for generalization

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance: 10.00%

Publisher:

Abstract:

This work presents a study of the control of constitutionality, its requirements and foundations. It first brings notions about the concept of constitution, in its varied aspects, as well as the systems of control of constitutionality. Considering the current Brazilian situation, which is undergoing constitutional reforms and therefore witnessing the appearance of an enormous number of ordinary laws, it emphasizes the legal instability that has formed within the national panorama. Because of this situation, the institute of the control of constitutionality gains importance as a means of protecting our Constitution against possible violations. It then discusses, within the diffuse control of constitutionality, the new trend toward generalization, especially after the recent reform that introduced general repercussion as a new admissibility requirement for appeals to the Brazilian Supreme Court. The final chapter presents an analysis of the institute of amicus curiae, discussing its historical origins and its evolution in comparative jurisprudence and in Brazilian law. The role of amicus curiae in constitutionality control is then examined in depth and, after a discussion of the difficulties the Brazilian population faces in asserting its rights before the judiciary, it is shown how this new institute could contribute in a fundamental way to the realization of the constitutional guarantee of access to justice.

Relevance: 10.00%

Publisher:

Abstract:

The following study proposes an analysis of the political process faced by Brazilian constitutional justice, emphasizing the Supremo Tribunal Federal. For that purpose, we start by examining the intimate relationship between Politics and Law in view of the most recent social systems theories, whereby the political system is distinguished by its exclusive use of physical force, intended to make collectively binding decisions, and the juridical system by the congruent generalization of expectations concerning rules and principles; the two are brought together in an interdependence through which both gain legitimacy and effectiveness. In this way we can notice the political effects of the constitutional interpretation conducted by judges as well as by other legal professionals, since the latter decrease the overload of expectations directed at the judiciary. Constitutional interpretation is democratized as participative democracy arises, establishing a permanent state of awareness around the exercise of power and favoring the preservation of pluralism (the counter-majoritarian principle), where we find the origin of the democratic nature of constitutional courts, since in most cases their members are not elected by the people. We then analyze the historical posture of the Supremo Tribunal Federal as a constitutional court in Brazil, observing the attempts to make it vulnerable to the appeals of governability and economic aims, against which this court has somehow resisted, stressing its particularities. In the end, it is concluded that even the so-called acts of government, whose judicial control is usually repelled, are subject to constitutional analysis, the last frontier to be explored by the Supremo Tribunal Federal in its role of upholding our republican Constitution.

Relevance: 10.00%

Publisher:

Abstract:

This work analyzes the historical and epistemological development of the Group concept in relation to the theory of advanced mathematical thinking proposed by Dreyfus (1991). It presents pedagogical resources that enable the learning and teaching of algebraic structures, and proposes giving this concept greater meaning in undergraduate Mathematics programs. The study also proposes an answer to the following question: in what way is a teaching approach centered on the Theory of Numbers and the Theory of Equations a model for the teaching of the concept of Group? To answer this question, a historical reconstruction of the development of this concept is carried out, from Lagrange to Cayley, considering Foucault's (2007) archaeology of knowledge proposal, theoretically reinforced by Dreyfus (1991). An exploratory study was performed in undergraduate Mathematics courses at Universidade Federal do Pará (UFPA) and Universidade Federal do Rio Grande do Norte (UFRN), aiming to evaluate the concept images formed by students in two algebra courses based on a traditional teaching model. Another experience was carried out in an algebra course at UFPA involving historical components (MENDES, 2001a; 2001b; 2006b), the development of multiple representations (DREYFUS, 1991) and the formation of concept images (VINNER, 1991). The efficiency of this approach with respect to learning was evaluated, in order to identify the concept images established in the students' minds. Finally, a classification based on Dreyfus (1991) was made, relating the periods of the historical and epistemological development of the Group concept to the processes of representation, generalization, synthesis and abstraction, proposed here for the teaching of algebra in undergraduate Mathematics courses.

Relevance: 10.00%

Publisher:

Abstract:

It has been remarkable among Science Teaching debates that students must learn not only theories, laws and concepts, but also develop skills that allow them to act as critical citizens. Therefore, some of the skills needed for learning the natural sciences must be taught consciously, intentionally and in a planned way, as components of a basic competence. Studies from the last twenty years have shown that students and teachers have many difficulties with skill development, among them the skill of interpreting Cartesian graphs, essential for the comprehension of Natural Science. In that sense, the development of this type of professional knowledge during the initial education of future Chemistry teachers has become strategic, not only because they need to know how to use it, but also because they need to know how to teach it. This research has as its general objective the organization, development and study of a process of formation of the skill of interpreting Cartesian graphs as part of teachers' professional knowledge. It was accomplished through a formative experience with six undergraduate students of the Chemistry Teaching Degree Course of Universidade Federal do Rio Grande do Norte (UFRN), Brazil. In order to develop that skill, we used as reference P. Ya. Galperin's Theory of the Stepwise Formation of Mental Actions and Concepts and the following qualitative indicators: action form, degree of generalization, degree of consciousness, degree of independence and degree of solidness. The research, in a qualitative approach, prioritized as data-collection instruments the registering of the undergraduates' activities, observation, a questionnaire and diagnostic tests.
In a first stage, a teaching framework was planned for the development of the skill of interpreting Cartesian graphs, based on the presuppositions and steps of Galperin's Theory. In a second stage, this framework was applied and the process of skill formation was studied. The results showed that it is possible to develop the skill with consciousness of the invariant operation system, with a high degree of generalization, and with the operational invariant internalized in the mental plane. The students attested to the contributions of this type of formative experience. The research reveals the importance of deepening the understanding of the individual aspects tied to the process of internalization, according to Galperin's Theory, when the development of skills as part of teachers' professional knowledge is at issue.

Relevance: 10.00%

Publisher:

Abstract:

ART networks present some advantages: online learning, convergence in a few training epochs, incremental learning, etc. Nevertheless, some problems persist, such as category proliferation, sensitivity to the presentation order of the training patterns, and the choice of a good vigilance parameter. Among these, category proliferation is probably the most critical. This problem makes the network create too many categories, consuming resources to store an unnecessarily large number of categories and impacting processing time negatively, or even making it unfeasible, without contributing to the quality of the representation; that is, in many cases the excessive number of categories generated by ART networks makes the quality of generalization inferior to the one they could reach. Another factor that leads to category proliferation in ART networks is the difficulty of approximating regions with non-rectangular geometry, causing a generalization inferior to the one obtained by other classification methods. From the observation of these problems, three methodologies were proposed: two of them use a more flexible geometry than the one used by traditional ART networks, minimizing the category proliferation problem, while the third minimizes the problem of the presentation order of the training patterns. To validate these new approaches, many tests were performed, and the results demonstrate that the new methodologies can improve the generalization quality of ART networks.
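To make the vigilance/proliferation trade-off concrete, here is a minimal ART-1-style categorization sketch (a deliberate simplification under assumed rules, not the thesis' algorithm): binary inputs are matched against category prototypes by an AND-overlap test against a vigilance threshold rho, and a higher rho forces the creation of more categories.

```python
def art1_categorize(patterns, rho):
    """Assign each binary pattern to a category; create one on vigilance failure."""
    prototypes = []          # learned category prototypes
    labels = []
    for p in patterns:
        chosen = None
        for i, w in enumerate(prototypes):
            overlap = sum(a & b for a, b in zip(p, w))
            # vigilance test: fraction of the input's bits matched by the prototype
            if overlap / max(sum(p), 1) >= rho:
                # resonance: update prototype by fast AND-learning
                prototypes[i] = [a & b for a, b in zip(p, w)]
                chosen = i
                break
        if chosen is None:   # no category passed vigilance: create a new one
            prototypes.append(list(p))
            chosen = len(prototypes) - 1
        labels.append(chosen)
    return labels, prototypes

data = [[1, 1, 1, 0], [1, 0, 0, 1], [0, 1, 1, 1]]
low = art1_categorize(data, rho=0.4)[1]    # loose vigilance: fewer categories
high = art1_categorize(data, rho=0.9)[1]   # strict vigilance: more categories
```

With the same three patterns, loose vigilance yields two categories while strict vigilance yields three, illustrating how the vigilance parameter drives the proliferation the abstract describes.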

Relevance: 10.00%

Publisher:

Abstract:

In recent decades, neural networks have been established as a major tool for the identification of nonlinear systems. Among the various types of networks used in identification, one that can be highlighted is the wavelet neural network (WNN). This network combines the characteristics of wavelet multiresolution theory with the learning ability and generalization of conventional neural networks, usually providing more accurate models than those obtained with traditional networks. An extension of WNNs is to combine the neuro-fuzzy ANFIS (Adaptive Network Based Fuzzy Inference System) structure with wavelets, generating the Fuzzy Wavelet Neural Network (FWNN) structure. This network is very similar to ANFIS networks, with the difference that the traditional polynomials present in the consequents are replaced by WNNs. This work proposes the identification of nonlinear dynamical systems with a modified FWNN. In the proposed structure, only wavelet functions are used in the consequents, which makes it possible to simplify the structure, reducing the number of adjustable parameters of the network. To evaluate the performance of the modified FWNN, an analysis of network performance is made, verifying advantages, disadvantages and cost-effectiveness when compared to other FWNN structures existing in the literature. The evaluations are carried out via the identification of two simulated systems traditionally found in the literature and of a real nonlinear system, consisting of a nonlinear multi-section tank. Finally, the network is used to infer values of temperature and humidity inside a neonatal incubator. These analyses are based on various criteria, such as mean squared error, number of training epochs, number of adjustable parameters, and the variation of the mean squared error, among others. The results show the generalization ability of the modified structure, despite the simplification performed.
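The wavelet consequents mentioned above can be illustrated with a small sketch (an assumed form for illustration, not the thesis' exact network): a consequent output built as a weighted sum of dilated and translated "Mexican hat" wavelets, the family commonly used in wavelet neural networks.

```python
import math

def mexican_hat(u):
    """Mexican-hat mother wavelet (second derivative of a Gaussian, up to scale)."""
    return (1.0 - u * u) * math.exp(-u * u / 2.0)

def wavelet_consequent(x, weights, translations, dilations):
    """y(x) = sum_k w_k * psi((x - t_k) / d_k)"""
    return sum(w * mexican_hat((x - t) / d)
               for w, t, d in zip(weights, translations, dilations))

# Illustrative parameters: each (weight, translation, dilation) triple is
# one adjustable wavelet "neuron" in the consequent.
y = wavelet_consequent(0.5, weights=[1.0, -0.5],
                       translations=[0.0, 1.0], dilations=[1.0, 0.5])
```

In an FWNN, the weights, translations and dilations of such units are the adjustable consequent parameters, which is why replacing polynomials by a reduced set of wavelets shrinks the parameter count.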

Relevance: 10.00%

Publisher:

Abstract:

This dissertation contributes to the development of methodologies based on feedforward artificial neural networks for the modeling of microwave and optical devices. A bibliographical review of the applications of neuro-computational techniques in microwave and optical engineering was carried out. The characteristics of MLP, RBF and SFNN networks, as well as supervised learning strategies, are presented, and the adjustment expressions for the free parameters of these networks are derived from the gradient method. The conventional EM-ANN method was applied to the modeling of passive microwave devices and optical amplifiers. For this purpose, modular configurations based on SFNN and RBF/MLP networks were proposed, aiming at a greater generalization capacity of the models. For the training of the networks, the Rprop algorithm was applied. All the algorithms used to obtain the models in this dissertation were implemented in Matlab.
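The Rprop training rule mentioned above can be sketched as follows (standard Rprop with illustrative constants, not the dissertation's Matlab code): each weight keeps its own step size, which grows while the gradient keeps its sign and shrinks when the sign flips, so only the sign of the gradient is used.

```python
def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One Rprop update for a single weight; returns (w, step)."""
    if grad * prev_grad > 0:        # same sign: accelerate
        step = min(step * eta_plus, step_max)
    elif grad * prev_grad < 0:      # sign flip: we overshot, slow down
        step = max(step * eta_minus, step_min)
    if grad > 0:                    # move against the gradient by the step size
        w -= step
    elif grad < 0:
        w += step
    return w, step

# Usage: minimizing f(w) = w^2 (gradient 2w) from w = 4
w, step, prev = 4.0, 0.5, 0.0
for _ in range(30):
    g = 2.0 * w
    w, step = rprop_step(w, g, prev, step)
    prev = g
```

Because the update depends only on gradient signs, Rprop is insensitive to badly scaled error surfaces, which is one reason it is a popular choice for training the networks described here.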

Relevance: 10.00%

Publisher:

Abstract:

Reinforcement learning is a machine learning technique that, although it has found a large number of applications, may have yet to reach its full potential. One of the inadequately explored possibilities is the use of reinforcement learning in combination with other methods for the solution of pattern classification problems. The problems that support vector machine ensembles face in terms of generalization capacity are well documented in the literature: algorithms such as AdaBoost do not deal appropriately with the imbalances that arise in those situations, and several alternatives have been proposed with varying degrees of success. This dissertation presents a new approach to building committees of support vector machines. The presented algorithm combines the AdaBoost algorithm with a reinforcement learning layer that adjusts committee parameters in order to prevent imbalances among the committee components from affecting the generalization performance of the final hypothesis. Comparisons were made between ensembles with and without the reinforcement learning layer, on benchmark data sets widely known in the pattern classification area.
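For reference, the AdaBoost sample-reweighting step that the dissertation builds on can be sketched like this (illustrative, with the weak learner's outcomes given as a boolean list): misclassified samples gain weight, so the next committee member focuses on them.

```python
import math

def adaboost_round(weights, correct):
    """One boosting round.

    weights: current sample weights (sum to 1)
    correct: booleans, True where the weak learner classified correctly
    Returns (alpha, new_weights), where alpha is the learner's committee vote.
    """
    eps = sum(w for w, c in zip(weights, correct) if not c)  # weighted error
    alpha = 0.5 * math.log((1 - eps) / eps)
    # up-weight mistakes, down-weight correct samples, then renormalize
    new = [w * math.exp(alpha if not c else -alpha)
           for w, c in zip(weights, correct)]
    z = sum(new)
    return alpha, [w / z for w in new]

w = [0.25, 0.25, 0.25, 0.25]
alpha, w = adaboost_round(w, [True, True, True, False])
```

After one round the single misclassified sample carries half the total weight, which is exactly the imbalance-amplifying behavior the reinforcement learning layer is intended to temper.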

Relevance: 10.00%

Publisher:

Abstract:

Pattern classification is one of the most prominent subareas of machine learning. Among the various approaches to solving pattern classification problems, Support Vector Machines (SVM) receive great emphasis due to their ease of use and good generalization performance. The Least Squares formulation of the SVM (LS-SVM) finds the solution by solving a set of linear equations instead of the quadratic programming implemented in the SVM. LS-SVMs have some free parameters that must be correctly chosen to achieve satisfactory results in a given task. Although LS-SVMs achieve high performance, many tools have been developed to improve them, mainly the development of new classification methods and the employment of ensembles, that is, combinations of several classifiers. In this work, our proposal is to use an ensemble and a Genetic Algorithm (GA), a search algorithm based on the evolution of species, to enhance LS-SVM classification. In the construction of this ensemble, we use a random selection of attributes of the original problem, which splits the original problem into smaller ones, on each of which one classifier acts. A genetic algorithm is then applied to find effective values of the LS-SVM parameters and also a weight vector measuring the importance of each machine in the final classification, which is obtained by a linear combination of the decision values of the LS-SVMs with the weight vector. We used several classification problems, taken as benchmarks, to evaluate the performance of the algorithm, and compared the results with those of other classifiers.
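The "linear equations instead of quadratic programming" point can be made concrete with a small sketch (illustrative hyperparameters gamma and sigma, not tuned values from the work): LS-SVM training reduces to solving one KKT linear system for the bias b and the multipliers alpha.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM KKT system with an RBF kernel."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # RBF kernel matrix
    # KKT system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)               # one linear solve, no QP
    return sol[0], sol[1:]                      # b, alpha

def lssvm_predict(X, Xtr, b, alpha, sigma=1.0):
    d2 = ((X[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.sign(np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b)

# Tiny two-cluster toy problem
Xtr = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
ytr = np.array([-1.0, -1.0, 1.0, 1.0])
b, alpha = lssvm_train(Xtr, ytr)
pred = lssvm_predict(Xtr, Xtr, b, alpha)
```

In the ensemble described above, the GA would search over gamma, sigma and the committee weight vector; here they are simply fixed by hand.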

Relevance: 10.00%

Publisher:

Abstract:

Static and cyclic tests are commonly used to characterize materials in structures. Cyclic tests assess the fatigue behavior of the material, yielding the S-N curves that are used to construct constant-life diagrams. However, when constructed from a small number of S-N curves, these diagrams underestimate or overestimate the actual behavior of the composite, so an increasing number of tests is needed to obtain accurate results; a way of reducing costs is the statistical analysis of the fatigue behavior. The aim of this research was to evaluate the probabilistic fatigue behavior of composite materials. The research was conducted in three parts. The first part consists of associating the Weibull probability equation with the equations commonly used in the modeling of composite S-N curves, namely the exponential equation and the power law, and their generalizations. In the second part, the results obtained with the probabilistic equation that best represents the S-N curves were used to train a modular network for 5% failure probability. In the third part, a comparative study was carried out between the results obtained with the piecewise nonlinear model (PNL) and those of a modular network (MN) architecture in the analysis of fatigue behavior. For this purpose, a database of ten materials obtained from the literature was used to assess the generalization ability of the modular network as well as its robustness. From the results, it was found that the generalized probabilistic power law best represents the probabilistic fatigue behavior of the composites and that, although the generalization ability of the MN was not robust when trained at the 5% failure level, for mean values the MN showed more accurate results than the PNL model.
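The combination of a power-law S-N curve with Weibull scatter can be sketched as follows (all parameter values here are illustrative assumptions, not the thesis' fitted data): the median fatigue life follows a power law in stress, and the life at any failure probability, such as the conservative 5% level mentioned above, is a Weibull quantile around that curve.

```python
import math

def median_life(S, a=2000.0, b=0.1):
    """Power-law S-N curve: S = a * N^(-b)  =>  N = (a / S)^(1 / b)."""
    return (a / S) ** (1.0 / b)

def life_quantile(S, p, beta=2.0):
    """Life reached when a fraction p of specimens has failed at stress S,
    assuming Weibull-distributed life with shape beta around the median curve."""
    # Weibull median is eta * (ln 2)^(1/beta), so recover the scale eta:
    eta = median_life(S) / math.log(2.0) ** (1.0 / beta)
    return eta * (-math.log(1.0 - p)) ** (1.0 / beta)

S = 400.0
n50 = life_quantile(S, 0.50)   # recovers the median S-N curve
n05 = life_quantile(S, 0.05)   # conservative 5%-failure design life
```

The 5%-failure curve lies well below the median curve, which is why training the modular network at that level is a harder, more conservative target.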

Relevance: 10.00%

Publisher:

Abstract:

Expanded Bed Adsorption (EBA) is an integrative process that combines concepts of chromatography and fluidization of solids. The many parameters involved and their synergistic effects complicate the optimization of the process. Fortunately, some mathematical tools have been developed in order to guide the investigation of the EBA system. In this work, the application of experimental design, phenomenological modeling and artificial neural networks (ANN) to the understanding of chitosanase adsorption on the ion exchange resin Streamline® DEAE was investigated. The strain Paenibacillus ehimensis NRRL B-23118 was used for chitosanase production. EBA experiments were carried out using a column of 2.6 cm inner diameter and 30.0 cm height coupled to a peristaltic pump; at the bottom of the column there was a glass-bead distributor with a height of 3.0 cm. Residence time distribution (RTD) assays revealed a high degree of mixing; however, the Richardson-Zaki coefficients showed that the column was on the threshold of stability. Isotherm models fitted the adsorption equilibrium data in the presence of lyotropic salts. The results of the experimental design indicated that ionic strength and superficial velocity are important for the recovery and purity of the chitosanases. The molecular masses of the two chitosanases were approximately 23 kDa and 52 kDa, as estimated by SDS-PAGE. The phenomenological modeling aimed to describe the operations in batch and column chromatography, with simulations performed in Microsoft Visual Studio. The kinetic rate constant model fitted the kinetic curves efficiently under initial enzyme activities of 0.232, 0.142 and 0.079 UA/mL. The simulated breakthrough curves showed some differences with respect to the experimental data, especially regarding the slope. Sensitivity tests of the model on superficial velocity, axial dispersion and initial concentration showed agreement with the literature.
The neural network was built in MATLAB with the Neural Network Toolbox, and cross-validation was used to improve its generalization ability. The ANN parameters were tuned to obtain the 6-6 (enzyme activity) and 9-6 (total protein) configurations, with the tansig transfer function and the Levenberg-Marquardt training algorithm. The neural network simulations, including all the steps of the cycle, showed good agreement with the experimental data, with a correlation coefficient of approximately 0.974. The effects of the input variables on the profiles of the loading, washing and elution stages were consistent with the literature.
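The cross-validation scheme mentioned above can be sketched independently of the MATLAB toolbox actually used (a generic k-fold split, shown here for illustration): the data are partitioned into k folds, and each fold serves once as the validation set while the rest trains the network.

```python
def kfold_indices(n, k):
    """Yield (train_idx, val_idx) pairs covering all n samples in k folds."""
    # distribute the remainder so fold sizes differ by at most one
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, val
        start += size

splits = list(kfold_indices(10, 5))   # 5 folds of 2 samples each
```

Averaging the validation error over the k folds gives a generalization estimate that does not depend on a single lucky (or unlucky) train/validation split.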

Relevance: 10.00%

Publisher:

Abstract:

The great majority of analytical models for extragalactic radio sources assume self-similarity and can be classified into three types: I, II and III. We have developed a model that represents a generalization of most models found in the literature and have shown that these three types are particular cases of it. The model assumes that the area of the head of the jet varies with the jet size according to a power law and that the jet luminosity is a function of time. As is usually done, the basic hypothesis is that there is an equilibrium between the pressure exerted by the head of the jet and by the cocoon walls and the ram pressure of the ambient medium. The equilibrium equations and the energy conservation equation allow us to express the size and width of the source and the pressure in the cocoon as power laws and to find the respective exponents. All these assumptions can be used to calculate the evolution of the source size, width and radio luminosity, which can then be compared with the observed width-size relation for radio lobes and with the power-size (P-D) diagram of both compact (GPS and CSS) and extended sources from the 3CR catalogue. In this work we introduce two important improvements with respect to previous work: (1) we have put together a larger sample of both compact and extended radio sources

Relevance: 10.00%

Publisher:

Abstract:

This work is a detailed study of self-similar models for the expansion of extragalactic radio sources. A review is made of the definitions of AGN, the unified model is discussed and the main characteristics of double radio sources are examined. Three classification schemes are outlined and the self-similar models found in the literature are studied in detail. A self-similar model is proposed that represents a generalization of the models found in the literature. In this model, the area of the head of the jet varies with the size of the jet as a power law with exponent γ. The atmosphere has a variable density that may or may not be spherically symmetric, and the time variation of the kinematic luminosity of the jet is taken into account, according to a power law with exponent h. It is possible to show that models of Types I, II and III are particular cases of the general model, and the evolution of the sources' radio luminosity is also discussed. The evolutionary curves of the general model are compared with the particular cases and with the observational data in a P-D diagram. The results show that the model allows a better agreement with the observations, depending on the appropriate choice of the model parameters.
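The self-similar scalings described above can be summarized schematically as power laws (the symbols below are illustrative placeholders; the actual exponents follow from the pressure-equilibrium and energy-conservation equations, given the head-area exponent γ, the luminosity exponent h and the ambient density profile):

```latex
% Schematic self-similar scalings for a double radio source:
% D  = source size,  R = cocoon width,  p_c = cocoon pressure,
% P  = radio luminosity; delta, epsilon, zeta, x are model-dependent.
D(t) \propto t^{\delta}, \qquad
R \propto D^{\epsilon}, \qquad
p_c(t) \propto t^{-\zeta}, \qquad
P \propto D^{-x}
```

The last relation is what traces a source's evolutionary track in the P-D diagram mentioned in the abstract.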

Relevance: 10.00%

Publisher:

Abstract:

Double radio sources have been studied since the discovery of extragalactic radio sources in the 1930s. Since then, several numerical studies and analytical models have been proposed seeking a better understanding of the physical phenomena that determine the origin and evolution of such objects. In this thesis, we study the evolution problem of double radio sources on two fronts. On the first, we have developed an analytical self-similar model that represents a generalization of most models found in the literature and solves some existing problems related to the evolution of the jet head. We deal with this problem using samples of hot spot sizes to find a power-law relation between the jet head dimension and the source length. Using our model, we were able to draw the evolution curves of double sources in a P-D diagram for both compact (GPS and CSS) and extended sources of the 3CR catalogue. We have also developed a computational tool that allows us to generate synthetic radio maps of double sources; the objective is to determine the principal physical parameters of those objects by comparing synthetic and observed radio maps. On the second front, we used numerical simulations to study the interaction of extragalactic jets with the environment. We simulated situations where the jet propagates in a medium containing gas clouds of high density contrast capable of blocking the jet's forward motion, forming the distorted structures observed in the morphology of real sources. We have also analyzed the situation in which the jet changes its propagation direction due to a change of the source's main axis, creating the X-shaped sources. The comparison between our simulations and real double radio sources enables us to determine the values of the main physical parameters responsible for the distortions observed in those objects.

Relevance: 10.00%

Publisher:

Abstract:

Ising and m-vector spin-glass models are studied, in the limit of infinite-range interactions, through the replica method. First, the m-vector spin glass is considered in the presence of an external uniform magnetic field, as well as of uniaxial anisotropy fields. The effects of the anisotropies on the phase diagrams, and in particular on the Gabay-Toulouse line, which signals the transverse spin-glass ordering, are investigated. The changes in the Gabay-Toulouse line due to the presence of anisotropy fields that favor spin orientations along the Cartesian axes (m = 2: planar anisotropy; m = 3: cubic anisotropy) are also studied. The antiferromagnetic Ising spin glass, in the presence of uniform and Gaussian random magnetic fields, is investigated through a two-sublattice generalization of the Sherrington-Kirkpatrick model. The effects of the magnetic-field randomness on the phase diagrams of the model are analyzed. Some comparisons of the present results with experimental observations available in the literature are discussed.
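For reference, the single-lattice Sherrington-Kirkpatrick setting that the two-sublattice model generalizes can be written as follows (standard schematic form, not the thesis' full two-sublattice Hamiltonian):

```latex
% Infinite-range Ising spin glass with Gaussian couplings and fields H_i:
\mathcal{H} = -\sum_{(i,j)} J_{ij}\, S_i S_j - \sum_i H_i S_i,
\qquad
P(J_{ij}) \propto \exp\!\left[-\frac{\left(J_{ij} - J_0/N\right)^{2} N}{2 J^{2}}\right]
% (mean J_0/N and variance J^2/N keep the free energy extensive).
% The quenched average over disorder is handled with the replica identity:
\overline{\ln Z} \;=\; \lim_{n \to 0} \frac{\overline{Z^{n}} - 1}{n}
```

The replica method referred to in the abstract consists of computing the average of Z^n for integer n and continuing the result to n → 0.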