25 results for Auto-représentation
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
This study presents the self-efficacy characteristics of Business Administration students who work and who do not work. It was conducted as a descriptive field study with a quantitative approach using statistical procedures. The population comprised 394 students distributed across three higher education institutions in the metropolitan region of Belém, in the State of Pará. Sampling was non-probabilistic, by accessibility, yielding a sample of 254 subjects. The data collection instrument was a questionnaire divided into three sections: the first covered sociodemographic data, the second was built to identify the respondent's work situation, and the third contained items from the General Perceived Self-Efficacy Scale proposed by Schwarzer and Jerusalem (1999). Sociodemographic data were processed with descriptive statistics, which allowed the sample subjects to be characterized. Work situation was identified through frequency and percentage analysis, which classified respondents into those who worked and those who did not, and the self-efficacy scale data were processed quantitatively by multivariate statistics, using Exploratory Factor Analysis in the Statistical Package for the Social Sciences (SPSS) for Windows, version 17. This procedure characterized the students who worked and those who did not. The results were discussed in the light of Social Cognitive Theory, based on Albert Bandura's (1977) construct of self-efficacy. The sample proved to be young, composed mostly of single women with work experience, and the results indicated that the self-efficacy characteristics of students who work and students who do not work are different. The self-efficacy beliefs of students who do not work rest on psychological expectations, whereas working students showed that their efficacy beliefs are sustained by previous experience. Students who do not work proved confident in their ability to perform their activities successfully, believing it easy to achieve their goals and to face difficult situations at work simply by investing the necessary effort and trusting their abilities. Students with work experience proved confident in their ability to carry out courses of action, while aware that achieving their goals is not easy, and in unexpected situations demonstrated the ability to solve difficult problems.
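The scale analysis described above can be illustrated with a rough Python analogue (the thesis itself used SPSS 17); the item responses below are random placeholders standing in for the 10 items of the General Perceived Self-Efficacy Scale, and the two-factor extraction is an assumption made only for illustration.

```python
# Minimal sketch of an exploratory factor analysis on scale data,
# assuming 254 respondents and 10 Likert-type items (placeholder values).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
responses = rng.integers(1, 5, size=(254, 10)).astype(float)  # 254 subjects, 10 items

fa = FactorAnalysis(n_components=2, random_state=0)  # number of factors is assumed
scores = fa.fit_transform(responses)   # factor scores for each respondent
loadings = fa.components_.T            # item loadings on each extracted factor
print(loadings.round(2))
```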
Abstract:
This work analyzes the rapid evolution of gated communities in the urban space of Natal-RN. Characterized by the occupation of large areas and by the private provision of security and utilities, this kind of real estate development raises a long list of questions from society and scholars, owing to the privatization of urban space, the bending of legal constraints and the lack of integrated planning in the cities where they are built. The reasons for their fast growth in Brazil's urban areas are analyzed, considering the impact on formal urban planning and municipal services, the identification of urbanistic and architectural patterns and constraints, and the legal, social and economic issues involved. The study is based on a detailed analysis of the first three gated communities built in the urban space of Natal between 1995 and 2003, including their evolution over time and the specific social and economic reasons for the present widespread adoption of this model in the Brazilian real estate market and, particularly, in our city. The main objective of this work is to answer the whys and hows of this phenomenon's evolution, setting a basis for the definition of adequate public policies and for the regulation of this kind of urban land use.
Abstract:
A self-flotator vibrational prototype with electromechanical drive for the treatment of oil and water emulsions or similar mixtures is presented and evaluated. Oil production and its refining into derivatives are carried out in arrangements technically referred to as onshore and offshore, that is, on the continent and at sea. In Brazil, 80% of petroleum production takes place at sea, and the scale of the deployment areas and costs is a concern. This production is associated with large-scale generation of oily water, which carries 95% of the activity's polluting potential and whose final destination is the environment, terrestrial or marine. Although a diverse set of water treatment techniques and systems is in use or under research, we propose an innovative system that operates sustainably, without chemical additives, for the benefit of the ecosystem. A labyrinth adsorbent in metal spirals is used, at laboratory-scale flow rates. Equipment and process patents are claimed. Treatments were performed at different flow rates and frequency bands, monitored with control systems, some built and others purchased for this purpose. Measurements of the oil and grease content (OGC) of the treated effluents remained within the legal limits under the test conditions. Adsorbents were weighed before and after treatment to obtain the oil impregnation, the performance measure of the vibratory action and of the treatment as a whole. Current treatment technologies are referenced for qualitative and quantitative performance comparison. The vibration energy consumption is compared with the cases of conventional flotation and of self-flotation without vibration. The proposal shows good prospects, especially in reducing residence time through capillary action. A dimensionless impregnation parameter was created and compared with established dimensionless parameters in their vibrational versions, such as the Weber number and the Froude number in quadratic form, referred to as the vibrational criticality. Results suggest limits to the vibration intensity.
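The abstract names the Weber number and the Froude number in quadratic, vibrational form but does not reproduce the definitions. For reference only, the standard forms are given below, with an assumed vibrational characteristic velocity u = Aω (displacement amplitude A, angular frequency ω); the thesis's exact expressions may differ.

```latex
% Standard dimensionless groups and an assumed vibrational specialization
% (characteristic velocity u = A*omega, characteristic length L = A).
\begin{equation}
  \mathrm{We} = \frac{\rho u^{2} L}{\sigma}, \qquad
  \mathrm{Fr}^{2} = \frac{u^{2}}{g L}, \qquad
  \left.\mathrm{Fr}^{2}\right|_{u = A\omega,\; L = A} = \frac{A\omega^{2}}{g}.
\end{equation}
```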
Abstract:
The object of study of this thesis is the use of (self)training workshops as a fundamental process in the constitution of the teaching subject in mathematics education. The central purposes of the study were to describe and analyze a learning process of mathematics teachers supported by the training-research methodology, whose procedures were carried out through the practice of (self)training workshops as a way of contributing to the constitution of the teaching subject in mathematics education. The research was conducted with a group of teachers in the city of Nova Cruz, Rio Grande do Norte, through a continuing education process carried out in training workshops whose main goal was the realization of the group's (self)training sessions, intended to lead participants toward autonomy in their personal and professional transformations. The results of the formative processes showed the need to develop mathematics teaching activities as a contribution to overcoming the teachers' conceptual difficulties, in addition to their (self)reflections about themselves and the educational processes to which they belong. The results also raised some propositions about (self)training workshops that may become practices to be included in curriculum frameworks or materialize as a strategy of pedagogical work in training courses for mathematics teachers. They can also constitute an administrative and educational activity to be instituted in public schools of Basic Education.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Abstract:
We propose a multi-resolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map. Basically, a self-adaptive scheme iteratively moves the vertices of an initial simple mesh toward the set of points, ideally the object boundary. Successive refinement and motion of vertices lead to a more detailed surface in a multi-resolution, iterative scheme. Reconstruction was tested with several point sets, including different shapes and sizes. Results show generated meshes very close to the final object shapes. We include performance measures and discuss robustness.
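A minimal sketch of the Kohonen-style vertex adaptation described above (the mesh operators and selective refinement rules of the full method are omitted; the update rule and learning-rate schedule are illustrative assumptions, not the thesis's implementation):

```python
# Self-adaptive vertex motion: each sample point pulls its best-matching
# mesh vertex strongly and that vertex's topological neighbours weakly.
import numpy as np

def som_fit_vertices(vertices, neighbours, points, epochs=50, lr=0.2, nbr_lr=0.05):
    """vertices: (V, 3) array; neighbours: dict vertex id -> adjacent vertex ids;
    points: (N, 3) unorganized point cloud."""
    V = vertices.copy()
    for epoch in range(epochs):
        a = lr * (1.0 - epoch / epochs)        # learning rates decay so the
        b = nbr_lr * (1.0 - epoch / epochs)    # mesh settles near the boundary
        for p in np.random.permutation(points):
            winner = np.argmin(np.linalg.norm(V - p, axis=1))  # best-matching vertex
            V[winner] += a * (p - V[winner])                   # move winner toward point
            for n in neighbours[winner]:                       # drag neighbours along
                V[n] += b * (p - V[n])
    return V

# Toy usage: a square of 4 vertices fitted to a random point cloud.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
cloud = np.random.rand(200, 3)
print(som_fit_vertices(verts, nbrs, cloud).round(2))
```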
Abstract:
This work presents a set of intelligent algorithms whose purpose is to correct calibration errors in sensors and to reduce how often they must be recalibrated. The algorithms were designed using artificial neural networks because of their great capacity for learning, adaptation and function approximation. Two approaches are presented. The first uses multilayer perceptron networks to approximate the various shapes of the calibration curve of a sensor that drifts out of calibration at different points in time. This approach requires knowledge of the sensor's operating time, but this information is not always available. To overcome this requirement, a second approach using recurrent neural networks was proposed. Recurrent neural networks have a great capacity for learning the dynamics of the system on which they are trained, so they can learn the dynamics of a sensor's decalibration. Knowing the sensor's operating time or its decalibration dynamics, it is possible to determine how far out of calibration the sensor is and to correct its measured value, thus providing a more accurate measurement. The algorithms proposed in this work can be implemented in a Foundation Fieldbus industrial network environment, which offers good device programming capabilities through its function blocks, making it possible to apply them to the measurement process.
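A rough sketch of the first approach, using scikit-learn's MLPRegressor as a stand-in for the thesis's network; the drift model, units and data below are hypothetical and serve only to show the reading-plus-operating-time to corrected-value mapping:

```python
# MLP that learns a calibration correction as a function of the raw (drifted)
# sensor reading and the sensor's operating time.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = rng.uniform(0, 1000, 2000)                         # operating time (hypothetical units)
true = rng.uniform(0, 10, 2000)                        # true physical quantity
drift = 0.001 * t                                      # assumed linear drift law
raw = true * (1 + drift) + rng.normal(0, 0.02, 2000)   # drifted, noisy reading

X = np.column_stack([raw, t])
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, true)                                     # learn (reading, time) -> corrected value

print(model.predict([[8.0, 900.0]]))                   # corrected estimate for a drifted reading
```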
Abstract:
Ceramic substrates have been investigated by researchers around the world and have attracted great interest in the scientific community because of their high dielectric constants and excellent performance in the structures in which they are employed. Such ceramics result in miniaturized structures with greatly reduced dimensions and high radiation efficiency. In this work we used a new ceramic material, lead zinc titanate in the form Zn0.8Pb0.2TiO3, capable of being used as a dielectric substrate in the construction of various antenna structures. The method used to produce the ceramic was combustion synthesis by Self-Propagating High-Temperature Synthesis (SHS), defined as a process that uses highly exothermic reactions to produce various materials. Once the reaction is initiated in the reaction mixture, the heat generated is sufficient to make the combustion self-sustaining, in the form of a wave that propagates and converts the reaction mixture into the product of interest. Aspects of the formation of the Zn0.8Pb0.2TiO3 composite by SHS were analyzed and the powders characterized. The analysis consisted of determining the reaction parameters for the formation of the composite, such as the ignition temperature and the reaction mechanisms. The laboratory production of the Zn0.8Pb0.2TiO3 composite by SHS resulted from full control of the combustion temperature, and once the powder was obtained the development of the ceramics began. The product, in the form of regular, alternating layers of porous ceramic, was obtained by uniaxial pressing and characterized by dilatometry, X-ray diffraction and scanning electron microscopy. One of the contributions of this work is the development of a new dielectric material not previously presented in the literature. The antenna structures presented in this work therefore consist of the new Zn0.8Pb0.2TiO3-based dielectric ceramic used as the substrate. The materials produced were characterized in the microwave range; they are dielectrics with high relative permittivity and low loss tangent. The commercial program Ansoft HFSS, which uses the finite element method, was employed for the analysis of the antennas studied in this work.
Abstract:
Image compression consists of representing an image with a small amount of data without losing visual quality. Data compression is important when large images are used, for example satellite images. Full-color digital images typically use 24 bits to specify the color of each pixel, with 8 bits for each of the primary components: red, green and blue (RGB). Compressing an image with three or more bands (multispectral) is fundamental to reducing transmission, processing and storage time. Many applications depend on images, so compressing image data is important: medical images, satellite images, sensor data, etc. In this work a new color image compression method is proposed, based on a measure of the information in each band. The technique is called Self-Adaptive Compression (SAC); each band of the image is compressed with a different threshold in order to better preserve information. SAC applies strong compression to highly redundant (low-information) bands and mild compression to bands carrying more information. Two image transforms are used: the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The first step converts the data into decorrelated bands using PCA; the DCT is then applied to each band. Loss is introduced when a threshold discards coefficients. The threshold is calculated from two elements: the PCA result and a user parameter that defines the compression rate. The system produces three different thresholds, one for each band of the image, proportional to its amount of information. Image reconstruction applies the inverse DCT and inverse PCA. SAC was compared with the JPEG (Joint Photographic Experts Group) standard and with YIQ compression, and better results were obtained in terms of mean squared error (MSE). Tests showed that SAC yields better quality at high compression rates, with two advantages: (a) being adaptive, it is sensitive to the image type and presents good results for diverse kinds of images (synthetic, landscapes, people, etc.); and (b) it needs only one user parameter, so little human intervention is required.
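An illustrative sketch of the pipeline just described (PCA decorrelation, per-band DCT, band-dependent thresholding, inverse transforms); the threshold rule and the handling of the single user parameter below are plausible stand-ins, not the exact SAC formulas from the thesis:

```python
# PCA decorrelates the colour bands; each band is DCT-transformed and thresholded,
# with low-information bands (small PCA eigenvalues) cut harder.
import numpy as np
from scipy.fft import dctn, idctn

def sac_like_compress(img, user_tax=0.05):
    """img: (H, W, 3) float array; user_tax: single user parameter (larger = more loss)."""
    H, W, B = img.shape
    X = img.reshape(-1, B)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)           # PCA on the band covariance
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]
    bands = ((X - mean) @ eigvec).reshape(H, W, B)

    rec = np.empty_like(bands)
    for b in range(B):
        coeffs = dctn(bands[:, :, b], norm='ortho')
        info_share = eigval[b] / eigval.sum()       # fraction of total variance in band b
        thr = user_tax * np.abs(coeffs).max() * (1.0 - info_share)
        coeffs[np.abs(coeffs) < thr] = 0.0          # discard small coefficients
        rec[:, :, b] = idctn(coeffs, norm='ortho')

    return (rec.reshape(-1, B) @ eigvec.T + mean).reshape(H, W, B)  # inverse PCA

restored = sac_like_compress(np.random.rand(64, 64, 3))
print(restored.shape)
```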
Abstract:
This work presents an evaluative study of the effects of using a machine learning technique on the main features of a self-organizing, multiobjective genetic algorithm (GA). A typical GA can be seen as a search technique usually applied to problems of non-polynomial complexity. Originally, these algorithms were designed to provide methods that seek acceptable solutions to problems where the global optimum is inaccessible or difficult to obtain. At first, GAs considered only one evaluation function and single-objective optimization. Today, however, implementations that consider several optimization objectives simultaneously (multiobjective algorithms) are common, as are those that allow many components of the algorithm to change dynamically (self-organizing algorithms). Combinations of GAs with machine learning techniques are likewise common, to improve some of their performance and usability characteristics. In this work, a GA coupled with a machine learning technique was analyzed and applied to an antenna design. A variant of the bicubic interpolation technique, the 2D spline, was used as the machine learning technique to estimate the behaviour of a dynamic fitness function, based on the knowledge obtained from a set of laboratory experiments. This fitness function, also called the evaluation function, is responsible for determining the fitness degree of a candidate solution (individual) relative to the others in the same population. The algorithm can be applied in many areas, including telecommunications, such as the design of antennas and frequency selective surfaces. In this particular work, the algorithm was developed to optimize the design of a microstrip antenna, commonly used in wireless communication systems, for Ultra-Wideband (UWB) applications. The algorithm optimized two variables of the antenna geometry, the length (Ls) and width (Ws) of a slit in the ground plane, with respect to three objectives: radiated signal bandwidth, return loss and central frequency deviation. The two dimensions (Ws and Ls) are used as variables in three different interpolation functions, one spline for each optimization objective, which compose a multiobjective, aggregate fitness function, as sketched below. The final result proposed by the algorithm was compared with the simulation result and with the measured result of a physical prototype of the antenna built in the laboratory. The algorithm was analyzed with respect to its degree of success regarding four important characteristics of a self-organizing multiobjective GA: performance, flexibility, scalability and accuracy. At the end of the study, an increase in execution time relative to a common GA was observed, due to the time required for the machine learning process. On the plus side, we noticed a noticeable gain in flexibility and accuracy of results, and a promising path indicating how the algorithm could handle optimization problems with "η" variables.
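A sketch of the surrogate-fitness idea: bicubic 2-D splines fitted to a grid of measured points estimate each objective as a function of the slit dimensions (Ws, Ls), and a weighted sum of the three splines serves as the aggregate fitness inside the GA. The grid, measured values and weights below are hypothetical placeholders, not the thesis's laboratory data:

```python
# Three bicubic splines (one per objective) turned into an aggregate GA fitness.
import numpy as np
from scipy.interpolate import RectBivariateSpline

ws = np.linspace(2.0, 10.0, 5)          # hypothetical Ws sample grid (mm)
ls = np.linspace(2.0, 10.0, 5)          # hypothetical Ls sample grid (mm)
bandwidth = np.random.rand(5, 5)        # placeholder measurements per objective
return_loss = np.random.rand(5, 5)
freq_shift = np.random.rand(5, 5)

splines = [RectBivariateSpline(ws, ls, z, kx=3, ky=3)   # bicubic in both directions
           for z in (bandwidth, return_loss, freq_shift)]

def fitness(individual, weights=(1.0, 1.0, 1.0)):
    """Aggregate multiobjective fitness for a GA individual (Ws, Ls)."""
    Ws, Ls = individual
    objectives = [s(Ws, Ls)[0, 0] for s in splines]     # spline estimate of each objective
    return sum(w * o for w, o in zip(weights, objectives))

print(fitness((5.3, 7.1)))
```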
Abstract:
Self-organizing maps (SOM) are artificial neural networks widely used in the data mining field, mainly because they constitute a dimensionality reduction technique, given the fixed grid of neurons associated with the network. In order to properly partition and visualize the SOM network, the various methods available in the literature must be applied in a post-processing stage, which consists of inferring, through the neurons, relevant characteristics of the data set. In general, applying such processing to the network neurons, instead of the entire database, reduces the computational cost thanks to vector quantization. This work proposes a post-processing of the SOM neurons in the input and output spaces, combining visualization techniques with algorithms based on gravitational forces and on the search for the shortest path with the greatest reward. These methods take into account the connection strength between neighbouring neurons and characteristics of pattern density and inter-neuron distances, both associated with the positions that the neurons occupy in the data space after training the network. The goal is thus to define more clearly the arrangement of the clusters present in the data. Experiments were carried out to evaluate the proposed methods using various artificially generated data sets as well as real-world data sets. The results obtained were compared with those of a number of well-known methods in the literature.
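A baseline illustration of post-processing the trained neurons rather than the raw data, computing inter-neuron distances (U-matrix) and hit counts (pattern density); it uses the third-party MiniSom package as a stand-in and does not reproduce the gravitational or shortest-path methods proposed in the thesis:

```python
# Train a small SOM, then summarize its neurons instead of the full data set.
import numpy as np
from minisom import MiniSom

data = np.random.rand(1000, 4)                 # placeholder data set
som = MiniSom(10, 10, 4, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(data, 5000)

u_matrix = som.distance_map()                  # distances between neighbouring neurons
hits = np.zeros((10, 10))                      # pattern density per neuron
for x in data:
    i, j = som.winner(x)
    hits[i, j] += 1

# Clusters show up as regions of low inter-neuron distance and high hit density.
print(u_matrix.round(2))
print(hits)
```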
Abstract:
This dissertation, entitled O Auto da Morte e da Vida: A escrita barroca de João Cabral de Melo Neto, aims to analyze and interpret, from a baroque perspective, Cabral's writing in the poem/play Morte e vida severina: Auto de Natal Pernambucano, taking as its basis the theories of Eugênio D'Ors, Severo Sarduy, Omar Calabrese, Lezama Lima, Afonso Ávila, Affonso Romano de Sant'Anna and others cited in the body of this work. In the analysis we highlight confluences, relations, similarities and identifications between the Baroque of the Counter-Reformation and the modern Baroque, or Neobaroque. We seek to understand the Baroque that is renewed in the twentieth century, and Cabral's poetry as an element of contemporaneity, by updating the seventeenth-century concept of the Baroque, detected in its purest form in a human relation (the life of the Northeastern Brazilian) with an intangible reality (death). The Baroque emerges as the cultural synthesis of a period of instability and transformation, with the power to dismantle an already established poetics: the struggle between words and things, language and reality.
Abstract:
This study approaches Jorge Luis Borges's fictional prose from the perspective of mimesis and self-reflexivity. The hypothesis is that the Aleph is a central symbol of Borges's fictional universe. The rewriting and reworking of this symbol throughout his work lead to a reflection on the possibilities and limits of mimesis. The study is divided into three parts of two chapters each. The first part, Bibliographic review and conceptual foundations of the inquiry, discusses the author's critical fortune (Chapter 1) and the concepts that support the inquiry (Chapter 2). The second part, On Borges's aesthetic project, outlines the literary project defended by Borges, that is, his conception of literature and his ideological matrix (Chapter 3), alongside his anti-psychologism and his nostalgia for the epos (Chapter 4). The third and last part is entitled The Aleph and its doubles. Chapter 5 analyzes the short story El Aleph and considers its centrality in Borges's work, arguing that in this story Borges elaborates a reflection on mimesis. Chapter 6, in the same vein, analyzes four short stories: Funes el memorioso, El libro de arena, El evangelio según Marcos and Del rigor en la ciencia. The conclusion is that Borges's literature is self-aware of its own process, as shown by its parodic sense and its bookish origin. Hence, Borges's literature overcomes the mimetic crisis of language and challenges the limits between fiction and reality, without, however, surrendering to the nihilist perspective of literature's closure to the world.
Abstract:
This work is a detailed study of self-similar models for the expansion of extragalactic radio sources. A review is made of the definitions of AGN, the unified model is discussed, and the main characteristics of double radio sources are examined. Three classification schemes are outlined and the self-similar models found in the literature are studied in detail. A self-similar model is then proposed that generalizes the models found in the literature. In this model, the area of the head of the jet varies with the size of the jet as a power law with exponent γ. The atmosphere has a variable density that may or may not be spherically symmetric, and the time variation of the kinetic luminosity of the jet is taken into account, following a power law with exponent h. It is shown that Type I, II and III models are particular cases of the general model, and the evolution of the sources' radio luminosity is also discussed. The evolutionary curves of the general model are compared with those of the particular cases and with observational data in a P-D diagram. The results show that the model allows a better agreement with the observations, depending on the appropriate choice of the model parameters.
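In the usual notation of self-similar source models, the scaling laws described above can be written as follows; the symbols for the ambient density profile (ρ₀, a₀, β) are assumed here for illustration and are not taken from the thesis.

```latex
% Head area vs. source size, jet kinetic luminosity vs. time,
% and an assumed power-law ambient density profile.
\begin{equation}
  A_h \propto D^{\gamma}, \qquad
  Q_j(t) \propto t^{h}, \qquad
  \rho(r) = \rho_0 \left(\frac{r}{a_0}\right)^{-\beta}.
\end{equation}
```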