Abstract:
This paper presents an evaluative study of the effects of using a machine learning technique on the main features of a self-organizing, multiobjective genetic algorithm (GA). A typical GA is a search technique usually applied to problems of non-polynomial complexity. Originally, these algorithms were designed as methods that seek acceptable solutions to problems where the global optimum is inaccessible or difficult to obtain. At first, GAs considered only one evaluation function and single-objective optimization. Today, however, implementations that consider several optimization objectives simultaneously (multiobjective algorithms) are common, as are algorithms that change many of their components dynamically (self-organizing algorithms). Combinations of GAs with machine learning techniques, intended to improve their performance and usability, are also common. In this work, a GA combined with a machine learning technique was analyzed and applied to an antenna design. We used a variant of bicubic interpolation, called 2D Spline, as the machine learning technique to estimate the behavior of a dynamic fitness function, based on knowledge obtained from a set of laboratory experiments. This fitness function, also called the evaluation function, is responsible for determining the fitness of a candidate solution (individual) relative to the others in the same population. The algorithm can be applied in many areas, including telecommunications, such as the design of antennas and frequency-selective surfaces. In this particular work, the algorithm was developed to optimize the design of a microstrip antenna, commonly used in wireless communication systems for Ultra-Wideband (UWB) applications.
The algorithm optimized two variables of the antenna geometry, the length (Ls) and width (Ws) of a slit in the ground plane, with respect to three objectives: radiated signal bandwidth, return loss, and central frequency deviation. These two dimensions (Ws and Ls) are used as variables in three different interpolation functions, one Spline for each optimization objective, which are combined into a multiobjective, aggregate fitness function. The final result proposed by the algorithm was compared with the result of a simulation program and with measurements of a physical prototype of the antenna built in the laboratory. The algorithm was analyzed with respect to its degree of success in relation to four important characteristics of a self-organizing multiobjective GA: performance, flexibility, scalability, and accuracy. At the end of the study, an increase in execution time was observed in comparison with a common GA, due to the time required by the machine learning process. On the plus side, we noticed an appreciable gain in flexibility and accuracy of the results, and a promising path indicating how to extend the algorithm to optimization problems with n variables
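The per-objective surfaces combined into one aggregate fitness can be sketched as follows. The grids, sample values, and weights below are illustrative assumptions, not the paper's laboratory data, and simple bilinear interpolation stands in for the 2D Spline described in the abstract:

```python
import bisect

def bilinear(grid_x, grid_y, values, x, y):
    """Bilinear interpolation on a regular grid (a stand-in for 2D Spline)."""
    i = min(max(bisect.bisect_right(grid_x, x) - 1, 0), len(grid_x) - 2)
    j = min(max(bisect.bisect_right(grid_y, y) - 1, 0), len(grid_y) - 2)
    x0, x1 = grid_x[i], grid_x[i + 1]
    y0, y1 = grid_y[j], grid_y[j + 1]
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    return ((1 - tx) * (1 - ty) * values[i][j]
            + tx * (1 - ty) * values[i + 1][j]
            + (1 - tx) * ty * values[i][j + 1]
            + tx * ty * values[i + 1][j + 1])

def aggregate_fitness(ws, ls, surfaces, weights):
    """Weighted sum of the per-objective interpolated values at (Ws, Ls).
    `surfaces` is one (grid_x, grid_y, values) triple per objective."""
    return sum(w * bilinear(gx, gy, v, ws, ls)
               for w, (gx, gy, v) in zip(weights, surfaces))
```

Each of the three objectives (bandwidth, return loss, frequency deviation) would contribute one interpolated surface fitted to measured responses; the weights encode their relative importance in the aggregate function.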
Abstract:
This work proposes a collaborative system for marking dangerous points along transport routes and generating alerts to drivers. It consists of a proximity warning system for danger points, fed by the drivers themselves through mobile devices equipped with GPS. The system consolidates the data provided by several different drivers and generates a set of common points to be used by the warning system. Although the application is designed to protect drivers, the data it generates can also serve as input for the authorities responsible for improving the signage and maintenance of public roads
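The core of such a proximity warning can be sketched as a distance check between the driver's GPS position and the consolidated danger points. The radius and function names below are illustrative assumptions, not the system's actual design:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat = p2 - p1
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def nearby_dangers(lat, lon, danger_points, radius_m=200.0):
    """Return the consolidated danger points within radius_m of the
    driver's current position, i.e. the points that trigger an alert."""
    return [p for p in danger_points
            if haversine_m(lat, lon, p[0], p[1]) <= radius_m]
```

Consolidation across drivers could then cluster nearby reports into single common points before they are fed to `nearby_dangers`.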
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. Acquisition, processing, and interpretation of seismic data are the parts that constitute a seismic study. Seismic processing in particular is focused on imaging the geological structures of the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by hardware advances that provided greater storage and digital processing capabilities, enabling more sophisticated processing algorithms such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time consuming, due to the heuristics of the mathematical algorithms and the extensive amount of input and output data involved; it may take days, weeks, or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods unviable. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique.
Furthermore, analyses such as speedup and efficiency were performed and, ultimately, the degree of algorithmic scalability was identified with respect to the technological advances expected in future processors
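The kind of kernel at the core of RTM can be sketched as one time step of the 2D acoustic wave equation on a small grid. Grid size, velocity, and step sizes below are toy assumptions; the work described in the abstract parallelizes loops like the outer one below with OpenMP (e.g. a `#pragma omp parallel for` over `i` in C), which this pure-Python sketch can only annotate:

```python
def wave_step(p_prev, p_curr, vel, dt, dx):
    """One explicit second-order time step of the 2D acoustic wave equation:
    p_next = 2*p_curr - p_prev + (vel*dt/dx)^2 * laplacian(p_curr).
    Boundaries are kept at zero for simplicity."""
    n = len(p_curr)
    p_next = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):          # <- the loop OpenMP would split across threads
        for j in range(1, n - 1):
            lap = (p_curr[i + 1][j] + p_curr[i - 1][j]
                   + p_curr[i][j + 1] + p_curr[i][j - 1]
                   - 4.0 * p_curr[i][j])
            c = (vel[i][j] * dt / dx) ** 2
            p_next[i][j] = 2.0 * p_curr[i][j] - p_prev[i][j] + c * lap
    return p_next
```

In RTM this stepping is run forward for the source wavefield and backward in time for the receiver wavefield, and the two are cross-correlated to form the image, which is where the heavy I/O mentioned in the abstract comes from.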
Abstract:
This work presents a scalable and efficient parallel implementation of the standard Simplex algorithm on multicore architectures for solving large-scale linear programming problems. We present a general scheme explaining how each step of the standard Simplex algorithm was parallelized, indicating important points of the parallel implementation. The performance analysis was conducted by comparing the sequential time of the Simplex tableau with that of IBM's CPLEX Simplex. The experiments were executed on a shared-memory machine with 24 cores. The scalability analysis was performed with problems of different dimensions, finding evidence that our parallel standard Simplex algorithm has better parallel efficiency for problems with more variables than constraints. In comparison with CPLEX, the proposed parallel algorithm achieved an efficiency up to 16 times better
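The sequential baseline can be sketched as the classic tableau Simplex. This toy solver assumes a bounded, feasible maximization problem with "<=" constraints and nonnegative variables; the dense row updates in the pivot step are exactly the work a multicore version would split across threads:

```python
def simplex_tableau(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0. Returns the optimal value."""
    m, n = len(A), len(c)
    # Build the tableau with slack variables; the last row is the objective.
    T = [A[i] + [1.0 if k == i else 0.0 for k in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-cj for cj in c] + [0.0] * m + [0.0])
    while True:
        # Entering variable: most negative reduced cost.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-12:
            return T[-1][-1]  # optimal value reached
        # Leaving variable: minimum ratio test (assumes boundedness).
        rows = [i for i in range(m) if T[i][col] > 1e-12]
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):          # independent row updates: parallelizable
            if i != row and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [a - f * r for a, r in zip(T[i], T[row])]
```

Because each non-pivot row update touches a full row of length n + m + 1 independently, problems with more variables than constraints give each thread more work per pivot, which is consistent with the efficiency evidence reported in the abstract.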
Abstract:
Youngsters and teenagers are still a group that is highly vulnerable to STD/AIDS. To combat this vulnerability, a community intervention project was developed in the Mãe Luiza neighborhood in the city of Natal-RN, entitled Strengthening the Community Action Network for Prevention in HIV/AIDS: Knowing and Intervening, popularly known as Project Viva Mãe Luiza. The project runs educommunication workshops that approach the STD/AIDS subject through the following media: video, photography, theater, and booklet. This research is part of the project's activities and its main objective is to investigate how the media communication strategies and practices developed in Project Viva Mãe Luiza, through the educommunication workshops, supported learning about STD/AIDS prevention and contributed to reducing vulnerability to STD/AIDS among the adolescent and young participants of the project living in the Mãe Luiza community. The methodology was based on intervention research, with field diaries, literature and documentary review, in-depth interviews, and ethnographic observation as data-gathering techniques. The qualitative analysis was based on the monitoring of the video, photography, theater, and booklet workshops, crossed by issues transversal to STD/AIDS prevention, conducted between June 2012 and December 2013. Interviews were conducted with eight multipliers, aiming to understand their perceptions of vulnerability, prevention, multiplication, and the use of the media that were part of the project. The analyses show that learning in community educommunication workshops has repercussions on health, both in the development of individual communication skills and in changing perceptions about the vulnerabilities to which participants are exposed, in the awareness about prevention at the individual level, and in differentiated multiplication actions in the community
Abstract:
This work presents improvement strategies for a successful evolutionary metaheuristic for solving the Asymmetric Traveling Salesman Problem: a Memetic Algorithm designed mainly for this problem. The improvement consists of applying the optimization techniques known as Path-Relinking and Vocabulary Building, the latter used in two different ways, in order to evaluate the effects of the improvement on the evolutionary metaheuristic. The methods were implemented in C++ and the experiments were run on instances from the TSPLIB library, making it possible to observe that the proposed procedures were successful in the tests performed
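Path-Relinking between two ATSP tours can be sketched as a greedy walk from an initial solution toward a guiding solution, fixing one tour position per move and keeping the best intermediate tour. The distance matrix and tours below are toy assumptions, not the dissertation's C++ implementation:

```python
def tour_cost(tour, dist):
    """Cost of a directed (asymmetric) tour, returning to the start city."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def path_relinking(initial, guiding, dist):
    """Walk from `initial` to `guiding` by swaps; return the best tour seen."""
    current = list(initial)
    best, best_cost = list(current), tour_cost(current, dist)
    for i in range(len(current)):
        if current[i] != guiding[i]:
            j = current.index(guiding[i])
            current[i], current[j] = current[j], current[i]  # one relinking move
            c = tour_cost(current, dist)
            if c < best_cost:
                best, best_cost = list(current), c
    return best, best_cost
```

In the memetic algorithm, such a walk would typically connect a population individual to an elite solution, harvesting any improving intermediate tour.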
Abstract:
This research evaluated the contribution of the Support Center for Family Health (SCFH) to the Family Health Units, through the perceptions of the Family Health Strategy (FHS) and SCFH professionals, in addition to the satisfaction of users regarding its role. Data were collected in the public health services of the city of Macaíba-RN in 2012, through a semi-structured questionnaire and non-participant systematic observation, covering 272 subjects (60 FHS professionals, 12 SCFH professionals, and 200 users representing 20 units). A categorization process was used to analyze the responses to the open questions; the observational method was based on checking the organization of the space, the characteristics of the participating subjects, and the specific set of activities performed by the SCFH teams, using an observation guide. The results point to a good acceptance of the role of the SCFH teams by most FHS professionals, who reported their active participation in the routine of the health units and the integration of their activities with the FHS teams through resolute health promotion actions. The SCFH professionals likewise reported a positive contribution, participating actively in the units' routine with activities integrated with the FHS teams and developing resolute actions. For users, the SCFH brought assured services, with better access to specialized, resolute, and welcoming care. The systematic observation confirmed the data obtained by questionnaire. A need was perceived to implement actions related to men's health, to invest in expanding the number of SCFH teams, to increase the supply of medications, to improve the regulatory process, and to plan jointly, as key strategies to promote a more effective integration between the SCFH and FHS teams
Abstract:
Popular Health Education, in its emancipatory dimension, leads individuals and groups to exchange knowledge and experiences, allowing them to associate health with the outcomes of their living conditions. Under this view, health workers and health users are both subjects of the educational process. This study aims to identify the key clinical and socio-sanitary attributes of patients with Diabetes Mellitus (DM) and to promote educational activities with them in a Family Health Care Unit of the Western Sanitary District, in the city of Natal/RN. It is an action research study that draws on the Theory of Liberating Education, which is based on a problem-posing pedagogy and values dialogue in the process of understanding oneself and the world. Thirty-six diabetics, residents of the area covered by the health care unit, and thirty health workers participated in the study. Each group had an average of twelve participants, and the meetings took place in the Unit's hall, using conversation circles, group dynamics, life narratives, experience sharing, film exhibition and discussion, music, and the sharing of knowledge, desires, limitations, beliefs, and socially constructed values. Data collection took place during the second half of 2013, through the Free Word Association Technique (FWAT), recordings of the conversation circles, participant observation, group dynamics, testimonies, questionnaires, life narratives, and photographs. The empirical material was organized and subjected to three analyses: thematic content analysis (Bardin), textual statistical analysis using the IRAMUTEQ software (Ratinaud), and photographic analysis (Edmund Feldman). The data analysis produced words, expressions, categories, themes, and creative situations showing that popular health education is under construction, but still very incipient in primary care.
The National Policy on Popular Health Education shows us the ways necessary for the transformation of health practices and the building of a more shared and solidary society. The meetings could be a place to reverse the normative logic that has prevailed over the years in primary care, but by themselves they are not enough. It is possible to conclude that the use of active practices, greater listening, and training in Popular Health Education will enable changes in the scenario where users and health workers deal with diabetes mellitus. We thus see that popular health education is being timidly incorporated into the educational process of the subjects involved in this study, and remains far from the principles of participation, organization of political work, expansion of opportunities for dialogue, respect, solidarity, and tolerance among the different actors involved in addressing health problems, principles that are fundamental to building healthier practices in primary care
Abstract:
This paper presents a study on university extension, taking as its reference the extension activities under the theme of human rights and justice developed at UFRN from 2008 to 2010. To do so, it traces the conceptions of extension in Brazil from the 1970s to the 2000s. The study considered the neoliberal social context of the university, dominated by educational policies centered on the hegemony of liberal ideas about society, reflecting the great advances of capital over the organization of workers in recent decades, intensified in the 1990s. The research was guided by two main motivations: the opportunity to apprehend a way of enforcing the commitment of public institutions of higher education to the disadvantaged sections of society, and the role that university extension plays in a socially committed public university. The general aim of the study is to identify, within university extension, what it means for its practitioners and what results it produces for society and for the academic training of future professional citizens in the current neoliberal context. The research was developed from an analytical and critical approach based on quantitative and qualitative data, using observation techniques and semi-structured interviews. We sought to investigate and understand the social reality, the main object of this work, with an interest in identifying the need for a new teaching/learning process and a new university practice, in order to effectively achieve an advanced academic formation. For this, interviews were conducted with teachers, students, and members of the external community involved in extension actions in the period defined by the work, i.e., from 2008 to 2010. It was observed that the academic work of university extension is essential to civic education.
It was also recognized as a privileged space in which the university fulfills its social commitment to society, insofar as it joins scientific and popular knowledge with a view to a new science and a new social order
Abstract:
Over the years, the use of application frameworks designed for the View and Controller layers of the MVC architectural pattern, adapted to web applications, has become very popular. These frameworks are classified as action-oriented or component-oriented, according to the solution strategy adopted by the tool. The choice of strategy leads the system architecture to acquire non-functional characteristics caused by the way the framework influences the developer to implement the system. Component reusability is one of these characteristics, and it plays a very important role in development activities such as system evolution and maintenance. This dissertation analyzes how reusability can be influenced by the use of web frameworks. To accomplish this, small academic management applications were developed using the latest versions of the Apache Struts and JavaServer Faces frameworks, the main representatives of web frameworks for the Java platform. The assessment used a software quality model that associates internal attributes, which can be measured objectively, with the characteristics in question. The attributes and metrics defined for the model were based on related work discussed in the document
Abstract:
In everyday life we constantly perform actions, two of which are frequent and very important: classifying (sorting by classes) and making decisions. When we face problems with a relatively high degree of complexity, we tend to seek other opinions, usually from people who have some knowledge of, or, where possible, are experts in, the problem domain in question, in order to help us in the decision-making process. In both the classification process and the decision-making process, we are guided by the characteristics involved in the specific problem. The characterization of a set of objects is part of decision making in general. In Machine Learning, this classification is performed by a learning algorithm, and the characterization is applied to databases. Classification algorithms can be employed individually or in machine committees. Choosing the best methods for building a committee is a very arduous task. This work investigates meta-learning techniques for selecting the best configuration parameters of homogeneous committees for various classification problems. These parameters are: the base classifier, the architecture, and the size of this architecture. Nine types of inducers were investigated as candidate base classifiers, along with two methods of architecture generation and nine sizes for the architecture. Dimensionality reduction techniques were applied to the meta-bases in search of improvement. Five classification methods were investigated as meta-learners in the process of choosing the best parameters of a homogeneous committee.
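The meta-learning step can be sketched as a meta-base that maps dataset meta-features to the committee configuration that worked best on that dataset, with a nearest-neighbour meta-learner recommending a configuration for a new dataset. The meta-features, entries, and configurations below are illustrative assumptions, not the study's actual nine inducers, two architectures, and nine sizes:

```python
from math import dist

# Hypothetical meta-base:
# (log of instance count, class entropy) -> (base_classifier, architecture, size)
META_BASE = [
    ((2.0, 0.3), ("decision_tree", "bagging", 10)),
    ((4.5, 0.9), ("naive_bayes", "boosting", 25)),
    ((3.0, 0.5), ("knn", "bagging", 15)),
]

def recommend(meta_features):
    """1-nearest-neighbour meta-learner: return the committee configuration
    of the most similar previously characterized dataset."""
    _, config = min(META_BASE, key=lambda entry: dist(entry[0], meta_features))
    return config
```

In the dissertation's setting the meta-learner itself is one of five classification methods, and the meta-base would be built from systematic experiments over many datasets rather than the three hand-written rows above.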
Abstract:
This work proposes and evaluates a modification of Ant Colony Optimization, based on the results of experiments performed on the Selective Ride Robot problem (PRS), a new problem also proposed in this work. Four metaheuristics are implemented, GRASP, VNS, and two versions of Ant Colony Optimization, and their results are analyzed by running the algorithms on 32 instances created during this work. The metaheuristics also have their results compared with an exact approach. The results show that the algorithm implemented with the GRASP metaheuristic performs well, and that the multicolony ant colony algorithm, proposed and evaluated in this work, shows the best results
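A generic GRASP skeleton of the kind applied to the PRS can be sketched as greedy randomized construction (via a restricted candidate list) followed by local search, repeated over several iterations. The toy objective below (pick k items maximizing total value) merely stands in for the routing problem, whose definition the abstract does not give:

```python
import random

def grasp(values, k, iters=20, alpha=0.3, seed=0):
    """GRASP sketch: construct with a restricted candidate list, then
    locally improve by swapping a chosen item for a better unused one."""
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    items = list(range(len(values)))
    for _ in range(iters):
        # Greedy randomized construction.
        sol, cand = [], items[:]
        while len(sol) < k:
            cand.sort(key=lambda i: -values[i])
            rcl = cand[:max(1, int(alpha * len(cand)))]  # restricted candidate list
            pick = rng.choice(rcl)
            sol.append(pick)
            cand.remove(pick)
        # Local search: replace an item with a strictly better unused one.
        improved = True
        while improved:
            improved = False
            for i in list(sol):
                for j in items:
                    if j not in sol and values[j] > values[i]:
                        sol[sol.index(i)] = j
                        improved = True
                        break
        val = sum(values[i] for i in sol)
        if val > best_val:
            best, best_val = sorted(sol), val
    return best, best_val
```

For the actual PRS, the construction step would append route visits and the local search would apply routing moves (e.g. relocations or exchanges) instead of item swaps.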
Abstract:
This work addresses the optimization problem of high-dose-rate brachytherapy in the treatment of cancer patients, aiming at the definition of the set of dwell times. The solution technique adopted was Computational Transgenetics supported by the L-BFGS method. The developed algorithm was used to generate non-dominated solutions whose dose distributions were capable of eliminating the cancer while preserving the normal regions
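The continuous subproblem behind dwell-time optimization can be sketched as follows: doses at control points are linear in the dwell times (dose = D·t), and we minimize the squared deviation from prescribed doses subject to t >= 0. Plain projected gradient descent is used here as a simple stand-in for the L-BFGS method named in the abstract, and the dose-rate matrix and prescriptions are toy assumptions:

```python
def optimize_dwell_times(D, prescribed, steps=2000, lr=0.05):
    """Minimize sum_i (sum_j D[i][j]*t[j] - prescribed[i])^2 over t >= 0
    by projected gradient descent (a stand-in for L-BFGS)."""
    m, n = len(D), len(D[0])
    t = [0.0] * n
    for _ in range(steps):
        dose = [sum(D[i][j] * t[j] for j in range(n)) for i in range(m)]
        grad = [2 * sum(D[i][j] * (dose[i] - prescribed[i]) for i in range(m))
                for j in range(n)]
        t = [max(0.0, tj - lr * g) for tj, g in zip(t, grad)]  # project onto t >= 0
    return t
```

In the multiobjective setting of the abstract, separate penalty terms for tumor coverage and normal-tissue sparing would replace the single quadratic objective, yielding the trade-off among non-dominated solutions.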
Abstract:
Web services are computational solutions designed according to the principles of Service-Oriented Computing. Web services can be built upon pre-existing services available on the Internet by means of composition languages. We propose a method to generate WS-BPEL processes from abstract specifications provided with high-level control-flow information. The proposed method allows the composition designer to concentrate on high-level specifications, in order to increase productivity and generate specifications that are independent of specific web services. We consider service orchestrations, that is, compositions in which a central process coordinates all the operations of the application. The generation of compositions is based on a rule-rewriting algorithm, which has been extended to support basic control-flow information. We created a prototype of the extended refinement method and performed experiments on simple case studies
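The rule-rewriting idea can be sketched on a toy representation: an abstract composition is a tree of control-flow nodes ("sequence", "flow") and abstract activities, and rewrite rules replace each abstract activity with a concrete service invocation while control flow is preserved. The node names, rules, and services below are illustrative assumptions, not the actual method or WS-BPEL syntax:

```python
# Hypothetical rewrite rules: abstract activity -> concrete invocation.
RULES = {
    "get_quote": ("invoke", "QuoteService", "getQuote"),
    "book_item": ("invoke", "OrderService", "book"),
}

def refine(node):
    """Recursively rewrite an abstract composition tree into a concrete one."""
    kind = node[0]
    if kind in ("sequence", "flow"):       # control-flow nodes are preserved
        return (kind,) + tuple(refine(child) for child in node[1:])
    return RULES[kind]                     # abstract activities are rewritten

# An abstract orchestration: a sequence whose second step runs two
# activities in parallel ("flow").
abstract = ("sequence", ("get_quote",),
            ("flow", ("book_item",), ("get_quote",)))
```

A full refinement method would additionally emit WS-BPEL XML for each node and check that the chosen concrete services match the abstract activities' interfaces.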
Abstract:
Data clustering is applied in various fields, such as data mining, image processing, and pattern recognition. Clustering algorithms split a data set into clusters such that elements within the same cluster have a high degree of similarity, while elements belonging to different clusters have a high degree of dissimilarity. The Fuzzy C-Means (FCM) algorithm is the fuzzy clustering algorithm most used and discussed in the literature. The performance of FCM is strongly affected by the selection of the initial cluster centers; therefore, the choice of a good set of initial centers is very important for the performance of the algorithm. In FCM, however, the initial centers are chosen randomly, making it difficult to find a good set. This work proposes three new methods for obtaining the initial cluster centers deterministically for the FCM algorithm, which can also be used in variants of FCM; in this work, the initialization methods were applied to the ckMeans variant. With the proposed methods, we intend to obtain a set of initial centers that is close to the real cluster centers. With these new initialization approaches, we aim to reduce the number of iterations these algorithms need to converge, and their processing time, without affecting the quality of the clusters, or even to improve the quality in some cases. Accordingly, cluster validation indices were used to measure the quality of the clusters obtained by the modified FCM and ckMeans algorithms with the proposed initialization methods when applied to various data sets
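One possible deterministic initialization of the kind the abstract argues for is a farthest-point traversal: start from the point closest to the data mean, then repeatedly add the point farthest from the centers chosen so far. This is an illustrative scheme, not one of the dissertation's three proposed methods:

```python
def farthest_point_init(data, c):
    """Return c deterministic initial centers chosen from `data`:
    first the point nearest the global mean, then farthest-point additions."""
    n, d = len(data), len(data[0])
    mean = [sum(p[k] for p in data) / n for k in range(d)]
    sq = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    centers = [min(data, key=lambda p: sq(p, mean))]
    while len(centers) < c:
        # Next center: the point whose nearest chosen center is farthest away.
        centers.append(max(data, key=lambda p: min(sq(p, ctr)
                                                   for ctr in centers)))
    return centers
```

Because the selection depends only on the data, repeated runs of FCM (or ckMeans) seeded this way start from the same centers, which is what makes iteration counts and validation indices comparable across experiments.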