3 results for BOUNDARY VALUE PROBLEMS

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study is to analyze the strategies used by families living in at-risk and vulnerable situations registered with the Estratégia Saúde da Família (ESF, Family Health Strategy) as they face their daily problems. This is a qualitative investigation, using the interview as the main tool for an empirical approach. Ten women from the Panatis neighborhood in northern Natal, Rio Grande do Norte, whose families live in precarious socioeconomic situations, were interviewed between April and June 2007. The reports revealed that a mixture of improvisation and creativity served as strategies for overcoming the privations and necessities of daily life. We also concluded that these families sought solutions to their problems through religiosity and a system of gift reciprocity, resources for obtaining personal recognition and support in adversity. The results further point to the ESF as one of the strategies used by these families in their search for attention and care. From this perspective, the ESF has proven to be a place for listening and for building ties, consolidated through home visits, organized groups, and the parties and outings promoted in the community, reestablishing contact and support among people and signaling a way out of abandonment and isolation. As holders of knowledge constructed through life experience, the participants of the study led us to infer the need to expand spaces that allow them to express meanings, values and experiences, and to consider that becoming ill is a process that incorporates dimensions of life beyond the physical. As health professionals, we need to be aware of the multiple and creative abilities used in the daily lives of these families, so that we can, along with them, reinvent a new way of dealing with health.

Relevance:

30.00%

Publisher:

Abstract:

We propose a new paradigm for collective learning in multi-agent systems (MAS) as a solution to the problem in which several agents acting on the same environment must learn, simultaneously, how to perform tasks based on the feedback given by each of the other agents. We introduce the proposed paradigm in the form of a reinforcement learning algorithm, naming it reinforcement learning with influence values. While learning from rewards, each agent evaluates the relation between its current state and/or the action executed in that state (its current belief) together with the reward obtained after all interacting agents perform their actions. The reward is thus a result of the interference of the others. Each agent considers the opinions of all its colleagues when attempting to change the values of its states and/or actions. The idea is that the system as a whole must reach an equilibrium in which all agents are satisfied with the obtained results, that is, in which the values of the state/action pairs match the reward obtained by each agent. This dynamic way of setting the values of states and/or actions makes this new reinforcement learning paradigm the first to naturally include the fact that the presence of other agents in the environment makes it dynamic. As a direct result, we implicitly include the internal state, the actions and the rewards obtained by all the other agents in the internal state of each agent. This makes our proposal the first complete solution to the conceptual problem that arises when applying reinforcement learning to multi-agent systems, caused by the mismatch between the environment and agent models. Based on the proposed model, we create the IVQ-learning algorithm, which is exhaustively tested in repeated games with two, three and four agents, and in stochastic games that require cooperation or collaboration.
This algorithm shows itself to be a good option for obtaining solutions that guarantee convergence to the optimal Nash equilibrium in cooperative problems. The experiments performed clearly show that the proposed paradigm is theoretically and experimentally superior to the traditional approaches. Moreover, this new paradigm enlarges the set of reinforcement learning applications in MAS: besides applying the algorithm to traditional learning problems in MAS, such as task coordination in multi-robot systems, it becomes possible to apply reinforcement learning to problems that are essentially collaborative.
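As a rough illustration of the influence-value idea, the sketch below runs Q-learning for two agents in a repeated coordination game, where each agent's update target mixes its own reward with the other agent's current value estimate. The payoff matrix, the influence weight `BETA` and the exact form of the influence term are assumptions for illustration only; the abstract does not give the actual IVQ-learning update rule.

```python
import random

# Hypothetical sketch of Q-learning with an "influence" term for a
# two-agent repeated coordination game. The exact IVQ update rule is
# not stated in the abstract; this only illustrates the idea that each
# agent's value update accounts for the other agent's opinion.

ACTIONS = [0, 1]

def reward(a0, a1):
    # Simple coordination game: both agents are rewarded only when
    # they choose the same action.
    return 1.0 if a0 == a1 else 0.0

ALPHA, EPS, BETA = 0.1, 0.1, 0.5  # learning rate, exploration, influence weight

def choose(q):
    # Epsilon-greedy action selection over a dict of action values.
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])

random.seed(0)
Q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]
for _ in range(5000):
    acts = [choose(Q[i]) for i in range(2)]
    r = reward(*acts)
    for i in range(2):
        other = 1 - i
        # The "influence value": each agent folds in the other agent's
        # current estimate of its own chosen action (an assumed
        # stand-in for the paper's influence-value term).
        influence = BETA * Q[other][acts[other]]
        target = r + influence
        Q[i][acts[i]] += ALPHA * (target - Q[i][acts[i]])

# After training, both agents should prefer the same action.
best = [max(ACTIONS, key=lambda a: Q[i][a]) for i in range(2)]
print(best[0] == best[1])
```

With the influence term pulling both value tables toward the jointly rewarded action, the two greedy policies settle on a coordinated equilibrium, which is the kind of mutual satisfaction the paradigm describes.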

Relevance:

30.00%

Publisher:

Abstract:

Pattern classification is one of the most prominent subareas of machine learning. Among the various approaches to pattern classification problems, Support Vector Machines (SVMs) receive great emphasis due to their ease of use and good generalization performance. The Least Squares formulation of the SVM (LS-SVM) finds its solution by solving a set of linear equations instead of the quadratic programming problem solved by the standard SVM. LS-SVMs have free parameters that must be correctly chosen to achieve satisfactory results in a given task. Although LS-SVMs achieve high performance, many techniques have been developed to improve them, mainly the development of new classification methods and the use of ensembles, that is, combinations of several classifiers. In this work, we propose using an ensemble and a Genetic Algorithm (GA), a search algorithm based on the evolution of species, to enhance LS-SVM classification. In the construction of this ensemble, we use a random selection of the attributes of the original problem, which splits the original problem into smaller ones, on each of which one classifier acts. We then apply a genetic algorithm to find effective values for the LS-SVM parameters and also a weight vector that measures the importance of each machine in the final classification. The final classification is obtained as a linear combination of the decision values of the LS-SVMs, weighted by this vector. We used several classification problems, taken as benchmarks, to evaluate the performance of the algorithm and compared the results with those of other classifiers.
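A minimal sketch of the ensemble construction might look like the following. The toy data, the RBF kernel choice and the accuracy-based weights are all illustrative assumptions: in the paper the kernel parameters and the weight vector are evolved by the genetic algorithm, which is omitted here for brevity.

```python
import numpy as np

# Sketch of the proposed ensemble: several LS-SVMs, each trained on a
# random subset of the attributes, combined by a weight vector. The
# paper tunes the LS-SVM parameters and the weights with a genetic
# algorithm; here the weights come from training accuracy just to
# keep the sketch short (an assumption, not the paper's method).

rng = np.random.default_rng(0)

def rbf(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between row sets A and B.
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    # LS-SVM training is one linear system instead of a QP:
    # [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return lambda Z: rbf(Z, X, sigma) @ alpha + b  # decision values

# Toy two-class data in 4 attributes (labels in {-1, +1}).
X = rng.normal(size=(80, 4))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

# Build the ensemble: each member sees a random attribute subset.
members, weights = [], []
for _ in range(5):
    feats = rng.choice(4, size=2, replace=False)
    f = lssvm_fit(X[:, feats], y)
    acc = np.mean(np.sign(f(X[:, feats])) == y)
    members.append((feats, f))
    weights.append(acc)  # stand-in for the GA-evolved weight
weights = np.array(weights) / np.sum(weights)

def predict(Z):
    # Final label = sign of the weighted sum of decision values.
    vote = sum(w * f(Z[:, feats]) for w, (feats, f) in zip(weights, members))
    return np.sign(vote)

print(np.mean(predict(X) == y))  # weighted-ensemble training accuracy
```

The random attribute subsets give each member a different view of the problem, and the weight vector lets the combination favor the members whose view turned out informative, which is what the GA search optimizes in the actual work.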