804 results for Computational learning theory
Abstract:
Flood forecasting systems can be used effectively when the lead time is sufficient compared with the time required for preventive or corrective actions. The reliability and accuracy of the forecasts are also fundamentally important. Flood level forecasts are always approximations, and confidence intervals are not always applicable, especially under high degrees of uncertainty, which produce very wide confidence intervals. Such intervals are problematic in the presence of very high or very low river levels. In this study, flood level forecasts are produced both in the traditional numerical form and in the form of categories, using a rule-based expert system with fuzzy inference. Methodologies and computational procedures for learning, simulation and querying are devised and then implemented as an application (SELF – Sistema Especialista com uso de Lógica "Fuzzy", an expert system using fuzzy logic), for research and operational purposes. Comparisons between fuzzy expert systems and linear empirical models, from the standpoint of their use for forecasting, reveal a strong analogy despite fundamental theoretical differences. The methodologies are applied to forecasting in the Camaquã river basin (15,543 km²), for lead times between 10 and 48 hours. Practical difficulties in the application are identified, and the resulting solutions constitute advances in knowledge and technique. Forecasts in both numerical and categorical form are carried out successfully with the new resources. The forecasts are evaluated and compared using a new set of statistics derived from the simultaneous frequencies of occurrence of observed and predicted values in the same category during the simulation. The effects of varying the gauge network density are analyzed, showing that real-time rainfall-stage forecasting systems are feasible even with a small number of rainfall data acquisition stations, for forecasts in the form of fuzzy categories.
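As an illustration of the category-based evaluation the abstract describes, the following sketch scores categorical forecasts from the simultaneous frequencies of observed and predicted categories. The category labels and the hit-rate statistic are illustrative assumptions, not the study's actual statistics.

```python
# Minimal sketch (not the SELF implementation): scoring categorical flood
# forecasts from the simultaneous frequencies of observed and predicted
# categories. Category labels are hypothetical.
import numpy as np

CATEGORIES = ["low", "normal", "high", "flood"]  # hypothetical fuzzy categories

def contingency_table(observed, predicted, categories=CATEGORIES):
    """Count how often each (observed, predicted) category pair occurs."""
    index = {c: i for i, c in enumerate(categories)}
    table = np.zeros((len(categories), len(categories)), dtype=int)
    for obs, pred in zip(observed, predicted):
        table[index[obs], index[pred]] += 1
    return table

def hit_rate(table):
    """Fraction of forecasts falling in the same category as the observation."""
    return np.trace(table) / table.sum()

obs = ["low", "high", "flood", "high", "normal"]
pred = ["low", "high", "high", "high", "normal"]
t = contingency_table(obs, pred)
print(t, hit_rate(t))  # 4 of 5 forecasts land in the observed category
```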
Abstract:
The proposed research aims at consolidating two years of practical experience in developing a classroom experiential learning pedagogic approach for the problem structuring methods (PSMs) of operational research. The results will be prepared as papers to be submitted, respectively, to the Brazilian ISSS-sponsored systems theory conference in São Paulo and to JORS. These two papers follow the submission (in 2004) of a related paper to JORS, which is about to be resubmitted following certain revisions. This first paper draws from the PSM and experiential learning literatures in order to introduce a basic foundation upon which a pedagogic framework for experiential learning of PSMs may be built. It forms, in other words, an integral part of my research in this area. By September, the area of pedagogic approaches to PSM learning will have received its first official attention, at the UK OR Society conference. My research and paper production during July-December therefore coincide with an important time in this area, enabling me to form part of the small cohort of published researchers creating the foundations upon which future pedagogic research will build. At the institutional level, such pioneering work also raises the national and international profile of FGV-EAESP, making it a reference for future researchers in this area.
Abstract:
Nesse artigo, eu desenvolvo e analiso um modelo de dois perí odos em que dois polí ticos competem pela preferência de um eleitor representativo, que sabe quão benevolente é um dos polí ticos mas é imperfeitamente informado sobre quão benevolente é o segundo polí tico. O polí tico conhecido é interpretado como um incumbente de longo prazo, ao passo que o polí tico desconhecido é interpretado como um desa fiante menos conhecido. É estabelecido que o mecanismo de provisão de incentivos inerente às elei cões - que surge através da possibilidade de não reeleger um incumbente - e considerações acerca de aquisi cão de informa cão por parte do eleitor se combinam de modo a determinar que em qualquer equilí brio desse jogo o eleitor escolhe o polí tico desconhecido no per íodo inicial do modelo - uma a cão à qual me refi ro como experimenta cão -, fornecendo assim uma racionaliza cão para a não reelei cão de incumbentes longevos. Especifi camente, eu mostro que a decisão do eleitor quanto a quem eleger no per odo inicial se reduz à compara cão entre os benefí cios informacionais de escolher o polí tico desconhecido e as perdas econômicas de fazê-lo. Os primeiros, que capturam as considera cões relacionadas à aquisi cão de informa cão, são mostrados serem sempre positivos, ao passo que as últimas, que capturam o incentivo à boa performance, são sempre não-negativas, implicando que é sempre ótimo para o eleitor escolher o polí tico desconhecido no per íodo inicial.
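The decision reduction described above can be stated compactly. The notation below (B for the informational benefit, C for the economic loss, V2 and u1 for continuation and first-period payoffs) is hypothetical shorthand, not the paper's own:

```latex
% Hypothetical notation, not taken from the paper: the voter elects the
% unknown challenger in period 1 whenever the informational benefit B of
% learning the challenger's type outweighs the economic loss C of forgoing
% the known incumbent's first-period performance.
\[
  \text{elect challenger} \iff B \ge C,
  \qquad B > 0,\; C \ge 0,
\]
% where, schematically,
\[
  B = \mathbb{E}\big[V_2(\text{informed choice})\big]
    - \mathbb{E}\big[V_2(\text{uninformed choice})\big],
  \qquad
  C = u_1(\text{incumbent}) - \mathbb{E}\big[u_1(\text{challenger})\big].
\]
```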
Abstract:
The present study aims to investigate the constructs of the Technology Readiness Index (TRI) and of Expectancy Disconfirmation Theory (EDT) as determinants of satisfaction and continuance intention of use in e-learning services. A theoretical model is proposed that seeks to measure the phenomenon, suited to the needs of public organizations that offer distance learning courses on virtual platforms for their employees. The research followed a quantitative analytical approach, via an online survey of a sample of 343 employees of two public organizations in RN who have had e-learning experience. Data analysis used multivariate techniques, including structural equation modeling (SEM), operationalized with the AMOS© software. The results showed that quality, quality disconfirmation, value and value disconfirmation positively impact satisfaction, as do usability disconfirmation, innovativeness and optimism. Likewise, satisfaction proved decisive for continuance intention of use. In addition, technological readiness and performance are strongly related. Based on the structural model found by the study, public organizations can implement e-learning services for employees focusing on improving learning and the skills practiced in the organizational environment.
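For readers who work in Python rather than AMOS, a path model of this shape can be sketched with the semopy library. This is a stand-in illustration under stated assumptions: the construct names and the survey file are hypothetical placeholders for the TRI/EDT constructs and data described in the abstract.

```python
# Sketch of the structural part of the model with semopy, as a stand-in for
# the AMOS software used in the study; construct names are hypothetical.
import pandas as pd
from semopy import Model

desc = """
satisfaction ~ quality + quality_disc + value + value_disc + usability_disc + innovativeness + optimism
continuance ~ satisfaction
"""

df = pd.read_csv("survey.csv")  # hypothetical file, one column per construct score
model = Model(desc)
model.fit(df)
print(model.inspect())  # path coefficients, standard errors and p-values
```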
Abstract:
Optimization techniques known as metaheuristics have been successful in solving many problems classified as NP-hard. These methods use non-deterministic approaches that find very good solutions but do not guarantee the determination of the global optimum. Beyond the inherent difficulties related to the complexity that characterizes optimization problems, metaheuristics still face the exploration/exploitation dilemma, which consists of choosing between a greedy search and a wider exploration of the solution space. One way to guide such algorithms while searching for better solutions is to supply them with more knowledge of the problem through an intelligent agent able to recognize promising regions and to identify when the direction of the search should be diversified. Accordingly, this work proposes the use of a reinforcement learning technique, the Q-learning algorithm, as an exploration/exploitation strategy for the GRASP (Greedy Randomized Adaptive Search Procedure) and Genetic Algorithm metaheuristics. The GRASP metaheuristic uses Q-learning instead of the traditional greedy-random algorithm in its construction phase. This replacement aims to improve the quality of the initial solutions used in the local search phase of GRASP, and it also provides the metaheuristic with an adaptive memory mechanism that allows good previous decisions to be reused and bad decisions to be avoided. In the Genetic Algorithm, the Q-learning algorithm was used to generate an initial population of high fitness and, after a given number of generations, whenever the diversity rate of the population falls below a certain limit L, to supply one of the parents used in the genetic crossover operator. Another significant change in the hybrid genetic algorithm is the proposal of a mutually interactive cooperation process between the genetic operators and the Q-learning algorithm. In this interactive/cooperative process, the Q-learning algorithm receives an additional update to its matrix of Q-values based on the current best solution of the Genetic Algorithm. The computational experiments presented in this thesis compare the results obtained with traditional versions of the GRASP metaheuristic and the Genetic Algorithm to those obtained using the proposed hybrid methods. Both algorithms were successfully applied to the symmetric Traveling Salesman Problem, which was modeled as a Markov decision process.
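The following is a minimal sketch of the idea described above, not the thesis code: a Q-learning policy replaces the greedy-random rule in the GRASP construction phase for the symmetric TSP, with the Q-table acting as adaptive memory across iterations. The learning parameters and reward shape are illustrative assumptions.

```python
# Q-learning-guided construction for the symmetric TSP (illustrative sketch).
import random
import numpy as np

def construct_tour(dist, Q, epsilon=0.1):
    """Build a tour city by city, choosing the next city greedily from the
    Q-table with probability 1 - epsilon, and at random otherwise."""
    n = len(dist)
    tour = [0]
    unvisited = set(range(1, n))
    while unvisited:
        s = tour[-1]
        if random.random() < epsilon:
            a = random.choice(list(unvisited))
        else:
            a = max(unvisited, key=lambda c: Q[s, c])
        tour.append(a)
        unvisited.discard(a)
    return tour

def update_q(Q, tour, dist, alpha=0.1, gamma=0.9):
    """One-step Q-learning update along the constructed tour; the reward is
    the negative edge length, so shorter edges are reinforced."""
    for s, a in zip(tour, tour[1:]):
        reward = -dist[s][a]
        Q[s, a] += alpha * (reward + gamma * Q[a].max() - Q[s, a])

n = 20
pts = np.random.rand(n, 2)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
Q = np.zeros((n, n))
for _ in range(200):              # construction episodes
    tour = construct_tour(dist, Q)
    update_q(Q, tour, dist)       # adaptive memory reused across iterations
```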
Abstract:
Frequency selective surfaces (FSS) are structures consisting of periodic arrays of conducting elements, called patches, which are usually very thin and printed on dielectric layers, or of apertures perforated in very thin metallic surfaces, for applications in the microwave and millimeter-wave bands. These structures are often used in aircraft, missiles, satellites, radomes, reflector antennas, high-gain antennas and microwave ovens, for example. Their main purpose is to filter frequency bands, which can be transmitted or rejected depending on the requirements of the application. In turn, modern communication systems such as GSM (Global System for Mobile Communications), RFID (Radio Frequency Identification), Bluetooth, Wi-Fi and WiMAX, whose services are in high demand, require antennas with low profile, low cost, and reduced dimensions and weight as their main features. In this context, the microstrip antenna is an excellent choice for today's communication systems because, in addition to intrinsically meeting the requirements just mentioned, planar structures are easy to manufacture and to integrate with other components of microwave circuits. Consequently, the analysis and synthesis of these devices, mainly due to the many possible shapes, sizes and operating frequencies of their elements, have been carried out with full-wave models such as the finite element method, the method of moments and the finite-difference time-domain method. These methods are accurate but demand great computational effort. In this context, computational intelligence (CI) has been used successfully, as a very appropriate auxiliary tool, in the design and optimization of planar microwave structures, given the complexity of the geometry of the antennas and FSS considered. Computational intelligence is inspired by natural phenomena such as learning, perception and decision-making, using techniques such as artificial neural networks, fuzzy logic, fractal geometry and evolutionary computation. This work studies the application of computational intelligence, using metaheuristics such as genetic algorithms and swarm intelligence, to the optimization of antennas and frequency selective surfaces. Genetic algorithms are computational search methods based on genetics and on the theory of natural selection proposed by Darwin, used to solve complex problems, e.g., problems whose search space grows with the size of the problem. Particle swarm optimization is characterized by the use of collective intelligence and has been applied to optimization problems in many areas of research. The main objective of this work is the use of computational intelligence in the analysis and synthesis of antennas and FSS. The structures considered are a ring-type planar monopole microstrip antenna and a cross-dipole FSS. Optimization algorithms were developed and results were obtained for the optimized geometries of the antennas and FSS considered. To validate the results, several prototypes were designed, built and measured. The measured results showed excellent agreement with the simulated ones. Moreover, the results obtained in this study were compared with simulations using commercial software, and an excellent agreement was also observed.
Specifically, the efficiency of the CI techniques used was evidenced by simulated and measured results, in the optimization of the bandwidth of an antenna for wideband (UWB, Ultra Wideband) operation using a genetic algorithm, and in the optimization of the bandwidth of two cascaded frequency selective surfaces, by specifying the length of the air gap between them, using a particle swarm optimization algorithm.
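A minimal particle swarm optimization loop for the kind of task described above might look as follows. The objective function is a made-up stand-in; in practice it would call a full-wave EM solver, and the swarm parameters and gap range are illustrative assumptions.

```python
# PSO sketch: tuning the air-gap length between two FSS to maximize bandwidth.
import numpy as np

def bandwidth(gap_mm):
    """Hypothetical smooth objective with a maximum near gap = 6 mm."""
    return -(gap_mm - 6.0) ** 2

rng = np.random.default_rng(0)
n_particles, n_iters = 20, 100
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration weights
x = rng.uniform(1.0, 12.0, n_particles)    # candidate gap lengths (mm)
v = np.zeros(n_particles)
pbest = x.copy()
pbest_val = np.array([bandwidth(xi) for xi in x])
gbest = pbest[pbest_val.argmax()]

for _ in range(n_iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 1.0, 12.0)          # keep gaps in a feasible range
    val = np.array([bandwidth(xi) for xi in x])
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[pbest_val.argmax()]

print(f"best gap: {gbest:.2f} mm")
```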
Abstract:
The Support Vector Machine (SVM) has attracted increasing attention in the machine learning area, particularly for classification and pattern recognition. However, in some cases it is not easy to determine accurately the class to which a given pattern belongs. This thesis involves the construction of an interval pattern classifier using SVM in association with interval theory, in order to model the separation of a pattern set into distinct classes with precision, aiming to obtain an optimized separation capable of handling the imprecision contained in the initial data and generated during computational processing. The SVM is a linear machine. To allow it to solve real-world problems (usually nonlinear ones), the pattern set, known as the input set, must be transformed so that the nonlinear problem becomes a linear one. Kernel machines are responsible for this mapping. To create the interval extension of the SVM, for both linear and nonlinear problems, it was necessary to define an interval kernel and to extend Mercer's theorem (which characterizes kernel functions) to interval functions.
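One ingredient of such an extension can be sketched concretely: an interval-valued RBF kernel evaluated on interval data, propagating lower and upper bounds through the squared distance. This is an illustrative construction under interval-arithmetic rules, not the thesis's definition of an interval kernel.

```python
# Interval-valued RBF kernel sketch (illustrative, not the thesis's definition).
import math

def interval_sq_dist(x, y):
    """Squared Euclidean distance between interval vectors.
    Each coordinate is a pair (lo, hi)."""
    lo = hi = 0.0
    for (xl, xh), (yl, yh) in zip(x, y):
        diffs = [xl - yl, xl - yh, xh - yl, xh - yh]
        dlo, dhi = min(diffs), max(diffs)
        # square of the interval [dlo, dhi]
        if dlo <= 0.0 <= dhi:
            slo, shi = 0.0, max(dlo * dlo, dhi * dhi)
        else:
            slo, shi = min(dlo * dlo, dhi * dhi), max(dlo * dlo, dhi * dhi)
        lo += slo
        hi += shi
    return lo, hi

def interval_rbf(x, y, gamma=1.0):
    """exp(-gamma * d^2) is decreasing in d^2, so the bounds swap."""
    lo, hi = interval_sq_dist(x, y)
    return math.exp(-gamma * hi), math.exp(-gamma * lo)

x = [(0.9, 1.1), (1.9, 2.1)]   # a pattern with measurement imprecision
y = [(1.0, 1.0), (2.5, 2.5)]   # a precise pattern as a degenerate interval
print(interval_rbf(x, y))       # lower and upper kernel values
```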
Abstract:
Following the recent trend toward interdisciplinarity in modern science, a new field called neuroengineering has emerged in the last decades. Since 2000, scientific journals and conferences on this theme have been created around the world. The present work comprises three subareas related to neuroengineering and electrical engineering: neural stimulation; theoretical and computational neuroscience; and neuronal signal processing; as well as biomedical engineering. The research can be divided into three parts. (i) A new method of neuronal photostimulation was developed based on the use of caged compounds. Using the inhibitory neurotransmitter GABA caged by a ruthenium complex, it was possible to block neuronal population activity with a laser pulse. The results were evaluated by wavelet analysis and tested with non-parametric statistics. (ii) A mathematical method was created to identify neuronal assemblies. Neuronal assemblies, proposed by Donald Hebb as the basis of learning, remain the most accepted theory for the neuronal representation of external stimuli. Using the Marchenko-Pastur law of eigenvalue distribution, it was possible to detect neuronal assemblies and to compute their activity with high temporal resolution. The application of the method to real electrophysiological data revealed that neurons from the neocortex and hippocampus can be part of the same assembly, and that neurons can participate in multiple assemblies. (iii) A new method of automatic classification of heart beats was developed, which does not rely on a training database and is not specialized in specific pathologies. The method is based on wavelet decomposition and normality measures of random variables. Altogether, the results presented in these three fields of knowledge represent qualification in neural and biomedical engineering.
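The Marchenko-Pastur step in part (ii) can be illustrated compactly: eigenvalues of the correlation matrix of z-scored spike counts that exceed the MP upper bound signal correlated groups of neurons, i.e., candidate assemblies. The data below are synthetic, with one planted assembly; this is a sketch of the detection criterion, not the thesis code.

```python
# Assembly detection via the Marchenko-Pastur eigenvalue bound (sketch).
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_bins = 50, 5000

# Hypothetical binned spike counts; neurons 0-9 share a common drive,
# forming one planted assembly.
counts = rng.poisson(2.0, (n_neurons, n_bins)).astype(float)
drive = rng.poisson(2.0, n_bins)
counts[:10] += drive

z = (counts - counts.mean(axis=1, keepdims=True)) / counts.std(axis=1, keepdims=True)
corr = (z @ z.T) / n_bins                      # neuron-by-neuron correlation matrix
eigvals = np.linalg.eigvalsh(corr)

q = n_neurons / n_bins
mp_upper = (1 + np.sqrt(q)) ** 2               # MP upper bound for i.i.d. data
n_assemblies = int((eigvals > mp_upper).sum()) # significant eigenvalues
print(f"detected {n_assemblies} assembly pattern(s); MP bound = {mp_upper:.3f}")
```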
Abstract:
Research in the area of teacher training in English as a Foreign Language (CELANI, 2003, 2004, 2010; PAIVA, 2000, 2003, 2005; VIEIRA-ABRAHÃO, 2010) articulates the complexity of beginning teachers' classroom contexts aligned with teaching language as a social and professional practice of the teacher in training. To better understand this relationship, the present study is based on a corpus of transcribed interviews and questionnaires applied to 28 undergraduate students majoring in Letters/English emphasis at a public university located in the interior of the Western Amazon region, soliciting their opinions about the reforms made in the curriculum of this major. Interviews and questionnaires were used as data collection instruments to trace a profile of the students, organized in Group 1, freshmen and sophomores following the 2009 curriculum, and Group 2, juniors and seniors following the 2006 curriculum. The objectives are to identify, characterize and analyze the types of pronouns, roles and social actors represented in the opinions of these students in relation to their teacher training curriculum. The theoretical support focuses on the challenge of historical and contemporary routes of English teachers' initial education programs (MAGALHÃES; LIBERALLI, 2009; PAVAN; SILVA, 2010; ALVAREZ, 2010; VIANA, 2011; PAVAN, 2012). Our theoretical perspective is based on the Systemic Functional Grammar of Halliday (1994), Halliday and Hasan (1989), Halliday and Matthiessen (2004), Eggins (1994; 2004) and Thompson (2004). We focus on the concept of interpersonal meaning, specifically regarding the roles articulated in the studies by Delu (1991) and Thompson and Thetela (1995), and in the Portuguese language by Ramos (1997), Silva (2006) and Cabral (2009). Moreover, we adopt van Leeuwen's (1997; 2003) theory of the Representation of Social Actors as a theoretical framework in order to identify the sociological aspect of the social actors represented in the students' discourse. Within this scenario, the analysis unfolds on three levels: grammatical (pronouns), semantic (roles) and discursive (social actors). For the analysis of the interpersonal realizations present in the students' opinions, we use the computational program WordSmith Tools (SCOTT, 2010) and its applications Wordlist and Concord to quantify the occurrences of the pronouns I, You and They, which characterize the roles and social actors of the corpus. The results show that the students assigned the following roles to themselves: (i) apprentice, to express their initial process of English language learning; (ii) freshman, to reveal their choice of major in Letters/English emphasis; (iii) future teacher, to relate their expectations towards a practicing professional. To assign roles to the professors in the major, the students used the metaphor of modality (I think) to indicate the teacher training relationship, while in the role of student and of future teacher. From this evidence, the representation of the students as social actors emerges in roles such as: (i) active roles; (ii) passive roles; and (iii) personalized roles. The social actors represented in the opinions of the students reflect the inclusion of these roles assigned to the actions expressed about their experiences and expectations derived from their teacher training classroom.
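The Wordlist counting step can be reproduced outside WordSmith Tools. The sketch below is a rough Python stand-in for that step, counting occurrences of the pronouns I, You and They in a corpus of transcripts; the file name is hypothetical.

```python
# Pronoun frequency counting, a stand-in for the WordSmith Wordlist step.
import re
from collections import Counter

PRONOUNS = {"i", "you", "they"}

with open("interviews.txt", encoding="utf-8") as f:
    tokens = re.findall(r"[a-zA-Z']+", f.read().lower())

counts = Counter(t for t in tokens if t in PRONOUNS)
total = len(tokens)
for pronoun, n in counts.most_common():
    print(f"{pronoun:>4}: {n} occurrences ({1000 * n / total:.1f} per 1,000 tokens)")
```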
Abstract:
This work presents a methodology for analyzing the first-swing transient stability of electric power systems using a neural network based on the adaptive resonance theory (ART) architecture, called the Euclidean ARTMAP neural network. ART architectures have plasticity and stability characteristics, which are very important for training and for executing the analysis quickly. The Euclidean ARTMAP version provides more accurate and faster solutions when compared to the fuzzy ARTMAP configuration. Three steps are necessary for the network to operate: training, analysis and continuous training. The training step requires considerable effort (processing), while the analysis is performed with almost no computational effort. The proposed network allows several topologies of the electric system to be handled at the same time; it is therefore an alternative for real-time transient stability analysis of electric power systems. To illustrate the proposed neural network, an application is presented for a multi-machine electric power system composed of 10 synchronous machines, 45 buses and 73 transmission lines.
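The plasticity/stability trade-off that ART architectures provide can be illustrated with a generic matching step: an input joins the nearest stored prototype if it lies within a vigilance radius, otherwise a new category is created. This is a schematic sketch, not the paper's Euclidean ARTMAP; the features and parameters are hypothetical.

```python
# Generic ART-style clustering sketch (not the paper's Euclidean ARTMAP).
import numpy as np

def art_cluster(patterns, vigilance=0.5, lr=0.3):
    prototypes, labels = [], []
    for x in patterns:
        if prototypes:
            d = [np.linalg.norm(x - p) for p in prototypes]
            j = int(np.argmin(d))
            if d[j] <= vigilance:                  # resonance: update (plasticity)
                prototypes[j] += lr * (x - prototypes[j])
                labels.append(j)
                continue
        prototypes.append(x.astype(float).copy())  # novelty: new category (stability)
        labels.append(len(prototypes) - 1)
    return prototypes, labels

rng = np.random.default_rng(2)
stable = rng.normal(0.0, 0.1, (20, 3))     # hypothetical "stable case" features
unstable = rng.normal(1.0, 0.1, (20, 3))   # hypothetical "unstable case" features
protos, labels = art_cluster(np.vstack([stable, unstable]))
print(f"{len(protos)} categories formed")
```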
Abstract:
Logic courses represent a pedagogical challenge, and the recorded number of cases of failure and dropout in them is often high. Among other difficulties, students face a cognitive overload when trying to understand logical concepts in a meaningful way. Along these lines, computational learning tools are resources that help both to alleviate cognitive overload and to allow practical experimentation with theoretical concepts. The present study proposes an interactive tutorial, TryLogic, aimed at teaching how to solve logical conjectures either by proofs or by refutations. The tool was developed from the architecture of the TryOcaml tool, using the communication support of the ProofWeb web interface to access the Coq proof assistant. The goals of TryLogic are: (1) to present a set of lessons for applying heuristic strategies to solving problems in propositional logic; (2) to organize the exposition of concepts related to natural deduction and propositional semantics in sequential steps; (3) to provide interactive tasks to the students. The present study also aims to present our implementation of a formal system for refutation; to describe the integration of our infrastructure with the Moodle Virtual Learning Environment through the IMS Learning Tools Interoperability specification; to present the Conjecture Generator, which supplies the proving and refuting tasks; and, finally, to evaluate the learning experience of logic students through the application of the conjecture-solving task associated with the use of TryLogic.
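The prove-or-refute decision behind such conjecture tasks can be made concrete with a truth-table check: a propositional conjecture is provable if it is a tautology, and refutable if some assignment falsifies it. The encoding below is a hypothetical illustration, not TryLogic's implementation.

```python
# Truth-table decision: prove (tautology) or refute (counterexample exists).
from itertools import product

def implies(a, b):
    return (not a) or b

def is_tautology(formula, variables):
    """formula maps a dict of truth values to a bool."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if not formula(env):
            return False, env      # counterexample found: refute
    return True, None              # no counterexample: prove

# contraposition: (p -> q) -> (~q -> ~p), a tautology
taut = lambda e: implies(implies(e["p"], e["q"]),
                         implies(not e["q"], not e["p"]))
# p -> q alone is not a tautology: refutable
non_taut = lambda e: implies(e["p"], e["q"])

print(is_tautology(taut, ["p", "q"]))      # (True, None)
print(is_tautology(non_taut, ["p", "q"]))  # (False, {'p': True, 'q': False})
```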
Abstract:
Background: The genome-wide identification of both morbid genes, i.e., genes whose mutations cause hereditary human diseases, and druggable genes, i.e., genes coding for proteins whose modulation by small molecules elicits phenotypic effects, requires experimental approaches that are time-consuming and laborious. Thus, a computational approach that could accurately predict such genes on a genome-wide scale would be invaluable for accelerating the pace of discovery of causal relationships between genes and diseases, as well as the determination of the druggability of gene products.
Results: In this paper we propose a machine learning-based computational approach to predict morbid and druggable genes on a genome-wide scale. For this purpose, we constructed a decision tree-based meta-classifier and trained it on datasets containing, for each morbid and druggable gene, network topological features, tissue expression profile and subcellular localization data as learning attributes. This meta-classifier correctly recovered 65% of known morbid genes with a precision of 66%, and correctly recovered 78% of known druggable genes with a precision of 75%. It was then used to assign morbidity and druggability scores to genes not known to be morbid or druggable, and we showed a good match between these scores and literature data. Finally, we generated decision trees by training the J48 algorithm on the morbidity and druggability datasets to discover cellular rules for morbidity and druggability; among the rules, we found that the number of regulating transcription factors and plasma membrane localization are the most important factors for morbidity and druggability, respectively.
Conclusions: We were able to demonstrate that network topological features, along with tissue expression profile and subcellular localization, can reliably predict human morbid and druggable genes on a genome-wide scale. Moreover, by constructing decision trees based on these data, we could discover cellular rules governing morbidity and druggability.
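A schematic Python version of this classification setup follows, using scikit-learn's DecisionTreeClassifier in place of Weka's J48 (both build C4.5-style trees). The feature columns and labels are hypothetical stand-ins for the network-topology, expression and localization attributes described above, so the reported score is meaningless beyond illustrating the pipeline.

```python
# Decision-tree gene classification sketch (features and labels are synthetic).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n_genes = 1000
X = np.column_stack([
    rng.poisson(5, n_genes),          # e.g. number of regulating TFs
    rng.integers(1, 40, n_genes),     # e.g. number of tissues expressed in
    rng.integers(0, 2, n_genes),      # e.g. plasma-membrane localization flag
])
y = rng.integers(0, 2, n_genes)       # placeholder morbid / non-morbid labels

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="precision")
print(f"mean precision: {scores.mean():.2f}")   # ~0.5 on random labels
```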
Abstract:
In this work, the plate bending formulation of the boundary element method (BEM), based on Reissner's hypothesis, is extended to the analysis of zoned plates in order to model a building floor structure. In the proposed formulation each sub-region defines a beam or a slab, and depending on the way the sub-regions are represented, two different types of analysis are possible. In the simple bending problem all sub-regions are defined by their middle surface. On the other hand, for the coupled stretching-bending problem all sub-regions are referred to a chosen reference surface, so eccentricity effects are taken into account. Equilibrium and compatibility conditions are automatically imposed by the integral equations, which treat this composed structure as a single body. The bending and stretching values defined on the interfaces are approximated along the beam width, therefore reducing the number of degrees of freedom. Then, in the proposed model the set of equations is written in terms of the problem values on the beam axes and on the external boundary without beams. Finally, some numerical examples are presented to show the accuracy of the proposed model.
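For context, direct BEM formulations of this kind rest on a boundary integral identity relating boundary displacements and tractions through fundamental solutions. The statement below is the generic schematic form; the paper's Reissner-plate kernels and the beam/slab coupling terms are not reproduced here.

```latex
% Generic direct boundary integral identity (schematic). Displacements u_j
% and tractions p_j on the boundary \Gamma are related through the
% fundamental solutions u*_{ij}, p*_{ij}:
\[
  c_{ij}(\xi)\, u_j(\xi)
  + \int_{\Gamma} p^{*}_{ij}(\xi, x)\, u_j(x)\, d\Gamma(x)
  = \int_{\Gamma} u^{*}_{ij}(\xi, x)\, p_j(x)\, d\Gamma(x)
  + \int_{\Omega} u^{*}_{ij}(\xi, x)\, b_j(x)\, d\Omega(x),
\]
% with c_{ij} the jump term at the source point \xi and b_j the domain load.
```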