900 results for statistical relational learning
Abstract:
In the present work, we propose a model for the statistical distribution of people versus the number of steps acquired by them in a learning process, based on competition, learning and natural selection. We assume that learning ability is normally distributed. We find that the number of people versus the steps acquired by them in a learning process follows a power law. As competition, learning and selection are also at the core of all economic and social systems, we consider power-law scaling to be a quantitative description of this process in social systems. This offers an alternative way of thinking about holistic properties of complex systems. (C) 2004 Elsevier B.V. All rights reserved.
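The power-law behaviour described above can be checked numerically. Below is a minimal sketch (using synthetic Pareto-distributed data, not the authors' model) that estimates the exponent of a power-law distribution of steps by fitting a line to a log-log histogram:

import numpy as np

# Hypothetical data: number of learning steps acquired by each person.
# A Pareto sample stands in for real measurements.
rng = np.random.default_rng(0)
steps = rng.pareto(a=1.5, size=10_000) + 1

# Histogram on logarithmic bins; a power law N(s) ~ s^-alpha is a
# straight line in log-log coordinates.
bins = np.logspace(0, np.log10(steps.max()), 30)
counts, edges = np.histogram(steps, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = counts > 0

slope, intercept = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)
print(f"estimated exponent alpha ~ {-slope:.2f}")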
Abstract:
Structural health monitoring (SHM) refers to the ability to monitor the state of aerospace, civil and mechanical systems and to decide the level of damage or deterioration within them. In this sense, this paper deals with the application of a two-step auto-regressive and auto-regressive with exogenous inputs (AR-ARX) model for linear prediction in damage diagnosis of structural systems. This damage detection algorithm is based on monitoring residual errors as damage-sensitive indexes, obtained through vibration response measurements. In complex structures there are many positions under observation and a large amount of data to be handled, which makes visualization of the signals difficult. This paper therefore also investigates data compression using principal component analysis. In order to establish a threshold value, fuzzy c-means clustering is used to quantify the damage-sensitive index in an unsupervised learning mode. Tests are performed on a benchmark problem proposed by the IASC-ASCE with different damage patterns. The diagnosis obtained showed high correlation with the actual integrity state of the structure. Copyright © 2007 by ABCM.
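The residual-error idea behind this algorithm can be sketched in a few lines. The following is an illustrative simplification (a single AR model fitted by least squares, not the paper's exact two-step AR-ARX scheme), where the standard deviation of the one-step prediction error serves as the damage-sensitive index:

import numpy as np

def fit_ar(x, order):
    """Least-squares AR(order) coefficients for signal x."""
    rows = [x[i:i + order][::-1] for i in range(len(x) - order)]
    A, y = np.array(rows), x[order:]
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def residual_index(x, coeffs):
    """Std of one-step prediction errors: the damage-sensitive index."""
    order = len(coeffs)
    rows = [x[i:i + order][::-1] for i in range(len(x) - order)]
    pred = np.array(rows) @ coeffs
    return np.std(x[order:] - pred)

# Hypothetical vibration responses: a reference (healthy) record and a
# record whose dynamics have shifted, mimicking damage.
rng = np.random.default_rng(1)
healthy = np.sin(0.20 * np.arange(2000)) + 0.05 * rng.standard_normal(2000)
damaged = np.sin(0.25 * np.arange(2000)) + 0.05 * rng.standard_normal(2000)

coeffs = fit_ar(healthy, order=10)
# The index grows when the model fitted on healthy data no longer predicts.
print(residual_index(healthy, coeffs), residual_index(damaged, coeffs))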
Abstract:
Given that the total amount of losses in a distribution system is known, and with a reliable methodology for calculating technical losses, the non-technical losses can be obtained by subtraction. A usual method of calculating technical losses in electric utilities uses two important factors: the load factor and the loss factor. The load factor is usually obtained from energy and demand measurements, whereas computing the loss factor requires knowledge of the demand and energy losses, which are not, in general, amenable to direct measurement. In this work, a statistical analysis of this relationship is presented, using the load curves of a sample of consumers from a specific company. These curves are summarized in different bands of the coefficient k, making it possible to determine where each group of consumers has its greatest concentration of points. ©2008 IEEE.
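The relationship studied here is commonly expressed through the classical empirical formula loss factor = k·LF + (1 − k)·LF², where LF is the load factor and k is the coefficient whose bands the paper analyzes. A minimal sketch (the default k = 0.3 below is a textbook convention, not a value from this study):

def loss_factor(load_factor: float, k: float = 0.3) -> float:
    """Classical empirical relation between load factor and loss factor.

    k weights the linear term; k = 0.3 is a common textbook default,
    while the paper bands consumers by their observed k.
    """
    return k * load_factor + (1 - k) * load_factor ** 2

# Example: a feeder with a 0.6 load factor.
print(loss_factor(0.6))         # 0.432 with k = 0.3
print(loss_factor(0.6, k=0.2))  # 0.408 with a lower k

Because k differs between consumer groups, banding consumers by k, as done in the paper, localizes where each group concentrates on the LF-loss-factor plane.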
Abstract:
This paper presents the analysis and evaluation of the Power Electronics course at São Paulo State University (UNESP), Campus of Ilha Solteira (SP), Brazil, which includes the use of interactive Java simulation tools and educational software to aid the teaching of power electronic converters. This platform serves as an oriented course for the lectures and as supplementary support for laboratory experiments in the power electronics courses. The simulation tools provide an interactive and dynamic way to visualize the behavior of power electronic converters, together with the educational software, which covers the theory and a list of subjects for circuit simulations. In order to verify the performance and effectiveness of the proposed interactive educational platform, a statistical analysis covering the last three years is presented. © 2011 IEEE.
Abstract:
The recent significant growth of Spanish teaching in Brazil, and the consequent need to train teachers of this language, has been demanding special care from higher education institutions in the process of training future teachers. The object of study of this thesis is therefore centred on teaching, in its multiple pedagogical dimensions. The general objective is to analyse, from the perspective of the cited authors, the implications of teaching work for the development of the teaching-learning process in the subjects that deal specifically with Hispanic Language and Culture in the Letters-Spanish degree programmes at two universities, the Universidade Federal do Pará and the Universidade da Amazônia, located in the city of Belém, in the state of Pará (Brazil). From this objective, questions addressed to teachers and students were formulated to guide the development of the study. The methodological approach adopted is grounded in the principles of a quantitative-qualitative approach, considering the assumptions of the phenomenological paradigm, with descriptive-analytical procedures, from the perspective of triangulation of methods and subjects as a way of integrating the different aspects of the study. Teachers and students from the two institutions participated as research subjects. Data were collected through documentary analysis, questionnaires with open and closed questions, and semi-structured interviews. Responses to the closed questions of the questionnaires were processed with descriptive statistical procedures, while responses to the open questions and the interviews were analysed with procedures specific to content analysis. The results revealed that teaching practices present some critical points, centred mainly on planning, the assessment process, infrastructure and the relational climate. However, these difficulties do not constitute insurmountable obstacles, especially since other aspects were rated positively by teachers and students, mainly those related to work with content, methodological procedures and the use of didactic resources. In general, there is a clearly positive opinion of the teaching practice of the lecturers at the two institutions. From the analysis of these results, some recommendations are derived that may contribute to joint reflection by the community involved, with a view to improving teaching practice in the Letters-Spanish teaching degree programmes in Brazil.
Abstract:
This study aimed to verify the effects of a metatextual intervention program on the production of written stories by students with learning difficulties. The sample included four students of both genders, aged between eight years four months and ten years two months. The program was implemented at the participants' schools, using a within-subjects multiple-baseline design with two conditions: baseline and intervention. Data analysis was based on the classification of the stories produced by the students. The Mann-Whitney test was also applied to analyze whether there were significant changes in these productions. The results indicated that all students improved their performance with respect to the categories of the produced stories, moving from elementary schemas (33%) to more elaborate schemas (77%), with better structuring of the elements that constitute a story. Statistical analysis also showed that the intervention produced significant results for all variables analyzed. The data obtained showed that the program was effective.
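The Mann-Whitney comparison between conditions can be reproduced with SciPy's mannwhitneyu; the scores below are hypothetical, purely for illustration:

from scipy.stats import mannwhitneyu

# Hypothetical story-structure scores before and after the intervention.
baseline = [2, 3, 2, 4, 3, 2, 3, 2]
intervention = [5, 6, 5, 7, 6, 6, 5, 7]

stat, p = mannwhitneyu(baseline, intervention, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")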
Abstract:
Objective: Is it feasible to learn the basics of wet mount microscopy of vaginal fluid in 10 hours? Materials and Methods: This is a pilot project in which 6 students with different levels of education were tested on their ability to read wet mount microscopic slides before and after 10 hours of hands-on training. Microscopy was performed according to a standard protocol (Femicare, Tienen, Belgium). Before and after training, all students had to evaluate a different set of 50 digital slides. Different diagnoses and microscopic patterns had to be scored, and kappa indices were calculated against the expert reading. Results: All readers improved their mean scores significantly, especially for the most important types of altered flora (p < .0001). The mean increase in reading concordance (kappa from 0.64 to 0.75) of the 1 student with solid previous experience with microscopy did not reach statistical significance, but the remaining 5 students all improved their scores from poor performance (all kappa < 0.20) to moderate (kappa = 0.53, n = 1) or good (kappa > 0.61, n = 4) concordance. Reading quality improved and reached fair to good concordance on all microscopic items studied, except for the detection of parabasal cells and cytolytic flora. Conclusions: Although further improvement is still possible, a short training course of 10 hours enables a vast improvement in wet mount microscopy accuracy and results in fair to good concordance on the most important variables of the vaginal flora compared with a reference reader.
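The kappa indices reported above measure agreement with the expert reading; they can be computed with scikit-learn's cohen_kappa_score. The slide labels below are hypothetical:

from sklearn.metrics import cohen_kappa_score

# Hypothetical slide readings: student vs. expert, 10 slides.
expert  = ["normal", "bv", "normal", "candida", "bv",
           "normal", "bv", "candida", "normal", "bv"]
student = ["normal", "bv", "bv", "candida", "bv",
           "normal", "normal", "candida", "normal", "bv"]

# Kappa on the [-1, 1] scale; 1.0 means perfect agreement.
print(cohen_kappa_score(student, expert))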
Abstract:
The thesis of this work is based on the assumption that the socio-economic system in which we live is characterised by three great trends: growing attention to the promotion of human capital; extremely rapid technological progress, based above all on information and communication technologies (ICT); and the establishment of new production and organizational set-ups. These transformation processes pose a concrete challenge to the training sector, which is called on to satisfy the demand for new skills that need to be developed and disseminated. Hence the growing interest that the various training sub-systems devote to the issues of lifelong learning and distance learning. In such a context, so-called e-learning acquires a central role. The first chapter proposes a theoretical frame of reference for the transformations that are shaping post-industrial society. It analyzes some key issues, such as how work is changing, the evolution of organizational set-ups and the introduction of the learning organization, the advent of the knowledge society and of knowledge companies, the innovation of training processes, and the key role of ICT in the new training and learning systems. The second chapter focuses on e-learning as an effective training model in response to the need for constant learning that is emerging in the knowledge society. This chapter starts with a reflection on the importance of lifelong learning and introduces the key subjects of the thesis, i.e. distance learning (DL) and the didactic methodology called e-learning. It goes on to analyze the various theoretical and technical aspects of e-learning. In particular, it delves into e-learning as an integrated and constant training environment, characterized by customized programmes and collaborative learning, didactic assistance and constant monitoring of results. All aspects of e-learning are thus examined: the actors and the new professionals, virtual communities as learning subjects, the organization of contents in learning objects, conformity to international standards, integrated platforms and so on. The third chapter, which concludes the theoretical-interpretative part, starts with a short presentation of the state of the art of the international e-learning market, aimed at understanding its peculiarities and current trends. Finally, we focus on some important regulatory aspects related to the strong impulse given to the development and diffusion of e-learning, first by the European Commission and then by the Italian governments. The second part of the thesis (chapters 4, 5 and 6) focuses on field research aimed at defining the Italian scenario for e-learning. In particular, we examine some key topics: the challenges of training and the instruments to face them; the new didactic methods and technologies for lifelong learning; the level of diffusion of e-learning in Italy; the relation between classroom training and online training; and the main success factors as well as the most critical aspects of introducing e-learning in the various learning environments. As far as methodology is concerned, we have combined qualitative and quantitative analysis. A background analysis was carried out to collect the statistical data available on this topic, as well as previous research in this area.
The main source of data is the Observatory on e-learning of Aitech-Assinform, which covers the 2000s and four areas of implementation (firms, public administration, universities, schools): the thesis reviews the results of the last three available surveys, offering a comparative interpretation of them. We then carried out an in-depth empirical examination of two case studies, selected by virtue of the excellence they have achieved, which can therefore be considered advanced and emblematic experiences (a large firm and a Graduate School).
Abstract:
Statistical modelling and statistical learning theory are two powerful analytical frameworks for analyzing signals and developing efficient processing and classification algorithms. In this thesis, these frameworks are applied to modelling and processing biomedical signals in two different contexts: ultrasound medical imaging systems and primate neural activity analysis and modelling. In the context of ultrasound medical imaging, two main applications are explored: deconvolution of signals measured from an ultrasonic transducer, and automatic image segmentation and classification of prostate ultrasound scans. In the former application, a stochastic model of the radio frequency signal measured from an ultrasonic transducer is derived. This model is then employed to develop, in a statistical framework, a regularized deconvolution procedure for enhancing signal resolution. In the latter application, different statistical models are used to characterize images of prostate tissue, extracting different features. These features are then used to segment the images into regions of interest by means of an automatic procedure based on a statistical model of the extracted features. Finally, machine learning techniques are used for automatic classification of the different regions of interest. In the context of neural activity signals, a bio-inspired dynamical network was developed to support studies of motor-related processes in the brain of primate monkeys. The presented model aims to mimic the abstract functionality of a cell population in the 7a parietal region of primate monkeys during the execution of learned behavioural tasks.
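The regularized-deconvolution step can be illustrated with a simple Tikhonov-penalized least-squares formulation (an assumption made here for illustration; the thesis derives its own stochastic model of the RF signal):

import numpy as np
from scipy.linalg import toeplitz

def tikhonov_deconvolve(y, pulse, lam=1e-2):
    """Recover reflectivity x from y = H x + noise by solving
    x = argmin ||Hx - y||^2 + lam * ||x||^2."""
    n = len(y)
    col = np.r_[pulse, np.zeros(n - len(pulse))]
    H = toeplitz(col, np.r_[pulse[0], np.zeros(n - 1)])
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

# Hypothetical setup: a sparse reflectivity sequence blurred by a short
# oscillating transducer pulse, plus measurement noise.
rng = np.random.default_rng(2)
x = np.zeros(200); x[[40, 90, 95, 150]] = [1.0, 0.8, -0.6, 0.5]
pulse = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2) * np.cos(np.arange(21))
y = np.convolve(x, pulse)[:200] + 0.01 * rng.standard_normal(200)

x_hat = tikhonov_deconvolve(y, pulse)
# The largest entries of x_hat should sit near the true reflector positions.
print(np.sort(np.argsort(np.abs(x_hat))[-4:]))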
Abstract:
In the collective imagination, a robot is a human-like machine, like the androids of science fiction. However, the robots encountered most frequently are machines that do work that is too dangerous, boring or onerous for people. Most of the robots in the world are of this type. They can be found in the automotive, medical, manufacturing and space industries. A robot, then, is a system that contains sensors, control systems, manipulators, power supplies and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, which include the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object: the physical attributes that may influence a grasp. Humans solve this grasping problem easily thanks to past experience, which is why many researchers approach it from a machine learning perspective, finding grasps for an object using information about already known objects. But humans can select the best grasp from a vast repertoire not only by considering the physical attributes of the object to grasp, but also in order to obtain a certain effect. This is why, in our case, the study of robot manipulation focuses on grasping and on integrating symbolic tasks with data gained through sensors. The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This representation has several advantages: it takes into account the uncertainty of the real world, allowing sensor noise to be handled; it encodes a notion of causality; and it provides a unified network for learning. Since the network as implemented is based on human expert knowledge, it is very interesting to implement an automated method to learn its structure, as more tasks and object features will be introduced in the future, and a complex network design based only on human expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyze the real data used in the network modeled by the human expert, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
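Score-based structure learning of the kind discussed here can be sketched as a greedy hill climb over edge additions under a linear-Gaussian BIC score; full algorithms (and libraries such as pgmpy) also consider edge deletions and reversals. A minimal, self-contained sketch:

import numpy as np

def bic_node(X, j, parents):
    """Linear-Gaussian BIC contribution of node j given a parent set."""
    n = X.shape[0]
    A = np.column_stack([X[:, sorted(parents)], np.ones(n)]) if parents \
        else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    resid = X[:, j] - A @ beta
    sigma2 = max(resid @ resid / n, 1e-12)
    return (-0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
            - 0.5 * A.shape[1] * np.log(n))

def would_cycle(parents, child, new_parent):
    """True if adding new_parent -> child would close a directed cycle."""
    stack, seen = [new_parent], set()
    while stack:
        node = stack.pop()
        if node == child:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(parents[node])
    return False

def hill_climb(X, eps=1e-6):
    """Greedily add the edges that improve the total BIC score."""
    d = X.shape[1]
    parents = {j: set() for j in range(d)}
    improved = True
    while improved:
        improved = False
        for i in range(d):
            for j in range(d):
                if i == j or i in parents[j] or would_cycle(parents, j, i):
                    continue
                gain = (bic_node(X, j, parents[j] | {i})
                        - bic_node(X, j, parents[j]))
                if gain > eps:
                    parents[j].add(i)
                    improved = True
    return parents

# Toy usage: variable 1 depends linearly on variable 0.
rng = np.random.default_rng(3)
a = rng.standard_normal(500)
b = 2 * a + 0.1 * rng.standard_normal(500)
print(hill_climb(np.column_stack([a, b])))  # e.g. {0: set(), 1: {0}}

A learned structure like this can then be compared edge by edge with the expert-designed network, which is the comparison the thesis sets out to make.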
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and deals with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizeable training set and notable computational effort. Methods for cross-domain text categorization have been proposed that make it possible to leverage a set of labeled documents from one domain to classify those of another. Most methods use advanced statistical techniques, usually involving the tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvement, but models generated from one domain are shown to be effectively reusable in a different one.
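The iterative centroid-adaptation idea can be sketched with scikit-learn; this is a generic illustration of the scheme (TF-IDF centroids adapted by self-labelling on the target domain), not the thesis's exact profile-generation method:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize

def cross_domain_nc(src_texts, src_labels, tgt_texts, iters=5):
    """Nearest-centroid cross-domain categorization: category centroids
    are built on the labeled source domain, then iteratively re-estimated
    on the unlabeled target domain via self-labelling."""
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(list(src_texts) + list(tgt_texts))
    X_src, X_tgt = X[:len(src_texts)], X[len(src_texts):]
    cats = sorted(set(src_labels))
    y_src = np.asarray(src_labels)
    # Initial centroids: mean tf-idf vector of each source category.
    C = normalize(np.asarray(np.vstack(
        [X_src[y_src == c].mean(axis=0) for c in cats])))
    for _ in range(iters):
        # Rows are L2-normalized, so the dot product is cosine similarity.
        pred = np.asarray((X_tgt @ C.T).argmax(axis=1)).ravel()
        for k in range(len(cats)):
            if (pred == k).any():  # keep the old centroid if empty
                C[k] = normalize(np.asarray(X_tgt[pred == k].mean(axis=0)))[0]
    return [cats[k] for k in pred]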
Abstract:
The present work is aimed at the study and analysis of defects detected in civil structures that are the object of civil litigation, in order to create instruments capable of helping the different actors involved in the building process. It is divided into three main sections. The first part focuses on the collection of data related to the civil proceedings of 2012 and the development of an in-depth analysis of the main aspects of defects in existing buildings. The research center "Osservatorio Claudio Ceccoli" developed a system for collecting information from the civil proceedings of the Court of Bologna. Statistical analyses were performed, and the results are shown and discussed in the first chapters. The second part analyzes the main issues that emerged during the study of the real cases, related to the activities of the technical consultant. The idea is to create documents, called "focus", intended to clarify and codify specific problems, in order to develop guidelines that help the technical consultant draft the technical advice. The third part centres on the assessment of the methods used for data collection. The first results show that these are not efficient. The critical analysis of the database, the results and the experience gained allowed the data collection system to be improved.
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely a kind of brute-force statistical approach and whether they can only work in the context of High Performance Computing with huge amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view within the broad umbrella of deep learning, and are good choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations like Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is a new, emerging paradigm and a mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform CNN.
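For reference, the sketch below defines a minimal CNN of the kind used in such object recognition comparisons; it is a generic PyTorch baseline, not the architecture evaluated in the thesis:

import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Two conv blocks plus a linear classifier for 32x32 RGB images."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 32, 32))  # a batch of 4 images
print(logits.shape)  # torch.Size([4, 10])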