882 results for reinforcement learning, cryptography, machine learning, deep learning, Deep Q-Learning (DQN), AES
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Visual recognition is a fundamental research topic in computer vision. This dissertation explores datasets, features, learning, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, this dissertation develops an approach to collect object image datasets from web pages using an analysis of the text around each image and of the image's appearance. This method exploits established online knowledge resources (Wikipedia pages for text; Flickr and Caltech data sets for images), which provide rich text and object appearance information. This dissertation describes results on two datasets. The first is Berg's collection of 10 animal categories; on this dataset, we significantly outperform previous approaches. On an additional set of 5 categories, experimental results show the effectiveness of the method. Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet. Image tags are noisy. The method obtains the text features of an unannotated image from the tags of its k-nearest neighbors in this auxiliary collection (a brief sketch of this step follows this abstract). A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples, whereas this text feature may change little, because the auxiliary dataset likely contains a similar picture. Although the tags associated with images are noisy, they are more stable under appearance changes. The performance of this feature is tested using the PASCAL VOC 2006 and 2007 datasets. This feature performs well; it consistently improves the performance of visual object classifiers, and is particularly effective when the training dataset is small. As more and more training data are collected, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVMs. This dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). The proposed training method is useful for many vision problems, as it produces a kernel classifier that is more accurate than a linear classifier and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in a sequence, so memory is no longer the bottleneck when processing large-scale datasets. This dissertation applies this approach to train classifiers for Flickr groups, each with many training examples. The resulting Flickr group prediction scores can be used to measure the similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show that the learned Flickr features perform better for image matching, retrieval, and classification than conventional visual features. Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. This dissertation proposes an approach that uses comparative object similarity.
The key insight is that, given a set of object categories which are similar and a set of categories which are dissimilar, a good object model should respond more strongly to examples from similar categories than to examples from dissimilar categories. This dissertation develops a regularized kernel machine algorithm that uses this category-dependent similarity regularization. Experiments on hundreds of categories show that our method yields significant improvements for categories with few or even no positive examples.
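As a rough illustration of the tag-based text feature described in this abstract, the sketch below pools the tags of an unannotated image's k visually nearest neighbours in an auxiliary tagged collection into a normalized bag-of-words vector. It assumes precomputed visual descriptors and a fixed tag vocabulary; the function and variable names (build_text_feature, aux_descriptors, aux_tags) are illustrative, not the dissertation's code.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def build_text_feature(query_descriptor, aux_descriptors, aux_tags, vocabulary, k=10):
        """Illustrative k-NN tag feature: pool the tags of the k visually nearest
        images in an auxiliary tagged collection into a normalized bag-of-words
        vector over a fixed tag vocabulary."""
        nn = NearestNeighbors(n_neighbors=k).fit(aux_descriptors)
        _, idx = nn.kneighbors(query_descriptor.reshape(1, -1))
        tag_index = {t: i for i, t in enumerate(vocabulary)}
        feature = np.zeros(len(vocabulary))
        for neighbor in idx[0]:
            for tag in aux_tags[neighbor]:
                if tag in tag_index:
                    feature[tag_index[tag]] += 1.0
        norm = np.linalg.norm(feature)
        return feature / norm if norm > 0 else feature

Here aux_descriptors would be an (N, d) array of visual descriptors for the tagged auxiliary images and aux_tags the corresponding list of tag lists; both are hypothetical placeholders.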
Abstract:
The effectiveness and value of entrepreneurship education is much debated within the academic literature. The individual's experience is advocated as being key to shaping entrepreneurial education and design through a multiplicity of theoretical concepts. Latent, pre-nascent and nascent entrepreneurship (doing) studies within the accepted literature provide an exceptional richness in diversity of thought; however, there is a paucity of research into latent entrepreneurship education. In addition, Tolman's early work shows the existence of cases whereby a novel problem is solved without trial and error, and sees such previous learning situations and circumstances as "examples of latent learning and reasoning" (Deutsch, 1956, p. 115). Latent learning has historically been the cause of much academic debate; however, Coon's (2004, p. 260) work refers to "latent (hidden) learning … (as being) … without obvious reinforcement and remains hidden until reinforcement is provided", and this forms the working definition for the purposes of this study.
Abstract:
This dissertation investigates the connection between spectral analysis and frame theory. When considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to the scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. For linear objectives, we can encourage sparse scalings, and with barrier objective functions, we force dense solutions. We further consider frames in high dimensions, and derive various solution techniques. From here, we restrict ourselves to various frame classes, to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an ONB, and for continuous symmetric distributions (Uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning, and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background into Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for the processing of EELS through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. There is also discussion of the differences with RPCA that make theoretical guarantees difficult.
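As a toy illustration of the scalability questions treated in this abstract: a frame with vectors f_i is scalable when nonnegative weights w_i = c_i^2 exist with sum_i w_i f_i f_i^T equal to the identity, which is linear in the w_i. The sketch below tests this with nonnegative least squares; it is an assumed simplification for illustration, not the dissertation's algorithms.

    import numpy as np
    from scipy.optimize import nnls

    def scaling_weights(frame):
        """Illustrative scalability test: 'frame' is an (n, m) array whose columns
        are the frame vectors f_i.  We look for nonnegative weights w_i (= c_i**2)
        with sum_i w_i * f_i f_i^T = I by nonnegative least squares on the
        vectorized outer products."""
        n, m = frame.shape
        A = np.column_stack([np.outer(frame[:, i], frame[:, i]).ravel() for i in range(m)])
        b = np.eye(n).ravel()
        w, residual = nnls(A, b)
        return w, residual  # residual near zero indicates (approximate) scalability

    # Toy usage: an orthonormal basis is trivially scalable with unit weights.
    weights, res = scaling_weights(np.eye(3))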
Abstract:
In this thesis, we propose to infer pixel-level labelling in video by utilising only object category information, exploiting the intrinsic structure of video data. Our motivation is the observation that image-level labels are much easier to acquire than pixel-level labels, and it is natural to seek a link between image-level recognition and pixel-level classification in video data, so that recognition models learned in one domain can be transferred to the other. To this end, this thesis proposes two domain adaptation approaches that adapt a deep convolutional neural network (CNN) image recognition model trained on labelled image data to the target domain, exploiting both the semantic evidence learned by the CNN and the intrinsic structure of unlabelled video data. Our proposed approaches explicitly model and compensate for the shift from the source domain to the target domain, which in turn underpins a robust semantic object segmentation method for natural videos. We demonstrate the superior performance of our methods through extensive evaluations on challenging datasets, comparing against state-of-the-art methods.
Abstract:
In the past few years, human facial age estimation has drawn a lot of attention in the computer vision and pattern recognition communities because of its important applications in age-based image retrieval, security control and surveillance, biometrics, human-computer interaction (HCI) and social robotics. In connection with these investigations, estimating the age of a person from the numerical analysis of his/her face image is a relatively new topic. Deep neural networks have also produced the best results in problems such as image classification and in several areas including age estimation. In this work we use three hand-crafted features as well as five deep features obtained from pre-trained deep convolutional neural networks, and we present a comparative study of the age estimation results obtained with these features.
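The abstract does not name the specific networks or hand-crafted descriptors, so the following is only a generic sketch of the deep-feature recipe it describes: take penultimate-layer activations from a pre-trained CNN and fit a simple regressor on known ages. The choice of ResNet-18 and SVR, and the random stand-in data, are assumptions for illustration.

    import numpy as np
    import torch
    import torchvision.models as models
    from sklearn.svm import SVR

    # Pre-trained backbone used as a fixed feature extractor (ResNet-18 is an
    # illustrative choice; the study compares several deep and hand-crafted features).
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()  # expose the 512-d penultimate activations
    backbone.eval()

    @torch.no_grad()
    def deep_features(images):
        """images: float tensor of shape (N, 3, 224, 224), already normalized."""
        return backbone(images).cpu().numpy()

    # Random tensors stand in here for preprocessed face crops and their ages.
    X = deep_features(torch.randn(8, 3, 224, 224))
    ages = np.array([25, 31, 47, 19, 62, 38, 29, 54])
    regressor = SVR().fit(X, ages)
    predicted = regressor.predict(X)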
Abstract:
After the first cases of Covid-19 emerged in China in the autumn of 2019, at the beginning of 2020 the entire planet was plunged into a global pandemic that upended our lives with consequences not experienced since the Spanish flu. The enormous number of scientific papers continuously published on the coronavirus and related viruses led to the creation of a single dynamic dataset called CORD19, distributed free of charge. The need to find useful information in this mass of data further turned the spotlight on information retrieval systems, which can quickly and effectively retrieve valuable information in response to a user request known as a query. Of particular note was the TREC-COVID Challenge, a competition for developing an IR system trained and tested on the CORD19 dataset. The main problem is that this large collection of documents is entirely unlabelled, so it is impossible to train neural network models directly on it. To work around this problem, we developed new self-supervised solutions, to which we applied the state of the art in deep metric learning and NLP. Deep metric learning, which has been enormously successful especially in computer vision, trains a model to pull similar images together and push different images apart. Since both images and text are represented as vectors of real numbers (embeddings), the same techniques can be used to pull together relevant textual elements (e.g. a query and a paragraph) and push apart irrelevant ones. We therefore trained a SciBERT model with several losses that currently represent the state of the art in deep metric learning, in a completely self-supervised manner, directly and exclusively on the CORD19 dataset, and then evaluated it on the formal TREC-COVID set through an IR system, obtaining promising results.
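A minimal sketch of the self-supervised metric-learning setup described above: encode text spans with SciBERT and apply a triplet margin loss that pulls a pseudo-relevant (anchor, positive) pair together and pushes an unrelated paragraph away. The mean pooling, the pairing strategy and the example sentences are assumptions for illustration, not the thesis configuration or its exact losses.

    import torch
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
    encoder = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

    def embed(texts):
        """Mean-pooled SciBERT embeddings (one illustrative pooling choice)."""
        batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        hidden = encoder(**batch).last_hidden_state      # (B, L, H)
        mask = batch["attention_mask"].unsqueeze(-1)     # (B, L, 1)
        return (hidden * mask).sum(1) / mask.sum(1)      # (B, H)

    triplet = torch.nn.TripletMarginLoss(margin=1.0)

    # Self-supervised pseudo-triplet: an "anchor" sentence, a paragraph from the
    # same document (positive) and a paragraph from an unrelated document (negative).
    anchor = embed(["effect of hydroxychloroquine on SARS-CoV-2 replication"])
    positive = embed(["We measured viral replication in cell cultures treated with the drug."])
    negative = embed(["Graph colouring heuristics for register allocation in compilers."])
    loss = triplet(anchor, positive, negative)
    loss.backward()  # gradients flow into SciBERT, fine-tuning it end to end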
Abstract:
Many real-world decision-making problems are defined in terms of forecast parameters: for example, one may plan an urban route by relying on traffic predictions. In these cases, the conventional approach consists in training a predictor and then solving an optimization problem. This may be problematic, since mistakes made by the predictor may trick the optimizer into taking dramatically wrong decisions. Recently, the field of Decision-Focused Learning has overcome this limitation by merging the two stages at training time, so that predictions are rewarded and penalized based on their outcome in the optimization problem. There are, however, still significant challenges toward a widespread adoption of the method, mostly related to limitations in terms of generality and scalability. One possible solution for the second problem is a caching-based approach that speeds up the training process. This project investigates these techniques in order to further reduce the number of solver calls. For each considered method, we designed a particular smart sampling approach based on its characteristics. In the case of the SPO method, we found that it is sufficient to initialize the cache with only a few solutions, namely those needed to filter out the elements that still have to be learned properly. For the Blackbox method, we designed a smart sampling approach based on inferred solutions.
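A toy sketch of the caching idea this abstract refers to, assuming (as is common in decision-focused learning) an inner problem with a linear objective over a fixed feasible set: rather than calling the solver for every predicted cost vector, reuse the best solution already in the cache and refresh it with a real solver call only occasionally. The class, its sampling rule, and the names in the usage comment (my_exact_solver, c_hat, c_true) are illustrative, not the project's exact SPO or Blackbox variants.

    import random
    import numpy as np

    class SolutionCache:
        """Illustrative cache of feasible solutions for an inner problem of the
        form min_x c^T x over a fixed feasible set.  Instead of solving for every
        predicted cost vector, reuse the best cached solution and call the real
        solver only with a small probability."""

        def __init__(self, solver, solve_probability=0.05):
            self.solver = solver              # callable: cost vector -> feasible solution
            self.solve_probability = solve_probability
            self.solutions = []               # feasible solutions collected so far

        def best(self, cost):
            if not self.solutions or random.random() < self.solve_probability:
                self.solutions.append(self.solver(cost))  # occasional exact solver call
            return min(self.solutions, key=lambda x: float(np.dot(cost, x)))

    # Usage sketch with a hypothetical exact solver:
    # cache = SolutionCache(solver=my_exact_solver)
    # x = cache.best(2 * c_hat - c_true)  # SPO-style inner call served from the cache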
Abstract:
Ecological science contributes to solving a broad range of environmental problems. However, lack of ecological literacy in practice often limits application of this knowledge. In this paper, we highlight a critical but often overlooked demand on ecological literacy: to enable professionals of various careers to apply scientific knowledge when faced with environmental problems. Current university courses on ecology often fail to persuade students that ecological science provides important tools for environmental problem solving. We propose problem-based learning to improve the understanding of ecological science and its usefulness for real-world environmental issues that professionals in careers as diverse as engineering, public health, architecture, social sciences, or management will address. Courses should set clear learning objectives for cognitive skills they expect students to acquire. Thus, professionals in different fields will be enabled to improve environmental decision-making processes and to participate effectively in multidisciplinary work groups charged with tackling environmental issues.
Abstract:
PURPOSE: To determine the mean critical fusion frequency and the short-term fluctuation, and to analyze the influence of age, gender, and the learning effect in healthy subjects undergoing flicker perimetry. METHODS: Study 1 - 95 healthy subjects underwent flicker perimetry once in one eye. Mean critical fusion frequency values were compared between genders, and the influence of age was evaluated using linear regression analysis. Study 2 - 20 healthy subjects underwent flicker perimetry 5 times in one eye. The first 3 sessions were separated by an interval of 1 to 30 days, whereas the last 3 sessions were performed within the same day. The first 3 sessions were used to investigate the presence of a learning effect, whereas the last 3 tests were used to calculate short-term fluctuation. RESULTS: Study 1 - Linear regression analysis demonstrated that mean global, foveal, central, and per-quadrant critical fusion frequency significantly decreased with age (p<0.05). There were no statistically significant differences in mean critical fusion frequency values between males and females (p>0.05), with the exception of the central area and inferonasal quadrant (p=0.049 and p=0.011, respectively), where the values were lower in females. Study 2 - Mean global (p=0.014), central (p=0.008), and peripheral (p=0.03) critical fusion frequency were significantly lower in the first session compared to the second and third sessions. The mean global short-term fluctuation was 5.06±1.13 Hz; the mean interindividual and intraindividual variabilities were 11.2±2.8% and 6.4±1.5%, respectively. CONCLUSION: This study suggests that, in healthy subjects, critical fusion frequency decreases with age, that flicker perimetry is associated with a learning effect, and that a moderately high short-term fluctuation is expected.
Abstract:
Two case studies are presented to describe the process of public school teachers authoring and creating chemistry simulations. They are part of the Virtual Didactic Laboratory for Chemistry, a project developed by the School of the Future of the University of Sao Paulo. The documental analysis of the material produced by two groups of teachers reflects different selection processes for both themes and problem-situations when creating simulations. The study demonstrates the potential for chemistry learning with an approach that takes students' everyday lives into account and is based on collaborative work among teachers and researchers. Also, from the teachers' perspectives, the possibilities of interaction that a simulation offers for classroom activities are considered.
Abstract:
Introduction. The ToLigado Project - Your School Interactive Newspaper is an interactive virtual learning environment conceived, developed, implemented and supported by researchers at the School of the Future Research Laboratory of the University of Sao Paulo, Brazil. Method. This virtual learning environment aims to motivate trans-disciplinary research among public school students and teachers in 2,931 schools equipped with Internet-access computer rooms. Within this virtual community, students produce collective multimedia research documents that are immediately published in the portal. The project also aims to increase students' autonomy for research, collaborative work and Web authorship. Main sections of the portal are presented and described. Results. Partial results of the first two years' implementation are presented and indicate a strong motivation among students to produce knowledge despite the fragile hardware and software infrastructure at the time. Discussion. In this new environment, students should be seen as 'knowledge architects' and teachers as facilitators, or 'curiosity managers'. The ToLigado portal may constitute a repository for future studies regarding student attitudes in virtual learning environments, students' behaviour as 'authors', Web authorship involving collective knowledge production, teachers' behaviour as facilitators, and virtual learning environments as digital repositories of students' knowledge construction and social capital in virtual learning communities.
Abstract:
In a local production system (LPS), besides external economies, interaction, cooperation, and learning are indicated by the literature as complementary ways of enhancing the LPS's competitiveness and gains. In Brazil, the greater part of LPSs, mostly composed of small enterprises, displays incipient relationships and low levels of interaction and cooperation among their actors. The very size of the participating enterprises accounts for specificities that engender organizational constraints, which, in turn, can have a considerable impact on their relationships and learning dynamics. For that reason, the purpose of this article is to present an analysis of interaction, cooperation, and learning relationships among several types of actors in an LPS in the farming equipment and machinery sector, bearing in mind the specificities of small enterprises. To this end, the fieldwork carried out in this study aimed at: (i) investigating the external and internal knowledge sources conducive to learning and (ii) identifying and analyzing motivating and inhibiting factors related to the specificities of small enterprises, in order to bring the LPS members closer together and increase their cooperation and interaction. Empirical evidence shows that internal aspects of the enterprises, related to management and infrastructure, can have a strong bearing on their joint actions, interaction, and learning processes.
Abstract:
Souza MA, Souza MH, Palheta RC Jr, Cruz PR, Medeiros BA, Rola FH, Magalhaes PJ, Troncon LE, Santos AA. Evaluation of gastrointestinal motility in awake rats: a learning exercise for undergraduate biomedical students. Adv Physiol Educ 33: 343-348, 2009; doi: 10.1152/advan.90176.2008. Current medical curricula devote scarce time for practical activities on digestive physiology, despite frequent misconceptions about dyspepsia and dysmotility phenomena. Thus, we designed a hands-on activity followed by a small-group discussion on gut motility. Male awake rats were randomly submitted to insulin, control, or hypertonic protocols. Insulin and control rats were gavage fed with 5% glucose solution, whereas hypertonic-fed rats were gavage fed with 50% glucose solution. Insulin treatment was performed 30 min before a meal. All meals (1.5 ml) contained an equal mass of phenol red dye. After 10, 15, or 20 min of meal gavage, rats were euthanized. Each subset consisted of six to eight rats. Dye recovery in the stomach and proximal, middle, and distal small intestine was measured by spectrophotometry, a safe and reliable method that can be performed by minimally trained students. In a separate group of rats, we used the same protocols except that the test meal contained (99m)Tc as a marker. Compared with control, the hypertonic meal delayed gastric emptying and gastrointestinal transit, whereas insulinic hypoglycemia accelerated them. The session helped engage our undergraduate students in observing and analyzing gut motor behavior. In conclusion, the fractional dye retention test can be used as a teaching tool to strengthen the understanding of basic physiopathological features of gastrointestinal motility.