12 results for statistical learning

at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

60.00%

Abstract:

With a significant increase in the number of digital cameras used for various purposes, there is a pressing demand for advanced video analysis techniques that can systematically interpret and understand the semantics of video content recorded in security surveillance, intelligent transportation, health care, video retrieval and summarization. Understanding and interpreting human behaviour through video analysis faces considerable challenges due to non-rigid human motion, self- and mutual occlusions, and changes in lighting conditions. To address these problems, advanced image and signal processing technologies such as neural networks, fuzzy logic, probabilistic estimation theory and statistical learning have been extensively investigated.

Relevance:

40.00%

Abstract:

Three experiments examined children’s and adults’ abilities to use statistical and temporal information to distinguish between common cause and causal chain structures. In Experiment 1, participants were provided with conditional probability information and/or temporal information and asked to infer the causal structure of a three-variable mechanical system that operated probabilistically. Participants of all ages preferentially relied on the temporal pattern of events in their inferences, even when this conflicted with statistical information. In Experiments 2 and 3, participants observed a series of interventions on the system, which in these experiments operated deterministically. In Experiment 2, participants found it easier to use temporal pattern information than statistical information provided as a result of interventions. In Experiment 3, in which no temporal pattern information was provided, children aged 6-7 years, but not younger children, were able to use intervention information to make causal chain judgments, although they had difficulty when the structure was a common cause. The findings suggest that participants, and children in particular, may find it more difficult to use statistical information than temporal pattern information because of its demands on information processing resources. However, there may also be an inherent preference for temporal information.

Relevance:

30.00%

Abstract:

In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points at which a synapse is modified. Moreover, the learning rule affects not only the synapse between the pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also more remote synapses of the pre- and postsynaptic neurons. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks to learn parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well, even in the presence of noise. Importantly, the mean learning time increases polynomially with the number of patterns to be learned, indicating efficient learning.
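The abstract does not give the rule's equations, so the sketch below is only a minimal illustration of the two ingredients it names: a stochastic choice of which synapses are modified at a given time point, and a heterosynaptic change that spills over to the other synapses of a neuron whose synapse was updated. All parameter names and magnitudes here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_hebb_step(W, pre, post, reward, p_update=0.3,
                         eta=0.1, hetero_scale=0.2):
    """One stochastic Hebb-like update (illustrative sketch only).

    W            : (n_post, n_pre) weight matrix
    pre, post    : pre- and postsynaptic activity vectors
    reward       : reinforcement signal, e.g. +1 / -1
    p_update     : probability that a given synapse is modified this step
    hetero_scale : relative size of the heterosynaptic change applied to
                   the remaining synapses of any neuron that was updated
    """
    # Homosynaptic term: classic Hebbian product, gated by the reward.
    hebb = reward * eta * np.outer(post, pre)
    # Stochastic selection of the synapses modified at this time point.
    mask = rng.random(W.shape) < p_update
    W = W + mask * hebb
    # Heterosynaptic term: the untouched synapses of any postsynaptic
    # neuron that had a synapse modified are nudged the opposite way.
    touched_rows = mask.any(axis=1, keepdims=True)
    W = W - hetero_scale * eta * reward * touched_rows * (~mask) * W
    return W

W = rng.normal(scale=0.5, size=(2, 3))
W_new = stochastic_hebb_step(W, pre=np.array([1.0, 0.0, 1.0]),
                             post=np.array([1.0, 1.0]), reward=1.0)
print(W_new.shape)
```

In a full XOR experiment this step would be applied to each layer of a small feed-forward network, with the reward derived from the network's output error; that wiring is omitted here.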

Relevance:

30.00%

Abstract:

This article examines the relationship between the learning organisation and the implementation of curriculum innovation within schools. It also compares the extent of innovative activity undertaken by schools in the public and the private sectors. A learning organisation is characterised by long-term goals, participatory decision-making processes, collaboration with external stakeholders, effective mechanisms for the internal communication of knowledge and information, and the use of rewards for its members. These characteristics are expected to promote curriculum innovation, once a number of control factors have been taken into account. The article reports on a study carried out in 197 Greek public and private primary schools in the 1999-2000 school year. Structured interviews with school principals were used as a method of data collection. According to the statistical results, the most important determinants of the innovative activity of a school are the extent of its collaboration with other organisations (i.e. openness to society), and the implementation of development programmes for teachers and parents (i.e. communication of knowledge and information). Contrary to expectations, the existence of long-term goals, the extent of shared decision-making, and the use of teacher rewards had no impact on curriculum innovation. The study also suggests that the private sector, as such, has an additional positive effect on the implementation of curriculum innovation, once a number of human, financial, material, and management resources have been controlled for. The study concludes by making recommendations for future research that would shed more light on unexpected outcomes and would help explore the causal link between variables in the research model.

Relevance:

30.00%

Abstract:

This research explored the influence of children’s perceptions of a pro-social behavior after-school program on actual change in the children’s behavioral outcomes over the program’s duration. Children’s perceptions of three program processes were collected, as well as self-reported pro-social and anti-social behavior before and after the program. Statistical models showed that positive perceptions of the program facilitators’ dispositions significantly predicted reductions in anti-social behavior, and positive perceptions of the program activities significantly predicted gains in pro-social behavior. The children’s perceptions of their peers’ behavior in the sessions were not found to be a significant predictor of behavioral change. The two significant perceptual indicators predicted a small percentage of the change in the behavioral outcomes. However, as after-school social learning programs have a research history of problematic implementation, children’s perceptions should be considered in future program design, evaluation and monitoring.

Relevance:

30.00%

Abstract:

Many modeling problems require estimating a scalar output from one or more time series. Such problems are usually tackled by extracting a fixed number of features from the time series (such as their statistical moments), with a consequent loss of information that leads to suboptimal predictive models. Moreover, feature extraction techniques usually make assumptions that are not met in real-world settings (e.g. uniformly sampled time series of constant length), and fail to deliver a thorough methodology for dealing with noisy data. In this paper a methodology based on functional learning is proposed to overcome these problems; the proposed Supervised Aggregative Feature Extraction (SAFE) approach makes it possible to derive continuous, smooth estimates of time series data (yielding aggregate local information), while simultaneously estimating a continuous shape function that yields optimal predictions. The SAFE paradigm enjoys several desirable properties, such as a closed-form solution, the incorporation of first- and second-order derivative information into the regressor matrix, the interpretability of the generated functional predictor, and the possibility of exploiting a Reproducing Kernel Hilbert Space setting to yield nonlinear predictive models. Simulation studies are provided to highlight the strengths of the new methodology relative to standard unsupervised feature selection approaches. © 2012 IEEE.
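SAFE's closed-form formulation is not given in the abstract; the sketch below only illustrates the general functional-learning pattern it builds on: represent each noisy, irregularly sampled, variable-length series by the coefficients of a smooth basis expansion, then regress the scalar target on those functional coefficients. The polynomial basis and all settings here are stand-in assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def basis(t, n_basis=5):
    # Polynomial basis on [0, 1]; a stand-in for a smooth functional
    # representation (SAFE itself uses an RKHS-style formulation).
    return np.vander(t, n_basis, increasing=True)

def fit_series(t, y, n_basis=5, ridge=1e-6):
    # Ridge-regularized least-squares coefficients of a smooth
    # estimate of one noisy, irregularly sampled series.
    B = basis(t, n_basis)
    return np.linalg.solve(B.T @ B + ridge * np.eye(n_basis), B.T @ y)

# Synthetic data: irregularly sampled series of varying length, each
# paired with a scalar target that depends on the series' shape.
coefs, targets = [], []
for _ in range(40):
    n = int(rng.integers(20, 60))           # variable length
    t = np.sort(rng.random(n))              # non-uniform sampling
    a = rng.normal()
    y = a * np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=n)
    coefs.append(fit_series(t, y))
    targets.append(a)
X, z = np.array(coefs), np.array(targets)

# Linear predictor on the functional coefficients: every series now
# contributes a fixed-length feature vector regardless of its length.
w = np.linalg.lstsq(X, z, rcond=None)[0]
pred = X @ w
print(f"correlation with target: {np.corrcoef(pred, z)[0, 1]:.3f}")
```

Note how no fixed sampling grid or series length is assumed, which is the practical obstacle the abstract says standard feature extraction stumbles on.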

Relevance:

30.00%

Abstract:

In recent years, wide-field sky surveys providing deep multi-band imaging have presented a new path for indirectly characterizing the progenitor populations of core-collapse supernovae (SN): systematic light curve studies. We assemble a set of 76 grizy-band Type IIP SN light curves from Pan-STARRS1, obtained over a constant survey program of 4 years and classified using both spectroscopy and machine learning-based photometric techniques. We develop and apply a new Bayesian model for the full multi-band evolution of each light curve in the sample. We find no evidence of a sub-population of fast-declining explosions (historically referred to as "Type IIL" SNe). However, we identify a highly significant relation between the plateau phase decay rate and peak luminosity among our SNe IIP. These results argue in favor of a single parameter, likely determined by initial stellar mass, predominantly controlling the explosions of red supergiants. This relation could also be applied to supernova cosmology, offering a standardizable candle good to an intrinsic scatter of 0.2 mag. We compare each light curve to physical models from hydrodynamic simulations to estimate progenitor initial masses and other properties of the Pan-STARRS1 Type IIP SN sample. We show that correction of systematic discrepancies between modeled and observed SN IIP light curve properties, along with an expanded grid of progenitor properties, is needed to enable robust progenitor inferences from multi-band light curve samples of this kind. This work will serve as a pathfinder for photometric studies of core-collapse SNe to be conducted through future wide field transient searches.

Relevance:

30.00%

Abstract:

Efficient identification and follow-up of astronomical transients is hindered by the need for humans to manually select promising candidates from data streams that contain many false positives. These artefacts arise in the difference images that are produced by most major ground-based time-domain surveys with large format CCD cameras. This dependence on humans to reject bogus detections is unsustainable for next generation all-sky surveys and significant effort is now being invested to solve the problem computationally. In this paper, we explore a simple machine learning approach to real-bogus classification by constructing a training set from the image data of approximately 32 000 real astrophysical transients and bogus detections from the Pan-STARRS1 Medium Deep Survey. We derive our feature representation from the pixel intensity values of a 20 x 20 pixel stamp around the centre of the candidates. This differs from previous work in that it works directly on the pixels rather than catalogued domain knowledge for feature design or selection. Three machine learning algorithms are trained (artificial neural networks, support vector machines and random forests) and their performances are tested on a held-out subset of 25 per cent of the training data. We find the best results from the random forest classifier and demonstrate that by accepting a false positive rate of 1 per cent, the classifier initially suggests a missed detection rate of around 10 per cent. However, we also find that a combination of bright star variability, nuclear transients and uncertainty in human labelling means that our best estimate of the missed detection rate is approximately 6 per cent.
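The pipeline the abstract describes, training a random forest directly on the raw pixel intensities of a small stamp, can be sketched with scikit-learn. The stamps below are synthetic toys (a centred PSF-like blob for "real", a hot pixel for "bogus"); the real classifier was trained on Pan-STARRS1 difference-image data, and none of the generator details here come from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def fake_stamp(is_real, size=20):
    """Toy 20x20 difference-image stamp (illustrative only):
    'real' gets a centred Gaussian PSF-like blob, 'bogus' gets
    background noise plus a single random hot pixel."""
    img = rng.normal(scale=0.1, size=(size, size))
    if is_real:
        y, x = np.mgrid[:size, :size]
        img += np.exp(-((x - size / 2) ** 2 + (y - size / 2) ** 2)
                      / (2 * 2.0 ** 2))
    else:
        img[rng.integers(size), rng.integers(size)] += rng.uniform(2, 5)
    return img.ravel()   # feature vector = raw pixel intensities

X = np.array([fake_stamp(i % 2 == 0) for i in range(600)])
y = np.array([i % 2 == 0 for i in range(600)])

# Hold out 25 per cent for testing, as in the paper's evaluation setup.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"hold-out accuracy: {acc:.2f}")
```

Working on raw pixels, as here, removes the need for hand-designed catalogue features; `clf.predict_proba` would then let one trade the false positive rate against the missed detection rate as the abstract discusses.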

Relevance:

30.00%

Abstract:

Recently there has been an increasing interest in the development of new methods using Pareto optimality to deal with multi-objective criteria (for example, accuracy and architectural complexity). Once one has learned a model with such a method, the problem is then how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. Unfortunately, the standard tests used for this purpose are not able to consider multiple performance measures jointly. The aim of this paper is to resolve this issue by developing statistical procedures that are able to account for multiple competing measures at the same time. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures to reduce the number of parameters in such models, since the number of studied cases in such comparisons is usually very small. Real data from a comparison among general purpose classifiers is used to show a practical application of our tests.
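The core of a multinomial-Dirichlet conjugate comparison can be sketched in a few lines: classify each per-dataset outcome of comparing two algorithms on two measures into a small set of joint categories, then exploit conjugacy (Dirichlet prior + multinomial counts gives a Dirichlet posterior). The counts, categories, and query below are invented for illustration and are not the paper's data or exact test.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical outcomes of comparing classifier A vs B jointly on two
# measures (e.g. accuracy and complexity) across 20 data sets.
# Categories: A better on both / mixed result / B better on both.
counts = np.array([11, 6, 3])
prior = np.ones(3)          # symmetric Dirichlet prior

# Multinomial-Dirichlet conjugacy: the posterior over the category
# probabilities is simply Dirichlet(prior + counts).
post = rng.dirichlet(prior + counts, size=100_000)

# Monte Carlo estimate of the posterior probability that "A better on
# both measures" is the single most probable joint outcome.
p_a_dominates = np.mean(post[:, 0] > post[:, 1:].max(axis=1))
print(f"P(A dominates on both measures): {p_a_dominates:.2f}")
```

The appeal of the conjugate model is that this posterior is exact and cheap even when, as the abstract notes, the number of compared cases is very small.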

Relevance:

30.00%

Abstract:

This study examines whether virtual reality (VR) is superior to paper-based instructions in increasing the speed at which individuals learn a new assembly task. Specifically, the work seeks to quantify any learning benefit when individuals have been given the opportunity to pre-learn the task, comparing the performance of two groups using virtual and hard-copy media. A build experiment based on multiple builds of an aircraft panel showed that a group of people who pre-learned the assembly task using a VR environment completed their builds faster (average build time 29.5% lower). The VR group also made fewer references to instructional materials (average number of references 38% lower) and made fewer errors than a group using more traditional, hard-copy instructions. These outcomes were more pronounced during build one, with the differences in build time and number of references showing limited statistical significance.