13 results for Human Machine Interfaces
Abstract:
In an age of depleting oil reserves and increasing energy demand, humanity faces a stalemate between environmentalism and politics, where crude oil is traded at record highs yet the spotlight on being ‘green’ and sustainable is stronger than ever. A key theme on today’s political agenda is energy independence from foreign nations, and the United Kingdom is bracing itself for a nuclear renaissance which, it is hoped, will feed the rapacious centralised system upon which the UK is structured. But what if this centralised system were disassembled, and in its place stood dozens of cities which grow and profit from their own energy? Rather than one dominant network, would a series of autonomous city-based energy systems not offer a mutually profitable alternative? Bio-Port is a utopian vision of a ‘Free Energy City’ set in Liverpool, where the old dockyards, redundant space, and the Mersey Estuary have been transformed into bio-productive algae farms. Bio-Port Free Energy City is a utopian ideal in which energy is superfluous: so abundant, in fact, that meters are obsolete. The city functions as an energy generator and thrives on its own product with minimal impact upon the planet it inhabits. Algaculture is the fundamental energy source: a matrix of algae reactors swamps the abandoned dockyards, which themselves have been further expanded and reclaimed from the River Mersey. Each year, the algae farm is capable of producing over 200 million gallons of bio-fuel, which in turn can produce enough electricity to power almost 2 million homes. The metabolism of Free Energy City is circular and holistic: the waste products of one process are simply the inputs of another. Livestock farming, once traditionally a high-carbon countryside exercise, has become urbanised. Cattle are located alongside the algae matrix, and waste gases emitted by farmyards and livestock are largely sequestered by algal blooms or anaerobically converted to natural gas. Bio-Port Free Energy City mitigates the imbalances between ecology and urbanity, and exemplifies an environment where nature and the human machine can function productively and in harmony with one another. According to James Lovelock, our population has grown to the point where our presence is perceptibly disabling the planet; in order to reverse the effects of our humanist flaws, it is vital that new eco-urban utopias are realised.
Abstract:
This paper presents a novel method of audio-visual feature-level fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new multimodal feature representation and a modified cosine similarity are introduced to combine and compare bimodal features with limited training data, as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal dataset created from the SPIDRE speaker recognition database and the AR face recognition database with variable noise corruption of speech and occlusion in the face images. The system's speaker identification performance on the SPIDRE database and facial identification performance on the AR database are comparable with the literature. Combining both modalities using the new method of multimodal fusion leads to significantly improved accuracy over the unimodal systems, even when both modalities have been corrupted. The new method also shows improved identification accuracy compared with bimodal systems based on multicondition model training or missing-feature decoding alone.
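The abstract does not spell out the modified cosine similarity or the multimodal feature representation, so the following Python sketch only illustrates the general idea of feature-level fusion with a cosine-style score: per-modality features are L2-normalised, concatenated, and compared against enrolled templates. The feature sizes and the fusion rule here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def fuse_features(speech_feat, face_feat):
    """Concatenate per-modality features after L2 normalisation so that
    vastly different feature sizes and scales contribute comparably.
    (Hypothetical fusion rule; the paper's representation may differ.)"""
    s = speech_feat / (np.linalg.norm(speech_feat) + 1e-12)
    f = face_feat / (np.linalg.norm(face_feat) + 1e-12)
    return np.concatenate([s, f])

def cosine_score(x, y):
    """Plain cosine similarity between two fused bimodal vectors."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

def identify(probe, gallery):
    """Return the enrolled identity whose fused template best matches the probe."""
    return max(gallery, key=lambda pid: cosine_score(probe, gallery[pid]))

# Example with random stand-in features (e.g. speech statistics + face pixels).
rng = np.random.default_rng(0)
gallery = {pid: fuse_features(rng.normal(size=39), rng.normal(size=1024))
           for pid in ["spk1", "spk2", "spk3"]}
probe = gallery["spk2"] + 0.1 * rng.normal(size=gallery["spk2"].shape)
print(identify(probe, gallery))   # expected: spk2
```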
Abstract:
Credal nets are probabilistic graphical models which extend Bayesian nets to cope with sets of distributions. This feature makes the model particularly suited for the implementation of classifiers and knowledge-based systems. When working with sets of (instead of single) probability distributions, the identification of the optimal option can be based on different criteria, some of them possibly leading to multiple choices. Yet, most of the inference algorithms for credal nets are designed to compute only the bounds of the posterior probabilities. This prevents some of the existing criteria from being used. To overcome this limitation, we present two simple transformations for credal nets which make it possible to compute decisions based on the maximality and E-admissibility criteria without any modification of the inference algorithms. We also prove that these decision problems have the same complexity as standard inference, being NP^PP-hard for general credal nets and NP-hard for polytrees.
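As an illustration of the decision criteria named above (not of the paper's transformations or inference algorithms), the sketch below applies maximality and E-admissibility to a toy credal set given by a finite set of extreme-point distributions; the utilities and probabilities are invented for the example.

```python
import numpy as np

# Toy credal set: extreme-point distributions over 3 states (invented numbers).
credal_set = np.array([
    [0.6, 0.3, 0.1],
    [0.4, 0.4, 0.2],
    [0.5, 0.2, 0.3],
])

# Utility of each action (rows) in each state (columns); illustrative values.
utility = np.array([
    [10.0, 0.0, 4.0],   # action 0
    [ 6.0, 6.0, 2.0],   # action 1
    [ 5.0, 5.0, 5.0],   # action 2
])

expected = utility @ credal_set.T      # (actions x extreme distributions)

def maximal_actions(E):
    """Maximality: keep an action unless some other action has strictly higher
    expected utility under every distribution. Since expectations are linear,
    checking the extreme points is exact for the convex credal set."""
    n = E.shape[0]
    return [a for a in range(n)
            if not any(np.all(E[b] > E[a]) for b in range(n) if b != a)]

def e_admissible_actions(E):
    """E-admissibility: an action is kept if it maximises expected utility under
    at least one distribution. Only extreme points are checked here; exact
    E-admissibility over the convex hull needs a small LP per action."""
    return sorted(set(int(a) for a in E.argmax(axis=0)))

# Maximality typically returns a superset of the E-admissible actions.
print("maximal:", maximal_actions(expected))          # [0, 1, 2]
print("E-admissible:", e_admissible_actions(expected)) # [0, 1]
```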
Abstract:
The astonishing development of diverse hardware platforms is twofold: on one side, the challenge of exascale performance for big data processing and management; on the other, mobile and embedded devices for data collection and human-machine interaction. This has driven a highly hierarchical evolution of programming models. GVirtuS is a general virtualization system developed in 2009 and first introduced in 2010, which provides a completely transparent layer between GPUs and virtual machines. This paper presents the latest achievements and developments of GVirtuS, which now supports CUDA 6.5, memory management and scheduling. Thanks to new and improved remoting capabilities, GVirtuS now enables GPU sharing among physical and virtual machines based on x86 and ARM CPUs on local workstations, computing clusters and distributed cloud appliances.
Abstract:
The grading of crushed aggregate is usually carried out by sieving. We describe a new image-based approach to the automatic grading of such materials. The operational problem addressed is one where the camera is located directly over a conveyor belt. Our approach characterizes the information content of each image, taking into account relative variation in the pixel data and resolution scale. In feature space, we find very good class separation using a multidimensional linear classifier. The innovation in this work includes (i) introducing an effective image-based approach into this application area, and (ii) our supervised classification using wavelet entropy-based features.
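A minimal sketch of the kind of pipeline the abstract describes, wavelet entropy-style features fed to a linear classifier, is shown below. It uses PyWavelets and scikit-learn on synthetic stand-in images; the exact feature definition and classifier in the paper may differ.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_entropy_features(image, wavelet="db2", levels=3):
    """One Shannon-entropy value per wavelet sub-band, capturing relative
    variation in the pixel data across resolution scales.
    (Illustrative feature definition, not the paper's exact one.)"""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    feats = []
    for band in bands:
        energy = band.ravel() ** 2
        p = energy / (energy.sum() + 1e-12)
        feats.append(float(-(p * np.log2(p + 1e-12)).sum()))
    return np.array(feats)

# Synthetic stand-ins for conveyor-belt images: two grading classes that
# differ only in texture scale (fine vs coarse aggregate).
rng = np.random.default_rng(1)
def synth(coarse, n=40):
    imgs = rng.normal(size=(n, 64, 64))
    if coarse:
        imgs = imgs.repeat(4, axis=1).repeat(4, axis=2)[:, :64, :64]
    return imgs

X = np.array([wavelet_entropy_features(im) for c in (0, 1) for im in synth(c)])
y = np.repeat([0, 1], 40)
clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```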
Abstract:
This paper presents a novel method that leverages reasoning capabilities in a computer vision system dedicated to human action recognition. The proposed methodology is decomposed into two stages. First, a machine learning-based algorithm, known as bag of words, gives a first estimate of action classification from video sequences by performing an image feature analysis. These results are then passed to a common-sense reasoning system, which analyses, selects and corrects the initial estimation yielded by the machine learning algorithm. This second stage draws on the knowledge implicit in the rationality that motivates human behaviour. Experiments are performed in realistic conditions, where poor recognition rates from the machine learning techniques are significantly improved by the second stage, in which common-sense knowledge and reasoning capabilities have been leveraged. This demonstrates the value of integrating common-sense capabilities into a computer vision pipeline.
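The first stage described above is the standard bag-of-visual-words pipeline; a minimal sketch is given below, with invented stand-in descriptors in place of real video features. The common-sense reasoning layer, which the abstract does not detail, is not modelled here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def bow_histogram(descriptors, codebook):
    """Quantise local descriptors against the visual codebook and return a
    normalised word-count histogram for one video sequence."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-12)

# Stand-in local descriptors (real systems would extract e.g. HOG/HOF
# descriptors around space-time interest points in the video frames).
rng = np.random.default_rng(2)
def fake_video(action):   # each action class is a shifted descriptor cloud
    return rng.normal(loc=action, size=(200, 32))

train_videos = [(fake_video(a), a) for a in (0, 1, 2) for _ in range(10)]
codebook = KMeans(n_clusters=50, n_init=4, random_state=0).fit(
    np.vstack([d for d, _ in train_videos]))

X = np.array([bow_histogram(d, codebook) for d, _ in train_videos])
y = np.array([a for _, a in train_videos])
clf = LinearSVC().fit(X, y)

# First-stage estimate for a new sequence; a reasoning layer could then
# revise this label using knowledge about plausible human behaviour.
probe = bow_histogram(fake_video(1), codebook)
print("predicted action class:", clf.predict([probe])[0])
```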
Abstract:
We report some existing work, inspired by analogies between human thought and machine computation, showing that the informational state of a digital computer can be decoded in a similar way to brain decoding. We then discuss some proposed work that would leverage this analogy to shed light on the amount of information that may be missed by the technical limitations of current neuroimaging technologies.
Abstract:
Efficient identification and follow-up of astronomical transients is hindered by the need for humans to manually select promising candidates from data streams that contain many false positives. These artefacts arise in the difference images that are produced by most major ground-based time-domain surveys with large-format CCD cameras. This dependence on humans to reject bogus detections is unsustainable for next-generation all-sky surveys, and significant effort is now being invested to solve the problem computationally. In this paper, we explore a simple machine learning approach to real-bogus classification by constructing a training set from the image data of ~32 000 real astrophysical transients and bogus detections from the Pan-STARRS1 Medium Deep Survey. We derive our feature representation from the pixel intensity values of a 20 × 20 pixel stamp around the centre of the candidates. This differs from previous work in that it works directly on the pixels rather than relying on catalogued domain knowledge for feature design or selection. Three machine learning algorithms are trained (artificial neural networks, support vector machines and random forests) and their performances are tested on a held-out subset of 25 per cent of the training data. We find the best results from the random forest classifier and demonstrate that, by accepting a false positive rate of 1 per cent, the classifier initially suggests a missed detection rate of around 10 per cent. However, we also find that a combination of bright star variability, nuclear transients and uncertainty in human labelling means that our best estimate of the missed detection rate is approximately 6 per cent.
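As a rough illustration of the evaluation described above (not the authors' code or data), the sketch below trains a random forest on flattened 20 × 20 synthetic stamps and reads off a missed detection rate at a fixed 1 per cent false positive rate.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for 20x20 difference-image stamps: "real" transients get
# a central PSF-like blob, "bogus" detections are noise-dominated artefacts.
rng = np.random.default_rng(3)
yy, xx = np.mgrid[:20, :20]
psf = np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 8.0)

def stamps(n, real):
    base = rng.normal(scale=1.0, size=(n, 20, 20))
    if real:
        base += 3.0 * psf
    return base.reshape(n, -1)          # features = raw pixel intensities

X = np.vstack([stamps(2000, True), stamps(2000, False)])
y = np.concatenate([np.ones(2000), np.zeros(2000)])   # 1 = real, 0 = bogus
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
scores = clf.predict_proba(Xte)[:, 1]                  # P(real)

# Choose the decision threshold that yields a 1 per cent false positive rate,
# then read off the missed detection rate at that threshold.
bogus_scores = np.sort(scores[yte == 0])
threshold = bogus_scores[int(0.99 * len(bogus_scores))]
mdr = np.mean(scores[yte == 1] < threshold)
print(f"missed detection rate at 1% FPR: {mdr:.3f}")
```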