673 results for Hybrid feature selections


Relevance:

20.00%

Publisher:

Abstract:

This article extends the traditions of style-based criticism through an encounter with the insights that can be gained from engaging with filmmakers at work. By bringing into relationship two things normally thought of as separate, production history and disinterested critical analysis, the discussion aims to extend the subjects which criticism can appreciate, as well as to provide some insight into the creative process. Drawing on close analysis, on observations made during fieldwork and on access to earlier cuts of the film, this article looks at a range of interrelated decision-making anchored by the reading of a particular sequence. The article examines the changes the film underwent at different stages of production, and some of the inventions deployed to ensure that key themes and ideas remained in play as other elements changed. It draws conclusions which reveal perspectives on the filmmaking process, on collaboration, and on the creative response to material realities. The article reveals something of the complexity of the construction of image and soundtrack, and extends the range of filmmakers’ choices which form part of a critical dialogue. It relates to ‘Sleeping with half open eyes: dreams and realities in The Cry of the Owl’, Movie: A Journal of Film Criticism, 1 (2010), which provides a broader interpretative context for the enquiry.

Relevance:

20.00%

Publisher:

Abstract:

This paper examines how implant and electrode technology can now be employed to create biological brains for robots, to enable human enhancement and to diminish the effects of certain neural illnesses. In all cases the end result is to increase the range of abilities of the recipients. An indication is given of a number of areas in which such technology has already had a profound effect, a key element being the need for a clear interface linking the human brain directly with a computer. An overview of some of the latest developments in the field of brain-to-computer interfacing is also given in order to assess their advantages and disadvantages. The emphasis is placed firmly on practical studies that have been, and are being, undertaken and reported on, as opposed to those that are speculated, simulated or proposed as future projects. Related areas are discussed briefly, but only in the context of their contribution to the studies being undertaken. The area of focus is notably the use of invasive implant technology, where a connection is made directly with the cerebral cortex and/or nervous system. Tests and experiments which do not involve human subjects are invariably carried out a priori, to indicate the eventual possibilities before human subjects are themselves involved. Some of the more pertinent animal studies in this area are discussed, including our own involving neural growth. The paper goes on to describe human experimentation, in which neural implants have linked the human nervous system bi-directionally with technology and the internet. A view is taken of the prospects for such implantable computing in terms of both therapy and enhancement.

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a solution to the problems associated with network latency within distributed virtual environments. It begins by discussing the advantages and disadvantages of synchronous and asynchronous distributed models, in the areas of user and object representation and user-to-user interaction. By introducing a hybrid solution, which utilises the concept of a causal surface, the advantages of both synchronous and asynchronous models are combined. Object distortion is a characteristic feature of the hybrid system, and this is proposed as a solution which facilitates dynamic real-time user collaboration. The final section covers implementation details, with reference to a prototype system available from the Internet.
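
The abstract does not give the causal-surface formulation itself, but the hybrid idea it describes can be illustrated with a minimal sketch (the distance-based blending rule and the `radius` parameter below are assumptions for illustration, not the paper's method): objects near the interacting user follow the responsive local state, distant objects follow the delayed but globally consistent remote state, and the visible "distortion" is the smooth transition between the two.

```python
def displayed_position(local_pos, remote_pos, dist_to_user, radius=2.0):
    """Blend a locally predicted and a remote authoritative object position.

    A hypothetical sketch of a hybrid synchronous/asynchronous display
    rule: weight 0 (pure local state) at the user's hand, weight 1
    (pure remote state) beyond `radius`.
    """
    w = min(dist_to_user / radius, 1.0)   # 0 near the user, 1 far away
    # Linear interpolation per coordinate; the offset between the two
    # states appears to the user as a spatial distortion of the object.
    return tuple(l + w * (r - l) for l, r in zip(local_pos, remote_pos))
```

Under this sketch, a grabbed object responds instantly to its owner while converging to the shared state as it moves away, which is the collaboration-friendly behaviour the paper attributes to object distortion.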

Relevance:

20.00%

Publisher:

Abstract:

Spiking neural networks are usually limited in their applications by their complex mathematical models and the lack of intuitive learning algorithms. In this paper a simpler, novel neural network derived from a leaky integrate-and-fire neuron model, the ‘cavalcade’ neuron, is presented. A simulation environment for the neural network has been developed and two basic learning algorithms implemented within it. These algorithms successfully learn some basic temporal and instantaneous problems. Neural network structures inspired by these experiments are then applied to process sensor information so as to successfully control a mobile robot.
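
The paper's 'cavalcade' neuron is not specified in the abstract, but the standard leaky integrate-and-fire model it derives from can be sketched in a few lines (parameter values here are illustrative defaults, not taken from the paper):

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Simulate a standard leaky integrate-and-fire neuron.

    input_current: one input value per time step.
    Returns the membrane-potential trace and spike times (step indices).
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # The membrane leaks toward rest while integrating the input.
        v += (dt / tau) * (v_rest - v) + (dt / tau) * i_in
        if v >= v_thresh:       # threshold crossing: emit a spike
            spikes.append(t)
            v = v_reset         # and reset the membrane potential
        trace.append(v)
    return trace, spikes
```

A sufficiently strong constant input drives repetitive firing, while a weak input decays below threshold and produces no spikes, which is the temporal behaviour such networks exploit.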

Relevance:

20.00%

Publisher:

Abstract:

The built environment in China is required to achieve a 50% reduction in carbon emissions by 2020 against the 1980 design standard. A particular challenge is how to maintain acceptable comfort conditions through the hot, humid summers and cold, desiccating winters of its continental climate regions. Fully air-conditioned sealed envelopes, often fully glazed, are becoming increasingly common in these regions. Remedial strategies involve technical refinements to the air-handling equipment and a contribution from renewable energy sources in an attempt to achieve the prescribed net reduction in energy use. This research project, however, develops an alternative hybrid environmental design strategy. It exploits observed temperate periods of weeks, days, even hours in duration to free-run an office and exhibition building configured to promote natural stack ventilation when ambient conditions permit and mechanical ventilation when conditions require it, the two modes delivered through the same physical infrastructure. The proposal is modelled in proprietary software and the methodology adopted is described. The challenge is compounded by its first practical application being to an existing reinforced concrete frame originally designed to receive a highly glazed envelope; this original scheme is reviewed for comparison. Furthermore, in the practical delivery of the proposal a proportion of the ventilation stacks was value-engineered out. The likely consequence of this for the environmental performance of the building is investigated through a sensitivity study.
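
The mode-switching logic at the heart of such a hybrid strategy can be sketched very simply (the comfort-band thresholds below are illustrative assumptions, not figures from the project):

```python
def select_ventilation_mode(outdoor_temp_c, indoor_temp_c,
                            comfort_low=18.0, comfort_high=26.0):
    """Pick a ventilation mode for one time step of a hybrid strategy.

    Returns 'natural' when free-running on stack ventilation looks
    viable, otherwise 'mechanical'. Thresholds are hypothetical.
    """
    # Stack ventilation needs temperate outdoor air that is cooler than
    # the interior, so buoyancy-driven flow can remove heat.
    if (comfort_low <= outdoor_temp_c <= comfort_high
            and outdoor_temp_c < indoor_temp_c):
        return 'natural'
    return 'mechanical'
```

Run against hourly weather data, a rule of this kind identifies the temperate periods of hours, days or weeks during which the building can free-run.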

Relevance:

20.00%

Publisher:

Abstract:

The building sector is one of the highest consumers of energy in the world. This has led to a high dependency on fossil fuels to supply energy, without due consideration of their environmental impact. Saudi Arabia has been through rapid development accompanied by population growth, which in turn has increased the demand for construction. However, this fast development has proceeded without considering sustainable building design. General design practice relies on international design approaches and features without considering the local climate or aspects of traditional passive design, constructing buildings with large amounts of glass fully exposed to solar radiation. The aim of this paper is to investigate the development of sustainability in passive design and vernacular architecture, and to compare them with current buildings in Saudi Arabia in terms of making the most of the climate. It also explores the most sustainable renewable energy sources that can be used to reduce the environmental impact of modern buildings in Saudi Arabia. This is carried out using case studies demonstrating the performance of vernacular design in Saudi Arabia, and thus its benefits in terms of environmental, economic and social sustainability. The paper argues that adopting a hybrid approach, combining passive design, learning from vernacular architecture and implementing innovative sustainable technologies, can improve the energy efficiency of buildings as well as reduce their carbon footprint.

Relevance:

20.00%

Publisher:

Abstract:

The Technology Acceptance Model (TAM) posits that Perceived Ease of Use (PEOU) and Perceived Usefulness (PU) influence the intention to use a system. The Post-Acceptance Model (PAM) posits that continued use is influenced by prior experience. In order to study the factors that influence how professionals use complex systems, we create a tentative research model that builds on PAM and TAM. Specifically, we include PEOU and the construct ‘Professional Association Guidance’. We postulate that feature usage is enhanced when professional associations influence PU by highlighting additional benefits. We explore the theory in the context of the post-adoption use of Electronic Medical Records (EMRs) by primary care physicians in Ontario. The methodology can be extended to other professional environments, and we suggest directions for future research.

Relevance:

20.00%

Publisher:

Abstract:

This paper explores the development of multi-feature classification techniques used to identify tremor-related characteristics in Parkinsonian patients. Local field potentials were recorded from the subthalamic nucleus and the globus pallidus internus of eight Parkinsonian patients through the implanted electrodes of a deep brain stimulation (DBS) device prior to device internalization. A range of signal processing techniques was evaluated with respect to tremor detection capability and used as inputs to a multi-feature neural network classifier to identify the activity of Parkinsonian tremor. The results of this study show that a trained multi-feature neural network is able, under certain conditions, to achieve excellent detection accuracy on patients unseen during training. Overall the tremor detection accuracy was mixed, although an accuracy of over 86% was achieved in four of the eight patients.
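
The abstract does not list the specific features evaluated, but spectral band power is a typical input for this kind of LFP classifier. A minimal sketch (band edges and the simple periodogram are illustrative assumptions, not the paper's feature set):

```python
import numpy as np

def band_power(signal, fs, f_lo=3.0, f_hi=7.0):
    """Mean spectral power of `signal` in the band [f_lo, f_hi] Hz.

    Defaults target the Parkinsonian tremor range; the exact band is
    an assumption for illustration.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

def lfp_features(epoch, fs):
    """Hypothetical feature vector for one LFP epoch:
    power in the tremor, beta and low-gamma bands."""
    return [band_power(epoch, fs, 3, 7),
            band_power(epoch, fs, 13, 30),
            band_power(epoch, fs, 30, 90)]
```

Feature vectors of this kind, computed per epoch, would then be fed to a multi-feature neural network classifier as described in the paper.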

Relevance:

20.00%

Publisher:

Abstract:

Voluntary selective attention can prioritize different features in a visual scene. The frontal eye fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as has been shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to the right FEF increased blood oxygen level-dependent (BOLD) signals in visual areas processing the "target feature" but not in "distracter feature"-processing regions. TMS-induced BOLD signal increases appeared in motion-responsive visual cortex (MT+) when motion was attended in a display of moving dots superimposed on face stimuli, but in the face-responsive fusiform face area (FFA) when faces were attended. These TMS effects on the BOLD signal in both regions were negatively related to performance on the motion task, supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for a role of the human FEF in the control of nonspatial, "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property.

Relevance:

20.00%

Publisher:

Abstract:

Introduction. Feature usage is a prerequisite to realising the benefits of investments in feature-rich systems. We propose that conceptualising the dependent variable 'system use' as 'level of use', and specifying it as a formative construct, has greater value for measuring the post-adoption use of feature-rich systems. We then validate the content of the construct as a first step in developing a research instrument to measure it. The context of our study is the post-adoption use of electronic medical records (EMRs) by primary care physicians. Method. Initially, a literature review of the empirical context defines the scope based on prior studies. Having identified core features from the literature, they are further refined with the help of experts in a consensus-seeking process that follows the Delphi technique. Results. The methodology was successfully applied to EMRs, which were selected as an example of feature-rich systems. A review of EMR usage and regulatory standards provided the feature input for the first round of the Delphi process. A panel of experts then reached consensus after four rounds, identifying ten task-based features that would serve as indicators of level of use. Conclusions. To study why some users deploy more advanced features than others, theories of post-adoption require a rich formative dependent variable that measures level of use. We have demonstrated that a context-sensitive literature review, followed by refinement through a consensus-seeking process, is a suitable methodology for validating the content of this dependent variable. This is the first step of instrument development, prior to statistical confirmation with a larger sample.

Relevance:

20.00%

Publisher:

Abstract:

Prism is a modular classification rule generation method based on the ‘separate and conquer’ approach, an alternative to the rule induction approach using decision trees, also known as ‘divide and conquer’. Prism often achieves a level of classification accuracy similar to that of decision trees, but tends to produce a more compact, noise-tolerant set of classification rules. As with other classification rule generation methods, a principal problem arising with Prism is overfitting due to over-specialised rules. In addition, over-specialised rules increase the associated computational complexity. These problems can be addressed by pruning methods. For the Prism method, two pruning algorithms have recently been introduced for reducing the overfitting of classification rules: J-pruning and Jmax-pruning. Both algorithms are based on the J-measure, an information-theoretic means of quantifying the theoretical information content of a rule. Jmax-pruning attempts to exploit the J-measure to its full potential, which J-pruning does not actually achieve and which may even lead to underfitting. A series of experiments has shown that Jmax-pruning may outperform J-pruning in reducing overfitting. However, Jmax-pruning is computationally relatively expensive and may also lead to underfitting. This paper reviews the Prism method and the two existing pruning algorithms above, and proposes a novel pruning algorithm called Jmid-pruning. The latter is based on the J-measure and reduces overfitting to a similar level as the other two algorithms, but is better at avoiding underfitting and unnecessary computational effort. The authors conduct an experimental study of the performance of the Jmid-pruning algorithm in terms of classification accuracy and computational efficiency. The algorithm is also evaluated comparatively with the J-pruning and Jmax-pruning algorithms.
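
The J-measure underlying all three pruning algorithms has a compact closed form (Smyth and Goodman's definition for a rule "IF y THEN x"; the sketch below is a direct transcription of that formula, not the paper's pruning code):

```python
import math

def j_measure(p_y, p_x, p_x_given_y):
    """J-measure of a rule 'IF y THEN x'.

    p_y:         probability that the rule's antecedent y fires
    p_x:         prior probability of the consequent class x
    p_x_given_y: probability of x among examples covered by y
    """
    def term(p, q):
        # p * log2(p / q), with the convention 0 * log(0 / q) = 0
        return 0.0 if p == 0 else p * math.log2(p / q)

    # Cross-entropy between posterior and prior class distributions,
    # weighted by how often the rule fires (its coverage p_y).
    j_inner = term(p_x_given_y, p_x) + term(1 - p_x_given_y, 1 - p_x)
    return p_y * j_inner
```

A rule whose covered examples have the same class distribution as the prior scores zero, while rules that shift the distribution score higher; J-pruning stops specialising a rule when appending a further term would lower this score.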

Relevance:

20.00%

Publisher:

Abstract:

Considerable effort is presently being devoted to producing high-resolution sea surface temperature (SST) analyses, with a goal of spatial grid resolutions as fine as 1 km. Because grid resolution is not the same as feature resolution, a method is needed to objectively determine the resolution capability and accuracy of SST analysis products. Ocean model SST fields are used in this study as simulated “true” SST data and are subsampled based on actual infrared and microwave satellite data coverage. The subsampled data are used to simulate sampling errors due to missing data. Two different SST analyses are considered and run using both the full and the subsampled model SST fields, with and without additional noise. The results are compared as a function of the spatial scales of variability using wavenumber auto- and cross-spectral analysis. The spectral variance at high wavenumbers (smallest wavelengths) is shown to be attenuated relative to the true SST because of smoothing that is inherent to both analysis procedures. Comparisons of the two analyses, which have roughly comparable grid sizes, show important differences. One analysis tends to reproduce small-scale features more accurately when the high-resolution data coverage is good, but produces more spurious small-scale noise when the high-resolution data coverage is poor. Analysis procedures can thus generate small-scale features with and without data, but the small-scale features in an SST analysis may be just noise when high-resolution data are sparse. Users must therefore be skeptical of high-resolution SST products, especially in regions where high-resolution (~5 km) infrared satellite data are limited because of cloud cover.
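
The diagnostic at the core of this comparison, a wavenumber power spectrum of an SST transect, can be sketched as follows (a simple periodogram for illustration; the study's actual auto- and cross-spectral machinery is not specified in the abstract):

```python
import numpy as np

def wavenumber_spectrum(field_1d, dx):
    """One-dimensional wavenumber power spectrum of an SST transect.

    field_1d: evenly spaced SST samples along a transect
    dx:       grid spacing (e.g. km)
    Returns (wavenumbers in cycles per unit of dx, spectral power).
    """
    n = len(field_1d)
    # Remove the mean so the zero-wavenumber bin does not dominate.
    anomaly = field_1d - np.mean(field_1d)
    power = np.abs(np.fft.rfft(anomaly)) ** 2 / n
    k = np.fft.rfftfreq(n, d=dx)
    return k, power
```

Comparing the spectrum of a smoothed analysis against that of the "true" field exposes exactly the high-wavenumber attenuation the paper describes: variance at small wavelengths is suppressed by the smoothing inherent to the analysis procedure.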