88 results for computer-based
Abstract:
A new PID tuning and control approach is introduced for Hammerstein systems based on input/output data. A B-spline neural network is used to model the nonlinear static function in the Hammerstein system. The control signal is composed of a PID controller together with a correction term. In order to update the control signal, the multistep-ahead predictions of the Hammerstein system based on the B-spline neural network, together with the associated Jacobian matrices, are calculated using the De Boor algorithm, including both the functional and derivative recursions. A numerical example is used to demonstrate the efficacy of the proposed approach.
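Below is a minimal sketch of De Boor's functional recursion for evaluating a B-spline, the building block used for the multistep-ahead predictions; the knot vector, degree, and coefficients in the usage lines are illustrative assumptions, and the derivative recursion and PID update are omitted.

```python
import numpy as np

def de_boor(x, t, c, p):
    """Evaluate a degree-p B-spline with knot vector t and coefficients c
    at the point x using De Boor's functional recursion."""
    # Locate the knot span k with t[k] <= x < t[k+1], clipped to the valid range
    k = int(np.searchsorted(t, x, side="right")) - 1
    k = min(max(k, p), len(t) - p - 2)

    # Only p+1 coefficients influence this span
    d = [c[j + k - p] for j in range(p + 1)]

    # Triangular recursion of convex combinations
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Illustrative quadratic spline with clamped knots; the values are made up
t = np.array([0, 0, 0, 1, 2, 3, 3, 3], dtype=float)
c = np.array([0.0, 1.2, 0.4, 2.0, 1.5])
print(de_boor(1.5, t, c, p=2))
```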
Abstract:
This paper describes the implementation of a semantic web search engine on conversation-styled transcripts. Our choice of data is Hansard, a publicly available conversation-style transcript of parliamentary debates. The current search engine implementation on Hansard is limited to running search queries based on keywords or phrases and hence lacks the ability to make semantic inferences from user queries. By making use of knowledge such as the relationships between members of parliament, constituencies, terms of office, and topics of debates, the search results can be improved in terms of both relevance and coverage. Our contribution is not algorithmic; instead, we describe how we exploit a collection of external data sources, ontologies, semantic web vocabularies, and named entity extraction in the analysis of the underlying semantics of user queries, as well as in the semantic enrichment of the search index, thereby improving the quality of results.
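As a hedged illustration of the enrichment step (not the authors' actual pipeline), the sketch below attaches fields drawn from a hypothetical knowledge source to a Hansard utterance before indexing, so queries about constituencies, parties, or terms of office can match documents that never contain those words; the MP_ONTOLOGY table and extract_entities placeholder are invented stand-ins for the external data sources, ontologies, and named entity extraction mentioned above.

```python
# Hypothetical slice of an external knowledge source about MPs
MP_ONTOLOGY = {
    "Jane Smith": {"constituency": "Northville", "party": "Example Party",
                   "term_of_office": "2010-2015"},
}

def extract_entities(text: str) -> list:
    # Placeholder for a real named-entity extractor
    return [tok for tok in text.split() if tok.istitle()]

def enrich(utterance: dict) -> dict:
    """Attach semantic fields inferred from the speaker and debate topic."""
    doc = dict(utterance)
    doc.update(MP_ONTOLOGY.get(utterance["speaker"], {}))  # constituency, party, term
    doc["entities"] = extract_entities(utterance["text"])
    return doc

enriched = enrich({"speaker": "Jane Smith",
                   "text": "The Budget debate concerns Northville families.",
                   "topic": "Budget"})
# The index entry now also covers "Northville", "Example Party", "2010-2015",
# so a query on any of those can retrieve this utterance.
```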
Abstract:
We present a method of simulating both the avalanche and surge components of pyroclastic flows generated by lava collapsing from a growing Pelean dome. This is used to successfully model the pyroclastic flows generated on 12 May 1996 by the Soufriere Hills volcano, Montserrat. In simulating the avalanche component we use a simple 3-fold parameterisation of flow acceleration, for which we choose values using an inverse method. The surge component is simulated by a 1D hydraulic balance of sedimentation of clasts and entrainment of air away from the avalanche source. We show how multiple simulations based on uncertainty in the starting conditions and parameters, specifically location and size (mass flux), could be used to map hazard zones.
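As a loose, hedged illustration of that final point only, the sketch below runs many simulations with a placeholder run-out model under assumed input distributions and converts them into exceedance probabilities; the model and the priors for location, mass flux, and friction are invented and stand in for the avalanche/surge physics described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def runout_distance(mass_flux, friction):
    """Placeholder flow model standing in for the avalanche/surge simulation."""
    return 0.3 * np.log1p(mass_flux) / friction  # illustrative only, km

# Sample uncertain inputs (collapse location, mass flux, friction parameter)
n = 5000
mass_flux = rng.lognormal(mean=13.0, sigma=0.5, size=n)  # kg/s, assumed prior
friction  = rng.uniform(0.2, 0.4, size=n)                # assumed prior
start_x   = rng.normal(0.0, 0.1, size=n)                 # km along the valley

reach = start_x + runout_distance(mass_flux, friction)

# 1D "hazard map": probability that a flow reaches each downstream distance
distances = np.linspace(0, 20, 81)
exceedance = [(reach >= d).mean() for d in distances]
```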
Abstract:
Long Term Evolution (LTE) based networks lack native support for Circuit Switched (CS) services. The Evolved Packet System (EPS), which includes the Evolved UMTS Terrestrial Radio Access Network (E-UTRAN) and the Evolved Packet Core (EPC), is a purely all-IP packet system. This introduces the problem of how to provide voice call support when a user is within an LTE network and how to ensure voice service continuity when the user moves out of LTE coverage. Different technologies have been proposed to provide voice service to LTE users and to ensure that the service continues outside LTE networks. The aim of this paper is to analyze and evaluate the overall performance of these technologies, along with Single Radio Voice Call Continuity (SRVCC) inter-RAT handover to the Universal Terrestrial Radio Access Network/GSM EDGE Radio Access Network (UTRAN/GERAN). The possible solutions for providing voice call and service continuity over LTE-based networks are Circuit Switched Fallback (CSFB), Voice over LTE via Generic Access (VoLGA), Voice over LTE (VoLTE) based on IMS/MMTel with SRVCC, and Over The Top (OTT) services such as Skype. This paper focuses mainly on the 3GPP standard solutions for implementing voice over LTE. The paper compares various aspects of these solutions and suggests a possible roadmap that mobile operators can adopt to provide seamless voice over LTE.
Abstract:
Expert systems have become increasingly popular because of their commercial importance. A rule-based system is a special type of expert system, which consists of a set of ‘if-then’ rules and can be applied as a decision support system in many areas such as healthcare, transportation, and security. Rule-based systems can be constructed from both expert knowledge and data. This paper introduces the theory of rule-based systems, especially the categorization and construction of such systems, from a conceptual point of view. It also describes rule-based systems for classification tasks in detail.
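A minimal sketch of what such a system looks like when used as a classifier: an ordered list of if-then rules with a default consequent. The attributes, thresholds, and classes below are invented for illustration rather than taken from the paper.

```python
# Ordered if-then rules: each pair is (antecedent, consequent).
# In practice the rules come from domain experts or are learned from data.
rules = [
    (lambda p: p["temperature"] > 39.0 and p["heart_rate"] > 100, "urgent"),
    (lambda p: p["temperature"] > 38.0,                           "refer"),
]

def classify(patient: dict, default: str = "routine") -> str:
    """Fire the first rule whose antecedent holds; otherwise return the default."""
    for antecedent, consequent in rules:
        if antecedent(patient):
            return consequent
    return default

print(classify({"temperature": 38.5, "heart_rate": 80}))  # -> "refer"
```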
Abstract:
Event-related desynchronization (ERD) of the electroencephalogram (EEG) from the motor cortex is associated with the execution, observation, and mental imagery of motor tasks. Generation of ERD by motor imagery (MI) has been widely used for brain-computer interfaces (BCIs) linked to neuroprosthetics and other motor assistance devices. Control of MI-based BCIs can be acquired through neurofeedback training to reliably induce MI-associated ERD. To develop more effective training conditions, we investigated the effect of static and dynamic visual representations of target movements (a picture of forearms or a video clip of hand grasping movements) during BCI training. After 4 consecutive training days, the group that performed MI while viewing the video showed significant improvement in generating MI-associated ERD compared with the group that viewed the static image. This result suggests that passively observing the target movement during MI would improve the associated mental imagery and enhance MI-based BCI skills.
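For context, ERD is commonly quantified as the percentage drop in band power during the task relative to a reference interval; the sketch below follows that convention, with the mu band, sampling rate, and window boundaries chosen as assumptions rather than taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def erd_percent(eeg, fs=250, band=(8, 12), baseline=(0, 2), task=(3, 6)):
    """Percentage ERD: band-power change during the task relative to baseline.

    Negative values indicate desynchronization (a power decrease).
    eeg : 1-D signal from a motor-cortex channel (e.g. C3).
    """
    # Band-pass filter in the chosen band (here an assumed mu band, 8-12 Hz)
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = filtfilt(b, a, eeg) ** 2

    ref = power[int(baseline[0] * fs):int(baseline[1] * fs)].mean()
    act = power[int(task[0] * fs):int(task[1] * fs)].mean()
    return 100.0 * (act - ref) / ref
```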
Abstract:
Various complex oscillatory processes are involved in the generation of the motor command. The temporal dynamics of these processes were studied for movement detection from single-trial electroencephalogram (EEG). Autocorrelation analysis was performed on the EEG signals to find robust markers of movement detection. The evolution of the autocorrelation function was characterised via its relaxation time, estimated by exponential curve fitting. It was observed that the decay constant of the exponential curve increased during movement, indicating that the autocorrelation function decays more slowly during motor execution. Significant differences were observed between movement and no-movement tasks. Additionally, a linear discriminant analysis (LDA) classifier was used to identify movement trials, with a peak accuracy of 74%.
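A hedged sketch of the described feature pipeline: compute the single-trial autocorrelation, fit an exponential to estimate the relaxation time, and feed that feature to an LDA classifier. The maximum lag, initial guess, and data layout are assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def relaxation_time(x, max_lag=50):
    """Fit exp(-lag/tau) to the normalised autocorrelation of one EEG trial."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    acf = acf[:max_lag] / acf[0]                          # normalise to 1 at lag 0
    lags = np.arange(max_lag)
    (tau,), _ = curve_fit(lambda l, tau: np.exp(-l / tau), lags, acf, p0=[5.0])
    return tau

def fit_movement_classifier(trials, labels):
    """trials: (n_trials, n_samples) array; labels: 1 = movement, 0 = rest."""
    feats = np.array([[relaxation_time(t)] for t in trials])
    return LinearDiscriminantAnalysis().fit(feats, labels)
```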
Abstract:
ESA’s first multi-satellite mission, Cluster, is unique in its concept of four satellites orbiting in controlled formations. This will give an unprecedented opportunity to study the structure and dynamics of the magnetosphere. In this paper we discuss ways in which ground-based remote-sensing observations of the ionosphere can be used to support the multipoint in-situ satellite measurements. There are a very large number of potentially useful configurations between the satellites and any one ground-based observatory; however, the number of ideal occurrences for any one configuration is low. Many of the ground-based instruments cannot operate continuously, and Cluster will take data only for part of each orbit, depending on how much high-resolution (‘burst-mode’) data are acquired. In addition, there are a great many instrument modes, as well as the formation, size, and shape of the cluster of four satellites, to consider. These circumstances create a clear and pressing need for careful planning to ensure that the scientific return from Cluster is maximised by additional coordinated ground-based observations. For this reason, ESA established a working group to coordinate the observations on the ground with Cluster. We give a number of examples of how the combined spacecraft and ground-based observations can address outstanding questions in magnetospheric physics. An online computer tool has been prepared to allow the planning of conjunctions and advantageous constellations between the Cluster spacecraft and individual or combined ground-based systems. During the mission, a ground-based database containing index and summary data will help to identify interesting datasets and allow intervals to be selected for coordinated studies. We illustrate the philosophy of our approach using a few important examples of the many possible configurations between the satellites and the ground-based instruments.
Abstract:
A model based on graph isomorphisms is used to formalize software evolution. Step by step, we narrow the search space through an informed selection of attributes based on the current state of the art in software engineering and generate a seed solution. We then traverse the resulting space using graph isomorphisms and other set operations over the vertex sets. The new solutions preserve the desired attributes. The goal of defining an isomorphism-based search mechanism is to construct predictors of evolution that can facilitate the automation of the ’software factory’ paradigm. The model allows for automation via software tools implementing the concepts.
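The basic building block the abstract relies on is an attribute-preserving graph isomorphism test; the sketch below shows such a test with networkx on two invented software-structure graphs, and is not the paper's search mechanism itself.

```python
import networkx as nx
from networkx.algorithms.isomorphism import categorical_node_match

# Two small, invented software-structure graphs with a "kind" attribute per node
g1 = nx.Graph()
g1.add_edges_from([("A", "B"), ("B", "C")])
nx.set_node_attributes(g1, {"A": "module", "B": "class", "C": "method"}, "kind")

g2 = nx.Graph()
g2.add_edges_from([("X", "Y"), ("Y", "Z")])
nx.set_node_attributes(g2, {"X": "module", "Y": "class", "Z": "method"}, "kind")

# Isomorphism test that also requires node kinds (the preserved attributes) to match
same_structure = nx.is_isomorphic(g1, g2, node_match=categorical_node_match("kind", None))
print(same_structure)  # True: both structure and node kinds are preserved
```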
Abstract:
The notion that learning can be enhanced when a teaching approach matches a learner’s learning style has been widely accepted in classroom settings, since the latter is a predictor of a student’s attitude and preferences. As such, the traditional ‘one-size-fits-all’ approach to teaching delivery in Educational Hypermedia Systems (EHSs) has to be replaced with an approach that responds to users’ needs by exploiting their individual differences. However, establishing and implementing reliable approaches for matching teaching delivery and modalities to learning styles still represents an innovation challenge that has to be tackled. In this paper, seventy-six studies are objectively analysed with several goals. In order to reveal the value of integrating learning styles in EHSs, different perspectives in this context are discussed. One goal is to identify the most effective learning style models as incorporated within AEHSs; another is to investigate the effectiveness of different approaches for modelling students’ individual learning traits. Thus, the paper highlights a number of theoretical and technical issues of LS-BAEHSs to serve as comprehensive guidance for researchers interested in this area.
Abstract:
In this paper, a new paradigm is presented to improve the performance of audio-based P300 brain-computer interfaces (BCIs) by using spatially distributed natural sound stimuli. The new paradigm was compared with a conventional paradigm using spatially distributed sound to demonstrate its performance. The results show that the new paradigm enlarged the N200 and P300 components and yielded significantly better BCI performance than the conventional paradigm.
Abstract:
A fully automated and online artifact removal method for the electroencephalogram (EEG) is developed for use in brain-computer interfacing. The method (FORCe) is based upon a novel combination of wavelet decomposition, independent component analysis, and thresholding. FORCe is able to operate on a small channel set during online EEG acquisition and does not require additional signals (e.g. electrooculogram signals). Evaluation of FORCe is performed offline on EEG recorded from 13 BCI participants with cerebral palsy (CP) and online with three healthy participants. The method outperforms the state-of-the-art automated artifact removal methods Lagged auto-mutual information clustering (LAMIC) and Fully automated statistical thresholding (FASTER), and is able to remove a wide range of artifact types, including blink, electromyogram (EMG), and electrooculogram (EOG) artifacts.
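The sketch below is not FORCe itself, only a rough illustration of the general wavelet decomposition + ICA + thresholding combination the abstract names; the wavelet choice, the kurtosis-based rejection rule, and the threshold are all assumptions.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def clean_epoch(eeg, wavelet="db4", level=4, kurt_thresh=5.0):
    """Rough sketch: ICA on the wavelet approximation coefficients, rejection of
    high-kurtosis (artifact-like) components, then reconstruction.

    eeg : array of shape (n_channels, n_samples).
    """
    # 1. Wavelet decomposition per channel
    coeffs = [pywt.wavedec(ch, wavelet, level=level) for ch in eeg]
    approx = np.array([c[0] for c in coeffs])          # approximation coefficients

    # 2. ICA across channels on the approximation coefficients
    ica = FastICA(n_components=eeg.shape[0], random_state=0)
    sources = ica.fit_transform(approx.T).T

    # 3. Zero out components whose excess kurtosis exceeds the assumed threshold
    def excess_kurtosis(x):
        x = x - x.mean()
        return (x ** 4).mean() / (x.var() ** 2) - 3.0

    for i, s in enumerate(sources):
        if abs(excess_kurtosis(s)) > kurt_thresh:
            sources[i] = 0.0

    # 4. Invert the ICA mixing and the wavelet transform
    approx_clean = ica.inverse_transform(sources.T).T
    cleaned = []
    for ch_coeffs, a in zip(coeffs, approx_clean):
        ch_coeffs[0] = a
        cleaned.append(pywt.waverec(ch_coeffs, wavelet)[:eeg.shape[1]])
    return np.array(cleaned)
```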