117 results for COMPUTER SCIENCE, THEORY


Relevance:

100.00%

Publisher:

Abstract:

This study is concerned with delay-range-dependent stability analysis for neural networks with time-varying delay and Markovian jumping parameters. The time-varying delay is assumed to lie in an interval with known lower and upper bounds. The Markovian jumping parameters are introduced into the delayed neural networks and are governed by a continuous-time, finite-state Markov chain. A sufficient condition is derived in terms of linear matrix inequalities, based on appropriate Lyapunov-Krasovskii functionals and stochastic stability theory, which guarantees global asymptotic stability in the mean square. Finally, a numerical example is provided to validate the effectiveness of the proposed conditions.
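For orientation, a typical mode-dependent Lyapunov-Krasovskii functional for an interval delay tau_1 <= tau(t) <= tau_2 has the form below; this is a generic sketch for illustration, not necessarily the specific functional or LMI conditions used in the paper.

```latex
\[
V(x_t, r_t) = x^{\top}(t)\, P(r_t)\, x(t)
  + \int_{t-\tau_1}^{t} x^{\top}(s)\, Q_1\, x(s)\, ds
  + \int_{t-\tau_2}^{t} x^{\top}(s)\, Q_2\, x(s)\, ds
  + \int_{-\tau_2}^{-\tau_1}\!\!\int_{t+\theta}^{t} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, ds\, d\theta
\]
```

Here P(i) > 0 for each Markov mode i and Q_1, Q_2, R > 0; requiring the weak infinitesimal generator of V to be negative definite yields sufficient conditions that can be cast as a set of linear matrix inequalities.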

Relevance:

100.00%

Publisher:

Abstract:

Driving phenomenon is a repetitive process, that permits sequential learning under identifying the proper change periods. Sequential filtering is widely used for tracking and prediction of state dynamics. However, it suffers at abrupt changes, which cause sudden incremental prediction error. We provide a sequential filtering approach using online Bayesian detection of change points to decrease prediction error generally, and specifically at abrupt changes. The approach learns from optimally detected segments for identifying driving behaviour. Change points detection is done by the Pruned Exact Linear Time algorithm. Computational cost of our approach is bounded by the cost of the implemented sequential filter. This computational performance is suitable to the online nature of motion simulator's delay reduction. The approach was tested on a simulated driving scenario using Vortex by CM Labs. The state dimensions are simulated 2D space coordinates, and velocity. Particle filter was used for online sequential filtering. Prediction results show that change-point detection improves the quality of state estimation compared to traditional sequential filters, and is more suitable for predicting behavioural activities.
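As a rough illustration of the segmentation step, the sketch below detects change points in a 1-D state trace with the PELT implementation from the ruptures library and fits a trivial per-segment model; the actual approach couples the detected segments with a particle filter, which is not reproduced here. The signal, penalty value and segment model are placeholders.

```python
import numpy as np
import ruptures as rpt  # PELT implementation (assumed dependency)

# Synthetic 1-D state trace with abrupt mean shifts (placeholder for simulator data).
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(m, 0.5, 200) for m in (0.0, 3.0, -1.0)])

# Pruned Exact Linear Time (PELT) change-point detection.
algo = rpt.Pelt(model="l2", min_size=20).fit(signal)
breakpoints = algo.predict(pen=10)  # indices that end each detected segment

# Learn a trivial per-segment model (the mean) as a stand-in for the
# segment-wise learning that feeds the sequential filter.
start, segment_means = 0, []
for end in breakpoints:
    segment_means.append(signal[start:end].mean())
    start = end
print(breakpoints, segment_means)
```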

Relevance:

100.00%

Publisher:

Abstract:

The Motion Cueing Algorithm (MCA) transforms longitudinal and rotational motions into simulator movement, aiming to reproduce high-fidelity motion within the simulator's physical limitations. Classical washout filters are widely used in commercial simulators because of their relative simplicity and reasonable performance. Their main drawback is an empirical parameter-tuning method based on trial and error, which depends heavily on the programmer's experience and is the most important obstacle to exploiting the platform efficiently; the resulting conservative motion produces false motion cues. Classical washout filters also neglect human perception error, and the effect of their parameters on the generated motion cues is difficult to understand. The aim of this study is to present a straightforward optimization method for adjusting the classical MCA parameters, based on the Genetic Algorithm (GA), for a vehicle simulator, in order to minimize the sensation error between the real and simulator driver while exploiting the platform within its physical limitations. Both the vestibular sensation error between the real and simulator driver and the motion limitations are taken into account during optimization. The proposed GA-optimized MCA is implemented in MATLAB/Simulink. The results show the superiority of the proposed MCA: it improves human sensation, maximizes reference-signal shape following, and exploits the platform more efficiently within the motion constraints.
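A minimal, self-contained sketch of the GA loop described above follows, with a placeholder fitness function standing in for the vestibular sensation error plus a limit penalty; in the paper the objective is evaluated through the MATLAB/Simulink MCA model, so everything below the fitness stub is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

N_PARAMS = 6          # e.g. washout-filter break frequencies and gains (placeholder count)
LOWER, UPPER = 0.1, 10.0

def fitness(params):
    """Placeholder cost: in the paper this would run the MCA in Simulink, compare
    real vs. simulated driver sensation, and penalise exceeding platform limits."""
    target = np.linspace(1.0, 2.0, N_PARAMS)                 # dummy "ideal" parameter set
    error = np.sum((params - target) ** 2)
    penalty = 100.0 * np.sum(np.maximum(0.0, params - UPPER))  # illustrative limit penalty
    return error + penalty

def ga(pop_size=40, generations=100, mutation_rate=0.1):
    pop = rng.uniform(LOWER, UPPER, (pop_size, N_PARAMS))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]     # truncation selection
        pairs = rng.integers(0, len(parents), (pop_size, 2))
        cross = rng.random((pop_size, N_PARAMS)) < 0.5         # uniform crossover
        children = np.where(cross, parents[pairs[:, 0]], parents[pairs[:, 1]])
        mutate = rng.random(children.shape) < mutation_rate    # Gaussian mutation
        pop = np.clip(children + mutate * rng.normal(0, 0.5, children.shape),
                      LOWER, UPPER)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(scores)]

print(ga())
```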

Relevance:

100.00%

Publisher:

Abstract:

This paper focuses on designing an adaptive controller for traffic signal timing. Urban traffic is an inevitable part of modern cities, and traffic signal controllers are effective tools for managing it. In this regard, this paper proposes a distributed neural network (NN) controller for traffic signal timing. The controller applies the cuckoo search (CS) optimization method to find the optimal parameters in the design of an adaptive traffic-signal-timing control system. The performance of the designed controller is evaluated in a multi-intersection traffic network. The developed controller shows a promising improvement in reducing travel delay time compared with traditional fixed-time control systems.
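For illustration, a bare-bones cuckoo search loop over a generic parameter vector is sketched below, with Lévy flights generated by Mantegna's algorithm; the objective here is a placeholder, whereas in the paper it would be the travel-delay measure produced by the distributed NN controller on the multi-intersection network.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

DIM = 8                  # placeholder for the number of controller/NN parameters
LOWER, UPPER = -5.0, 5.0

def objective(x):
    # Placeholder cost standing in for the simulated travel delay time.
    return float(np.sum(x ** 2))

def levy_step(size, beta=1.5):
    """Mantegna's algorithm for Lévy-distributed step lengths."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(n_nests=25, iterations=200, pa=0.25, alpha=0.01):
    nests = rng.uniform(LOWER, UPPER, (n_nests, DIM))
    costs = np.array([objective(n) for n in nests])
    best = nests[np.argmin(costs)].copy()
    for _ in range(iterations):
        # New solutions via Lévy flights biased towards the current best nest.
        new = np.clip(nests + alpha * levy_step((n_nests, DIM)) * (nests - best),
                      LOWER, UPPER)
        new_costs = np.array([objective(n) for n in new])
        improved = new_costs < costs
        nests[improved], costs[improved] = new[improved], new_costs[improved]
        # Abandon a fraction pa of nests and rebuild them randomly.
        abandon = rng.random(n_nests) < pa
        nests[abandon] = rng.uniform(LOWER, UPPER, (abandon.sum(), DIM))
        costs[abandon] = [objective(n) for n in nests[abandon]]
        best = nests[np.argmin(costs)].copy()
    return best, costs.min()

print(cuckoo_search())
```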

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes a Q-learning-based controller for a network of multiple intersections. Given the increasing traffic congestion in modern cities, an efficient control system is in demand. The proposed controller is designed to adjust the green time for traffic signals with the aim of reducing vehicles' travel delay time in a multi-intersection network. The designed system is a distributed traffic-timing control model that applies an individual controller to each intersection. Each controller manages congestion at its own intersection while attempting to reduce the travel delay time across the whole traffic network. The experimental results indicate the satisfactory efficiency of the developed distributed Q-learning controller.
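The core of such a controller is the standard tabular Q-learning update applied independently at each intersection. A minimal sketch is given below; the state encoding, action set and reward are illustrative assumptions, not the paper's exact design.

```python
import random
from collections import defaultdict

ACTIONS = [20, 30, 40, 50]        # candidate green times in seconds (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

class IntersectionAgent:
    """One Q-learning controller per intersection in the distributed scheme."""

    def __init__(self):
        self.q = defaultdict(lambda: [0.0] * len(ACTIONS))

    def choose(self, state):
        if random.random() < EPSILON:                 # epsilon-greedy exploration
            return random.randrange(len(ACTIONS))
        values = self.q[state]
        return values.index(max(values))

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[next_state])
        td_target = reward + GAMMA * best_next
        self.q[state][action] += ALPHA * (td_target - self.q[state][action])

# Usage sketch: state = discretised queue lengths per approach, reward = negative delay.
agent = IntersectionAgent()
state = (2, 1, 0, 3)
a = agent.choose(state)
agent.update(state, a, reward=-12.5, next_state=(1, 1, 0, 2))
```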

Relevance:

100.00%

Publisher:

Abstract:

It is crucial for a neuron spike sorting algorithm to cluster data from different neurons efficiently. In this study, the search capability of the Genetic Algorithm (GA) is exploited for identifying the optimal feature subset for neuron spike sorting with a clustering algorithm. Two important objectives of the optimization process are considered: to reduce the number of features and increase the clustering performance. Specifically, we employ a binary GA with the silhouette evaluation criterion as the fitness function for neuron spike sorting using the Super-Paramagnetic Clustering (SPC) algorithm. The clustering results of SPC with and without the GA-based feature selector are evaluated using benchmark synthetic neuron spike data sets. The outcome indicates the usefulness of the GA in identifying a smaller feature set with improved clustering performance.
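A compact sketch of binary-GA feature selection with the silhouette criterion as fitness is shown below; k-means is used as a stand-in clusterer because Super-Paramagnetic Clustering is not available in standard libraries, and the synthetic data, penalty weight and GA settings are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)

# Synthetic "spike features": 3 clusters in 10 informative dimensions plus 10 noise dimensions.
X_informative, _ = make_blobs(n_samples=300, centers=3, n_features=10, random_state=0)
X = np.hstack([X_informative, rng.normal(size=(300, 10))])

def fitness(genome):
    """Silhouette of the clustering on the selected features, minus a small
    penalty per selected feature (rewarding smaller subsets)."""
    selected = genome.astype(bool)
    if not selected.any():
        return -1.0
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[:, selected])
    return silhouette_score(X[:, selected], labels) - 0.01 * selected.sum()

def binary_ga(n_features=20, pop_size=30, generations=40):
    pop = rng.integers(0, 2, (pop_size, n_features))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # keep the best half
        pairs = rng.integers(0, len(parents), (pop_size, 2))
        cross = rng.random((pop_size, n_features)) < 0.5           # uniform crossover
        children = np.where(cross, parents[pairs[:, 0]], parents[pairs[:, 1]])
        flip = rng.random(children.shape) < 0.02                   # bit-flip mutation
        pop = np.where(flip, 1 - children, children)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

best_subset = binary_ga()
print("selected features:", np.flatnonzero(best_subset))
```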

Relevance:

100.00%

Publisher:

Abstract:

Artificial neural network (ANN) models are able to predict future events based on current data. The usefulness of an ANN lies in the capacity of the model to learn and adjust its weights following previous errors during training. In this study, we carefully analyse the existing methods for neuronal spike sorting. The current methods use clustering as a basis to establish the ground truths, which requires tedious procedures pertaining to feature selection and evaluation of the selected features; even so, the accuracy of the clusters remains questionable. Here, we develop an ANN model that specifically addresses the present drawbacks and major challenges in neuronal spike sorting. New enhancements are introduced into the conventional backpropagation ANN for determining the network weights, input nodes, target node, and error calculation. Coiflet modelling of noise is employed to enhance the spike-shape features and suppress noise. The ANN is used in conjunction with a special spiking-event detection technique to prioritize the targets. The proposed enhancements bolster the training concept and, on the whole, contribute to sorting neuronal spikes with close approximations.
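The Coiflet-based noise handling mentioned above can be illustrated with standard wavelet shrinkage using PyWavelets; this is a generic denoising sketch under assumed settings (wavelet 'coif3', soft universal threshold), not the paper's exact modelling procedure.

```python
import numpy as np
import pywt  # PyWavelets (assumed dependency)

rng = np.random.default_rng(4)

# Synthetic spike-like waveform buried in noise (placeholder for recorded data).
t = np.linspace(-1, 1, 256)
spike = np.exp(-(t / 0.05) ** 2) - 0.4 * np.exp(-((t - 0.15) / 0.1) ** 2)
noisy = spike + 0.2 * rng.normal(size=t.size)

# Decompose with a Coiflet wavelet, soft-threshold the detail coefficients,
# and reconstruct; this emphasises the spike shape while suppressing noise.
coeffs = pywt.wavedec(noisy, "coif3", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
threshold = sigma * np.sqrt(2 * np.log(noisy.size))     # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                                 for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "coif3")
```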

Relevance:

100.00%

Publisher:

Abstract:

Fog computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to the Cloud, Fog provides data, compute, storage, and application services to end users. In this article, we elaborate on the motivation and advantages of Fog computing and analyse its applications in a series of real scenarios, such as the Smart Grid, smart traffic lights in vehicular networks, and software-defined networks. We discuss the state of the art of Fog computing and similar work under the same umbrella. Security and privacy issues are further examined in the context of the current Fog computing paradigm. As an example, we study a typical attack, the man-in-the-middle attack, to frame the discussion of security in Fog computing, and we investigate the stealthy features of this attack by examining its CPU and memory consumption on a Fog device.

Relevance:

100.00%

Publisher:

Abstract:

In a cyber-physical system (CPS), computational resources and physical resources are strongly correlated and mutually dependent. Cascading failures occur between coupled networks and make the system more fragile than a single network. Besides the widely used giant-component metric, we study small clusters (small components) in interdependent networks after cascading failures occur. We first give an overview of how small clusters are distributed in various single networks. We then propose a percolation-theory-based mathematical method to study how small clusters are affected by the interdependence between two coupled networks, and we prove that upper bounds exist for both the fraction and the number of operating small clusters. Without loss of generality, we use both synthetic and real network data in simulations to study small clusters under different interdependence models and network topologies. The extensive simulations highlight our findings: apart from the giant component, a considerable proportion of small clusters exists, with the remainder fragmenting into very tiny pieces or even a large number of isolated vertices; and no matter how tightly the two networks are coupled, an upper bound exists on the size of small clusters. We also find that interdependent small-world networks generally have the highest fractions of operating small clusters. Three attack strategies are compared: Inter-Degree Priority Attack, Intra-Degree Priority Attack, and Random Attack. We observe that the fraction of functioning small clusters remains stable and is independent of the attack strategy.
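As a deliberately simplified, single-network illustration of the small-cluster measurement (the paper's analysis covers two interdependent networks and cascading failures, which is not reproduced here), the sketch below removes a random fraction of nodes and reports how surviving nodes split between the giant component, small clusters and isolated vertices; graph sizes and parameters are arbitrary.

```python
import random
import networkx as nx

random.seed(5)

def small_cluster_profile(graph, attack_fraction=0.3):
    """Remove a random fraction of nodes and classify the surviving components."""
    g = graph.copy()
    victims = random.sample(list(g.nodes), int(attack_fraction * g.number_of_nodes()))
    g.remove_nodes_from(victims)
    sizes = sorted((len(c) for c in nx.connected_components(g)), reverse=True)
    giant = sizes[0] if sizes else 0
    small = [s for s in sizes[1:] if s >= 2]        # small clusters: size >= 2, not the giant
    isolated = sum(1 for s in sizes[1:] if s == 1)  # isolated single vertices
    n = g.number_of_nodes()
    return {"giant_fraction": giant / n,
            "small_cluster_fraction": sum(small) / n,
            "isolated_fraction": isolated / n}

# Compare a small-world topology with a random graph under the same random attack.
ws = nx.watts_strogatz_graph(2000, k=6, p=0.1)
er = nx.gnp_random_graph(2000, p=0.003)
print("small-world:", small_cluster_profile(ws))
print("random     :", small_cluster_profile(er))
```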

Relevance:

100.00%

Publisher:

Abstract:

With the increasing popularity of utility-oriented computing, where resources are traded as services, efficient management of quality of service (QoS) has become increasingly significant to both service consumers and service providers. In the context of deploying distributed multimedia content adaptation on service-oriented computing, ensuring the stringent QoS requirements of the content adaptation is a significant and immediate challenge, yet QoS guarantees in this setting have not been accorded the attention they deserve. In this paper, we address this problem. We formulate SLA management for distributed multimedia content adaptation deployed on service-oriented computing as an integer programming problem, and we propose an SLA management framework that enables the service provider to determine the deliverable QoS before settling the SLA with potential service consumers, in order to optimize QoS guarantees. We analysed the performance of the proposed strategy under various conditions in terms of the SLA success rate, the rejection rate, and the impact of resource-data errors on potential violations of the agreed SLA. We also compared the proposed SLA management framework with a baseline approach in which the distributed multimedia content adaptation is deployed on a service-oriented platform without SLA consideration. The experimental results show that the proposed SLA management framework substantially outperforms the baseline approach, confirming that SLA management is a core requirement for the deployment of distributed multimedia content adaptation on service-oriented systems.
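A toy version of the underlying integer program, written with PuLP, illustrates the idea of admitting SLA requests only when the deliverable QoS fits within resource capacity; the decision variables, capacities and revenues below are invented for illustration and do not reflect the paper's exact formulation.

```python
import pulp

# Candidate content-adaptation SLA requests: (required CPU units, expected revenue).
requests = {"sla1": (4, 10), "sla2": (3, 7), "sla3": (5, 12), "sla4": (2, 4)}
CPU_CAPACITY = 9

prob = pulp.LpProblem("sla_admission", pulp.LpMaximize)
accept = {name: pulp.LpVariable(f"accept_{name}", cat="Binary") for name in requests}

# Objective: maximise revenue from the admitted SLAs.
prob += pulp.lpSum(accept[n] * requests[n][1] for n in requests)

# Capacity constraint: admitted requests must fit the available adaptation resources.
prob += pulp.lpSum(accept[n] * requests[n][0] for n in requests) <= CPU_CAPACITY

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({n: int(accept[n].value()) for n in requests})
```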

Relevance:

100.00%

Publisher:

Abstract:

The increasing complexity of computer systems and communication networks creates tremendous requirements for trust and security. This special issue covers topics on trusted computing, risk and reputation management, network security, and survivable computer systems and networks. These issues have evolved into an active and important area of research and development. The past decade has witnessed a proliferation of concurrency and computation systems striving for high levels of trust, security and privacy, which has become a key subject in determining future research and development activities in many academic and industrial branches. This special issue aims to present and discuss advances in current research and development in all aspects of trusted computing and network security. In addition, it provides snapshots of contemporary academic work in the field of trusted network computing. We prepared and organized this special issue to record state-of-the-art research, novel developments, and trends for future insight in this domain. Fourteen papers have been accepted for publication in this special issue, demonstrating novel and original work in the field. A detailed overview of the selected works is given below.

Relevance:

100.00%

Publisher:

Abstract:

Feature-based camera model identification plays an important role in forensic investigations of images. Conventional feature-based identification schemes suffer from the problem of unknown models, that is, some images are captured by camera models previously unknown to the identification system. To address this problem, we propose a new scheme: Source Camera Identification with Unknown models (SCIU). It has the capability of identifying images from unknown models as well as distinguishing images from known models. The SCIU scheme consists of three stages: 1) unknown detection; 2) unknown expansion; and 3) (K+1)-class classification. Unknown detection applies a k-nearest-neighbours method to recognize a few sample images of unknown models among the unlabeled images. Unknown expansion further extends the set of unknown sample images using a self-training strategy. Then we address a specific (K+1)-class classification, in which the sample images of the unknown models (one class) and the known models (K classes) are combined to train a classifier. In addition, we develop a parameter-optimization method for unknown detection and investigate the stopping criterion for unknown expansion. Experiments carried out on the Dresden image collection confirm the effectiveness of the proposed SCIU scheme. When unknown models are present, the identification accuracy of SCIU is significantly better than that of four state-of-the-art methods: 1) multi-class Support Vector Machine (SVM); 2) binary SVM; 3) the combined classification framework; and 4) decision boundary carving.
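A schematic sketch of the three-stage pipeline (unknown detection via k-NN distances, self-training expansion, then (K+1)-class classification) using generic scikit-learn components is given below; the features, thresholds and classifiers are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def sciu_sketch(X_known, y_known, X_unlabeled, k=5, dist_quantile=0.95, rounds=3):
    """Schematic SCIU-style pipeline: unknown detection, unknown expansion,
    then (K+1)-class classification. Thresholds are illustrative."""
    # 1) Unknown detection: unlabeled images far from all known-model images
    #    (large mean k-NN distance) are taken as seed samples of unknown models.
    nn = NearestNeighbors(n_neighbors=k).fit(X_known)
    dists = nn.kneighbors(X_unlabeled)[0].mean(axis=1)
    threshold = np.quantile(dists, dist_quantile)
    unknown_idx = np.where(dists > threshold)[0]

    # 2) Unknown expansion: a self-training loop grows the unknown set with
    #    unlabeled samples that an interim classifier confidently labels unknown.
    K = len(np.unique(y_known))                      # class label K = "unknown"
    for _ in range(rounds):
        X_train = np.vstack([X_known, X_unlabeled[unknown_idx]])
        y_train = np.concatenate([y_known, np.full(len(unknown_idx), K)])
        clf = SVC(probability=True).fit(X_train, y_train)
        proba = clf.predict_proba(X_unlabeled)
        confident = np.where(proba[:, -1] > 0.9)[0]  # last column = unknown class
        unknown_idx = np.union1d(unknown_idx, confident)

    # 3) (K+1)-class classification: K known camera models plus one "unknown" class.
    X_train = np.vstack([X_known, X_unlabeled[unknown_idx]])
    y_train = np.concatenate([y_known, np.full(len(unknown_idx), K)])
    return SVC().fit(X_train, y_train)
```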

Relevance:

100.00%

Publisher:

Abstract:

For the operator of a power system, having an accurate forecast of the day-ahead load is imperative in order to guarantee the reliability of supply and to minimize generation costs and pollution. Furthermore, in a restructured power system, other parties, such as utility companies, large consumers, and in some cases even ordinary consumers, can benefit from a higher-quality demand forecast. In this paper, the application of smart meter data to producing more accurate load forecasts is discussed. First, an ordinary neural network model is used to generate a forecast for the total load of a number of consumers; the results of this step serve as a benchmark for comparison with the forecast results of a more sophisticated method. In the new method, the consumers are divided into a number of clusters using wavelet decomposition and a clustering technique called interactive k-means, and an individual neural network is then trained for each cluster. By adding the outputs of all of the neural networks, a forecast for the total load is generated. A comparison between the forecast obtained with a single model and the forecast generated by the proposed method shows that smart meter data can be used to significantly improve the quality of the load forecast.
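A condensed sketch of this pipeline follows: wavelet features per consumer, clustering of consumers (plain k-means stands in for the interactive k-means mentioned in the abstract), one neural network per cluster, and a total forecast formed as the sum of the cluster forecasts. Data shapes, the 'db4' wavelet, lags and model settings are assumptions for illustration.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Dummy smart-meter data: 200 consumers x 14 days of hourly load (placeholder).
loads = np.abs(rng.normal(1.0, 0.3, (200, 14 * 24)) +
               np.sin(np.arange(14 * 24) / 24 * 2 * np.pi))

# 1) Wavelet features per consumer: approximation coefficients summarise the profile.
features = np.array([pywt.wavedec(series, "db4", level=3)[0] for series in loads])

# 2) Cluster consumers by load-shape similarity.
cluster_ids = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# 3) One forecaster per cluster: predict the next hour of the cluster's aggregate
#    load from the previous 24 hours.
def make_xy(series, lag=24):
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

total_forecast = 0.0
for c in range(4):
    cluster_load = loads[cluster_ids == c].sum(axis=0)
    X, y = make_xy(cluster_load)
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)
    total_forecast += model.predict(cluster_load[-24:].reshape(1, -1))[0]

# 4) The total-load forecast is the sum of the per-cluster forecasts.
print(total_forecast)
```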

Relevance:

100.00%

Publisher:

Abstract:

The importance of mobile-application-specific testing techniques and methods has been attracting much attention from software engineers over the past few years. This is because mobile applications differ from traditional web and desktop applications and are increasingly being used in critical domains. Mobile applications require a different approach to application quality and dependability, and an effective testing approach is needed to build high-quality, more reliable software. We performed a systematic mapping study to categorize and structure the research evidence that has been published in the area of mobile application testing techniques and the challenges they report. Seventy-nine (79) empirical studies are mapped to a classification schema. Several research gaps and key testing issues for practitioners are identified: the need to elicit testing requirements early in the development process; the need to conduct research in real-world development environments; the need for specific testing techniques targeting application life-cycle conformance and mobile-services testing; and the need for comparative studies of security and usability testing.