945 results for Computer-Aided Engineering


Relevance:

80.00%

Publisher:

Abstract:

The nonlinear, noisy and outlier-prone characteristics of electroencephalography (EEG) signals inspire the employment of fuzzy logic, due to its power to handle uncertainty. This paper introduces an approach to classify motor imagery EEG signals using an interval type-2 fuzzy logic system (IT2FLS) in combination with wavelet transformation. Wavelet coefficients are ranked based on the statistics of the receiver operating characteristic (ROC) curve criterion. The most informative coefficients serve as inputs to the IT2FLS for the classification task. Two benchmark datasets, named Ia and Ib, downloaded from the brain-computer interface (BCI) competition II, are employed for the experiments. Classification performance is evaluated using accuracy, sensitivity, specificity and F-measure. Widely used classifiers, including the feedforward neural network, support vector machine, k-nearest neighbours, AdaBoost and adaptive neuro-fuzzy inference system, are also implemented for comparison. The wavelet-IT2FLS method considerably outperforms the comparable classifiers on both datasets, and exceeds the best performance on the Ia and Ib datasets reported in the BCI competition II by 1.40% and 2.27%, respectively. The proposed approach yields high accuracy at low computational cost, and can therefore be applied to a real-time BCI system for motor imagery data analysis.
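The coefficient-ranking step can be sketched as follows: each wavelet coefficient is scored by the area under the ROC curve (AUC) it achieves as a single-feature discriminator between the two motor imagery classes, and the top-ranked coefficients are retained as classifier inputs. This is a minimal illustration under our own assumptions; the function names and toy data are not from the paper.

```python
def auc_score(values, labels):
    """AUC of a single feature via the rank-sum (Mann-Whitney) statistic."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def rank_coefficients(trials, labels, k):
    """Rank wavelet coefficients by |AUC - 0.5| (discriminative power in
    either direction) and return the indices of the top-k coefficients."""
    n_coeffs = len(trials[0])
    scores = []
    for j in range(n_coeffs):
        column = [t[j] for t in trials]
        scores.append((abs(auc_score(column, labels) - 0.5), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]
```

The selected indices would then pick out the columns of the wavelet-coefficient matrix fed to the IT2FLS.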


Traffic congestion on urban roads is one of the biggest challenges of the 21st century. Despite a myriad of research efforts over the last two decades, the optimization of traffic signals at the network level remains an open research problem. This paper, for the first time, employs the advanced cuckoo search optimization algorithm to optimally tune the parameters of intelligent controllers. A neural network (NN) and an adaptive neuro-fuzzy inference system (ANFIS) are the two intelligent controllers implemented in this study. For the sake of comparison, we also implement Q-learning and fixed-time controllers as benchmarks. Comprehensive simulation scenarios are designed and executed for a traffic network composed of nine four-way intersections. The results obtained for several scenarios demonstrate the optimality of the intelligent controllers trained with the cuckoo search method. The average performance gains of the NN, ANFIS, and Q-learning controllers over the fixed-time controller are 44%, 39%, and 35%, respectively.
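A minimal sketch of cuckoo search as a parameter tuner: each nest holds a candidate controller-parameter vector, new candidates are generated by Lévy flights around the current best nest, and a fraction `pa` of nests is abandoned each generation. The cost function below is a stand-in for the simulated network delay the paper would evaluate; all names and constants here are illustrative assumptions.

```python
import math
import random

def levy_step(beta=1.5):
    """Levy-distributed step length via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(cost, dim, n_nests=15, iters=100, pa=0.25, lo=0.0, hi=1.0):
    """Minimize cost() over [lo, hi]^dim with a basic cuckoo search."""
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fitness = [cost(n) for n in nests]
    best = min(range(n_nests), key=lambda i: fitness[i])
    for _ in range(iters):
        for i in range(n_nests):
            # New solution by a Levy flight biased toward the best nest.
            new = [min(hi, max(lo, x + 0.01 * levy_step() * (x - nests[best][d])))
                   for d, x in enumerate(nests[i])]
            f = cost(new)
            j = random.randrange(n_nests)       # replace a random nest if better
            if f < fitness[j]:
                nests[j], fitness[j] = new, f
        for i in range(n_nests):                # abandon a fraction pa of nests
            if random.random() < pa:
                nests[i] = [random.uniform(lo, hi) for _ in range(dim)]
                fitness[i] = cost(nests[i])
        best = min(range(n_nests), key=lambda i: fitness[i])
    return nests[best], fitness[best]
```

In the paper's setting, `cost` would run a traffic simulation and return a delay metric for the candidate NN or ANFIS parameters.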


Civil infrastructures are critical to every nation, due to the substantial investment they represent, their long service periods, and the enormous negative impacts of their failure. However, they inevitably deteriorate during their service lives. Therefore, methods capable of assessing conditions and identifying damage in a structure in a timely and accurate manner have drawn increasing attention. Recently, compressive sensing (CS), a significant breakthrough in signal processing, has been proposed to capture and represent compressible signals at a rate significantly below the traditional Nyquist rate. Owing to its sound theoretical background and notable influence, this methodology has been successfully applied in many research areas. To explore its application to structural damage identification, a new CS-based damage identification scheme is proposed in this paper, which regards damage identification problems as pattern classification problems. The time-domain structural responses are transferred to the frequency domain as a sparse representation, and numerically simulated data under various damage scenarios are then used to train a feature matrix as input information. This matrix can be used for damage identification through an optimization process. This is one of the first applications of this advanced technique to structural engineering. To demonstrate its effectiveness, numerical simulation results on a complex pipe-soil interaction model are used to train the parameters and then to identify the simulated pipe degradation damage and free-spanning damage. To further validate the method, vibration tests of a steel pipe laid on the ground are carried out, and the measured acceleration time histories are used for damage identification. Both numerical and experimental verification results confirm that the proposed damage identification scheme is a promising tool for structural health monitoring.
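As a toy illustration of treating damage identification as pattern classification over sparse frequency-domain features, the sketch below uses a 1-sparse representation: a test response is matched to the single training atom with the largest normalized correlation, and that atom's damage label is returned. The dictionary, labels and data are hypothetical, and this is a drastic simplification of the optimization-based scheme described above.

```python
import math

def normalize(v):
    """Scale a vector to unit Euclidean norm (zero vectors pass through)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def classify_sparse(signal, dictionary, labels):
    """1-sparse classification: pick the training atom (frequency-domain
    feature vector) with the largest absolute normalized correlation to
    the test signal; its damage label is the prediction."""
    s = normalize(signal)
    best_label, best_corr = None, -1.0
    for atom, label in zip(dictionary, labels):
        a = normalize(atom)
        corr = abs(sum(x * y for x, y in zip(s, a)))
        if corr > best_corr:
            best_label, best_corr = label, corr
    return best_label
```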


This brief proposes an efficient technique for the construction of optimized prediction intervals (PIs) by using the bootstrap technique. The method employs an innovative PI-based cost function in the training of neural networks (NNs) used for estimation of the target variance in the bootstrap method. An optimization algorithm is developed for minimization of the cost function and adjustment of NN parameters. The performance of the optimized bootstrap method is examined for seven synthetic and real-world case studies. It is shown that application of the proposed method improves the quality of constructed PIs by more than 28% over the existing technique, leading to narrower PIs with a coverage probability greater than the nominal confidence level.
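A PI-based cost function of the kind used to train such NNs can be sketched with the usual coverage/width ingredients: PI coverage probability (PICP) and normalized mean PI width (NMPIW), with an exponential penalty that activates only when coverage falls below the nominal confidence level. The exact functional form and the constant `eta` below are illustrative assumptions, not the brief's definition.

```python
import math

def pi_cost(y_true, lower, upper, confidence=0.95, eta=50.0):
    """Coverage-width style cost for prediction intervals.

    PICP  = fraction of targets falling inside their interval.
    NMPIW = mean interval width, normalized by the target range.
    The penalty term punishes narrow-but-invalid intervals."""
    n = len(y_true)
    picp = sum(l <= y <= u for y, l, u in zip(y_true, lower, upper)) / n
    rng = (max(y_true) - min(y_true)) or 1.0
    nmpiw = sum(u - l for l, u in zip(lower, upper)) / (n * rng)
    penalty = math.exp(-eta * (picp - confidence)) if picp < confidence else 0.0
    return nmpiw * (1.0 + penalty), picp, nmpiw
```

Minimizing such a cost pushes the intervals to be as narrow as possible while keeping PICP above the nominal confidence level, which matches the brief's reported outcome of narrower PIs with adequate coverage.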


The uncertainty of electricity prices makes accurate forecasting quite difficult for electricity market participants. Prediction intervals (PIs) are statistical tools that quantify the uncertainty related to forecasts by estimating the ranges of future electricity prices. Traditional approaches based on neural networks (NNs) generate PIs at the cost of a high computational burden and doubtful assumptions about data distributions. In this work, we propose a novel technique that avoids these limitations and generates high-quality PIs in a short time. The proposed method directly generates the lower and upper bounds of future electricity prices using support vector machines (SVMs). Optimal model parameters are obtained by minimizing a modified PI-based objective function using a particle swarm optimization (PSO) technique. The efficiency of the proposed method is illustrated using data from the Ontario and Pennsylvania-New Jersey-Maryland (PJM) interconnection day-ahead and real-time markets.
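The PSO step can be sketched generically: particles move through the model parameter space under inertia plus cognitive (personal-best) and social (global-best) attraction, and the best position found minimizes the objective. The sphere test function in the usage below merely stands in for the modified PI-based objective; all constants are common textbook defaults, not the paper's settings.

```python
import random

def pso(cost, dim, n=20, iters=100, lo=-1.0, hi=1.0, w=0.7, c1=1.5, c2=1.5):
    """Minimize cost() over [lo, hi]^dim with a basic global-best PSO."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                    # personal bests
    pcost = [cost(p) for p in pos]
    g = min(range(n), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]           # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = cost(pos[i])
            if f < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], f
                if f < gcost:
                    gbest, gcost = pos[i][:], f
    return gbest, gcost
```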


Vision-based tracking of an object using the ideas of perspective projection inherently involves nonlinearly modelled measurements, even though the underlying dynamic system encompassing the object and the vision sensors can be linear. Based on a standard stereo vision setting, we introduce an appropriate measurement conversion technique that subsequently facilitates the use of a linear filter. The linear filter, together with the aforementioned measurement conversion approach, forms a robust linear filter based on set-valued state estimation ideas, a particularly rich area in the robust control literature. We provide a rigorous theoretical analysis to ensure bounded state estimation errors, formulated in terms of an ellipsoidal set in which the actual state is guaranteed to lie with arbitrarily high probability. Using computer simulations as well as a practical implementation involving a robotic manipulator, we demonstrate that our robust linear filter significantly outperforms the traditionally used extended Kalman filter in this stereo vision scenario. © 2008 IEEE.
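The measurement conversion idea can be illustrated with standard rectified-stereo triangulation, which turns a nonlinear pixel-domain measurement into a Cartesian position on which a linear filter can then operate. The pinhole model and parameter names below are textbook assumptions (image coordinates measured from the principal point), not the paper's exact formulation.

```python
def stereo_to_cartesian(xl, xr, y, f, b):
    """Convert a rectified stereo pixel pair into a Cartesian point.

    xl, xr : horizontal pixel coordinates in the left/right images
    y      : common vertical pixel coordinate
    f      : focal length in pixels
    b      : stereo baseline in metres"""
    d = xl - xr          # disparity (pixels)
    Z = f * b / d        # depth along the optical axis
    X = Z * xl / f       # lateral position
    Y = Z * y / f        # vertical position
    return X, Y, Z
```

After this conversion, the state (e.g., object position and velocity) appears linearly in the converted measurement, at the price of a converted noise term whose bounds the set-valued formulation is designed to handle.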


This paper addresses the problem of fully automatic localization and segmentation of 3D intervertebral discs (IVDs) from MR images. Our method consists of two steps: we first localize the center of each IVD, and then segment the IVDs by classifying image pixels around each disc center as foreground (disc) or background. The disc localization is performed by estimating the image displacements from a set of randomly sampled 3D image patches to the disc center. The image displacements are estimated by jointly optimizing the training and test displacement values in a data-driven way, taking into consideration both the training data and the geometric constraints on the test image. After the disc centers are localized, we segment the discs by classifying image pixels around the disc centers as background or foreground. The classification uses a data-driven approach similar to the one used for localization, but in the segmentation case we aim to estimate the foreground/background probability of each pixel instead of the image displacements. In addition, an extra neighborhood smoothness constraint is introduced to enforce the local smoothness of the label field. Our method is validated on 3D T2-weighted turbo spin echo MR images of 35 patients from two different studies. Experiments show that, compared to the state of the art, our method achieves better or comparable results. Specifically, we achieve a mean localization error of 1.6-2.0 mm, and for segmentation a mean Dice metric of 85%-88% and a mean surface distance of 1.3-1.4 mm.
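The localization step can be sketched as displacement voting: each sampled patch predicts a displacement to the disc center, so each patch votes for a center location, and the votes are averaged. This simplified stand-in omits the joint training/test optimization and geometric constraints described above; the data are hypothetical.

```python
def localize_center(patch_positions, predicted_displacements):
    """Estimate a disc centre by averaging per-patch votes.

    Each sampled 3D patch votes for the centre as its own position plus
    its predicted displacement; the estimate is the mean of all votes."""
    votes = [tuple(p + d for p, d in zip(pos, disp))
             for pos, disp in zip(patch_positions, predicted_displacements)]
    n = len(votes)
    return tuple(sum(v[k] for v in votes) / n for k in range(3))
```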


In many network applications, traffic is bursty in nature. Often, the transient response of a network to such traffic is the result of a series of interdependent events whose occurrence is not trivial to predict. Previous efforts for IEEE 802.15.4 networks often followed top-down approaches to model these sequences of events, i.e., by building top-level models of the whole network, they tried to track the transient response of the network to burst packet arrivals. The problem with such approaches is that they are unable to give station-level views of the network response and are usually complex. In this paper, we propose a non-stationary analytical model for the IEEE 802.15.4 slotted CSMA/CA medium access control (MAC) protocol under a burst traffic arrival assumption and without the optional acknowledgements. We develop a station-level stochastic time-domain method from which network-level metrics are extracted. Our bottom-up approach makes it possible to obtain station-level details such as delay, collision and failure distributions. Moreover, network-level metrics such as the average packet loss or transmission success rate can be extracted from the model. Compared to previous models, our model is proven to be of lower memory and computational complexity order, and it also supports contention window sizes greater than one. We have carried out extensive comparative simulations to show the high accuracy of our model.
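A station-level view can be illustrated with a small Monte-Carlo sketch of one station's slotted CSMA/CA access delay (no acknowledgements): each attempt draws a random backoff in [0, 2^BE − 1] slots, performs two clear channel assessments (CCAs), and escalates the backoff exponent on a busy channel, up to a maximum number of backoffs. The constant busy probability `p_busy` is an assumption of this sketch, whereas the paper derives such distributions analytically.

```python
import random
from collections import Counter

def station_delay_distribution(p_busy=0.3, macMinBE=3, macMaxBE=5,
                               max_backoffs=4, trials=10000, seed=1):
    """Empirical distribution of access delay (in slots) for one station.
    The key "fail" counts runs that exhausted all backoff attempts."""
    random.seed(seed)
    dist = Counter()
    for _ in range(trials):
        delay, be = 0, macMinBE
        for _attempt in range(max_backoffs + 1):
            delay += random.randrange(2 ** be) + 2   # backoff + 2 CCA slots
            if random.random() >= p_busy and random.random() >= p_busy:
                dist[delay] += 1                     # both CCAs idle: transmit
                break
            be = min(be + 1, macMaxBE)               # busy: raise backoff exponent
        else:
            dist["fail"] += 1                        # channel access failure
    return dist
```

From such per-station distributions, network-level metrics (e.g., average access failure rate) can be aggregated, mirroring the bottom-up structure of the model.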


Online social networks (OSNs) have become one of the major platforms for people to exchange information. Both positive information (e.g., ideas, news and opinions) and negative information (e.g., rumors and gossip) spreading in social media can greatly influence our lives. Researchers have previously proposed models to understand these propagation dynamics; however, those were merely simulations in nature and focused only on the spread of one type of information. Due to the human-related factors involved, the simultaneous spread of negative and positive information cannot be treated as the superposition of two independent propagations. To address these deficiencies, we propose an analytical model that is built stochastically from the node level up. It captures the temporal dynamics of the spread, such as the times at which people check newly arrived messages or forward them. Moreover, it is capable of capturing people's behavioral differences in preferring what to believe or disbelieve. Using this model, we studied the impact of social parameters on propagation. We found that some factors, such as people's preference and the injection time of the opposing information, are critical to the propagation, while others, such as the hearsay forwarding intention, have little impact on it. Extensive simulations conducted on real topologies confirm the high accuracy of our model.


As a leading framework for processing and analyzing big data, MapReduce is leveraged by many enterprises to parallelize their data processing on distributed computing systems. Unfortunately, the all-to-all data forwarding from map tasks to reduce tasks in the traditional MapReduce framework generates a large amount of network traffic. The fact that, in many applications, the intermediate data generated by map tasks can be combined to yield a significant traffic reduction motivates us to propose a data aggregation scheme for MapReduce jobs in the cloud. Specifically, we design an aggregation architecture under the existing MapReduce framework, with the objective of minimizing the data traffic during the shuffle phase, in which aggregators can reside anywhere in the cloud. Experimental results show that our proposal outperforms existing work by reducing network traffic significantly.
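The intuition that intermediate key-value pairs can be combined before the shuffle is the classic combiner idea, sketched below for a word-count style job. The function names and data are ours, and this local combining is only the simplest instance of the in-network aggregation the paper generalizes.

```python
from collections import Counter

def map_phase(records):
    """Word-count style map: emit one (word, 1) pair per occurrence."""
    return [(w, 1) for rec in records for w in rec.split()]

def aggregate(pairs):
    """Combine pairs locally before the shuffle to cut network traffic:
    many (key, 1) pairs collapse into a single (key, count) pair."""
    counts = Counter()
    for key, value in pairs:
        counts[key] += value
    return list(counts.items())
```

In this toy run, six intermediate pairs shrink to two aggregated pairs, which is the traffic reduction the shuffle-phase aggregators exploit at scale.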


With the explosion of big data, processing large numbers of continuous data streams, i.e., big data stream processing (BDSP), has become a crucial requirement for many scientific and industrial applications in recent years. By offering a pool of computation, communication and storage resources, public clouds, like Amazon's EC2, are undoubtedly the most efficient platforms to meet the ever-growing needs of BDSP. Public cloud service providers usually operate a number of geo-distributed datacenters across the globe, and different datacenter pairs incur different inter-datacenter network costs charged by Internet Service Providers (ISPs). Meanwhile, inter-datacenter traffic in BDSP constitutes a large portion of a cloud provider's traffic demand over the Internet and incurs substantial communication cost, which may even become the dominant operational expenditure factor. As datacenter resources are provided in a virtualized way, the virtual machines (VMs) for stream processing tasks can be freely deployed onto any datacenter, provided that the Service Level Agreement (SLA, e.g., quality-of-information) is obeyed. This raises the opportunity, but also a challenge, to exploit the inter-datacenter network cost diversity to optimize both VM placement and load balancing towards network cost minimization with a guaranteed SLA. In this paper, we first propose a general modeling framework that describes all representative inter-task relationship semantics in BDSP. Based on this framework, we then formulate the communication cost minimization problem for BDSP as a mixed-integer linear programming (MILP) problem and prove it to be NP-hard. We then propose a computation-efficient solution based on the MILP. The high efficiency of our proposal is validated by extensive simulation-based studies.
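The placement problem can be illustrated in miniature: enumerate assignments of stream-processing tasks (VMs) to datacenters and pick the assignment minimizing total inter-datacenter traffic cost. Brute-force enumeration stands in for the MILP solver here, and the task graph and cost structures are hypothetical; SLA constraints are omitted.

```python
from itertools import product

def place_vms(tasks, edges, dc_cost):
    """Find the task-to-datacenter assignment with minimum network cost.

    tasks   : list of task names
    edges   : {(task_a, task_b): traffic volume between the two tasks}
    dc_cost : {(dc_a, dc_b): cost per unit of traffic between datacenters,
               with intra-datacenter pairs typically costing zero}"""
    dcs = sorted({d for pair in dc_cost for d in pair})
    best, best_cost = None, float("inf")
    for assign in product(dcs, repeat=len(tasks)):
        placement = dict(zip(tasks, assign))
        c = sum(traffic * dc_cost[(placement[a], placement[b])]
                for (a, b), traffic in edges.items())
        if c < best_cost:
            best, best_cost = placement, c
    return best, best_cost
```

The exponential enumeration is exactly why the exact problem is hard; the paper's contribution is a computation-efficient solution that avoids it.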


This paper presents a convex geometry (CG)-based method for the blind separation of nonnegative sources. First, the inaccessible source matrix is normalized to be column-sum-to-one by mapping the available observation matrix. Then, its zero-samples are found by searching the facets of the convex hull spanned by the mapped observations. Considering these zero-samples, a quadratic cost function with respect to each row of the unmixing matrix, together with a linear constraint on the involved variables, is proposed. Building on this, an algorithm is presented to estimate the unmixing matrix by solving a classical convex optimization problem. Unlike traditional blind source separation (BSS) methods, the CG-based method requires neither the independence assumption nor the uncorrelatedness assumption. Compared with BSS methods specifically designed for nonnegative sources, the proposed method requires a weaker sparsity condition. Simulation results illustrate the performance of our method.
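The normalization step can be sketched directly: scale the data so that every column sums to one, placing the mapped samples on a simplex, which is what makes the convex-hull (facet-search) geometry applicable. The sketch below applies the scaling to an observation-style matrix; how this mapping induces the column-sum-to-one normalization of the inaccessible source matrix is the paper's construction, not reproduced here.

```python
def normalize_columns(X):
    """Scale each column of a (row-major) matrix to sum to one."""
    cols = list(zip(*X))
    sums = [sum(c) for c in cols]
    return [[x / s for x, s in zip(row, sums)] for row in X]
```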


After a decade of extensive research on application-specific wireless sensor networks (WSNs), recent developments in information and communication technologies make it practical to realize software-defined sensor networks (SDSNs), which are able to adapt to various application requirements and to fully exploit the resources of WSNs. A sensor node in an SDSN is able to conduct multiple tasks with different sensing targets simultaneously. A given sensing task usually involves multiple sensors to achieve a certain quality-of-sensing, e.g., a coverage ratio. It is therefore important to design an energy-efficient sensor scheduling and management strategy with guaranteed quality-of-sensing for all tasks. To this end, three issues are investigated in this paper: 1) the subset of sensor nodes that shall be activated, i.e., sensor activation; 2) the task that each sensor node shall be assigned, i.e., task mapping; and 3) the sampling rate on a sensor for a target, i.e., sensing scheduling. They are jointly considered and formulated as a mixed-integer quadratically constrained programming (MIQP) problem, which is then reformulated via linearization into a mixed-integer linear programming (MILP) formulation with low computational complexity. To deal with dynamic events during SDSN operation, such as sensor node participation and departure, an efficient online algorithm using local optimization is developed. Simulation results show that our proposed online algorithm approaches the globally optimized network energy efficiency with much lower rescheduling time and control overhead.
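As a simplified stand-in for the joint MIQP/MILP formulation, the sensor-activation subproblem alone can be approximated greedily: repeatedly activate the sensor covering the most still-uncovered targets until the required coverage ratio is met. The data structures are hypothetical, and this deliberately ignores task mapping and sensing scheduling, which the paper optimizes jointly.

```python
def activate_sensors(coverage, targets, required_ratio):
    """Greedy sensor activation for a quality-of-sensing coverage goal.

    coverage       : {sensor_id: set of targets it can cover}
    targets        : set of all targets of the sensing task
    required_ratio : fraction of targets that must be covered"""
    needed = required_ratio * len(targets)
    covered, active = set(), []
    while len(covered) < needed:
        # Pick the sensor adding the most not-yet-covered targets.
        best = max(coverage, key=lambda s: len(coverage[s] - covered))
        gain = coverage[best] - covered
        if not gain:
            break                      # no sensor can improve coverage
        active.append(best)
        covered |= gain
    return active, len(covered) / len(targets)
```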


This paper uses finite element upper- and lower-bound limit analysis methods to investigate the three-dimensional (3D) stability of two-layered undrained clay slopes. The solutions obtained from the slope stability analyses are bracketed to within ±10% or better. For comparison purposes, results from two-dimensional (2D) analyses based on the numerical limit analysis methods and the conventional limit equilibrium method (LEM) are also discussed. This study shows that the 3D boundary conditions of a slope can have significant effects on slope stability. In addition, the results are presented in the form of stability charts, which provide convenient tools for practicing engineers.
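The bracketing can be made concrete: if LB and UB are the lower- and upper-bound stability numbers from the two limit analyses, a common convention takes the midpoint as the best estimate with relative error ±(UB − LB)/(UB + LB); "±10% or better" then means this ratio is at most 0.10. The convention (midpoint plus symmetric relative error) is a standard one, assumed here rather than quoted from the paper.

```python
def bracket(lb, ub):
    """Combine lower- and upper-bound limit analysis results into a best
    estimate and its symmetric relative error."""
    estimate = (lb + ub) / 2
    rel_error = (ub - lb) / (ub + lb)
    return estimate, rel_error
```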


Mobile virtualization has emerged fairly recently and is considered a valuable way to mitigate security risks on Android devices. However, major challenges in mobile virtualization include runtime overhead, hardware resource overhead, and compatibility. In this paper, we propose a lightweight Android virtualization solution named Condroid, which is based on container technology. Condroid utilizes resource isolation based on the namespaces feature and resource control based on the cgroups feature. By leveraging them, Condroid can host multiple independent Android virtual machines on a single kernel, supporting multiple Android containers. Furthermore, our implementation provides both a system-service sharing mechanism to reduce memory utilization and a filesystem sharing mechanism to reduce storage usage. The evaluation results on a Google Nexus 5 demonstrate that Condroid is feasible in terms of runtime, hardware resource overhead, and compatibility. Moreover, we find that Condroid achieves higher performance than other virtualization solutions.