943 results for engineering, electrical


Relevance: 60.00%

Abstract:

As a leading framework for processing and analyzing big data, MapReduce is leveraged by many enterprises to parallelize their data processing on distributed computing systems. Unfortunately, the all-to-all data forwarding from map tasks to reduce tasks in the traditional MapReduce framework generates a large amount of network traffic. The fact that the intermediate data generated by map tasks can be combined, with significant traffic reduction in many applications, motivates us to propose a data aggregation scheme for MapReduce jobs in the cloud. Specifically, we design an aggregation architecture under the existing MapReduce framework with the objective of minimizing the data traffic during the shuffle phase, in which aggregators can reside anywhere in the cloud. Experimental results show that our proposal outperforms existing work by reducing the network traffic significantly.
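
As a minimal illustration of the traffic-reduction principle behind such aggregation (not the paper's architecture; the names map_output and local_aggregate are hypothetical), the following Python sketch combines intermediate key-value pairs on the map side before they would be shuffled to reducers:

```python
from collections import Counter

# Hypothetical intermediate output of one map task: (key, value) pairs.
map_output = [("error", 1), ("info", 1), ("error", 1), ("warn", 1), ("error", 1)]

def local_aggregate(pairs):
    """Combine values per key on the map side before the shuffle phase."""
    combined = Counter()
    for key, value in pairs:
        combined[key] += value
    return list(combined.items())

shuffled_without_aggregation = map_output                  # 5 records cross the network
shuffled_with_aggregation = local_aggregate(map_output)    # 3 records cross the network
print(len(shuffled_without_aggregation), len(shuffled_with_aggregation))
```

In the paper's setting the aggregators may be placed on arbitrary cloud nodes; the sketch only shows why combining intermediate data shrinks shuffle traffic.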

Relevance: 60.00%

Abstract:

With the explosion of big data, processing large numbers of continuous data streams, i.e., big data stream processing (BDSP), has become a crucial requirement for many scientific and industrial applications in recent years. By offering a pool of computation, communication, and storage resources, public clouds, like Amazon's EC2, are undoubtedly the most efficient platforms to meet the ever-growing needs of BDSP. Public cloud service providers usually operate a number of geo-distributed datacenters across the globe, and different datacenter pairs incur different inter-datacenter network costs charged by Internet Service Providers (ISPs). Meanwhile, inter-datacenter traffic in BDSP constitutes a large portion of a cloud provider's traffic demand over the Internet and incurs substantial communication cost, which may even become the dominant operational expenditure factor. As datacenter resources are provided in a virtualized way, the virtual machines (VMs) for stream processing tasks can be freely deployed onto any datacenter, provided that the Service Level Agreement (SLA, e.g., quality-of-information) is obeyed. This raises the opportunity, but also a challenge, to exploit the inter-datacenter network cost diversity to optimize both VM placement and load balancing towards network cost minimization with a guaranteed SLA. In this paper, we first propose a general modeling framework that describes all representative inter-task relationship semantics in BDSP. Based on our novel framework, we then formulate the communication cost minimization problem for BDSP as a mixed-integer linear programming (MILP) problem and prove it to be NP-hard. We then propose a computation-efficient solution based on MILP. The high efficiency of our proposal is validated by extensive simulation-based studies.
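
To make the placement trade-off concrete, here is a deliberately tiny sketch (plain brute force, not the paper's MILP formulation or its efficient solution) that assigns stream-processing tasks to datacenters so as to minimize inter-datacenter traffic cost; all task names, traffic volumes, and prices are invented for illustration:

```python
from itertools import product

# Hypothetical toy instance: 3 stream-processing tasks, 2 datacenters.
tasks = ["src", "filter", "sink"]
datacenters = ["dc0", "dc1"]

# Traffic (e.g., GB/hour) between task pairs in the BDSP dataflow.
traffic = {("src", "filter"): 10.0, ("filter", "sink"): 4.0}

# Per-GB cost between datacenter pairs charged by ISPs (zero within a datacenter).
link_cost = {("dc0", "dc1"): 0.05, ("dc1", "dc0"): 0.05}

def placement_cost(placement):
    """Total inter-datacenter communication cost of a task-to-datacenter mapping."""
    cost = 0.0
    for (a, b), volume in traffic.items():
        da, db = placement[a], placement[b]
        if da != db:
            cost += volume * link_cost[(da, db)]
    return cost

# Exhaustive search stands in for the MILP solver used in the paper.
best = min(
    (dict(zip(tasks, assignment)) for assignment in product(datacenters, repeat=len(tasks))),
    key=placement_cost,
)
print(best, placement_cost(best))
```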

Relevance: 60.00%

Abstract:

In this paper, we investigate the channel estimation problem for multiple-input multiple-output (MIMO) relay communication systems with time-varying channels. The time-varying characteristic of the channels is described by the complex-exponential basis expansion model (CE-BEM). We propose a superimposed channel training algorithm to estimate the individual first-hop and second-hop time-varying channel matrices for MIMO relay systems. In particular, the estimation of the second-hop time-varying channel matrix is performed by exploiting the superimposed training sequence at the relay node, while the first-hop time-varying channel matrix is estimated through the source node training sequence and the estimated second-hop channel. To improve the performance of channel estimation, we derive the optimal structure of the source and relay training sequences that minimize the mean-squared error (MSE) of channel estimation. We also optimize the relay amplification factor that governs the power allocation between the source and relay training sequences. Numerical simulations demonstrate that the proposed superimposed channel training algorithm for MIMO relay systems with time-varying channels outperforms the conventional two-stage channel estimation scheme.
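
The CE-BEM idea can be illustrated with a heavily simplified single-antenna sketch (not the paper's MIMO relay algorithm): a time-varying channel tap is expanded on a few complex exponentials, and the BEM coefficients are recovered from a known training sequence by least squares. All dimensions and sequences below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q = 64, 3                      # training length, number of BEM basis functions
n = np.arange(N)

# Complex-exponential basis expansion model (CE-BEM) basis matrix.
freqs = (np.arange(Q) - (Q - 1) / 2) * 2 * np.pi / N
B = np.exp(1j * np.outer(n, freqs))          # N x Q

# Hypothetical "true" BEM coefficients and the resulting time-varying channel tap.
c_true = (rng.standard_normal(Q) + 1j * rng.standard_normal(Q)) / np.sqrt(2)
h = B @ c_true                                # h(n), n = 0..N-1

# Known training sequence and noisy received signal y(n) = h(n) s(n) + w(n).
s = np.exp(1j * 2 * np.pi * rng.random(N))    # unit-modulus training symbols
y = h * s + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Least-squares estimate of the BEM coefficients: y = diag(s) B c.
A = s[:, None] * B
c_hat = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true))
```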

Relevance: 60.00%

Abstract:

This paper presents a convex geometry (CG)-based method for blind separation of nonnegative sources. First, the inaccessible source matrix is normalized to be column-sum-to-one through a mapping of the available observation matrix. Then, its zero-samples are found by searching the facets of the convex hull spanned by the mapped observations. Considering these zero-samples, a quadratic cost function with respect to each row of the unmixing matrix, together with a linear constraint on the involved variables, is proposed. Based on this, an algorithm is presented to estimate the unmixing matrix by solving a classical convex optimization problem. Unlike traditional blind source separation (BSS) methods, the CG-based method requires neither the independence assumption nor the uncorrelatedness assumption. Compared with the BSS methods that are specifically designed for nonnegative sources, the proposed method requires a weaker sparsity condition. Simulation results illustrate the performance of our method.
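
A toy sketch of the first two steps (column-sum-to-one normalization and facet search on the convex hull of the mapped observations) is given below; the sources, mixing matrix, and thresholds are invented, and the paper's subsequent quadratic-cost unmixing step is not reproduced.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

# Hypothetical nonnegative, sufficiently sparse sources S (3 x T) and mixing A (3 x 3).
T = 500
S = rng.exponential(1.0, size=(3, T)) * (rng.random((3, T)) > 0.3)   # some zero-samples
A = rng.random((3, 3)) + 0.1
X = A @ S                                                            # observations

# Map the observations so that each column sums to one (column-sum-to-one normalization).
col_sums = X.sum(axis=0)
keep = col_sums > 1e-9
X_tilde = X[:, keep] / col_sums[keep]

# The mapped observations live on a 2-simplex; drop one coordinate to get 2-D points
# and search the facets of their convex hull, where the zero-samples concentrate.
points = X_tilde[:2, :].T
hull = ConvexHull(points)
print("hull vertices (candidate facet samples):", hull.vertices[:10])
```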

Relevance: 60.00%

Abstract:

In this paper, we investigate the channel estimation problem for two-way multiple-input multiple-output (MIMO) relay communication systems in frequency-selective fading environments. We apply the method of superimposed channel training to estimate the individual channel state information (CSI) of the first-hop and second-hop links for two-way MIMO relay systems with frequency-selective fading channels. In this algorithm, a relay training sequence is superimposed on the received signals at the relay node to assist the estimation of the second-hop channel matrices. The optimal structure of the source and relay training sequences is derived to minimize the mean-squared error (MSE) of channel estimation. Moreover, the optimal power allocation between the source and relay training sequences is derived to improve the performance of channel estimation. Numerical examples are shown to demonstrate the performance of the proposed superimposed channel training algorithm for two-way MIMO relay systems in frequency-selective fading environments.
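
As a much-reduced illustration of training-based estimation over a frequency-selective channel (a single point-to-point link, not the paper's two-way relay with superimposed training), the sketch below recovers an FIR channel from a known training sequence via least squares; lengths and noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 128, 4                                  # training length, channel order + 1

# Known training sequence and a hypothetical frequency-selective (FIR) channel.
s = rng.choice([-1.0, 1.0], size=N)            # BPSK training symbols
h_true = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)

# Received signal: linear convolution of the training with the channel, plus noise.
y = np.convolve(s, h_true)[:N] + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Build the convolution (Toeplitz) matrix of the training and solve least squares.
S_mat = np.zeros((N, L), dtype=complex)
for l in range(L):
    S_mat[l:, l] = s[:N - l]
h_hat = np.linalg.lstsq(S_mat, y, rcond=None)[0]
print(np.linalg.norm(h_hat - h_true) / np.linalg.norm(h_true))
```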

Relevance: 60.00%

Abstract:

After a decade of extensive research on application-specific wireless sensor networks (WSNs), the recent development of information and communication technologies makes it practical to realize software-defined sensor networks (SDSNs), which are able to adapt to various application requirements and to fully exploit the resources of WSNs. A sensor node in an SDSN is able to conduct multiple tasks with different sensing targets simultaneously. A given sensing task usually involves multiple sensors to achieve a certain quality-of-sensing, e.g., coverage ratio. It is therefore important to design an energy-efficient sensor scheduling and management strategy with guaranteed quality-of-sensing for all tasks. To this end, three issues are investigated in this paper: 1) which subset of sensor nodes shall be activated, i.e., sensor activation; 2) which task each sensor node shall be assigned, i.e., task mapping; and 3) the sampling rate of a sensor for a target, i.e., sensing scheduling. They are jointly considered and formulated as a mixed-integer quadratically constrained programming (MIQP) problem, which is then reformulated into a mixed-integer linear programming (MILP) formulation with low computational complexity via linearization. To deal with dynamic events such as sensor node participation and departure during SDSN operation, an efficient online algorithm using local optimization is developed. Simulation results show that our proposed online algorithm approaches the globally optimized network energy efficiency with much lower rescheduling time and control overhead.
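
The flavor of jointly choosing which sensors to activate for which targets can be conveyed by a simple greedy sketch; this is only an illustrative stand-in for the paper's MIQP/MILP formulation and online algorithm, and all sensor names, energy costs, and coverage sets are hypothetical.

```python
sensors = {                      # sensor -> (energy cost, set of targets it can cover)
    "s1": (2.0, {"t1", "t2"}),
    "s2": (1.0, {"t2"}),
    "s3": (3.0, {"t1", "t3"}),
    "s4": (1.5, {"t3"}),
}
required = {"task_A": {"t1", "t2"}, "task_B": {"t3"}}   # targets each task must cover

def greedy_activation(sensors, required):
    """Activate the most cost-effective sensors until every task's targets are covered."""
    uncovered = set().union(*required.values())
    active, total_energy = [], 0.0
    while uncovered:
        # Pick the sensor with the best (newly covered targets) / energy ratio.
        name, (cost, cov) = max(
            sensors.items(),
            key=lambda kv: len(kv[1][1] & uncovered) / kv[1][0],
        )
        if not (cov & uncovered):
            raise ValueError("coverage requirement cannot be met")
        active.append(name)
        total_energy += cost
        uncovered -= cov
        sensors = {k: v for k, v in sensors.items() if k != name}
    return active, total_energy

print(greedy_activation(dict(sensors), required))
```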

Relevance: 60.00%

Abstract:

Audio watermarking is a promising technology for copyright protection of audio data. Built upon the concept of spread spectrum (SS), many SS-based audio watermarking methods have been developed, where a pseudonoise (PN) sequence is usually used to introduce security. A major drawback of the existing SS-based audio watermarking methods is their low embedding capacity. In this paper, we propose a new SS-based audio watermarking method that possesses much higher embedding capacity while ensuring satisfactory imperceptibility and robustness. The high embedding capacity is achieved through a set of mechanisms: embedding multiple watermark bits in one audio segment, reducing host signal interference on watermark extraction, and adaptively adjusting the PN sequence amplitude during watermark embedding based on the properties of the audio segments. The effectiveness of the proposed audio watermarking method is demonstrated by simulation examples.
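
For reference, the baseline SS embedding that the paper improves upon can be sketched in a few lines: one bit is embedded per segment by adding a scaled PN sequence, and extracted by correlation. The segment, PN length, and strength alpha below are arbitrary, and none of the paper's capacity-enhancing mechanisms (multiple bits per segment, host-interference reduction, adaptive amplitude) are included.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical host audio segment and a PN sequence shared by embedder and extractor.
segment = rng.standard_normal(4096) * 0.1      # stand-in for one audio segment
pn = rng.choice([-1.0, 1.0], size=segment.size)
alpha = 0.01                                   # embedding strength (controls imperceptibility)

def embed(segment, pn, bit, alpha):
    """Classic additive spread-spectrum embedding of a single watermark bit."""
    return segment + alpha * (1 if bit else -1) * pn

def extract(watermarked, pn):
    """Correlate with the PN sequence; the sign of the correlation recovers the bit."""
    return float(np.dot(watermarked, pn)) > 0.0

watermarked = embed(segment, pn, bit=True, alpha=alpha)
print(extract(watermarked, pn))
```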

Relevance: 60.00%

Abstract:

Mobile virtualization has emerged fairly recently and is considered a valuable way to mitigate security risks on Android devices. However, major challenges in mobile virtualization include runtime overhead, hardware resource overhead, and compatibility. In this paper, we propose a lightweight Android virtualization solution named Condroid, which is based on container technology. Condroid utilizes resource isolation based on the namespaces feature and resource control based on the cgroups feature of the Linux kernel. By leveraging them, Condroid can host multiple independent Android virtual machines on a single kernel to support multiple Android containers. Furthermore, our implementation provides both a system service sharing mechanism to reduce memory utilization and a filesystem sharing mechanism to reduce storage usage. The evaluation results on a Google Nexus 5 demonstrate that Condroid is feasible in terms of runtime overhead, hardware resource overhead, and compatibility. Moreover, we find that Condroid achieves higher performance than other virtualization solutions.
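
The two kernel mechanisms Condroid builds on can be demonstrated with a short, heavily simplified sketch (this is not Condroid itself; it requires root on a typical Linux host, assumes the util-linux unshare tool, and assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup):

```python
import os
import subprocess

def run_isolated(cmd):
    """Run a command in fresh PID, mount, UTS, IPC, and network namespaces."""
    return subprocess.run(
        ["unshare", "--pid", "--fork", "--mount-proc", "--uts", "--ipc", "--net"] + cmd,
        check=True,
    )

def limit_memory(cgroup_name, limit_bytes, pid):
    """Create a cgroup (v2 layout assumed) and cap the memory of an existing process."""
    path = f"/sys/fs/cgroup/{cgroup_name}"
    os.makedirs(path, exist_ok=True)
    with open(f"{path}/memory.max", "w") as f:
        f.write(str(limit_bytes))
    with open(f"{path}/cgroup.procs", "w") as f:
        f.write(str(pid))

if __name__ == "__main__":
    # Inside the new PID namespace, ps only sees the processes of that namespace.
    run_isolated(["ps", "aux"])
```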

Relevance: 60.00%

Abstract:

Certain tasks in image processing require the preservation of fine image details while applying a broad operation to the image, such as image reduction, filtering, or smoothing. In such cases, the objects of interest are typically represented by small, spatially cohesive clusters of pixels which are to be preserved or removed, depending on the requirements. When images are corrupted by noise or contain intensity variations generated by imaging sensors, identification of these clusters within the intensity space is problematic, as they are corrupted by outliers. This paper presents a novel approach to accounting for the spatial organization of the pixels and to measuring the compactness of pixel clusters based on the construction of fuzzy measures with specific properties: monotonicity with respect to the cluster size; invariance with respect to translation, reflection, and rotation; and discrimination between pixel sets of fixed cardinality with different spatial arrangements. We present construction methods based on Sugeno-type fuzzy measures, minimum spanning trees, and fuzzy measure decomposition. We demonstrate their application to generating fuzzy measures on real and artificial images.
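
One of the ingredients, the minimum-spanning-tree-based notion of spatial compactness, can be sketched as follows (a simplified stand-in, not the paper's fuzzy measure construction; the two example pixel sets are invented):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_compactness(pixels):
    """Total edge length of the minimum spanning tree over pixel coordinates.

    Smaller totals indicate spatially more compact clusters; this only stands in
    for the MST-based fuzzy measure construction described in the paper.
    """
    dist = squareform(pdist(np.asarray(pixels, dtype=float)))
    mst = minimum_spanning_tree(dist)
    return mst.sum()

compact_cluster = [(10, 10), (10, 11), (11, 10), (11, 11)]       # tight 2x2 block
scattered_cluster = [(0, 0), (0, 40), (40, 0), (40, 40)]         # same cardinality, spread out
print(mst_compactness(compact_cluster), mst_compactness(scattered_cluster))
```

The two calls illustrate discrimination between pixel sets of fixed cardinality with different spatial arrangements: the tight block yields a much smaller MST length than the scattered set.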

Relevance: 60.00%

Abstract:

Visual notations are a key aspect of visual languages. They provide a direct mapping between the intended information and a set of graphical symbols. Visual notations are most often implemented using the low-level syntax of programming languages, which is time-consuming, error-prone, difficult to maintain, and hardly human-centric. In this paper we describe an alternative approach: generating visual notations using by-example model transformations. In our approach, a semantic mapping between model and view is implemented using model transformations. The notations resulting from this approach can be reused by mapping varieties of input data to their model and can be composed into different visualizations. Our approach is implemented in the CONVErT framework and has been applied to many visualization examples. Three case studies are presented in this paper: visualizing statistical charts, visualizing traffic data, and reusing components of a Minard's map visualization. A detailed user study of our approach for reusing notations and generating visualizations is also reported: 80% of the participants agreed that the approach to visualization was easy to use, and 87% stated that they quickly learned to use the tool support.
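
The core idea of mapping a model to a set of graphical symbols can be conveyed with a toy transformation; this sketch is not the CONVErT framework, and the model fields and symbol attributes are invented for illustration.

```python
# A toy model-to-notation mapping in the spirit of by-example transformations.
model = [                                  # hypothetical input data model
    {"label": "2013", "value": 12},
    {"label": "2014", "value": 30},
    {"label": "2015", "value": 21},
]

def to_bar_notation(model, bar_width=20, unit_height=4):
    """Map each model element to a graphical symbol (an SVG-like rectangle spec)."""
    symbols = []
    for i, item in enumerate(model):
        symbols.append({
            "shape": "rect",
            "x": i * (bar_width + 5),
            "y": 0,
            "width": bar_width,
            "height": item["value"] * unit_height,
            "title": item["label"],
        })
    return symbols

for symbol in to_bar_notation(model):
    print(symbol)
```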

Relevance: 60.00%

Abstract:

Performance is a crucial attribute for most software, making performance analysis an important software engineering task. The difficulty is that modern applications are challenging to analyse for performance. Many profiling techniques used in real-world software development struggle to provide useful results when applied to large-scale object-oriented applications. There is a substantial body of research into software performance generally, but currently there exists no survey of this research that would help identify approaches useful for object-oriented software. To provide such a review, we performed a systematic mapping study of empirical performance analysis approaches that are applicable to object-oriented software. Using keyword searches against leading software engineering research databases and manual searches of relevant venues, we identified over 5,000 related articles published since January 2000. From these we systematically selected 253 applicable articles and categorised them according to ten facets that capture the intent, implementation, and evaluation of the approaches. Our mapping study results allow us to highlight the main contributions of the existing literature and identify areas where there are interesting opportunities. We also find that, despite the research including approaches specifically aimed at object-oriented software, there are significant challenges in providing actionable feedback on the performance of large-scale object-oriented applications.

Relevance: 60.00%

Abstract:

Dynamically changing background (dynamic background) still presents a great challenge to many motion-based video surveillance systems. In the context of event detection, it is a major source of false alarms. There is a strong need from the security industry either to detect and suppress these false alarms, or to dampen the effects of background changes, so as to increase the sensitivity to meaningful events of interest. In this paper, we restrict our focus to one of the most common causes of dynamic background changes: swaying tree branches and their shadows under windy conditions. Considering the ultimate goal of a video analytics pipeline, we formulate a new dynamic background detection problem as a signal processing alternative to the previously described but unreliable computer-vision-based approaches. Within this new framework, we directly reduce the number of false alarms by testing whether the detected events are due to characteristic background motions. In addition, we introduce a new data set suitable for the evaluation of dynamic background detection. It consists of real-world events detected by a commercial surveillance system from two static surveillance cameras. The research question we address is whether dynamic background can be detected reliably and efficiently using simple motion features and in the presence of similar but meaningful events, such as loitering. Inspired by tree aerodynamics theory, we propose a novel method named local variation persistence (LVP), which captures the key characteristics of swaying motions. The method is posed as a convex optimization problem whose variable is the local variation. We derive a computationally efficient algorithm for solving the optimization problem, the solution of which is then used to form a powerful detection statistic. On our newly collected data set, we demonstrate that the proposed LVP achieves excellent detection results and outperforms the best alternative adapted from existing art in the dynamic background literature.
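
A deliberately simplified stand-in for the idea (not the paper's convex-optimization formulation of LVP) is sketched below: the local variation of a motion-feature signal is aggregated over short windows, and the fraction of windows with sustained variation acts as a crude persistence statistic that separates continuous swaying from brief, localized motion. All signals and thresholds are synthetic.

```python
import numpy as np

def variation_persistence_stat(motion, window=16):
    """Fraction of windows in which the local variation of a motion signal stays high.

    This is only a crude stand-in for the paper's LVP detection statistic.
    """
    variation = np.abs(np.diff(motion))
    n_windows = len(variation) // window
    windows = variation[: n_windows * window].reshape(n_windows, window)
    per_window = windows.mean(axis=1)
    return float((per_window > per_window.mean() * 0.5).mean())

t = np.arange(512)
swaying = 0.5 + 0.4 * np.sin(2 * np.pi * t / 25)          # sustained oscillatory motion
swaying += 0.05 * np.random.default_rng(4).standard_normal(512)
loitering = np.zeros(512)
loitering[200:260] = 1.0                                   # brief, localized motion

print(variation_persistence_stat(swaying), variation_persistence_stat(loitering))
```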

Relevance: 60.00%

Abstract:

Critics have emerged in recent times as a specific tool feature to support users in computer-mediated tasks. These computer-supported critics provide proactive guidelines or suggestions for improvement to designs, code, and other digital artifacts. The concept of a critic has been adopted in various domains, including medicine, programming, software engineering, design sketching, and others. Critics have been shown to be an effective mechanism for providing feedback to users. We propose a new critic taxonomy based on an extensive review of the critic literature. The groups and elements of our critic taxonomy are presented and explained collectively with examples, including the mapping of 13 existing critic tools, predominantly for software engineering and programming education tasks, to the taxonomy. We believe this critic taxonomy will assist others in identifying, categorizing, developing, and deploying computer-supported critics in a range of domains.

Relevance: 60.00%

Abstract:

Domain-specific visual languages support high-level modeling for a wide range of application domains. However, building tools to support such languages is very challenging. We describe a set of key conceptual requirements for such tools and our approach to addressing these requirements: a set of visual language-based metatools. These support the definition of metamodels, visual notations, views, modeling behaviors, design critics, and model transformations, and provide a platform to realize target visual modeling tools. Extensions support collaborative work, human-centric tool interaction, and multiplatform deployment. We illustrate the application of the metatoolset on tools developed with our approach. We describe tool-developer and cognitive evaluations of our platform and our exemplar tools, and summarize key future research directions.

Relevance: 60.00%

Abstract:

The need to estimate a particular quantile of a distribution is an important problem that frequently arises in many computer vision and signal processing applications. For example, our work was motivated by the requirements of many semiautomatic surveillance analytics systems that detect abnormalities in closed-circuit television footage using statistical models of low-level motion features. In this paper, we specifically address the problem of estimating the running quantile of a data stream when the memory available for storing observations is limited. We make several major contributions: 1) we highlight the limitations of approaches previously described in the literature that make them unsuitable for nonstationary streams; 2) we describe a novel principle for the utilization of the available storage space; 3) we introduce two novel algorithms that exploit the proposed principle in different ways; and 4) we present a comprehensive evaluation and analysis of the proposed algorithms and the existing methods in the literature on both synthetic data sets and three large real-world streams acquired in the course of operation of an existing commercial surveillance system. Our findings convincingly demonstrate that both of the proposed methods are highly successful and vastly outperform the existing alternatives. We show that the better of the two algorithms (the data-aligned histogram) exhibits far superior performance in comparison with the previously described methods, achieving more than 10 times lower estimation errors on real-world data, even when its available working memory is an order of magnitude smaller.
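
To illustrate the general bounded-memory setting (not the paper's data-aligned histogram algorithm), here is a simplified fixed-size histogram sketch whose range adapts to the stream and whose quantile estimate is read off the cumulative bin counts; the bin count, redistribution rule, and test data are arbitrary:

```python
import numpy as np

class StreamingQuantile:
    """A toy fixed-memory running-quantile estimator based on an adaptive histogram.

    Memory is bounded by the number of bins regardless of stream length; accuracy
    depends on the bin resolution and the coarse count redistribution on range growth.
    """

    def __init__(self, n_bins=256):
        self.n_bins = n_bins
        self.counts = np.zeros(n_bins)
        self.lo = None
        self.hi = None

    def update(self, x):
        if self.lo is None:
            self.lo, self.hi = x, x + 1e-9
        if x < self.lo or x > self.hi:          # grow the range; redistribute counts coarsely
            new_lo, new_hi = min(x, self.lo), max(x, self.hi)
            old_edges = np.linspace(self.lo, self.hi, self.n_bins + 1)
            centers = (old_edges[:-1] + old_edges[1:]) / 2
            idx = np.clip(((centers - new_lo) / (new_hi - new_lo) * self.n_bins).astype(int),
                          0, self.n_bins - 1)
            new_counts = np.zeros(self.n_bins)
            np.add.at(new_counts, idx, self.counts)
            self.counts, self.lo, self.hi = new_counts, new_lo, new_hi
        b = int(np.clip((x - self.lo) / (self.hi - self.lo) * self.n_bins, 0, self.n_bins - 1))
        self.counts[b] += 1

    def quantile(self, q):
        cumulative = np.cumsum(self.counts)
        b = int(np.searchsorted(cumulative, q * cumulative[-1]))
        edges = np.linspace(self.lo, self.hi, self.n_bins + 1)
        return (edges[b] + edges[b + 1]) / 2

est = StreamingQuantile()
data = np.random.default_rng(5).lognormal(size=20_000)
for x in data:
    est.update(x)
print(est.quantile(0.95), np.quantile(data, 0.95))   # streaming estimate vs. exact quantile
```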