69 results for Methods engineering.


Relevance:

60.00%

Publisher:

Abstract:

This paper presents a convex geometry (CG)-based method for blind separation of nonnegative sources. First, the inaccessible source matrix is normalized to be column-sum-to-one by mapping the available observation matrix. Then, its zero-samples are found by searching the facets of the convex hull spanned by the mapped observations. Based on these zero-samples, a quadratic cost function with respect to each row of the unmixing matrix is proposed, together with a linear constraint on the involved variables. An algorithm is then presented to estimate the unmixing matrix by solving a classical convex optimization problem. Unlike traditional blind source separation (BSS) methods, the CG-based method requires neither the independence assumption nor the uncorrelatedness assumption. Compared with BSS methods specifically designed for nonnegative sources, the proposed method requires a weaker sparsity condition. Simulation results illustrate the performance of our method.
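
The sketch below (Python with numpy/scipy, toy data, not the authors' implementation) walks through the normalization step and the constrained quadratic estimate; the convex-hull facet search is replaced by an oracle that reads the zero-samples off the true sources, purely to exercise the optimization step:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Toy mixing model: X = A @ S with nonnegative, sparse sources.
    A = rng.random((3, 3))
    S = rng.random((3, 200)) * (rng.random((3, 200)) > 0.4)  # sparsity gives zero-samples
    X = A @ S
    keep = X.sum(axis=0) > 1e-12            # drop all-zero columns before normalizing
    X, S = X[:, keep], S[:, keep]

    # Step 1: map the observations so the (unknown) sources become column-sum-to-one.
    Xn = X / X.sum(axis=0, keepdims=True)

    # Step 2 (simplified): collect the zero-samples of source 0. The paper finds
    # them by searching facets of the convex hull of the mapped observations;
    # here we cheat and read them off S directly.
    zero_cols = Xn[:, S[0] == 0]

    # Step 3: estimate one row w of the unmixing matrix by minimizing the
    # quadratic cost ||w @ zero_cols||^2 under the linear constraint sum(w) = 1.
    cost = lambda w: np.sum((w @ zero_cols) ** 2)
    cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
    w = minimize(cost, np.full(3, 1.0 / 3.0), constraints=[cons]).x

    # w @ Xn recovers a scaled version of source 0 up to estimation error.
    print(np.round((w @ Xn)[:10], 3))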

Relevance:

60.00%

Publisher:

Abstract:

Privacy preservation in data mining and data release has attracted increasing research interest over recent decades. Differential privacy is one influential privacy notion that offers a rigorous and provable privacy guarantee for data mining and data release. Existing studies on differential privacy assume that the records in a data set are sampled independently. In real-world applications, however, records in a data set are rarely independent. The relationships among records are referred to as correlated information, and such a data set is called a correlated data set. A differential privacy technique applied to a correlated data set discloses more information than expected, which is a serious privacy violation. Although recent research has addressed this new privacy violation, a solid solution for correlated data sets is still lacking. Moreover, how to decrease the large amount of noise incurred by differential privacy on correlated data sets has yet to be explored. To fill this gap, this paper proposes an effective correlated differential privacy solution by defining a correlated sensitivity and designing a correlated data releasing mechanism. By taking the correlation levels between records into consideration, the proposed correlated sensitivity significantly decreases the noise compared with the traditional global sensitivity. The correlated data releasing mechanism, the correlated iteration mechanism, is designed based on an iterative method to answer a large number of queries. Compared with the traditional method, the proposed correlated differential privacy solution enhances the privacy guarantee for a correlated data set at a lower accuracy cost. Experimental results show that the proposed solution outperforms traditional differential privacy in terms of mean square error on large groups of queries. This also suggests that correlated differential privacy can successfully retain utility while preserving privacy.
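
A minimal numpy sketch of the idea behind a correlated sensitivity: scale the Laplace noise by correlation-weighted record influence rather than by the full correlated group size. The chain-shaped correlation matrix and the naive group-size baseline are assumptions for illustration, not the paper's exact definitions:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100
    data = rng.integers(0, 2, size=n)              # binary records
    count = data.sum()                             # query: how many records are 1?

    # Hypothetical correlation matrix: delta[i, j] in [0, 1] measures how much
    # record j changes when record i changes (delta[i, i] = 1). Here records
    # form a chain where each neighbour is half-correlated.
    delta = np.eye(n)
    for i in range(n - 1):
        delta[i, i + 1] = delta[i + 1, i] = 0.5

    # Treating correlated records as fully dependent inflates the sensitivity
    # to the correlated group size; weighting by correlation degree shrinks it.
    naive_sensitivity = 3.0                        # a record plus its two neighbours
    correlated_sensitivity = max(delta[i].sum() for i in range(n))   # = 2.0 here

    eps = 1.0
    print("true count:", count)
    print("noisy (naive):", count + rng.laplace(scale=naive_sensitivity / eps))
    print("noisy (correlated):", count + rng.laplace(scale=correlated_sensitivity / eps))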

Relevance:

60.00%

Publisher:

Abstract:

Cloud computing is becoming popular as the next-generation computing platform. Despite the promising model and the hype surrounding it, security has become the major concern that makes people hesitate to move their applications to clouds. Concretely, cloud platforms are subject to numerous attacks. As a result, it is essential to establish a firewall to protect the cloud from these attacks. However, setting up a centralized firewall for a whole cloud data center is infeasible from both performance and financial perspectives. In this paper, we propose a decentralized cloud firewall framework for individual cloud customers. We investigate how to dynamically allocate resources to minimize resource provisioning cost while simultaneously satisfying the QoS requirements specified by individual customers. Moreover, we establish novel queueing-theory-based models, M/Geo/1 and M/Geo/m, for quantitative system analysis, where the service times follow a geometric distribution. By employing Z-transform and embedded Markov chain techniques, we obtain a closed-form expression for the mean packet response time. Through extensive simulations and experiments, we conclude that the M/Geo/1 model reflects the real cloud firewall system much better than the traditional M/M/1 model. Our numerical results also indicate that a cloud firewall can be set up at a cost affordable to cloud customers.
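
The paper's closed form comes from Z-transform and embedded Markov chain analysis; as an independent sanity check, an M/Geo/1 queue can also be simulated directly with a Lindley recursion. A short sketch with illustrative parameters (not the paper's workload):

    import numpy as np

    rng = np.random.default_rng(2)
    lam, p, n = 0.3, 0.5, 200_000      # arrival rate; per-slot completion probability

    arrival = np.cumsum(rng.exponential(1 / lam, n))     # Poisson arrivals (M)
    service = rng.geometric(p, n).astype(float)          # geometric service (Geo), unit slots

    # Lindley recursion for a FIFO single-server queue.
    depart = np.empty(n)
    depart[0] = arrival[0] + service[0]
    for i in range(1, n):
        depart[i] = max(arrival[i], depart[i - 1]) + service[i]

    print("simulated M/Geo/1 mean response time:", (depart - arrival).mean())
    # An M/M/1 queue with the same mean service time 1/p would predict
    # 1 / (p - lam) = 5.0, illustrating why the two models can disagree.
    print("M/M/1 prediction with matched mean:", 1 / (p - lam))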

Relevance:

60.00%

Publisher:

Abstract:

As a fundamental tool for network management and security, traffic classification has attracted increasing attention in recent years. A significant challenge to the robustness of classification performance comes from zero-day applications previously unknown to the traffic classification system. In this paper, we propose a new scheme, Robust statistical Traffic Classification (RTC), which combines supervised and unsupervised machine learning techniques to meet this challenge. The proposed RTC scheme is capable of identifying the traffic of zero-day applications as well as accurately discriminating among predefined application classes. In addition, we develop a new method for automating the optimization of the RTC scheme's parameters. An empirical study on real-world traffic data confirms the effectiveness of the proposed scheme. When zero-day applications are present, the classification performance of the new scheme is significantly better than that of four state-of-the-art methods: random forest, correlation-based classification, semi-supervised clustering, and one-class SVM.
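
A generic scikit-learn sketch of the supervised-plus-unsupervised combination on synthetic flow statistics: cluster the unlabeled traffic, label only the clusters that lie near known labeled flows, and flag the rest as zero-day. The cluster count, distance threshold, and data are illustrative; RTC's actual construction differs:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(3)

    # Toy flow statistics: two known classes plus one zero-day class.
    X_lab = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(5, 1, (100, 5))])
    y_lab = np.array([0] * 100 + [1] * 100)
    X_unlab = np.vstack([rng.normal(0, 1, (50, 5)),
                         rng.normal(5, 1, (50, 5)),
                         rng.normal(-5, 1, (100, 5))])   # last block: zero-day traffic

    # 1) Cluster the unlabeled traffic.
    km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X_unlab)

    # 2) Map a cluster to a known class only if labeled flows fall nearby;
    #    clusters with no nearby labeled flow are treated as zero-day (-1).
    clf = RandomForestClassifier(random_state=0).fit(X_lab, y_lab)
    labels = np.full(len(X_unlab), -1)
    for c in range(6):
        members = km.labels_ == c
        centre = X_unlab[members].mean(axis=0)
        if np.linalg.norm(X_lab - centre, axis=1).min() < 3.0:   # illustrative threshold
            labels[members] = clf.predict(centre[None])[0]

    print("flows flagged zero-day:", np.sum(labels == -1))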

Relevance:

60.00%

Publisher:

Abstract:

Wireless mesh networks are widely applied in many fields, such as industrial control, environmental monitoring, and military operations. Network coding is a promising technology that can improve the performance of wireless mesh networks; it is particularly suitable for them because the fixed backbone of a wireless mesh network usually has no energy constraints. However, coding collision is a severe problem that degrades network performance. To avoid it, routing should be designed around an optimal combination of coding opportunity and coding validity. In this paper, we propose a Connected Dominating Set (CDS)-based and Flow-oriented Coding-aware Routing (CFCR) mechanism to actively increase potential coding opportunities. Our work makes two major contributions. First, it effectively deals with the coding collision problem of flows by introducing an information confirmation process, which significantly decreases the decoding failure rate. Second, our routing process considers the benefits of the CDS and flow coding simultaneously. Through formalized analysis of the routing parameters, CFCR can choose an optimized route with reliable transmission and low cost. Our evaluation shows that CFCR has a lower packet loss ratio and higher throughput than existing methods such as Adaptive Control of Packet Overhead in XOR Network Coding (ACPO) and Distributed Coding-Aware Routing (DCAR).
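
CFCR builds routes over a connected dominating set. As background, here is a common greedy CDS heuristic in plain Python (the routing and coding-awareness logic of CFCR itself is not shown; the topology is invented):

    # Adjacency list for a small mesh topology (illustrative).
    graph = {
        0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4],
        3: [1, 4, 5], 4: [2, 3, 5], 5: [3, 4],
    }

    def greedy_cds(g):
        # Start from the highest-degree node, then repeatedly add the
        # neighbour of the current set that dominates the most uncovered nodes.
        start = max(g, key=lambda v: len(g[v]))
        cds = {start}
        covered = {start} | set(g[start])
        while len(covered) < len(g):
            frontier = {u for v in cds for u in g[v]} - cds
            best = max(frontier, key=lambda u: len(set(g[u]) - covered))
            cds.add(best)
            covered |= {best} | set(g[best])
        return cds

    print("connected dominating set:", greedy_cds(graph))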

Relevance:

60.00%

Publisher:

Abstract:

In a cyber-physical system (CPS), computational resources and physical resources are strongly correlated and mutually dependent. Cascading failures that occur between the coupled networks make the system more fragile than a single network. Besides the widely used giant-component metric, we study small clusters (small components) in interdependent networks after cascading failures occur. We first give an overview of how small clusters are distributed in various single networks. We then propose a percolation-theory-based mathematical method to study how small clusters are affected by the interdependence between two coupled networks. We prove that upper bounds exist for both the fraction and the number of operating small clusters. Without loss of generality, we use both synthetic and real network data in simulations to study small clusters under different interdependence models and network topologies. The extensive simulations highlight our findings: besides the giant component, a considerable proportion of small clusters exists, with the remaining part fragmenting into very tiny pieces or even a massive number of isolated vertices; and no matter how tightly the two networks are coupled, an upper bound exists on the size of small clusters. We also find that interdependent small-world networks generally have the highest fraction of operating small clusters. Three attack strategies are compared: Inter-Degree Priority Attack, Intra-Degree Priority Attack, and Random Attack. We observe that the fraction of functioning small clusters remains stable and is independent of the attack strategy.
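
A much-simplified simulation sketch with networkx: two coupled random networks under a random attack, a toy cascade rule (an isolated node fails, and its one-to-one partner fails with it; this is an assumption for illustration, not the paper's percolation model), followed by a census of the surviving small clusters:

    import random
    import networkx as nx

    random.seed(4)
    n = 500
    A = nx.gnp_random_graph(n, 4 / n, seed=1)   # network A
    B = nx.gnp_random_graph(n, 4 / n, seed=2)   # network B; node i in A depends on node i in B

    # Random attack removes 30% of the nodes; one-to-one partners fail together.
    attacked = random.sample(range(n), int(0.3 * n))
    A.remove_nodes_from(attacked)
    B.remove_nodes_from(attacked)

    # Simplified cascade: an isolated node cannot operate, and its failure
    # takes down its interdependent partner in the other network.
    changed = True
    while changed:
        changed = False
        for G, H in ((A, B), (B, A)):
            dead = [v for v in G.nodes if G.degree(v) == 0]
            if dead:
                G.remove_nodes_from(dead)
                H.remove_nodes_from([v for v in dead if v in H])
                changed = True

    # Census of the surviving components in A: one giant plus many small clusters.
    sizes = sorted((len(c) for c in nx.connected_components(A)), reverse=True)
    print("giant component:", sizes[0])
    print("small clusters:", len(sizes) - 1, "covering", sum(sizes[1:]), "nodes")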

Relevance:

60.00%

Publisher:

Abstract:

With the increasing popularity of utility-oriented computing, where resources are traded as services, efficient management of quality of service (QoS) has become increasingly significant to both service consumers and service providers. In the context of deploying distributed multimedia content adaptation on service-oriented computing, ensuring the stringent QoS requirements of the content adaptation is a significant and immediate challenge, yet QoS guarantees in this context have not been accorded the attention they deserve. In this paper, we address this problem. We formulate SLA management for distributed multimedia content adaptation deployed on service-oriented computing as an integer programming problem. We propose an SLA management framework that enables the service provider to determine the deliverable QoS before settling an SLA with potential service consumers, in order to optimize QoS guarantees. We analyze the performance of the proposed strategy under various conditions in terms of the SLA success rate, the rejection rate, and the impact of resource data errors on potential violations of agreed-upon SLAs. We also compare the proposed SLA management framework with a baseline approach in which the distributed multimedia content adaptation is deployed on a service-oriented platform without SLA consideration. The experimental results show that the proposed SLA management framework substantially outperforms the baseline approach, confirming that SLA management is a core requirement for deploying distributed multimedia content adaptation on service-oriented systems.
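
To illustrate the flavour of such an integer programming formulation, here is a toy SLA admission problem in plain Python, brute-forced for clarity: accept a subset of SLA requests to maximize revenue subject to resource capacities. The two-resource model and all numbers are assumptions, not the paper's formulation:

    from itertools import product

    # Each candidate SLA: (revenue, CPU demand, bandwidth demand). Illustrative.
    slas = [(10, 4, 2), (7, 3, 3), (6, 2, 4), (9, 5, 1)]
    cpu_cap, bw_cap = 9, 7

    best_value, best_accept = 0, None
    # One binary decision variable x_i per SLA: enumerate all 2^n assignments.
    for x in product((0, 1), repeat=len(slas)):
        cpu = sum(xi * s[1] for xi, s in zip(x, slas))
        bw = sum(xi * s[2] for xi, s in zip(x, slas))
        value = sum(xi * s[0] for xi, s in zip(x, slas))
        if cpu <= cpu_cap and bw <= bw_cap and value > best_value:
            best_value, best_accept = value, x

    print("accept vector:", best_accept, "value:", best_value)

At realistic scale the same formulation would be handed to an ILP solver rather than enumerated.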

Relevance:

60.00%

Publisher:

Abstract:

Feature-based camera model identification plays an important role in forensic investigations of images. Conventional feature-based identification schemes suffer from the problem of unknown models; that is, some images are captured by camera models previously unknown to the identification system. To address this problem, we propose a new scheme: Source Camera Identification with Unknown models (SCIU). It is capable of identifying images from unknown models as well as distinguishing images from known models. The SCIU scheme consists of three stages: 1) unknown detection; 2) unknown expansion; and 3) (K+1)-class classification. Unknown detection applies a k-nearest-neighbours method to recognize a few sample images of unknown models among the unlabeled images. Unknown expansion further extends the set of unknown sample images using a self-training strategy. Then we address a specific (K+1)-class classification problem, in which the sample images of the unknown models (one class) and the known models (K classes) are combined to train a classifier. In addition, we develop a parameter optimization method for unknown detection and investigate the stopping criterion for unknown expansion. Experiments carried out on the Dresden image collection confirm the effectiveness of the proposed SCIU scheme. When unknown models are present, the identification accuracy of SCIU is significantly better than that of four state-of-the-art methods: 1) multi-class Support Vector Machine (SVM); 2) binary SVM; 3) the combined classification framework; and 4) decision boundary carving.
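
A scikit-learn sketch of the three stages on synthetic features. The kNN distance threshold, the single self-training round, and the data are illustrative assumptions, not the tuned procedure from the paper:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)

    # Toy camera features: K = 2 known models plus 1 unknown model.
    X_known = np.vstack([rng.normal(0, 1, (80, 4)), rng.normal(6, 1, (80, 4))])
    y_known = np.repeat([0, 1], 80)
    X_unlab = np.vstack([rng.normal(0, 1, (40, 4)),
                         rng.normal(6, 1, (40, 4)),
                         rng.normal(-6, 1, (60, 4))])   # last block: unknown model

    # Stage 1: unknown detection. Unlabeled images far from all known samples
    # (large kNN distance) are taken as seeds of the unknown class.
    nn = NearestNeighbors(n_neighbors=3).fit(X_known)
    dist, _ = nn.kneighbors(X_unlab)
    seeds = dist.mean(axis=1) > 3.0              # threshold: illustrative only

    # Stage 2: unknown expansion. One self-training round pulls in unlabeled
    # images closest to the current unknown set.
    nn_u = NearestNeighbors(n_neighbors=1).fit(X_unlab[seeds])
    d_u, _ = nn_u.kneighbors(X_unlab)
    unknown = seeds | (d_u[:, 0] < 1.5)

    # Stage 3: (K+1)-class classification, with label K = 2 meaning "unknown".
    X_train = np.vstack([X_known, X_unlab[unknown]])
    y_train = np.concatenate([y_known, np.full(unknown.sum(), 2)])
    clf = SVC().fit(X_train, y_train)
    print("predicted class counts:", np.bincount(clf.predict(X_unlab)))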

Relevance:

60.00%

Publisher:

Abstract:

One of the issues for tour planning applications is how to adaptively provide personalized advice for different types of tourists and tour activities. This paper proposes a high-level Petri net based approach that provides a level of adaptation by implementing adaptive navigation in a tour node space. The new model supports dynamic reordering or removal of tour nodes along a tour path; it supports multiple travel modes and incorporates multimodality within its tour planning logic to derive adaptive tours. Examples are given to demonstrate how to realize adaptive interfaces and personalization. Future directions are discussed at the end of the paper.
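
A toy Petri-net-flavoured sketch in plain Python of the underlying idea: tour nodes as places holding a token, travel legs as transitions, and adaptation as disabling transitions. All place and transition names are invented; the paper's high-level Petri nets are considerably richer:

    # Minimal Petri net: places hold tokens; a transition fires when all its
    # input places are marked, moving the tourist along the tour path.
    places = {"hotel": 1, "museum": 0, "park": 0}
    transitions = {
        "walk_to_museum": (["hotel"], ["museum"]),
        "bus_to_park": (["museum"], ["park"]),   # alternative travel modes are
        "walk_to_park": (["museum"], ["park"]),  # modeled as separate transitions
    }

    def enabled(name):
        inputs, _ = transitions[name]
        return all(places[p] > 0 for p in inputs)

    def fire(name):
        inputs, outputs = transitions[name]
        for p in inputs:
            places[p] -= 1
        for p in outputs:
            places[p] += 1

    # Adaptive removal of an option = dropping its transition,
    # e.g. the bus service becomes unavailable.
    del transitions["bus_to_park"]

    for t in ["walk_to_museum", "walk_to_park"]:
        if enabled(t):
            fire(t)
    print(places)   # the token ends at 'park' via the remaining travel mode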

Relevance:

60.00%

Publisher:

Abstract:

Sensor networks are a branch of distributed ad hoc networks with a broad range of applications in surveillance and environmental monitoring. In these networks, message exchanges are carried out in a multi-hop manner. Due to resource constraints, security professionals often use lightweight protocols that do not provide adequate security. Even in the absence of such constraints, designing a foolproof set of protocols and codes is almost impossible. This leaves the door open to worms, which propagate by exploiting the vulnerabilities of the multi-hop message exchange mechanism. This issue has recently drawn the attention of security researchers. In this paper, we investigate the propagation pattern of information in wireless sensor networks based on an extended theory of epidemiology. We develop a geographical susceptible-infective model for this purpose and analytically derive the dynamics of information propagation. Compared with previous models, ours is more realistic and is distinguished by two key factors that had previously been neglected: 1) the proposed model does not rely purely on epidemic theory but binds it with the geometric and spatial constraints of real-world sensor networks; and 2) it extends to also model the spread dynamics of conflicting information (e.g., a worm and its patch). We perform extensive simulations to show the accuracy of our model and compare it with previous ones. The findings show that the common intuition that the infection source is the best location to start patching from is not necessarily right: this depends on many factors, including the time it takes for the patch to be developed, worm and patch characteristics, and the shape of the network.
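
A discrete-time simulation sketch of competing worm and patch spreading on a random geometric graph, which captures the spatial constraint of a sensor field. The networkx topology, spreading rates, patch delay, and patch-release rule are all illustrative assumptions, not the paper's analytical model:

    import random
    import networkx as nx

    random.seed(6)
    G = nx.random_geometric_graph(300, 0.12, seed=6)  # spatially constrained sensor field

    state = {v: "S" for v in G}        # S: susceptible, I: infected, P: patched
    state[0] = "I"                     # the worm starts at node 0
    beta, gamma, patch_delay = 0.3, 0.4, 10

    for t in range(60):
        updates = {}
        for v, s in state.items():
            if s != "S":
                continue
            nbrs = [state[u] for u in G[v]]
            if "P" in nbrs and random.random() < gamma:
                updates[v] = "P"       # the patch propagates hop by hop, like the worm
            elif "I" in nbrs and random.random() < beta:
                updates[v] = "I"
        if t == patch_delay:
            updates[1] = "P"           # patch released later, from a different node
        state.update(updates)          # (patching an infected node is allowed here,
                                       #  a deliberate simplification)

    print({s: list(state.values()).count(s) for s in "SIP"})

Varying patch_delay and the patch-release node in this sketch reproduces the qualitative point of the abstract: where patching should start depends on timing and network shape.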

Relevance:

60.00%

Publisher:

Abstract:

The massive computation power and storage capacity of cloud computing systems allow scientists to deploy computation- and data-intensive applications without infrastructure investment, since large application data sets can be stored in the cloud. Based on the pay-as-you-go model, storage strategies and benchmarking approaches have been developed for cost-effectively storing large volumes of generated application data sets in the cloud. However, they are either insufficiently cost-effective for storage or impractical to use at runtime. In this paper, toward achieving the minimum cost benchmark, we propose a novel, highly cost-effective and practical storage strategy that automatically decides at runtime whether a generated data set should be stored in the cloud. The main focus of this strategy is local optimization of the tradeoff between computation and storage, while secondarily taking users' (optional) storage preferences into consideration. Both theoretical analysis and simulations conducted on general (random) data sets, as well as specific real-world applications with Amazon's cost model, show that the cost-effectiveness of our strategy is close to or even the same as the minimum cost benchmark, and that its efficiency is high enough for practical runtime use in the cloud.
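
The core of the computation-storage tradeoff can be sketched as a simple runtime decision rule: keep a generated data set only if paying for storage is cheaper than regenerating it on demand. The prices below are made-up constants (not actual Amazon rates), and the sketch ignores the data-set dependencies and user preferences that the real strategy accounts for:

    # Illustrative prices, loosely modeled on pay-as-you-go rates (assumptions).
    STORAGE_PER_GB_MONTH = 0.023      # $/GB/month
    COMPUTE_PER_HOUR = 0.10           # $/instance-hour

    def should_store(size_gb, regen_hours, accesses_per_month, months):
        """Store a generated data set iff keeping it is cheaper than
        regenerating it every time it is accessed."""
        storage_cost = size_gb * STORAGE_PER_GB_MONTH * months
        regen_cost = regen_hours * COMPUTE_PER_HOUR * accesses_per_month * months
        return storage_cost <= regen_cost

    # A large, rarely used data set: deleting and regenerating wins.
    print(should_store(size_gb=500, regen_hours=2, accesses_per_month=0.1, months=12))
    # A small, frequently reused data set: storing wins.
    print(should_store(size_gb=5, regen_hours=2, accesses_per_month=20, months=12))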

Relevance:

60.00%

Publisher:

Abstract:

A cloud workflow system is a type of platform service that facilitates the automation of distributed applications based on the novel cloud infrastructure. One of the most important aspects differentiating a cloud workflow system from its counterparts is the market-oriented business model, a significant innovation that brings many challenges to conventional workflow scheduling strategies. To investigate this issue, this paper proposes a market-oriented hierarchical scheduling strategy for cloud workflow systems. Specifically, the service-level scheduling deals with the Task-to-Service assignment, where tasks of individual workflow instances are mapped to cloud services in the global cloud markets based on their functional and non-functional QoS requirements; the task-level scheduling deals with optimising the Task-to-VM (virtual machine) assignment in local cloud data centres, where the overall running cost of the cloud workflow system is minimised subject to the QoS constraints of individual tasks. Based on our hierarchical scheduling strategy, a package-based random scheduling algorithm is presented as the candidate service-level scheduling algorithm, and three representative metaheuristic scheduling algorithms, genetic algorithm (GA), ant colony optimisation (ACO), and particle swarm optimisation (PSO), are adapted, implemented, and analysed as candidate task-level scheduling algorithms. The hierarchical scheduling strategy is being implemented in our SwinDeW-C cloud workflow system, where it demonstrates satisfactory performance. Meanwhile, the experimental results show that the ACO-based scheduling algorithm outperforms the others on three basic measurements: the optimisation rate on makespan, the optimisation rate on cost, and the CPU time.
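
A plain-Python sketch of the task-level problem: evaluate Task-to-VM assignments on makespan and cost, and pick the cheapest assignment that meets a deadline. It mimics a simple random-sampling baseline, not the adapted ACO/GA/PSO algorithms, and all task lengths, speeds, and prices are invented:

    import random

    random.seed(7)
    task_len = [4, 2, 8, 6, 3, 5]                 # task lengths (arbitrary units)
    vm_speed = [1.0, 2.0, 4.0]                    # VM processing speeds
    vm_price = [0.1, 0.25, 0.6]                   # cost per busy time unit

    def evaluate(assign):
        # Makespan: the busiest VM finishes last; cost: pay per busy time unit.
        busy = [0.0] * len(vm_speed)
        cost = 0.0
        for t, vm in enumerate(assign):
            run = task_len[t] / vm_speed[vm]
            busy[vm] += run
            cost += run * vm_price[vm]
        return max(busy), cost

    # Random-scheduling baseline: sample assignments and keep the cheapest
    # one that satisfies the deadline (QoS) constraint.
    deadline, best = 6.0, None
    for _ in range(2000):
        assign = [random.randrange(len(vm_speed)) for _ in task_len]
        makespan, cost = evaluate(assign)
        if makespan <= deadline and (best is None or cost < best[1]):
            best = (assign, cost, makespan)

    print("assignment:", best[0], "cost:", round(best[1], 2), "makespan:", best[2])

A metaheuristic such as ACO replaces the blind random sampling with guided search over the same evaluate() objective.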

Relevance:

30.00%

Publisher:

Abstract:

This chapter overviews existing approaches to requirements analysis as prescribed by some of the best-known web-development methods. It also discusses the pre-eminent importance of stakeholder analysis, the identification of stakeholder views and concerns, and the processes governing the elicitation of web systems requirements. Finally, the chapter derives a model of concern-driven requirements evolution from several case studies undertaken in the area of web-enabled employee service systems.

Relevance:

30.00%

Publisher:

Abstract:

Deakin University in Australia is one of the leading providers of distance education in the South Pacific region. The School of Engineering offers four-year professional engineering degree programs and three-year technologist programs. The more than 600 students studying engineering at Deakin fall into four categories:

• 18-19 year-old students fresh from high school, who largely study on-campus;
• older students in the technical workforce, seeking a university degree to upgrade their qualifications;
• industry-based students studying in university-industry partnership programs;
• overseas students studying either on-campus, or off-campus through education partners in Malaysia and Singapore.

Geographically, these students form a very widely distributed student base. The study programs are designed to produce multi-skilled, broadly focused engineers and technologists with multi-disciplinary technical competence and the ability to take a systems approach to design and operational performance. A team of around 25 academic staff deliver courses in seven different majors in the general fields of manufacturing, environmental engineering, mechatronics, and computer systems. We discuss here the history of the School, its teaching philosophy, and its unique methods of delivering engineering education to a widely scattered student body.