88 results for computation- and data-intensive applications


Relevance: 100.00%

Publisher:

Abstract:

Outsourcing heavy computational tasks to remote cloud servers, which significantly reduces the computational burden at the end hosts, is an effective and practical approach to building extensive and scalable mobile applications, and it has drawn increasing attention in recent years. However, given the limited processing power of the end hosts and keen privacy concerns about the outsourced data, it is vital to ensure both the efficiency and the security of outsourced computation in cloud computing. In this paper, we address this issue by developing a publicly verifiable outsourcing computation proposal. In particular, motivated by the many applications of matrix multiplication in large datasets and image processing, we propose a publicly verifiable outsourcing computation scheme for matrix multiplication in the amortized model. Security analysis demonstrates that the proposed scheme is provably secure, blinding the input and output in a simple way. By comparing the developed scheme with existing proposals, we show that ours is more efficient in terms of functionality as well as computation, communication, and storage overhead.
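The abstract does not give the authors' blinding-based construction, but the flavor of cheap verification of an outsourced matrix product can be sketched with the classical Freivalds check, in which the client tests a claimed product C = AB against random vectors in O(n²) time per round instead of recomputing the O(n³) product (a standard stand-in, not the paper's scheme):

```python
import random

def mat_vec(M, v):
    """Multiply matrix M by column vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def freivalds_check(A, B, C, rounds=10):
    """Probabilistically verify that A @ B == C.

    Each round costs three matrix-vector products, i.e. O(n^2).
    If C is wrong, a single round catches it with probability >= 1/2,
    so the false-accept probability is at most 2**-rounds.
    """
    n = len(C)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A(Br) with Cr; any mismatch proves C != AB.
        if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
            return False
    return True
```

A correct product always passes; a corrupted entry is rejected with overwhelming probability after a handful of rounds.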

Abstract:

The hybrid cloud is a widely used cloud architecture in large companies that outsource data to the public cloud while still supporting various clients such as mobile devices. However, such public-cloud data outsourcing raises serious security concerns, such as how to preserve data confidentiality and how to regulate access policies for the data stored in the public cloud. To address this issue, we design a hybrid cloud architecture that supports secure and efficient data sharing, even with resource-limited devices, in which the private cloud serves as a gateway between the public cloud and the data user. Under this architecture, we propose an improved construction of attribute-based encryption that can delegate encryption/decryption computation, achieving flexible access control in the cloud and privacy preservation in data utilization even on mobile devices. Extensive experiments show that the scheme further decreases the computational cost and space overhead at the user side, which is quite efficient for users with limited mobile devices. In delegating most of the encryption/decryption computation to the private cloud, the user does not disclose any information to it. We also consider communication security: when frequent attribute revocation happens, our scheme can resist certain attacks between the private cloud and the data user by employing anonymous key agreement.

Abstract:

Recommendation systems adopt various techniques to recommend ranked lists of items that help users identify the items that best fit their personal tastes. Among the various recommendation algorithms, user- and item-based collaborative filtering methods have been very successful in both industry and academia. More recently, the rapid growth of the Internet and e-commerce applications has created great challenges for recommendation systems, as the number of users and the amount of available online information have grown very quickly. These challenges include performing high-quality recommendations per second for millions of users and items, achieving high coverage under data sparsity, and increasing the scalability of recommendation systems. To obtain higher-quality recommendations under data sparsity, in this paper we propose a novel method to compute the similarity of different users based on side information, which goes beyond user-item rating information, gathered from various online recommendation and review sites. Furthermore, we take the special interests of users into consideration and combine three types of information (users, items, user-items) to predict the ratings of items. We then propose FUIR, a novel recommendation algorithm that fuses user and item information, to generate recommendation results for target users. We evaluate the proposed FUIR algorithm on three data sets, and the experimental results demonstrate that it is effective against sparse rating data and can produce higher-quality recommendations.
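The core idea of augmenting rating-based similarity with side information can be sketched as follows. This is a hedged illustration, not FUIR's actual formulation; the function names and the fixed blend weight `alpha` are assumptions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[k] * v[k] for k in common)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def user_similarity(ratings_a, ratings_b, side_a, side_b, alpha=0.5):
    """Blend rating similarity with side-information similarity.

    When the rating vectors share no items (the sparse-data case the
    abstract targets), the side information alone still yields a
    nonzero similarity.
    """
    return alpha * cosine(ratings_a, ratings_b) + (1 - alpha) * cosine(side_a, side_b)
```

With disjoint rating histories but shared side attributes (e.g. genre tags), the blended score stays informative where a pure rating-based measure degenerates to zero.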

Abstract:

Many scientific workflows are data intensive: large volumes of intermediate data are generated during their execution. Some valuable intermediate data need to be stored for sharing or reuse. Traditionally, they are selectively stored according to the system storage capacity, determined manually. As doing science in the cloud has become popular, more intermediate data can be stored in scientific cloud workflows under a pay-for-use model. In this paper, we build an intermediate data dependency graph (IDG) from the data provenance in scientific workflows. With the IDG, deleted intermediate data can be regenerated, and on this basis we develop a novel intermediate data storage strategy that reduces the cost of scientific cloud workflow systems by automatically storing the appropriate intermediate data sets with one cloud service provider. The strategy has significant research merits: it achieves a cost-effective trade-off between computation cost and storage cost, and it is not strongly affected by inaccuracy in forecasting data set usage. The strategy also takes users' tolerance of data-access delay into consideration. We use Amazon's cost model and apply the strategy to both general random workflows and a specific astrophysics pulsar-searching workflow for evaluation. The results show that our strategy can significantly reduce the overall cost of scientific cloud workflow execution.
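The underlying trade-off can be sketched as a per-data-set decision under illustrative, Amazon-style cost rates. All parameter names and rates below are assumptions for exposition; the actual strategy also has to account for regeneration chains through the IDG, since regenerating one deleted data set may require regenerating its deleted predecessors first:

```python
def should_store(size_gb, storage_rate_gb_month, regen_hours,
                 compute_rate_hour, uses_per_month):
    """Store an intermediate data set iff keeping it for a month costs
    no more than regenerating it on every expected access.

    size_gb               -- size of the data set
    storage_rate_gb_month -- e.g. dollars per GB-month of cloud storage
    regen_hours           -- compute hours needed to regenerate it
    compute_rate_hour     -- e.g. dollars per compute hour
    uses_per_month        -- forecast number of accesses per month
    """
    monthly_storage = size_gb * storage_rate_gb_month
    monthly_regeneration = regen_hours * compute_rate_hour * uses_per_month
    return monthly_storage <= monthly_regeneration
```

A frequently reused data set justifies its storage bill, while a rarely accessed one is cheaper to delete and regenerate on demand.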

Abstract:

The notion of privacy takes on a completely different meaning when viewed from the perspective of an IT professional, an organisation using technology to support strategic directions, or a member of the public. This paper looks past the technical issues involved in data protection and examines some of the business, social and regulatory aspects that have become important to those involved in the management, storage and dissemination of electronic information. The paper documents some of the legislative developments in privacy and data protection and examines what these developments mean for IT professionals, for whom the link between data captured, stored and processed into information and the resulting effect on privacy is important. The Commonwealth Privacy Act 1988, based on work done by the Council of Europe, the OECD and the European Union, provides some general guidelines, but only for the public sector. However, new legislation is imminent. Thus, IT professionals need to be aware of the changing situation and examine their organisation's current practices to ensure compliance with future laws.

Abstract:

This paper provides an analysis of student experiences of an approach to teaching theory that integrates the teaching of theory and data analysis. The argument that supports this approach is that theory is most effectively taught by using empirical data in order to generate and test propositions and hypotheses, thereby emphasising the dialectic relationship between theory and data through experiential learning. Bachelor of Commerce students in two second-year substantive organisational theory subjects were introduced to this method of learning at a large, multi-campus Australian university. In this paper, we present a model that posits a relationship between students' perceptions of their learning, the enjoyment of the experience and expected future outcomes. The results of our evaluation reveal that a majority of students:

• enjoyed this way of learning;
• believed that the exercise assisted their learning of substantive theory, computing applications and the nature of survey data; and
• felt that what they had learned could be applied elsewhere.

We argue that this approach has the potential to improve the way theory is taught by integrating theory, theory testing and theory development; moving away from teaching theory and analysis in discrete subjects; and introducing iterative experiences in substantive subjects.

Abstract:

In this paper, we have demonstrated how existing programming environments, tools and middleware can be used to study the execution performance of parallel and sequential applications on a non-dedicated cluster. The set of parallel and sequential benchmark applications selected for and used in the experiments is characterized, and the experiment requirements are shown.

Abstract:

In data-intensive distributed systems, replication is the most widely used approach to offer high data availability, low bandwidth consumption, increased fault tolerance and improved scalability of the overall system. Replication-based systems implement replica control protocols that enforce a specified semantics of accessing the data. Performance also depends on a number of factors, the chief of which is the protocol used to maintain consistency among object replicas. In this paper, we propose a new low-cost, high-availability protocol, called the box-shaped grid structure, for maintaining the consistency of replicated data in networked distributed computing systems. We show that the proposed protocol provides higher data availability, lower communication costs, and increased fault tolerance compared with the baseline replica control protocols.
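The box-shaped grid protocol itself is not specified in this abstract. For intuition, the classical grid quorum arrangement over rows x cols replicas shows how grid-structured quorums stay small while guaranteeing that every read quorum intersects every write quorum, which is what enforces consistency. This is a standard textbook construction, not the proposed protocol:

```python
def read_quorum(cols, row=0):
    """A read quorum: one replica from each column (here, all in one row)."""
    return {(row, c) for c in range(cols)}

def write_quorum(rows, cols, full_col=0, row=0):
    """A write quorum: every replica of one column plus one replica from
    each remaining column. Any write quorum contains a full column, so it
    intersects every read quorum (which touches each column once) and
    every other write quorum."""
    q = {(r, full_col) for r in range(rows)}
    q |= {(row, c) for c in range(cols) if c != full_col}
    return q
```

For an N = rows x cols grid the quorum sizes grow like O(sqrt(N)), which is the source of the low communication cost compared with majority voting.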

Abstract:

With more and more multimedia applications on the Internet, such as IPTV, bandwidth has become a vital bottleneck for large-scale Internet-based multimedia applications. Network coding has recently been proposed as a way to use network bandwidth efficiently. In this paper, we focus on transporting massive multimedia data, e.g. IPTV programs, in peer-to-peer networks with network coding. Through a study of network coding, we point out that the prerequisites for its bandwidth savings are: 1) one information source with a number of concurrent receivers, or 2) information pieces cached at intermediate nodes. We further prove that network coding cannot save bandwidth on the immediate connections to a receiver. Accordingly, we propose a novel model for IPTV data transportation in unstructured peer-to-peer networks with network coding. Our preliminary simulations show that the proposed architecture works very well.
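Prerequisite 1) is the classical butterfly-network situation: an intermediate node XORs two packets wanted by different concurrent receivers and forwards a single coded packet, and each receiver decodes by XORing with the packet it already holds. A minimal sketch of that coding step:

```python
def xor_encode(p1: bytes, p2: bytes) -> bytes:
    """Intermediate node: combine two equal-length packets into one coded
    packet, halving the traffic on the shared bottleneck link."""
    return bytes(a ^ b for a, b in zip(p1, p2))

def xor_decode(coded: bytes, known: bytes) -> bytes:
    """Receiver: recover the missing packet using the one it already has,
    since (p1 ^ p2) ^ p1 == p2."""
    return xor_encode(coded, known)
```

Note that this saving only materializes on shared links feeding multiple receivers; on the final hop into a single receiver the coded packet carries no more information than a plain one, which matches the abstract's claim about immediate connections to a receiver.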

Abstract:

For most data stream applications, the volume of data is too large to be stored on permanent devices or to be scanned thoroughly more than once. It is hence recognized that approximate answers are usually sufficient: a good approximation obtained in a timely manner is often better than an exact answer that arrives beyond the window of opportunity. Unfortunately, this is not the case for mining frequent patterns over data streams, where algorithms capable of processing data streams online do not conform strictly to a precise error guarantee. Since the quality of approximate answers is as important as their timely delivery, it is necessary to design algorithms that meet both criteria at the same time. In this paper, we propose an algorithm that processes streaming data online while still guaranteeing that the support error of frequent patterns stays strictly within a user-specified threshold. Our theoretical and experimental studies show that the algorithm is an effective and reliable method for finding frequent sets in data stream environments when both constraints must be satisfied.

Abstract:

Computers of a non-dedicated cluster are often idle (users attend meetings, have lunch or coffee breaks) or lightly loaded (users carry out simple computations to support problem-solving activities). These underutilised computers can be employed to execute parallel applications; thus, they can be shared by parallel and sequential applications, which could improve the execution performance of both. However, there is a lack of experimental study of the applications' performance and system utilization when parallel and sequential applications execute concurrently, or when multiple parallel applications execute concurrently, on a non-dedicated cluster. Here we present the results of an experimental study of load-balancing-based scheduling of mixtures of NAS Parallel Benchmarks and BYTE sequential applications on a very low cost non-dedicated cluster. This study showed that the proposed sharing boosted performance compared with executing the parallel load in isolation on a reduced number of computers, and improved cluster utilization. The results of this research were used not only to validate other researchers' simulation-generated results but also to support our research mission of widening the use of non-dedicated clusters. These promising results could prompt further studies to convince universities, business and industry, which require large amounts of computing resources, to run parallel applications on the non-dedicated clusters they already own.

Abstract:

OBJECTIVE--The purpose of this study was to assess the effectiveness of a low-resource-intensive lifestyle modification program incorporating resistance training and to compare a gymnasium-based with a home-based resistance training program on diabetes diagnosis status and risk.

RESEARCH DESIGN AND METHODS--A quasi-experimental two-group study was undertaken with 122 participants with diabetes risk factors; 36.9% had impaired glucose tolerance (IGT) or impaired fasting glucose (IFG) at baseline. The intervention included a 6-week group self-management education program, a gymnasium-based or home-based 12-week resistance training program, and a 34-week maintenance program. Fasting plasma glucose (FPG) and 2-h plasma glucose, blood lipids, blood pressure, body composition, physical activity, and diet were assessed at baseline and week 52.

RESULTS--Mean 2-h plasma glucose and FPG fell by 0.34 mmol/l (95% CI -0.60 to -0.08) and 0.15 mmol/l (-0.23 to -0.07), respectively. The proportion of participants with IFG or IGT decreased from 36.9 to 23.0% (P = 0.006). Mean weight loss was 4.07 kg (-4.99 to -3.15). The only significant difference between resistance training groups was a greater reduction in systolic blood pressure for the gymnasium-based group (P = 0.008).

CONCLUSIONS--This intervention significantly improved diabetes diagnostic status and reduced diabetes risk to a degree comparable to that of other low-resource-intensive lifestyle modification programs and more intensive interventions applied to individuals with IGT. The effects of home-based and gymnasium-based resistance training did not differ significantly.

Abstract:

Objective: The staging model suggests that early stages of bipolar disorder respond better to treatments and have a more favourable prognosis. This study aims to provide empirical support for the model, and the allied construct of early intervention.

Methods: Pooled data from mania, depression, and maintenance studies of olanzapine were analyzed. Individuals were categorized as having had 0, 1–5, 6–10, or >10 prior episodes of illness, and data were analyzed across these groups.

Results: Response rates for the mania and maintenance studies ranged from 52–69% and 10–50%, respectively, for individuals with 1–5 previous episodes, and from 29–59% and 11–40% for individuals with >5 previous episodes. These rates were significantly higher for the 1–5 group on most measures of response with up to a twofold increase in the chance of responding for those with fewer previous episodes. For the depression studies, response rates were significantly higher for the 1–5 group for two measures only. In the maintenance studies, the chance of relapse to either mania or depression was reduced by 40–60% for those who had experienced 1–5 episodes or 6–10 episodes compared to the >10 episode group, respectively. This trend was statistically significant only for relapse into mania for the 1–5 episode group (p = 0.005).

Conclusion: Those individuals at the earliest stages of illness consistently had a more favourable response to treatment. This is consistent with the staging model and the allied construct of early intervention.

Abstract:

This Account covers research dating from the early 1960s in the field of low-melting molten salts and hydrates, which has recently become popular under the rubric of "ionic liquids". It covers understanding gained in the principal author's laboratories (initially in Australia, but mostly in the U.S.A.) from spectroscopic, dynamic, and thermodynamic studies and includes recent applications of this understanding in the fields of energy conversion and biopreservation. Both protic and aprotic varieties of ionic liquids are included, but recent studies have focused on the protic class because of the special applications made possible by the highly variable proton activities available in these liquids.

Abstract:

This paper addresses a major challenge in data mining applications where full information about the underlying processes, such as sensor networks or large online databases, cannot practically be obtained due to physical limitations such as low bandwidth or limited memory, storage, or computing power. Motivated by the recent theory of direct information sampling called compressed sensing (CS), we propose a framework for detecting anomalies in these large-scale data mining applications where the full information is not practically obtainable. Exploiting the facts that the intrinsic dimension of the data in these applications is typically small relative to the raw dimension and that compressed sensing can capture most of the information with few measurements, our work shows that spectral methods used for volume anomaly detection can be applied directly to the CS data with performance guarantees. Our theoretical contributions are supported by extensive experimental results on large datasets, which show satisfactory performance.
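The CS premise can be illustrated with random Gaussian measurements: y = Phi x with m << d is linear and approximately norm-preserving, so distance- or spectrum-based detection can operate on the compressed data instead of the raw signal. A minimal stdlib-only sketch, where the matrix sizes and scaling are illustrative assumptions rather than the paper's setup:

```python
import math
import random

def random_matrix(m, d, rng):
    """An m x d Gaussian measurement matrix Phi. The 1/sqrt(m) scale on
    each entry keeps the expected squared norm of Phi @ x equal to that
    of x, which is what preserves distances between compressed points."""
    s = 1.0 / math.sqrt(m)
    return [[rng.gauss(0.0, s) for _ in range(d)] for _ in range(m)]

def measure(x, phi):
    """Compressed measurements y = Phi @ x (m values from d raw ones)."""
    return [sum(row[j] * x[j] for j in range(len(x))) for row in phi]
```

Because the map is linear, differences between data points survive compression, and squared norms are preserved in expectation; that is the geometric property a spectral detector relies on when it is run on the m-dimensional measurements.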