11 results for File systems

in Deakin Research Online - Australia


Relevance:

100.00%

Publisher:

Abstract:

Parallel file systems offer improved performance over traditional file systems by allowing processes to access file data, stored across a number of disks, in parallel. When a file is accessed by many processes at the same time for reading and writing, the problem of maintaining file consistency arises. The general approach to maintaining file consistency is to use locks, which grant a single process exclusive access to the file. Existing studies have found that parallel file systems and their associated applications rarely experience conflict, so the overhead of acquiring locks is largely unnecessary. This paper examines an alternative approach to maintaining file consistency, based on optimistic concurrency control. This approach is shown to have much smaller overhead than traditional lock-based approaches, especially when there is little contention. The paper presents the design, implementation and initial testing results of an optimistic concurrency control mechanism.
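To illustrate the general idea (not the paper's specific mechanism), the sketch below shows an optimistic update loop over a versioned region: readers take a snapshot and then work without holding any lock, and a write commits only if the version observed at snapshot time is still current, retrying otherwise. All names (VersionedRegion, optimistic_update) are illustrative assumptions.

```python
import threading

class VersionedRegion:
    """In-memory stand-in for a file region guarded by a version counter."""

    def __init__(self, data: bytes = b""):
        self._lock = threading.Lock()   # protects only the snapshot/commit steps
        self._data = data
        self._version = 0

    def snapshot(self):
        """Take a consistent (data, version) snapshot; no lock is held afterwards."""
        with self._lock:
            return self._data, self._version

    def try_commit(self, new_data: bytes, expected_version: int) -> bool:
        """Validate-and-write: succeed only if nobody committed in between."""
        with self._lock:
            if self._version != expected_version:
                return False            # conflict detected, caller must retry
            self._data = new_data
            self._version += 1
            return True

def optimistic_update(region, transform, max_retries=10):
    """Apply `transform` to the region's data, retrying on write conflicts."""
    for _ in range(max_retries):
        data, version = region.snapshot()
        if region.try_commit(transform(data), version):
            return True
    return False

if __name__ == "__main__":
    region = VersionedRegion(b"hello")
    optimistic_update(region, lambda d: d + b" world")
    print(region.snapshot())   # (b'hello world', 1)
```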

Relevance:

70.00%

Publisher:

Abstract:

We develop an effective and efficient methodology for correctness testing of file recovery tools across different file systems. We assume that the tool tester is familiar with the formats of common file types and is able to use the tools correctly. Our methodology first derives a testing plan that minimizes the number of runs required to identify differences in correctness between tools. We also present a case study on correctness testing of file carving tools, which confirms that the number of necessary testing runs is bounded and that our results are statistically sound.
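As a rough illustration of one correctness check such a methodology might include (the paper's actual test plan and statistics are not reproduced here), the sketch below compares the hashes of files recovered by a carving tool against a known reference set planted in the test image; all helper names and paths are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash one recovered file so it can be matched against the reference set."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def score_run(recovered_dir: Path, reference_hashes: set) -> dict:
    """Count correctly recovered, spurious, and missed files for one tool run."""
    recovered = {sha256_of(p) for p in recovered_dir.iterdir() if p.is_file()}
    return {
        "correct": len(recovered & reference_hashes),
        "spurious": len(recovered - reference_hashes),
        "missed": len(reference_hashes - recovered),
    }

# Example usage (paths and hashes are placeholders):
# print(score_run(Path("carver_output"), {"<sha256 of planted file 1>"}))
```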

Relevance:

40.00%

Publisher:

Abstract:

Existing shared-file search methods have at least one of the following disadvantages: (1) they are applicable only to certain topology patterns, (2) they suffer from single points of failure, or (3) they incur prohibitive maintenance costs. These drawbacks prevent their effective application in unstructured peer-to-peer (P2P) systems, where topologies change over time as peers frequently join and leave, despite the considerable success of shared-file search in conventional P2P systems. Motivated by this, we develop several fully dynamic algorithms for searching for shared files in unstructured P2P systems. Our solutions can handle any topology pattern with small search time and computational overhead. We also present an in-depth analysis that provides valuable insight into the characteristics of effective alternative search strategies and leads to precision guarantees. Extensive experiments validate our theoretical findings and demonstrate the efficiency of our techniques in practice.
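For context only, the sketch below shows a basic TTL-limited flooding search over an unstructured peer graph, the kind of baseline such work improves upon; it is not the paper's algorithm, and the `peers`/`files` structures are hypothetical.

```python
from collections import deque

def flood_search(peers, files, start, target, ttl=3):
    """Breadth-first (flooding-style) search for `target`, stopping after `ttl` hops.

    `peers` maps a peer id to its current neighbors; `files` maps a peer id
    to the set of file names it shares (both illustrative structures).
    """
    seen = {start}
    queue = deque([(start, 0)])
    hits = []
    while queue:
        node, depth = queue.popleft()
        if target in files.get(node, set()):
            hits.append(node)
        if depth == ttl:
            continue
        for nbr in peers.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, depth + 1))
    return hits

if __name__ == "__main__":
    peers = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
    files = {"d": {"song.mp3"}, "c": {"song.mp3"}}
    print(flood_search(peers, files, "a", "song.mp3", ttl=2))  # ['c', 'd']
```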

Relevance:

30.00%

Publisher:

Abstract:

In recent years there has been a remarkable increase in information exchange between organizations due to changes in market structures and new forms of business relationships. The increase in the volume of business-to-business (B2B) transactions has contributed significantly to the expanding need for electronic systems that can effectively support communication between collaborating organizations. Examples of such collaborating systems include those that offer various types of business-to-business services, e.g. electronic commerce, electronic procurement systems, electronic links between legacy systems, or outsourced systems providing data processing services via electronic media. The development and operation of B2B electronic systems has not been problem-free. One of the most intractable issues in B2B systems is the inter-organisational conflict reported to exist and persist between the participants of inter-organisational electronic networks. There have been very few attempts, however, to prescribe a practical method for detecting the antecedents of such conflict early in B2B development, to facilitate smooth construction and subsequent operation of B2B services. The research reported in this paper focuses on the identification and analysis of antecedents of conflict in a joint process involving different organizations in a B2B venture. The proposed method involves identifying domain stakeholders, capturing and packaging their views and concerns into a reusable form, and applying the captured domain experience in B2B systems development. The concepts and methods introduced in this paper are illustrated with examples drawn from our study of six web-enabled payroll systems.

Relevance:

30.00%

Publisher:

Abstract:

Fraud is one of the besetting evils of our time. While less dramatic than crimes of violence such as murder or rape, fraud can inflict significant damage at the organizational or individual level.

Fraud is a concept that seems to have an obvious meaning until we try to define it. As fraud exists in many different guises, it is necessary to define carefully what it is and to tailor policies and initiatives accordingly.

Developing a definition of fraud is an early step of a prevention program. In order to be involved in the protection function, people at all levels of an organization must be knowledgeable about fraud. In this paper, we discuss the risk of fraud from an information systems perspective, explain what fraud is and present a range of definitions of fraud and computer fraud. We argue that without clearly defining fraud, organizations will not be able to share information that has the same meaning to everyone, to agree on how to measure the problem, and to know the extent of the problem, in order to decide how much and where to deploy resources to effectively solve it.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we study a reliable downloading algorithm for BitTorrent-like systems and validate it mathematically. BitTorrent-like systems have become immensely popular peer-to-peer file distribution tools on the Internet in recent years. We analyze them theoretically, point out some of their limitations, particularly in reliability, and propose an algorithm that resolves these problems by using the redundant copies held by neighbors in the P2P network; under certain conditions the algorithm can also further optimize download speed. Our preliminary simulations show that the proposed reliable algorithm works well; the improved BitTorrent-like systems are very stable and reliable.
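A minimal sketch of the underlying idea of falling back to redundant copies follows, assuming a hypothetical neighbor map and a caller-supplied transfer function; it is not the paper's algorithm.

```python
import random

def neighbors_with_piece(neighbor_pieces, piece):
    """Return all neighbors currently advertising `piece`."""
    return [n for n, pieces in neighbor_pieces.items() if piece in pieces]

def fetch_piece(piece, neighbor_pieces, download):
    """Try each neighbor holding `piece` until one download succeeds.

    `neighbor_pieces` maps a neighbor id to the set of piece indices it
    advertises; `download(neighbor, piece)` returns the data or None on failure.
    """
    candidates = neighbors_with_piece(neighbor_pieces, piece)
    random.shuffle(candidates)              # spread load across the replicas
    for neighbor in candidates:
        data = download(neighbor, piece)
        if data is not None:                # None models a failed neighbor
            return data
    return None                             # all redundant copies failed

if __name__ == "__main__":
    neighbor_pieces = {"peer1": {0, 1}, "peer2": {1, 2}, "peer3": {1}}
    flaky = lambda n, p: None if n == "peer1" else f"piece-{p}-from-{n}"
    print(fetch_piece(1, neighbor_pieces, flaky))
```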

Relevance:

30.00%

Publisher:

Abstract:

Systems and technologies used for unauthorised file sharing have received little attention in the Information Systems literature. This paper attempts to fill this gap by presenting a critical, qualitative study of the motivations for using unauthorised file sharing systems. Based on 30 interviews with music consumers, musicians, and the music industry, the paper reports on music consumers' decision to ‘pirate or purchase’. It examines file sharing from the perspectives of users, musicians, and representatives of the music recording industry. Three main themes emerged as motivators for unauthorised file sharing: cost, convenience, and choice.

Relevance:

30.00%

Publisher:

Abstract:

Peer-to-peer (P2P) networks are gaining increased attention from both the scientific community and the larger Internet user community. Data retrieval algorithms lie at the center of P2P networks, and this paper addresses the problem of efficiently searching for files in unstructured P2P systems. We propose an Improved Adaptive Probabilistic Search (IAPS) algorithm that is fully distributed and bandwidth efficient. IAPS uses ant-colony optimization and takes file types into consideration in order to search for file container nodes with a high probability of success. We have performed extensive simulations to study the performance of IAPS, and we compare it with the Random Walk and Adaptive Probabilistic Search algorithms. Our experimental results show that IAPS achieves high success rates, high response rates, and significant message reduction.
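In the spirit of ant-colony-inspired probabilistic search (though not the IAPS algorithm itself), the sketch below weights neighbor selection by a pheromone value that is reinforced on successful lookups and evaporated over time; the update rule is an illustrative assumption.

```python
import random

class ProbabilisticSearcher:
    def __init__(self, neighbors, evaporation=0.1):
        # Every neighbor starts with the same pheromone weight.
        self.pheromone = {n: 1.0 for n in neighbors}
        self.evaporation = evaporation

    def choose_neighbor(self):
        """Pick a neighbor with probability proportional to its pheromone."""
        nodes = list(self.pheromone)
        weights = [self.pheromone[n] for n in nodes]
        return random.choices(nodes, weights=weights, k=1)[0]

    def feedback(self, neighbor, success):
        """Evaporate all weights, then reinforce a neighbor that led to a hit."""
        for n in self.pheromone:
            self.pheromone[n] *= (1.0 - self.evaporation)
        if success:
            self.pheromone[neighbor] += 1.0

if __name__ == "__main__":
    s = ProbabilisticSearcher(["peer1", "peer2", "peer3"])
    for _ in range(20):
        n = s.choose_neighbor()
        s.feedback(n, success=(n == "peer2"))   # pretend only peer2 holds the file
    print(max(s.pheromone, key=s.pheromone.get))  # most reinforced: 'peer2'
```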

Relevance:

30.00%

Publisher:

Abstract:

Data deduplication is a technique for eliminating duplicate copies of data, and has been widely used in cloud storage to reduce storage space and upload bandwidth. However, only one copy of each file is kept in the cloud even if the file is owned by a huge number of users. As a result, deduplication improves storage utilization while reducing reliability. Furthermore, the challenge of preserving the privacy of sensitive data also arises when it is outsourced by users to the cloud. Aiming to address these security challenges, this paper makes the first attempt to formalize the notion of a distributed reliable deduplication system. We propose new distributed deduplication systems with higher reliability, in which the data chunks are distributed across multiple cloud servers. The security requirements of data confidentiality and tag consistency are achieved by introducing a deterministic secret sharing scheme in distributed storage systems, instead of using convergent encryption as in previous deduplication systems. Security analysis demonstrates that our deduplication systems are secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement the proposed systems and demonstrate that the incurred overhead is very limited in realistic environments.
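As a rough illustration of distributing chunk shares with a consistency tag (the paper's deterministic, fault-tolerant secret sharing scheme is not reproduced here), the sketch below splits a chunk into XOR shares, one per server, and keeps a SHA-256 tag of the original chunk; this simple n-of-n split does not provide the reliability of a threshold scheme.

```python
import hashlib
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_chunk(chunk: bytes, n: int = 3):
    """Split `chunk` into n shares whose XOR recovers it, plus a consistency tag."""
    shares = [os.urandom(len(chunk)) for _ in range(n - 1)]
    last = chunk
    for s in shares:
        last = xor_bytes(last, s)
    shares.append(last)
    tag = hashlib.sha256(chunk).hexdigest()   # tag used to check consistency later
    return shares, tag

def recover_chunk(shares):
    """XOR all shares back together to reconstruct the original chunk."""
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out

if __name__ == "__main__":
    chunk = b"some file chunk"
    shares, tag = split_chunk(chunk)
    # In a real system each share would be uploaded to a different cloud server.
    assert recover_chunk(shares) == chunk
    assert hashlib.sha256(recover_chunk(shares)).hexdigest() == tag
    print("recovered OK, tag:", tag[:16], "...")
```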