820 results for Android, Java, mobile, database, PHP, App, Json, design pattern, compatibilità, UML


Relevance:

30.00%

Publisher:

Abstract:

There is an increased interest in using broadcast disks to support mobile access to real-time databases. However, previous work has only considered the design of real-time immutable broadcast disks, the contents of which do not change over time. This paper considers the design of programs for real-time mutable broadcast disks - broadcast disks whose contents are occasionally updated. Recent scheduling-theoretic results relating to pinwheel scheduling and pfair scheduling are used to design algorithms for the efficient generation of real-time mutable broadcast disk programs.
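
To make the flavour of such broadcast programs concrete, the sketch below builds a small cyclic schedule in Java. It uses an earliest-deadline-first pick among items that still owe a broadcast in their current window; this is an illustrative stand-in for, not a reproduction of, the pinwheel and pfair constructions the paper relies on, and all class and method names are invented.

    import java.util.*;

    // Illustrative sketch only: builds a cyclic broadcast program in which item i is
    // transmitted at least once in every aligned window of period[i] slots. Feasible
    // whenever the densities 1/period[i] sum to at most 1.
    public class BroadcastProgram {
        public static int[] build(int[] period, int slots) {
            int n = period.length;
            int[] servedWindow = new int[n];
            Arrays.fill(servedWindow, -1);          // window index already covered, per item
            int[] program = new int[slots];
            for (int t = 0; t < slots; t++) {
                int pick = -1, bestDeadline = Integer.MAX_VALUE;
                for (int i = 0; i < n; i++) {
                    int window = t / period[i];
                    int deadline = (window + 1) * period[i] - 1;   // last slot of the current window
                    if (servedWindow[i] < window && deadline < bestDeadline) {
                        pick = i;
                        bestDeadline = deadline;
                    }
                }
                if (pick < 0) pick = 0;             // nothing pending: reuse the slot for item 0
                program[t] = pick;
                servedWindow[pick] = t / period[pick];
            }
            return program;
        }

        public static void main(String[] args) {
            // three items whose contents must recur every 2, 3 and 6 slots (densities sum to 1)
            System.out.println(Arrays.toString(build(new int[]{2, 3, 6}, 12)));
        }
    }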

Relevance:

30.00%

Publisher:

Abstract:

We discuss the design principles of TCP within the context of heterogeneous wired/wireless networks and mobile networking. We identify three shortcomings in TCP's behavior: (i) the protocol's error detection mechanism, which does not distinguish different types of errors and thus does not suffice for heterogeneous wired/wireless environments, (ii) the error recovery, which is not responsive to the distinctive characteristics of wireless networks such as transient or burst errors due to handoffs and fading channels, and (iii) the protocol strategy, which does not control the tradeoff between performance measures such as goodput and energy consumption, and often entails a wasteful effort of retransmission and energy expenditure. We discuss a solution framework based on selected research proposals and the associated evaluation criteria for the suggested modifications. We highlight an important angle that has so far not received the attention it requires: the need for new performance metrics appropriate for evaluating the impact of protocol strategies on battery-powered devices.
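
As a toy illustration of shortcomings (i) and (iii), the sketch below shows a sender that guesses the cause of a loss before reacting, instead of always halving its window. The class, the classifier heuristic and its thresholds are invented for illustration; they are not part of any real TCP stack or of the specific proposals surveyed in the paper.

    // Toy illustration: react to a loss differently depending on a guess about its
    // cause, instead of always assuming congestion.
    public class LossAwareSender {
        enum LossType { CONGESTION, WIRELESS_TRANSIENT }

        double cwnd = 10.0;   // congestion window, in segments

        // Hypothetical classifier: short loss bursts with a flat RTT trend are treated
        // as transient wireless errors, growing RTTs as congestion.
        LossType classify(double rttTrendMs, int burstLength) {
            return (rttTrendMs <= 0 && burstLength <= 2)
                    ? LossType.WIRELESS_TRANSIENT : LossType.CONGESTION;
        }

        void onLoss(double rttTrendMs, int burstLength) {
            if (classify(rttTrendMs, burstLength) == LossType.CONGESTION) {
                cwnd = Math.max(1.0, cwnd / 2);   // back off: the network is overloaded
            }
            // otherwise: retransmit without shrinking the window, saving both goodput
            // and (on battery-powered devices) energy spent on needless slow recovery
        }

        public static void main(String[] args) {
            LossAwareSender s = new LossAwareSender();
            s.onLoss(-0.5, 1);   // flat RTT, single loss  -> treated as wireless
            s.onLoss(12.0, 5);   // rising RTT, burst loss -> treated as congestion
            System.out.println("cwnd after two loss events: " + s.cwnd);
        }
    }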

Relevance:

30.00%

Publisher:

Abstract:

Transport protocols are an integral part of the inter-process communication (IPC) service used by application processes to communicate over the network infrastructure. With almost 30 years of research on transport, one would have hoped that we have a good handle on the problem. Unfortunately, that is not true. As the Internet continues to grow, new network technologies and new applications continue to emerge putting transport protocols in a never-ending flux as they are continuously adapted for these new environments. In this work, we propose a clean-slate transport architecture that renders all possible transport solutions as simply combinations of policies instantiated on a single common structure. We identify a minimal set of mechanisms that once instantiated with the appropriate policies allows any transport solution to be realized. Given our proposed architecture, we contend that there are no more transport protocols to design—only policies to specify. We implement our transport architecture in a declarative language, Network Datalog (NDlog), making the specification of different transport policies easy, compact, reusable, dynamically configurable and potentially verifiable. In NDlog, transport state is represented as database relations, state is updated/queried using database operations, and transport policies are specified using declarative rules. We identify limitations with NDlog that could potentially threaten the correctness of our specification. We propose several language extensions to NDlog that would significantly improve the programmability of transport policies.
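
The "single structure, many policies" idea can also be sketched outside NDlog. The Java fragment below is a hypothetical rendering in which the mechanism (a single sending step) is fixed and behaviour is determined entirely by pluggable retransmission and congestion policies; the interface and method names are invented and do not correspond to the paper's specification.

    // Sketch of policy-over-mechanism: the mechanism below never changes; different
    // transports arise only from the policies plugged into it.
    public class PolicyBasedTransport {
        interface RetransmissionPolicy { boolean shouldRetransmit(int unackedMs); }
        interface CongestionPolicy     { int nextWindow(int window, boolean lossSeen); }

        private final RetransmissionPolicy rtx;
        private final CongestionPolicy cc;
        private int window = 1;
        private int retransmissions = 0;

        PolicyBasedTransport(RetransmissionPolicy rtx, CongestionPolicy cc) {
            this.rtx = rtx;
            this.cc = cc;
        }

        // Common mechanism: one sending step, parameterised entirely by policy.
        void onTick(int oldestUnackedMs, boolean lossSeen) {
            if (rtx.shouldRetransmit(oldestUnackedMs)) retransmissions++;  // re-enqueue would happen here
            window = cc.nextWindow(window, lossSeen);
        }

        public static void main(String[] args) {
            // A Reno-flavoured instantiation: timeout at 200 ms, AIMD window control.
            PolicyBasedTransport t = new PolicyBasedTransport(
                    unackedMs -> unackedMs > 200,
                    (w, loss) -> loss ? Math.max(1, w / 2) : w + 1);
            t.onTick(250, false);
            t.onTick(50, true);
            System.out.println("window=" + t.window + " retransmissions=" + t.retransmissions);
        }
    }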

Relevance:

30.00%

Publisher:

Abstract:

The pervasiveness of personal computing platforms offers an unprecedented opportunity to deploy large-scale services that are distributed over wide physical spaces. Two major challenges face the deployment of such services: the often resource-limited nature of these platforms, and the necessity of preserving the autonomy of the owner of these devices. These challenges preclude the use of centralized control and rule out services that are subject to performance guarantees. To address these challenges, this thesis advances a number of new distributed resource management techniques that are shown to be effective in such settings, focusing on two application domains: distributed Field Monitoring Applications (FMAs), and Message Delivery Applications (MDAs). In the context of FMA, this thesis presents two techniques that are well-suited to the fairly limited storage and power resources of autonomously mobile sensor nodes. The first technique relies on amorphous placement of sensory data through the use of novel storage management and sample diffusion techniques. The second approach relies on an information-theoretic framework to optimize local resource management decisions. Both approaches are proactive in that they aim to provide nodes with a view of the monitored field that reflects the characteristics of queries over that field, enabling them to handle more queries locally, and thus reduce communication overheads. Then, this thesis recognizes node mobility as a resource to be leveraged, and in that respect proposes novel mobility coordination techniques for FMAs and MDAs. Assuming that node mobility is governed by a spatio-temporal schedule featuring some slack, this thesis presents novel algorithms of various computational complexities to orchestrate the use of this slack to improve the performance of supported applications. The findings in this thesis, which are supported by analysis and extensive simulations, highlight the importance of two general design principles for distributed systems. First, a priori knowledge (e.g., about the target phenomena of FMAs and/or the workload of either FMAs or MDAs) could be used effectively for local resource management. Second, judicious leverage and coordination of node mobility could lead to significant performance gains for distributed applications deployed over resource-impoverished infrastructures.
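
As a rough illustration of the query-aware storage management principle only (the thesis's amorphous placement and information-theoretic machinery are not reproduced here), the sketch below shows a node with a small store that evicts samples from the field regions queries touch least, so the retained view mirrors the query workload. All names and the eviction rule are invented for illustration.

    import java.util.*;

    // Illustrative sketch: a sensor node with a bounded store keeps the samples whose
    // regions are queried most, so more queries can be answered without communication.
    public class QueryAwareStore {
        record Sample(int region, double value) {}

        private final int capacity;
        private final List<Sample> store = new ArrayList<>();
        private final Map<Integer, Integer> queryCount = new HashMap<>();

        QueryAwareStore(int capacity) { this.capacity = capacity; }

        void recordQuery(int region) { queryCount.merge(region, 1, Integer::sum); }

        void insert(Sample s) {
            if (store.size() >= capacity) {
                // evict the sample from the least-queried region
                store.remove(Collections.min(store,
                        Comparator.comparingInt((Sample x) -> queryCount.getOrDefault(x.region(), 0))));
            }
            store.add(s);
        }

        public static void main(String[] args) {
            QueryAwareStore node = new QueryAwareStore(2);
            node.recordQuery(1); node.recordQuery(1); node.recordQuery(2);
            node.insert(new Sample(1, 20.5));
            node.insert(new Sample(3, 18.0));
            node.insert(new Sample(2, 21.0));   // the region-3 sample is evicted
            System.out.println(node.store);
        }
    }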

Relevance:

30.00%

Publisher:

Abstract:

Recent advances in processor speeds, mobile communications and battery life have enabled computers to evolve from completely wired to completely mobile. In the most extreme case, all nodes are mobile and communication takes place at available opportunities – using both traditional communication infrastructure as well as the mobility of intermediate nodes. These are mobile opportunistic networks. Data communication in such networks is a difficult problem, because of the dynamic underlying topology, the scarcity of network resources and the lack of global information. Establishing end-to-end routes in such networks is usually not feasible. Instead a store-and-carry forwarding paradigm is better suited for such networks. This dissertation describes and analyzes algorithms for forwarding of messages in such networks. In order to design effective forwarding algorithms for mobile opportunistic networks, we start by first building an understanding of the set of all paths between nodes, which represent the available opportunities for any forwarding algorithm. Relying on real measurements, we enumerate paths between nodes and uncover what we refer to as the path explosion effect. The term path explosion refers to the fact that the number of paths between a randomly selected pair of nodes increases exponentially with time. We draw from the theory of epidemics to model and explain the path explosion effect. This is the first contribution of the thesis, and is a key observation that underlies subsequent results. Our second contribution is the study of forwarding algorithms. For this, we rely on trace driven simulations of different algorithms that span a range of design dimensions. We compare the performance (success rate and average delay) of these algorithms. We make the surprising observation that most algorithms we consider have roughly similar performance. We explain this result in light of the path explosion phenomenon. While the performance of most algorithms we studied was roughly the same, these algorithms differed in terms of cost. This prompted us to focus on designing algorithms with the explicit intent of reducing costs. For this, we cast the problem of forwarding as an optimal stopping problem. Our third main contribution is the design of strategies based on optimal stopping principles which we refer to as Delegation schemes. Our analysis shows that using a delegation scheme reduces cost over naive forwarding by a factor of O(√N), where N is the number of nodes in the network. We further validate this result on real traces, where the cost reduction observed is even greater. Our results so far include a key assumption, which is unbounded buffers on nodes. Next, we relax this assumption, so that the problem shifts to one of prioritization of messages for transmission and dropping. Our fourth contribution is the study of message prioritization schemes, combined with forwarding. Our main result is that one achieves higher performance by assigning higher priorities to young messages in the network. We again interpret this result in light of the path explosion effect.
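
The delegation idea can be stated in a few lines. In the minimal sketch below, a message carries a threshold equal to the best forwarding "quality" it has seen so far, and a custodian hands a copy to an encountered node only if that node beats the threshold; the quality metric itself is left abstract, and the class is an illustrative sketch rather than the dissertation's implementation.

    // Minimal sketch of delegation forwarding: delegate a copy only to nodes whose
    // quality for the destination beats the best quality the message has seen, then
    // raise the threshold. Keeping the bar rising is what keeps the copy count low.
    public class DelegationForwarding {
        static class Message {
            final int destination;
            double threshold;            // best quality observed so far for this message
            Message(int destination, double initialQuality) {
                this.destination = destination;
                this.threshold = initialQuality;
            }
        }

        // Called when the current custodian meets another node; returns true if a copy
        // should be delegated to the encountered node.
        static boolean onEncounter(Message m, double encounteredNodeQuality) {
            if (encounteredNodeQuality > m.threshold) {
                m.threshold = encounteredNodeQuality;   // both copies now carry the higher bar
                return true;
            }
            return false;
        }

        public static void main(String[] args) {
            Message m = new Message(42, 0.10);
            System.out.println(onEncounter(m, 0.05));  // false: weaker relay, keep the copy count down
            System.out.println(onEncounter(m, 0.30));  // true: delegate and raise the threshold
        }
    }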

Relevance:

30.00%

Publisher:

Abstract:

The Java programming language has been widely described as secure by design. Nevertheless, a number of serious security vulnerabilities have been discovered in Java, particularly in the component known as the Bytecode Verifier. This paper describes a method for representing Java security constraints using the Alloy modeling language. It further describes a system for performing a security analysis on any block of Java bytecodes by converting the bytes into relation initializers in Alloy. Any counterexamples found by the Alloy analyzer correspond directly to insecure code. Analysis of a real-world malicious applet is given to demonstrate the efficacy of the approach.
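
A minimal sketch of the translation step is shown below: raw bytecode bytes become textual tuples that could initialise an Alloy relation mapping instruction index to opcode. The fact and relation names are invented for illustration; the paper's actual Alloy model of the verifier's constraints is considerably richer.

    // Sketch of bytecode-to-relation translation: each (index, opcode) pair becomes a
    // tuple in a relation initializer. Names are invented; this is not the paper's model.
    public class BytecodeToAlloy {
        static String toRelationInitializer(byte[] code) {
            StringBuilder tuples = new StringBuilder("fact BytecodeLayout { code = ");
            for (int pc = 0; pc < code.length; pc++) {
                if (pc > 0) tuples.append(" + ");
                tuples.append(pc).append(" -> ").append(code[pc] & 0xFF);   // unsigned opcode value
            }
            return tuples.append(" }").toString();
        }

        public static void main(String[] args) {
            // iconst_1; ireturn -- a trivial method body
            System.out.println(toRelationInitializer(new byte[]{0x04, (byte) 0xAC}));
        }
    }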

Relevance:

30.00%

Publisher:

Abstract:

The Java programming language has been widely described as secure by design. Nevertheless, a number of serious security vulnerabilities have been discovered in Java, particularly in the Bytecode Verifier, a critical component used to verify class semantics before loading is complete. This paper describes a method for representing Java security constraints using the Alloy modeling language. It further describes a system for performing a security analysis on any block of Java bytecodes by converting the bytes into relation initializers in Alloy. Any counterexamples found by the Alloy analyzer correspond directly to insecure code. Analysis of the approach in the context of known security exploits is provided. This type of analysis represents a significant departure from standard malware analysis methods based on signatures or anomaly detection.

Relevance:

30.00%

Publisher:

Abstract:

With the rapid growth of the Internet and digital communications, the volume of sensitive electronic transactions being transferred and stored over and on insecure media has increased dramatically in recent years. The growing demand for cryptographic systems to secure this data, across a multitude of platforms, ranging from large servers to small mobile devices and smart cards, has necessitated research into low-cost, flexible and secure solutions. As constraints on architectures such as area, speed and power become key factors in choosing a cryptosystem, methods for speeding up the development and evaluation process are necessary. This thesis investigates flexible hardware architectures for the main components of a cryptographic system. Dedicated hardware accelerators can provide significant performance improvements when compared to implementations on general purpose processors. Each of the designs proposed is analysed in terms of speed, area, power, energy and efficiency. Field Programmable Gate Arrays (FPGAs) are chosen as the development platform due to their fast development time and reconfigurable nature. Firstly, a reconfigurable architecture for performing elliptic curve point scalar multiplication on an FPGA is presented. Elliptic curve cryptography is one such method to secure data, offering similar security levels to traditional systems, such as RSA, but with smaller key sizes, translating into lower memory and bandwidth requirements. The architecture is implemented using different underlying algorithms and coordinates for dedicated Double-and-Add algorithms, twisted Edwards algorithms and SPA-secure algorithms, and its power consumption and energy on an FPGA are measured. Hardware implementation results for these new algorithms are compared against their software counterparts and the best choices for minimum area-time and area-energy circuits are then identified and examined for larger key and field sizes. Secondly, implementation methods for another component of a cryptographic system, namely hash functions, developed in the recently concluded SHA-3 hash competition are presented. Various designs from the three rounds of the NIST-run competition are implemented on FPGA along with an interface to allow fair comparison of the different hash functions when operating in a standardised and constrained environment. Different methods of implementation for the designs and their subsequent performance are examined in terms of throughput, area and energy costs using various constraint metrics. Comparing many different implementation methods and algorithms is nontrivial. Another aim of this thesis is the development of generic interfaces used both to reduce implementation and test time and also to enable fair baseline comparisons of different algorithms when operating in a standardised and constrained environment. Finally, a hardware-software co-design cryptographic architecture is presented. This architecture is capable of supporting multiple types of cryptographic algorithms and is described through an application for performing public key cryptography, namely the Elliptic Curve Digital Signature Algorithm (ECDSA). This architecture makes use of the elliptic curve architecture and the hash functions described previously. These components, along with a random number generator, provide hardware acceleration for a Microblaze-based cryptographic system. The trade-off in terms of performance for flexibility is discussed using dedicated software and hardware-software co-design implementations of the elliptic curve point scalar multiplication block. Results are then presented in terms of the overall cryptographic system.
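
Of the components above, point scalar multiplication is the easiest to illustrate in software. The Java sketch below is a plain left-to-right double-and-add over a toy prime-field curve in affine coordinates, for exposition only: it is neither the SPA-secure nor the twisted Edwards variant evaluated in the thesis, it is not constant-time, and the curve parameters are hypothetical, chosen only so the example runs.

    import java.math.BigInteger;

    // Illustrative double-and-add scalar multiplication on a toy curve
    // y^2 = x^3 + ax + b over F_p, in affine coordinates. Unprotected and not
    // constant-time; exposition only.
    public class DoubleAndAdd {
        // Hypothetical toy parameters: y^2 = x^3 + 7 over F_17.
        static final BigInteger P = BigInteger.valueOf(17);
        static final BigInteger A = BigInteger.ZERO;
        static final BigInteger[] INFINITY = null;   // marker for the point at infinity

        static BigInteger[] add(BigInteger[] p1, BigInteger[] p2) {
            if (p1 == INFINITY) return p2;
            if (p2 == INFINITY) return p1;
            BigInteger lambda;
            if (p1[0].equals(p2[0])) {
                if (p1[1].add(p2[1]).mod(P).signum() == 0) return INFINITY;   // P + (-P)
                // doubling: lambda = (3x^2 + a) / (2y)
                lambda = p1[0].pow(2).multiply(BigInteger.valueOf(3)).add(A)
                        .multiply(p1[1].shiftLeft(1).modInverse(P)).mod(P);
            } else {
                // addition: lambda = (y2 - y1) / (x2 - x1)
                lambda = p2[1].subtract(p1[1])
                        .multiply(p2[0].subtract(p1[0]).modInverse(P)).mod(P);
            }
            BigInteger x3 = lambda.pow(2).subtract(p1[0]).subtract(p2[0]).mod(P);
            BigInteger y3 = lambda.multiply(p1[0].subtract(x3)).subtract(p1[1]).mod(P);
            return new BigInteger[]{x3, y3};
        }

        // Left-to-right double-and-add: the scalar's bits drive double and (maybe) add.
        static BigInteger[] scalarMultiply(BigInteger k, BigInteger[] point) {
            BigInteger[] result = INFINITY;
            for (int i = k.bitLength() - 1; i >= 0; i--) {
                result = add(result, result);                    // always double
                if (k.testBit(i)) result = add(result, point);   // add when the bit is set
            }
            return result;
        }

        public static void main(String[] args) {
            BigInteger[] g = {BigInteger.ONE, BigInteger.valueOf(5)};   // (1, 5) lies on the toy curve
            BigInteger[] q = scalarMultiply(BigInteger.valueOf(7), g);  // 7*G
            System.out.println(q == INFINITY ? "infinity" : q[0] + ", " + q[1]);
        }
    }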

Relevance:

30.00%

Publisher:

Abstract:

The pervasive use of mobile technologies has provided new opportunities for organisations to achieve competitive advantage by using a value network of partners to create value for multiple users. The delivery of a mobile payment (m-payment) system is an example of a value network as it requires the collaboration of multiple partners from diverse industries, each bringing their own expertise, motivations and expectations. Consequently, managing partnerships has been identified as a core competence required by organisations to form viable partnerships in an m-payment value network and an important factor in determining the sustainability of an m-payment business model. However, there is evidence that organisations lack this competence, a gap witnessed in the m-payment domain, where it has been cited as a contributing factor in a number of failed m-payment initiatives since 2000. In response to this organisational deficiency, this research project leverages design thinking and visualisation tools to enhance communication and understanding between managers who are responsible for managing partnerships within the m-payment domain. By adopting a design science research approach, which is a problem-solving paradigm, the research builds and evaluates a visualisation tool in the form of a Partnership Management Canvas. In doing so, this study demonstrates that when organisations encourage their managers to adopt design thinking, as a way to balance their analytical thinking and intuitive thinking, communication and understanding between the partners increase. This can lead to a shared understanding and a shared commitment between the partners. In addition, the research identifies a number of key business model design issues that need to be considered by researchers and practitioners when designing an m-payment business model. As an applied research project, the study makes valuable contributions to the knowledge base and to the practice of management.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: The incidence and epidemiology of invasive fungal infections (IFIs), a leading cause of death among hematopoietic stem cell transplant (HSCT) recipients, are derived mainly from single-institution retrospective studies. METHODS: The Transplant Associated Infections Surveillance Network, a network of 23 US transplant centers, prospectively enrolled HSCT recipients with proven and probable IFIs occurring between March 2001 and March 2006. We collected denominator data on all HSCTs performed at each site and clinical, diagnostic, and outcome information for each IFI case. To estimate trends in IFI, we calculated the 12-month cumulative incidence among 9 sequential subcohorts. RESULTS: We identified 983 IFIs among 875 HSCT recipients. The median age of the patients was 49 years; 60% were male. Invasive aspergillosis (43%), invasive candidiasis (28%), and zygomycosis (8%) were the most common IFIs. Fifty-nine percent and 61% of IFIs were recognized within 60 days of neutropenia and graft-versus-host disease, respectively. Median onset of candidiasis and aspergillosis after HSCT was 61 days and 99 days, respectively. Within a cohort of 16,200 HSCT recipients who received their first transplants between March 2001 and September 2005 and were followed up through March 2006, we identified 718 IFIs in 639 persons. Twelve-month cumulative incidences, based on the first IFI, were 7.7 cases per 100 transplants for matched unrelated allogeneic, 8.1 cases per 100 transplants for mismatched-related allogeneic, 5.8 cases per 100 transplants for matched-related allogeneic, and 1.2 cases per 100 transplants for autologous HSCT. CONCLUSIONS: In this national prospective surveillance study of IFIs in HSCT recipients, the cumulative incidence was highest for aspergillosis, followed by candidiasis. Understanding the epidemiologic trends and burden of IFIs may lead to improved management strategies and study design.

Relevance:

30.00%

Publisher:

Abstract:

The paper describes the design of an efficient and robust genetic algorithm for the nuclear fuel loading problem (i.e., refuellings: the in-core fuel management problem), a complex combinatorial, multimodal optimisation problem. Evolutionary computation as performed by FUELGEN replaces heuristic search of the kind performed by the FUELCON expert system (CAI 12/4) to solve the same problem. In contrast to the traditional genetic algorithm, which places strong requirements on the representation used and its parameter settings in order to be efficient, recent research on new, robust genetic algorithms shows that representations unsuitable for the traditional genetic algorithm can still be used to good effect with little parameter adjustment. The representation presented here is a simple symbolic one with no linkage attributes, making the genetic algorithm particularly easy to apply to fuel loading problems with differing core structures and assembly inventories. A nonlinear fitness function has been constructed to direct the search efficiently in the presence of the many local optima that result from the constraint on solutions.
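
The representation and fitness ideas can be illustrated with a toy sketch (this is not FUELGEN): a candidate loading pattern is a plain permutation of assembly identifiers over core positions, and the fitness applies a nonlinear penalty to constraint violations so the search is steered away from infeasible local optima. The power model, penalty weight and the simple (1+1)-style loop are all invented for illustration.

    import java.util.*;

    // Toy sketch: symbolic, linkage-free encoding (a permutation of assembly ids over
    // core positions) with a nonlinear penalty in the fitness function.
    public class LoadingPatternGA {
        static final Random RNG = new Random(42);

        // Hypothetical fitness: reward total "power" but square the violation of a
        // peak-power constraint so infeasible patterns fall off quickly.
        static double fitness(int[] pattern, double[] assemblyReactivity, double peakLimit) {
            double peak = 0, total = 0;
            for (int pos = 0; pos < pattern.length; pos++) {
                double power = assemblyReactivity[pattern[pos]] / (1 + pos % 3);   // toy power model
                peak = Math.max(peak, power);
                total += power;
            }
            double violation = Math.max(0, peak - peakLimit);
            return total - 50 * violation * violation;    // nonlinear penalty term
        }

        // Mutation that keeps the encoding valid: swap two positions in the permutation.
        static int[] mutate(int[] pattern) {
            int[] child = pattern.clone();
            int i = RNG.nextInt(child.length), j = RNG.nextInt(child.length);
            int tmp = child[i]; child[i] = child[j]; child[j] = tmp;
            return child;
        }

        public static void main(String[] args) {
            double[] reactivity = {1.0, 0.8, 1.3, 0.9, 1.1, 0.7};
            int[] best = {0, 1, 2, 3, 4, 5};
            for (int gen = 0; gen < 200; gen++) {              // (1+1)-style evolution, for brevity
                int[] candidate = mutate(best);
                if (fitness(candidate, reactivity, 1.2) > fitness(best, reactivity, 1.2))
                    best = candidate;
            }
            System.out.println(Arrays.toString(best) + "  fitness=" + fitness(best, reactivity, 1.2));
        }
    }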

Relevance:

30.00%

Publisher:

Abstract:

This paper describes an industrial application of case-based reasoning in engineering. The application involves an integration of case-based reasoning (CBR) retrieval techniques with a relational database. The database is specially designed as a repository of experiential knowledge and with the CBR application in mind, such as to include qualitative search indices. The application is an intelligent assistant for design and material engineers in the submarine cable industry. The system consists of three components: a material classifier, a database of experiential knowledge, and a CBR system used to retrieve similar past cases based on component descriptions. Work has shown that an uncommon retrieval technique, hierarchical searching, represents several search indices well and that this technique aids the implementation of advanced techniques such as context-sensitive weights. The system is currently undergoing user testing at the Alcatel Submarine Cables site in Greenwich. Plans are for wider testing and deployment over several sites internationally.
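
A small sketch of hierarchical retrieval over weighted qualitative indices is given below. The taxonomy, attribute names and weights are invented, and this is not the deployed Alcatel system; the point is only that the similarity of two qualitative values can be read off their deepest common ancestor in a hierarchy and combined through per-index weights that could be made context-sensitive.

    import java.util.*;

    // Sketch: two attribute values are more similar the deeper their common ancestor
    // sits in a taxonomy; each index contributes to the case score through a weight.
    public class HierarchicalCbr {
        // toy material taxonomy: child -> parent (invented for illustration)
        static final Map<String, String> PARENT = Map.of(
                "polyethylene", "polymer", "polypropylene", "polymer",
                "copper", "metal", "steel", "metal",
                "polymer", "material", "metal", "material");

        static List<String> pathToRoot(String value) {
            List<String> path = new ArrayList<>();
            for (String v = value; v != null; v = PARENT.get(v)) path.add(v);
            return path;
        }

        // Similarity = shared-ancestry fraction: 1.0 for identical values, smaller as
        // the common ancestor moves towards the taxonomy root.
        static double similarity(String a, String b) {
            List<String> pa = pathToRoot(a), pb = pathToRoot(b);
            for (int i = 0; i < pa.size(); i++)
                if (pb.contains(pa.get(i)))
                    return (double) (pa.size() - i) / Math.max(pa.size(), pb.size());
            return 0.0;
        }

        public static void main(String[] args) {
            // one stored case vs. a query, over two weighted qualitative indices
            Map<String, String> storedCase = Map.of("insulation", "polyethylene", "conductor", "copper");
            Map<String, String> query      = Map.of("insulation", "polypropylene", "conductor", "steel");
            Map<String, Double> weight     = Map.of("insulation", 0.7, "conductor", 0.3);

            double score = 0;
            for (String index : query.keySet())
                score += weight.get(index) * similarity(query.get(index), storedCase.get(index));
            System.out.printf("retrieval score = %.2f%n", score);
        }
    }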

Relevance:

30.00%

Publisher:

Abstract:

Review of: Janis, C.M., Scott, K.M., & Jacobs, L.L. (eds.) 1998. Evolution of Tertiary Mammals of North America Volume 1: Terrestrial Carnivores, Ungulates, and Ungulatelike Mammals, i-x, 1491. Cambridge University Press, Cambridge, UK. £165

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the approach to the modelling of experiential knowledge in an industrial application of Case-Based Reasoning (CBR). The CBR involves retrieval techniques in conjunction with a relational database. The database is especially designed as a repository of experiential knowledge, and includes qualitative search indices. The system is intended to help design engineers and material engineers in the submarine cable industry. It consists of three parts: a materials database; a database of experiential knowledge; and a CBR system used to retrieve similar past designs based upon component and material qualitative descriptions. The system is currently undergoing user testing at the Alcatel Submarine Networks site in Greenwich.