800 results for Cipher Computing


Relevance:

20.00%

Publisher:

Abstract:

The purpose of this research is to investigate emerging data security methodologies that will work with the most suitable applications in academic, industrial and commercial environments. Of the several methodologies considered for the Advanced Encryption Standard (AES), MARS, a block cipher developed by IBM, has been selected. Its design takes advantage of the powerful capabilities of modern computers to allow a much higher level of performance than can be obtained from less optimized algorithms such as the Data Encryption Standard (DES). MARS is unique in combining virtually every design technique known to cryptographers in one algorithm. The thesis presents the performance of a flexible 128-bit cipher, a scaled-down version of the MARS algorithm. The cryptosystem used showed performance comparable to the original algorithm in speed, flexibility and security. The algorithm is considered very secure and robust and is expected to be implemented in most of the targeted applications.
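MARS itself is not shipped with the standard Java cryptography providers, so the sketch below is only a rough point of comparison: it uses the stock javax.crypto API with AES, which shares MARS's 128-bit block size, to encrypt and decrypt a single block. It is a minimal illustration, not the cipher implementation evaluated in the thesis.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import java.nio.charset.StandardCharsets;

    public class Aes128Demo {
        public static void main(String[] args) throws Exception {
            // Generate a 128-bit key; MARS likewise works on 128-bit blocks.
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey key = keyGen.generateKey();

            // Encrypt exactly one 16-byte (128-bit) block. ECB/NoPadding is used
            // only to expose the raw block transform; real applications should
            // prefer an authenticated mode such as GCM.
            Cipher cipher = Cipher.getInstance("AES/ECB/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] plaintext = "16-byte block!!!".getBytes(StandardCharsets.US_ASCII);
            byte[] ciphertext = cipher.doFinal(plaintext);

            // Decrypt and verify the round trip.
            cipher.init(Cipher.DECRYPT_MODE, key);
            byte[] recovered = cipher.doFinal(ciphertext);
            System.out.println(new String(recovered, StandardCharsets.US_ASCII));
        }
    }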

Relevance:

20.00%

Publisher:

Abstract:

The term pervasive computing embodies the idea of going beyond the personal computer paradigm: the idea that any device can be equipped with technology and interconnected with a distributed network, constituting a new model of human-machine interaction. Within this paradigm, the concept of context-awareness plays a fundamental role; it refers to the idea that computers can gather data from the surrounding environment and react to it in an intelligent and proactive way. Such a system needs, on the one hand, an infrastructure for collecting data from the environment and, on the other, support for the intelligent, reactive component. In this scenario, this thesis aims to design and implement a library for interfacing a distributed system of Java-based sensors with the tuProlog interpreter, a lightweight and configurable Prolog system, itself written in Java but available for a variety of platforms, so as to lay the groundwork for building context-aware systems in this environment.
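A minimal sketch of the kind of bridging described above, assuming the tuProlog 2P Java API (package alice.tuprolog); the temperature fact and the too_warm rule are invented for illustration and are not part of the thesis itself.

    import alice.tuprolog.Prolog;
    import alice.tuprolog.SolveInfo;
    import alice.tuprolog.Theory;

    public class SensorBridgeSketch {
        public static void main(String[] args) throws Exception {
            Prolog engine = new Prolog();

            // A hypothetical rule base: a room is too warm above 28 degrees.
            engine.setTheory(new Theory(
                    "too_warm(Room) :- temperature(Room, T), T > 28."));

            // Pretend a Java-based sensor just reported a reading; the reading
            // is pushed into the interpreter as a Prolog fact.
            engine.solve("assert(temperature(lab, 31)).");

            // The reactive, context-aware component queries the knowledge base.
            SolveInfo info = engine.solve("too_warm(Room).");
            if (info.isSuccess()) {
                System.out.println("Too warm: " + info.getTerm("Room"));
            }
        }
    }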

Relevance:

20.00%

Publisher:

Abstract:

This thesis focuses on revising the classic Cloud infrastructure model. The motivation lies in the real operating conditions of most of the devices currently connected to the network. The term hostile environment refers to networks populated by many devices with limited technical capabilities, often connected over radio channels that are far less stable than wired connections. Added to this scenario is the growing need for mobility, which further limits the benefits of the original Cloud infrastructure. The thesis proposes the Edge model as an extension of the Cloud. It broadens the Cloud's range of use, supporting application areas that have been gaining influence recently and that call for a revision of the old Cloud infrastructures, dictated by the stringent requirements these areas impose for satisfactory operation.

Relevance:

20.00%

Publisher:

Abstract:

Acknowledgments: The financial support of part of this research by The Royal Society, The Royal Academy of Engineering and The Carnegie Trust for the Universities of Scotland is gratefully acknowledged.

Relevance:

20.00%

Publisher:

Abstract:

Acknowledgments: The financial support of part of this research by The Royal Society, The Royal Academy of Engineering and The Carnegie Trust for the Universities of Scotland is gratefully acknowledged.

Relevance:

20.00%

Publisher:

Abstract:

Acknowledgements: The work of Klaus Nordhausen was supported by the Academy of Finland (grant 268703). Oleksii Pokotylo is supported by the Cologne Graduate School of Management, Economics and Social Sciences. The work of Daniel Vogel was supported by the DFG collaborative research grant SFB 823.

Relevance:

20.00%

Publisher:

Abstract:

Distributed computing frameworks belong to a class of programming models that let developers launch workloads on large clusters of machines. Due to the dramatic increase in the volume of data gathered by ubiquitous computing devices, data-analytic workloads have become a common case among distributed computing applications, making data science an entire field of computer science. We argue that a data scientist's concern lies in three main components: a dataset, a sequence of operations they wish to apply to this dataset, and some constraints related to their work (performance, QoS, budget, etc.). However, without domain expertise it is actually extremely difficult to perform data science. One needs to select the right amount and type of resources, pick a framework, and configure it. Moreover, users often run their applications in shared environments ruled by schedulers that expect them to specify their resource needs precisely. Owing to the distributed and concurrent nature of these frameworks, monitoring and profiling are hard, high-dimensional problems that keep users from making the right configuration choices and from determining the amount of resources they need. Paradoxically, the system gathers a large amount of monitoring data at runtime, which remains unused.

In the ideal abstraction we envision for data scientists, the system is adaptive, able to exploit monitoring data to learn about workloads and to turn user requests into a tailored execution context. In this work, we study different techniques that have been used to take steps toward such system awareness, and we explore a new way to do so by applying machine learning techniques to recommend a specific subset of system configurations for Apache Spark applications. Furthermore, we present an in-depth study of Apache Spark executor configuration, which highlights the complexity of choosing the best one for a given workload.
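As a hedged illustration of the configuration surface studied above, the sketch below sets a handful of executor-related properties through the standard Apache Spark Java API; the values are arbitrary examples, not recommendations derived from the study.

    import org.apache.spark.SparkConf;
    import org.apache.spark.sql.SparkSession;

    public class ExecutorConfigSketch {
        public static void main(String[] args) {
            // Executor sizing: these few knobs already span a large search space,
            // and the best combination depends on the workload and the cluster.
            SparkConf conf = new SparkConf()
                    .setAppName("executor-config-sketch")
                    .setMaster("local[*]")                  // local run for the sketch
                    .set("spark.executor.instances", "8")   // number of executors
                    .set("spark.executor.cores", "4")       // cores per executor
                    .set("spark.executor.memory", "8g")     // heap per executor
                    .set("spark.memory.fraction", "0.6");   // execution/storage share

            SparkSession spark = SparkSession.builder().config(conf).getOrCreate();

            // A trivial job, just to show the configured session in use.
            long count = spark.range(1_000_000).count();
            System.out.println("count = " + count);

            spark.stop();
        }
    }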

Relevance:

20.00%

Publisher:

Abstract:

Salutogenesis is now accepted as part of the contemporary model of disease: an individual is affected not only by pathogenic factors in the environment, but also by those that promote well-being, or salutogenesis. Given that "environment" extends to include the built environment, promoting salutogenesis has become part of the architectural brief for contemporary healthcare facilities, drawing on an increasing evidence base. Salutogenesis is inextricably linked with the notion of person-environment "fit". MyRoom is a proposal for an integrated architectural and pervasive computing model that enhances psychosocial congruence by using real-time data indicative of the individual's physical status to let the environment of his/her room (colour, light, temperature) adapt on an ongoing basis in response to bio-signals. This work is part of the PRTLI-IV funded programme NEMBES, investigating the use of embedded technologies in the built environment. Different care contexts require variations in the model, and iterative prototyping investigating use in different contexts will progressively lead to the development of a fully integrated, adaptive, salutogenic single-room prototype.
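A purely hypothetical sketch of the adaptation loop the MyRoom model implies: the BioSignal and RoomSettings types and the threshold rule are invented for illustration and do not come from the MyRoom/NEMBES work.

    public class MyRoomSketch {

        // Hypothetical data carriers for a bio-signal and the room's settings.
        record BioSignal(int heartRateBpm) {}
        record RoomSettings(int lightLevelPercent, double temperatureC, String colourScheme) {}

        // Map the latest bio-signal onto room settings: an elevated heart rate
        // yields a dimmer, cooler, calmer environment.
        static RoomSettings adapt(BioSignal signal) {
            if (signal.heartRateBpm() > 100) {
                return new RoomSettings(30, 20.5, "calm-blue");
            }
            return new RoomSettings(70, 22.0, "neutral-warm");
        }

        public static void main(String[] args) {
            System.out.println(adapt(new BioSignal(112)));
        }
    }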

Relevance:

20.00%

Publisher:

Abstract:

Atomic ions trapped in micro-fabricated surface traps can be utilized as a physical platform with which to build a quantum computer. They possess many of the desirable qualities of such a device, including high-fidelity state preparation and readout, universal logic gates, and long coherence times, and they can be readily entangled with each other through photonic interconnects. The use of optical cavities integrated with trapped-ion qubits as a photonic interface presents the possibility of order-of-magnitude improvements in performance in several key areas of their use in quantum computation. The first part of this thesis describes the design and fabrication of a novel surface trap for integration with an optical cavity. The trap is custom made on a highly reflective mirror surface and includes the capability of moving the ion trap location along all three trap axes with nanometer-scale precision. The second part of this thesis demonstrates the suitability of small micro-cavities formed from laser-ablated fused silica substrates, with radii of curvature in the 300-500 micron range, for use with the mirror trap as part of an integrated ion trap cavity system. Quantum computing applications for such a system include dramatic improvements in the photonic entanglement rate up to 10 kHz, the qubit measurement time down to 1 microsecond, and the measurement error rates down to the 10^-5 range. The final part of this thesis details a performance simulator for exploring the physical resource requirements and performance demands of scaling such a quantum computer to sizes capable of performing quantum algorithms beyond the limits of classical computation.

Relevance:

20.00%

Publisher:

Abstract:

Cloud computing realizes the long-held dream of converting computing capability into a type of utility. It has the potential to fundamentally change the landscape of the IT industry and our way of life. However, as cloud computing expands substantially in both scale and scope, ensuring its sustainable growth is a critical problem. Service providers have long been suffering from high operational costs, especially those associated with the skyrocketing power consumption of large data centers. While efficient power/energy utilization is indispensable for the sustainable growth of cloud computing, service providers must also satisfy a user's quality-of-service (QoS) requirements. This problem becomes even more challenging considering the increasingly stringent power/energy and QoS constraints, as well as other factors such as the highly dynamic, heterogeneous, and distributed nature of the computing infrastructures. In this dissertation, we study the problem of delay-sensitive cloud service scheduling for the sustainable development of cloud computing. We first focus our research on the development of scheduling methods for delay-sensitive cloud services on a single server with the goal of maximizing a service provider's profit. We then extend our study to scheduling cloud services in distributed environments. In particular, we develop a queue-based model and derive efficient request dispatching and processing decisions in a multi-electricity-market environment to improve the profits for service providers. We next study a problem of multi-tier service scheduling. By carefully assigning sub-deadlines to the service tiers, our approach can significantly improve resource usage efficiency with statistically guaranteed QoS. Finally, we study the power-conscious resource provisioning problem for service requests with different QoS requirements. By properly sharing computing resources among different requests, our method statistically guarantees all QoS requirements with a minimized number of powered-on servers and thus minimized power consumption. The significance of our research is that it is one part of the integrated effort from both industry and academia to ensure the sustainable growth of cloud computing as it continues to evolve and change our society profoundly.
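A purely hypothetical sketch of the dispatching idea in a multi-electricity-market setting: among the data centres that can still meet a request's deadline, route to the one with the cheapest current electricity price. The types and numbers are invented for illustration and are not the dissertation's model.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    public class DispatchSketch {

        // Hypothetical view of a data centre in one electricity market.
        record DataCentre(String name, double pricePerKwh, double expectedLatencyMs) {}

        // Pick the cheapest data centre that can still meet the request's deadline.
        static Optional<DataCentre> dispatch(List<DataCentre> centres, double deadlineMs) {
            return centres.stream()
                    .filter(dc -> dc.expectedLatencyMs() <= deadlineMs)
                    .min(Comparator.comparingDouble(DataCentre::pricePerKwh));
        }

        public static void main(String[] args) {
            List<DataCentre> centres = List.of(
                    new DataCentre("dc-east", 0.09, 120.0),
                    new DataCentre("dc-west", 0.07, 260.0),
                    new DataCentre("dc-north", 0.11, 80.0));

            // A 150 ms deadline rules out dc-west despite its cheaper power.
            dispatch(centres, 150.0)
                    .ifPresent(dc -> System.out.println("route to " + dc.name()));
        }
    }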

Relevance:

20.00%

Publisher:

Abstract:

This dissertation studies a context-aware application and its proposed algorithms at the client side. The required context-aware infrastructure is discussed in depth to illustrate that such an infrastructure collects the mobile user's context information, registers service providers, derives the mobile user's current context, distributes user context among context-aware applications, and provides tailored services. The proposed approach tries to strike a balance between the context server and mobile devices. Context acquisition is centralized at the server to ensure the usability of context information among mobile devices, while context reasoning remains at the application level. Hence, centralized context acquisition with distributed context reasoning is viewed as the better solution overall. The context-aware search application is designed and implemented at the server side. A new algorithm is proposed to take user context profiles into consideration. By promoting feedback on the dynamics of the system, any prior user selection is saved for further analysis, such that it may contribute to improving the results of a subsequent search. On the basis of these developments at the server side, various solutions are then provided at the client side. A software-based proxy component is set up for the purpose of data collection. This research endorses the belief that the proxy at the client side should contain the context reasoning component; the implementation of such a component lends credence to this belief, in that context-aware applications are able to derive user context profiles. Furthermore, a context cache scheme is implemented to manage the cache on the client device in order to minimize processing requirements and other resources (bandwidth, CPU cycles, power). Java and MySQL platforms are used to implement the proposed architecture and to test scenarios derived from a user's daily activities. To meet the practical demands of a testing environment without imposing the heavy cost of establishing such a comprehensive infrastructure, a software simulation using the free Yahoo search API is provided as a means to evaluate the effectiveness of the design approach in a realistic way. The integration of the Yahoo search engine into the context-aware architecture design shows how a context-aware application can meet user demands for tailored services and products in and around the user's environment. The test results show that the overall design is highly effective, providing new features and enriching the mobile user's experience through a broad scope of potential applications.
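The abstract does not spell out the context cache scheme, so the sketch below shows one common choice: a small least-recently-used cache built on java.util.LinkedHashMap. The capacity and eviction policy are illustrative assumptions, not the scheme implemented in the dissertation.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // A minimal LRU cache for derived context entries on the client device.
    public class ContextCache<K, V> extends LinkedHashMap<K, V> {

        private final int capacity;

        public ContextCache(int capacity) {
            // accessOrder = true makes get() refresh an entry's recency.
            super(16, 0.75f, true);
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            // Evict the least recently used entry once the cache is full.
            return size() > capacity;
        }

        public static void main(String[] args) {
            ContextCache<String, String> cache = new ContextCache<>(2);
            cache.put("location", "office");
            cache.put("activity", "meeting");
            cache.get("location");           // refresh "location"
            cache.put("weather", "rainy");   // evicts "activity"
            System.out.println(cache.keySet()); // [location, weather]
        }
    }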

Relevance:

20.00%

Publisher:

Abstract:

Energy-efficient computing remains a critical challenge across the wide range of future data-processing engines, from ultra-low-power embedded systems to servers, mainframes, and supercomputers. In addition, the advent of cloud and mobile computing, as well as the explosion of IoT technologies, has created new research challenges in the already complex, multidimensional space of modern and future computer systems. These new research challenges led to the establishment of the IEEE Rebooting Computing Initiative, which specifically addresses novel low-power solutions and technologies as one of its main areas of concern. With this in mind, we thought it timely to survey the state of the art of energy-efficient computing.

Relevance:

20.00%

Publisher:

Abstract:

Many cloud-based applications employ a data centre as a central server to process data that is generated by edge devices, such as smartphones, tablets and wearables. This model places ever-increasing demands on communication and computational infrastructure, with an inevitable adverse effect on quality of service and experience. The concept of Edge Computing is predicated on moving some of this computational load towards the edge of the network to harness computational capabilities that are currently untapped in edge nodes, such as base stations, routers and switches. This position paper considers the challenges and opportunities that arise out of this new direction in the computing landscape.

Relevance:

20.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08