6 results for Software design

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance:

60.00%

Abstract:

Bilinear pairings can be used to construct cryptographic systems with very desirable properties. A pairing maps pairs of points on elliptic and genus 2 hyperelliptic curves into an extension of the finite field over which the curves are defined. The finite fields must, however, be large to ensure adequate security. The complicated group structure of the curves and the expensive field operations result in time-consuming computations that are an impediment to the practicality of pairing-based systems. The Tate pairing can be computed efficiently using the η_T method. Hardware architectures can be used to accelerate the required operations by exploiting the parallelism inherent in the algorithmic and finite field calculations. The Tate pairing can be performed on elliptic curves of characteristic 2 and 3 and on genus 2 hyperelliptic curves of characteristic 2. Curve selection depends on several factors, including the desired computational speed, the area constraints of the target device and the required security level. In this thesis, custom hardware processors for the acceleration of the Tate pairing are presented and implemented on an FPGA. The underlying hardware architectures are designed with care to exploit available parallelism while ensuring resource efficiency. The characteristic 2 elliptic curve processor contains novel units that return a pairing result in a very low number of clock cycles. Despite the more complicated computational algorithm, the speed of the genus 2 processor is comparable. Pairing computation on each of these curves is appealing for applications with different requirements. A flexible processor that can perform pairing computation on elliptic curves of both characteristic 2 and 3 has also been designed. An integrated hardware/software design and verification environment has been developed. This system automates the procedures required for robust processor creation and enables the rapid provision of solutions for a wide range of cryptographic applications.
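The expense of these characteristic 2 field operations is easy to see in software. Below is a minimal Python sketch of GF(2^m) addition and multiplication, the primitives the pairing processors parallelise in hardware; the field size and reduction trinomial are illustrative assumptions, not parameters taken from the thesis.

```python
# A sketch of GF(2^m) arithmetic, the field operations that dominate the
# cost of the Tate pairing on characteristic 2 curves. Elements are bit
# vectors packed into Python ints: addition is XOR, multiplication is a
# carry-less shift-and-XOR product with interleaved modular reduction.
# The trinomial x^233 + x^74 + 1 is a known irreducible polynomial used
# here purely for illustration, not necessarily a field from the thesis.

M = 233                      # extension degree m (illustrative)
LOW = (1 << 74) | 1          # low-order terms of x^233 + x^74 + 1

def gf2m_add(a: int, b: int) -> int:
    """In characteristic 2, addition and subtraction are both XOR."""
    return a ^ b

def gf2m_mul(a: int, b: int) -> int:
    """Shift-and-XOR multiplication, reducing after each shift."""
    acc = 0
    while b:
        if b & 1:
            acc ^= a         # conditionally accumulate the current shift of a
        b >>= 1
        a <<= 1
        if (a >> M) & 1:     # degree reached m: substitute x^m = x^74 + 1
            a ^= (1 << M) | LOW
    return acc

# Hardware exploits the fact that squaring in GF(2^m) is linear and far
# cheaper than a general multiplication; this sketch routes both through
# gf2m_mul for brevity.
x = 0b10011
assert gf2m_add(x, x) == 0   # every element is its own additive inverse
print(hex(gf2m_mul(x, x)))
```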

Relevance:

30.00%

Abstract:

With the rapid growth of the Internet and digital communications, the volume of sensitive electronic transactions transferred and stored over and on insecure media has increased dramatically in recent years. The growing demand for cryptographic systems to secure this data, across a multitude of platforms ranging from large servers to small mobile devices and smart cards, has necessitated research into low-cost, flexible and secure solutions. As constraints on architectures such as area, speed and power become key factors in choosing a cryptosystem, methods for speeding up the development and evaluation process are necessary. This thesis investigates flexible hardware architectures for the main components of a cryptographic system. Dedicated hardware accelerators can provide significant performance improvements when compared to implementations on general-purpose processors. Each of the proposed designs is analysed in terms of speed, area, power, energy and efficiency. Field Programmable Gate Arrays (FPGAs) are chosen as the development platform due to their fast development time and reconfigurable nature. Firstly, a reconfigurable architecture for performing elliptic curve point scalar multiplication on an FPGA is presented. Elliptic curve cryptography is one such method to secure data, offering security levels similar to traditional systems, such as RSA, but with smaller key sizes, translating into lower memory and bandwidth requirements. The architecture is implemented using different underlying algorithms and coordinates: dedicated Double-and-Add algorithms, twisted Edwards algorithms and SPA-secure algorithms, and its power consumption and energy are measured on an FPGA. Hardware implementation results for these new algorithms are compared against their software counterparts, and the best choices for minimum area-time and area-energy circuits are then identified and examined for larger key and field sizes. Secondly, implementation methods are presented for another component of a cryptographic system, namely the hash functions developed in the recently concluded SHA-3 competition. Various designs from the three rounds of the NIST-run competition are implemented on FPGA, along with an interface that allows fair comparison of the different hash functions when operating in a standardised and constrained environment. Different methods of implementation for the designs are examined, and their subsequent performance is evaluated in terms of throughput, area and energy costs under various constraint metrics. Comparing many different implementation methods and algorithms is nontrivial; another aim of this thesis is therefore the development of generic interfaces, used both to reduce implementation and test time and to enable fair baseline comparisons of different algorithms operating in a standardised and constrained environment. Finally, a hardware-software co-design cryptographic architecture is presented. This architecture is capable of supporting multiple types of cryptographic algorithms and is described through an application performing public key cryptography, namely the Elliptic Curve Digital Signature Algorithm (ECDSA). The architecture makes use of the elliptic curve architecture and the hash functions described previously. These components, along with a random number generator, provide hardware acceleration for a MicroBlaze-based cryptographic system.
The trade-off between performance and flexibility is discussed using dedicated software and hardware-software co-design implementations of the elliptic curve point scalar multiplication block. Results are then presented in terms of the overall cryptographic system.
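To make the algorithmic trade-off concrete, here is a minimal Python sketch contrasting the basic Double-and-Add loop with a Montgomery-style ladder, the standard SPA-secure pattern; the toy curve, prime and base point are illustrative assumptions, far below the key and field sizes studied in the thesis.

```python
# Contrast Double-and-Add with a Montgomery-style ladder whose add/double
# sequence is identical for every key bit: the pattern behind SPA-secure
# scalar multiplication. Affine coordinates over the toy curve
# y^2 = x^3 + 2x + 3 mod 97; all parameters are illustrative only.

p, a, b = 97, 2, 3           # toy prime field and curve coefficients

def point_add(P, Q):
    """Group law on the curve; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def double_and_add(k, P):
    """Left-to-right scalar multiply; the extra add on '1' bits leaks
    the key through a simple power trace."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R)
        if bit == '1':
            R = point_add(R, P)
    return R

def montgomery_ladder(k, P):
    """One add and one double per bit, regardless of the bit's value."""
    R0, R1 = None, P
    for bit in bin(k)[2:]:
        if bit == '1':
            R0, R1 = point_add(R0, R1), point_add(R1, R1)
        else:
            R1, R0 = point_add(R0, R1), point_add(R0, R0)
    return R0

G = (3, 6)                   # a point on the toy curve
assert double_and_add(29, G) == montgomery_ladder(29, G)
```

Hardware designs of the kind described above typically replace the affine inversions (`pow(..., -1, p)`) with projective coordinates, which is one reason the thesis compares several coordinate systems.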

Relevance:

30.00%

Abstract:

For at least two millennia, and probably much longer, the traditional vehicle for communicating geographical information to end-users has been the map. With the advent of computers, the means of both producing and consuming maps have been radically transformed, while the inherent nature of the information product has also expanded and diversified rapidly. This has given rise in recent years to the new concept of geovisualisation (GVIS), which draws on the skills of the traditional cartographer but extends them into three spatial dimensions, and may also add temporality, photorealistic representations and/or interactivity. Demand for GVIS technologies and their applications has increased significantly in recent years, driven by the need to study complex geographical events, in particular their associated consequences, and to communicate the results of these studies to a diversity of audiences and stakeholder groups. GVIS involves data integration, multi-dimensional spatial display, advanced modelling techniques, dynamic design and development environments, and field-specific application needs. To meet these needs, GVIS tools should be both powerful and inherently usable, in order to facilitate their role in helping interpret and communicate geographic problems. However, no framework currently exists for ensuring this usability. The research presented here seeks to fill this gap by addressing the challenges of incorporating user requirements into GVIS tool design. It starts from the premise that usability in GVIS should be incorporated and implemented throughout the whole design and development process. To facilitate this, Subject Technology Matching (STM) is proposed as a new approach to assessing and interpreting user requirements. Based on STM, a new design framework called Usability Enhanced Coordination Design (UECD) is then presented, with the purpose of improving the overall usability of the design outputs. UECD places GVIS experts in a new key role in the design process, to form a more coordinated and integrated workflow and more focused, interactive usability testing. To prove the concept, these theoretical elements of the framework have been implemented in two test projects: one is the creation of a coastal inundation simulation for Whitegate, Cork, Ireland; the other is a flood mapping tool for Zhushan Town, Jiangsu, China. The two case studies successfully demonstrated the potential merits of the UECD approach when GVIS techniques are applied to geographic problem solving and decision making. The thesis delivers a comprehensive understanding of the development and challenges of GVIS technology, its usability concerns and the associated User-Centred Design (UCD) approaches; it explores the possibility of applying a UCD framework to GVIS design; it constructs a new theoretical design framework, UECD, which aims to make the whole design process usability-driven; and it develops the key concept of STM into a template set to improve the performance of a GVIS design. These key conceptual and procedural foundations can be built on by future research aimed at further refining and developing UECD as a useful design methodology for GVIS scholars and practitioners.

Relevance:

30.00%

Abstract:

In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media and e-government. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred metres) optical interconnects. Today, transceivers for these applications achieve up to 100Gb/s by multiplexing 10x 10Gb/s or 4x 25Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links that can support 400Gb/s up to 1Tb/s. The crucial challenge is to achieve this in the same footprint (the same transceiver module) and with similar power consumption as today's technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: indeed, a 1Tb/s transceiver would require the integration of 40 VCSELs (vertical cavity surface emitting laser diodes, widely used for short-reach optical interconnect), 40 photodiodes and electronics operating at 25Gb/s in the same module as today's 100Gb/s transceiver. Pushing the bit rate on such links beyond today's commercially available 100Gb/s per fibre will require new generations of VCSELs and their driver and receiver electronics. This work looks into a number of state-of-the-art technologies, investigates their performance constraints and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep submicron (65nm and 28nm) CMOS technology are explored, while maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis on the rising and falling edges of the signal, and bandwidth extension by inductive peaking and different local feedback techniques. These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format increases the throughput per individual channel, which helps to overcome the challenges mentioned above in realising 400Gb/s to 1Tb/s transceivers.
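A behavioural sketch can make the signal-processing side of this concrete. The Python/NumPy fragment below maps bit pairs onto PAM-4 levels and applies a simple 2-tap feed-forward pre-emphasis filter; the level spacing and tap weight are illustrative assumptions, standing in for circuits the thesis implements in 65nm and 28nm CMOS.

```python
# Behavioural sketch, not the thesis's circuit: map bit pairs onto four
# amplitude levels (PAM-4) and apply a 2-tap feed-forward pre-emphasis
# filter y[n] = x[n] - alpha * x[n-1]. Symbol transitions come out larger
# than flat runs, pre-compensating the channel's bandwidth roll-off.
import numpy as np

LEVELS = np.array([-3, -1, 1, 3])     # equally spaced PAM-4 amplitudes

def pam4_map(bits: np.ndarray) -> np.ndarray:
    """Two bits per symbol: doubles throughput per channel versus NRZ."""
    pairs = bits.reshape(-1, 2)
    return LEVELS[pairs[:, 0] * 2 + pairs[:, 1]]

def pre_emphasis(x: np.ndarray, alpha: float = 0.25) -> np.ndarray:
    """2-tap FFE: subtract alpha times the previous symbol, so repeated
    symbols are attenuated and transitions are relatively emphasised."""
    y = x.astype(float)
    y[1:] -= alpha * x[:-1]
    return y

bits = np.random.default_rng(0).integers(0, 2, size=32)
print(pre_emphasis(pam4_map(bits)))
```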

Relevance:

30.00%

Abstract:

A growing number of software development projects successfully exhibit a mix of agile and traditional software development methodologies. Many of these mixed methodologies are organization-specific and tailored to a specific project. Our objective in this research-in-progress paper is to develop an artifact that can guide the development of such a mixed methodology. Using control theory, we design a process model that provides theoretical guidance for building a portfolio of controls that can support the development of a mixed methodology for software development. Controls, embedded in methods, provide a generalizable and adaptable framework for project managers to develop their mixed methodology specific to the demands of the project. A research methodology is proposed to test the model. Finally, future directions and contributions are discussed.

Relevance:

30.00%

Abstract:

Introduction: Computer-Aided Design (CAD) and Computer-Aided Manufacture (CAM) have been developed to fabricate fixed dental restorations more accurately, faster and more cost-effectively than the conventional method. Two main methods exist in dental CAD/CAM technology: the subtractive and the additive. While the fitting accuracy of both methods has been explored, no study has yet compared the fabricated restoration (the CAM output) to its CAD in terms of accuracy. The aim of this study was to compare the output of various dental CAM routes to a single initial CAD and establish the accuracy of fabrication. The internal fit of the various CAM routes was also investigated. The null hypotheses tested were: 1) no significant differences are observed between the CAM output and the CAD, and 2) no significant differences are observed between the various CAM routes.
Methods: An aluminium master model of a standard premolar preparation was scanned with a contact dental scanner (Incise, Renishaw, UK). A single CAD was created on the scanned master model (InciseCAD software, V2.5.0.140, UK). Twenty copings were then fabricated by sending the single CAD to a multitude of CAM routes. The copings were grouped (n=5) as: laser-sintered CoCrMo (LS), 5-axis milled CoCrMo (M-CoCrMo), 3-axis milled zirconia (ZAx3) and 4-axis milled zirconia (ZAx4). All copings were micro-CT scanned (Phoenix X-Ray, Nanotom-S, Germany; power: 155kV, current: 60µA, 3600 projections) to produce 3-dimensional (3D) models. A novel methodology was created to superimpose the micro-CT scans on the CAD (GOM Inspect software, V7.5SR2, Germany) to indicate inaccuracies in manufacturing. Accuracy in terms of coping volume was explored. The distances from the surfaces of the micro-CT 3D models to the surfaces of the CAD model (CAD deviation) were investigated after creating surface colour deviation maps. Localised digital sections of the deviations (occlusal, axial and cervical) and selected focussed areas were then quantitatively measured using software (GOM Inspect software, Germany). A novel methodology was also explored to digitally align (Rhino software, V5, USA) the micro-CT scans with the master model to investigate internal fit. Fifty digital cross sections of the aligned scans were created, and point-to-point distances were measured at 5 levels on each cross section: Vertical Marginal Fit (VF), Absolute Marginal Fit (AM), Axio-margin Fit (AMF), Axial Fit (AF) and Occlusal Fit (OF).
Results: The volume measurements were ordered V(M-CoCrMo) (62.8 mm³) > V(ZAx3) (59.4 mm³) > V(CAD) (57 mm³) > V(ZAx4) (56.1 mm³) > V(LS) (52.5 mm³), and all differed significantly. CAD deviations were presented as areas of different colour. No significant differences were observed at the internal cervical aspect between any of the groups of copings. Significant differences were observed as follows:
1. … < M-CoCrMo: Internal Occlusal, Internal Axial and External Axial
2. ZAx3 > ZAx4: External Occlusal, External Cervical
3. ZAx3 < ZAx4: Internal Occlusal
4. M-CoCrMo > ZAx4: Internal Occlusal and Internal Axial
The mean values of AMF and AF were significantly different, with CAD > M-CoCrMo and CAD > ZAx4. Only the VF of M-CoCrMo was comparable with the CAD internal fit. All VF and AM values were within the clinically acceptable fit (120µm).
Conclusion: The investigated CAM methods reproduced the CAD accurately at the internal cervical aspect of the copings. However, localised deviations at the axial and occlusal aspects of the copings may suggest the need for modifications in these areas prior to fitting and veneering with porcelain. The CAM groups evaluated also showed different levels of internal fit, thus rejecting the null hypotheses. The novel non-destructive methodologies for CAD/CAM accuracy and internal fit testing presented in this thesis may be a useful evaluation tool for similar applications.
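The deviation measurement at the heart of this study reduces to nearest-neighbour distances between superimposed geometries. The Python/NumPy sketch below illustrates that computation on point clouds; it is a simplified stand-in for the GOM Inspect workflow, with made-up point counts and offsets.

```python
# Illustrative recreation of the CAD deviation measurement, not the GOM
# Inspect workflow: after the scan is superimposed on the CAD, each scan
# point's deviation is approximated by the distance to its nearest point
# on a densely sampled CAD surface. All demo values are hypothetical.
import numpy as np

def deviation_map(scan_pts: np.ndarray, cad_pts: np.ndarray) -> np.ndarray:
    """Per-point nearest-neighbour distances (inputs are N x 3, M x 3)."""
    # Brute-force pairwise distances; use a KD-tree for real mesh sizes.
    d = np.linalg.norm(scan_pts[:, None, :] - cad_pts[None, :, :], axis=2)
    return d.min(axis=1)

cad = np.random.default_rng(1).normal(size=(500, 3))   # stand-in CAD surface
scan = cad + np.array([0.05, 0.0, 0.0])                # coping offset by 0.05 mm
dev = deviation_map(scan, cad)
print(f"mean deviation {dev.mean():.3f} mm, max {dev.max():.3f} mm")
```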