12 results for Computer Hardware.
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
A computer model has been developed to optimize the performance of a 50kWp photovoltaic system which supplies electrical energy to a dairy farm at Fota Island in Cork Harbour. Optimization of the system involves maximising the efficiency and increasing the performance and reliability of each hardware unit. The model accepts horizontal insolation, ambient temperature, wind speed, wind direction and load demand as inputs. An optimization program uses the computer model to simulate the optimum operating conditions. From this analysis, criteria are established which are used to improve the photovoltaic system operation. This thesis describes the model concepts, the model implementation and the model verification procedures used during development. It also describes the techniques which are used during system optimization. The software, which is written in FORTRAN, is structured in modular units to provide logical and efficient programming. These modular units may also be used in the modelling and optimization of other photovoltaic systems.
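The thesis's simulation software is written in FORTRAN; purely as an illustrative sketch, the Python fragment below shows how one modular step of such a model (array output computed from the measured inputs) might be organised. All names, coefficients and the wind-cooling adjustment are assumptions for illustration and are not taken from the thesis.

    # Illustrative sketch only; coefficients and the wind-cooling term are assumed.
    def cell_temperature(ambient_c, insolation_w_m2, wind_speed_m_s):
        """Rough cell-temperature estimate; higher wind speed improves convective cooling."""
        noct = 45.0  # nominal operating cell temperature in deg C (assumed value)
        return ambient_c + (noct - 20.0) * insolation_w_m2 / 800.0 / (1.0 + 0.1 * wind_speed_m_s)

    def array_power_kw(insolation_w_m2, ambient_c, wind_speed_m_s,
                       rated_kwp=50.0, temp_coeff=-0.004):
        """DC output of the array, derated linearly with cell temperature above 25 deg C."""
        t_cell = cell_temperature(ambient_c, insolation_w_m2, wind_speed_m_s)
        return rated_kwp * (insolation_w_m2 / 1000.0) * (1.0 + temp_coeff * (t_cell - 25.0))

    # One simulated hour: 700 W/m2 insolation, 15 deg C ambient, 4 m/s wind.
    print(array_power_kw(700.0, 15.0, 4.0))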
Abstract:
With the rapid growth of the Internet and digital communications, the volume of sensitive electronic transactions being transferred and stored over and on insecure media has increased dramatically in recent years. The growing demand for cryptographic systems to secure this data, across a multitude of platforms, ranging from large servers to small mobile devices and smart cards, has necessitated research into low cost, flexible and secure solutions. As constraints on architectures such as area, speed and power become key factors in choosing a cryptosystem, methods for speeding up the development and evaluation process are necessary. This thesis investigates flexible hardware architectures for the main components of a cryptographic system. Dedicated hardware accelerators can provide significant performance improvements when compared to implementations on general purpose processors. Each of the proposed designs is analysed in terms of speed, area, power, energy and efficiency. Field Programmable Gate Arrays (FPGAs) are chosen as the development platform due to their fast development time and reconfigurable nature. Firstly, a reconfigurable architecture for performing elliptic curve point scalar multiplication on an FPGA is presented. Elliptic curve cryptography is one such method to secure data, offering similar security levels to traditional systems, such as RSA, but with smaller key sizes, translating into lower memory and bandwidth requirements. The architecture is implemented using different underlying algorithms and coordinates for dedicated Double-and-Add algorithms, twisted Edwards algorithms and SPA-secure algorithms, and its power consumption and energy on an FPGA are measured. Hardware implementation results for these new algorithms are compared against their software counterparts, and the best choices for minimum area-time and area-energy circuits are then identified and examined for larger key and field sizes. Secondly, implementation methods for another component of a cryptographic system, namely hash functions, developed in the recently concluded SHA-3 hash competition, are presented. Various designs from the three rounds of the NIST-run competition are implemented on FPGA, along with an interface to allow fair comparison of the different hash functions when operating in a standardised and constrained environment. Different methods of implementation for the designs and their subsequent performance are examined in terms of throughput, area and energy costs using various constraint metrics. Comparing many different implementation methods and algorithms is nontrivial. Another aim of this thesis is the development of generic interfaces used both to reduce implementation and test time and also to enable fair baseline comparisons of different algorithms when operating in a standardised and constrained environment. Finally, a hardware-software co-design cryptographic architecture is presented. This architecture is capable of supporting multiple types of cryptographic algorithms and is described through an application for performing public key cryptography, namely the Elliptic Curve Digital Signature Algorithm (ECDSA). This architecture makes use of the elliptic curve architecture and the hash functions described previously. These components, along with a random number generator, provide hardware acceleration for a MicroBlaze-based cryptographic system.
The trade-off between performance and flexibility is discussed using dedicated software and hardware-software co-design implementations of the elliptic curve point scalar multiplication block. Results are then presented in terms of the overall cryptographic system.
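The work above centres on elliptic curve point scalar multiplication in hardware. As a hedged software illustration of the basic double-and-add algorithm only (not the thesis's FPGA architecture, coordinate systems or SPA-secure variants), a toy Python sketch over an arbitrary small curve might look as follows; the curve parameters are deliberately tiny and insecure.

    # Left-to-right double-and-add scalar multiplication on a toy short-Weierstrass
    # curve y^2 = x^3 + ax + b over GF(p). Parameters are arbitrary toy values.
    p, a, b = 97, 2, 3
    INF = None                  # point at infinity

    def point_add(P, Q):
        if P is INF: return Q
        if Q is INF: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return INF
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def scalar_mult(k, P):
        """Double-and-add, scanning the scalar from its most significant bit."""
        R = INF
        for bit in bin(k)[2:]:
            R = point_add(R, R)          # double
            if bit == '1':
                R = point_add(R, P)      # add
        return R

    print(scalar_mult(20, (3, 6)))       # (3, 6) lies on y^2 = x^3 + 2x + 3 mod 97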
Abstract:
Two classes of techniques have been developed to whiten the quantization noise in digital delta-sigma modulators (DDSMs): deterministic and stochastic. In this two-part paper, a design methodology for reduced-complexity DDSMs is presented. The design methodology is based on error masking. Rules for selecting the word lengths of the stages in multistage architectures are presented. We show that the hardware requirement can be reduced by up to 20% compared with a conventional design, without sacrificing performance. Simulation and experimental results confirm theoretical predictions. Part I addresses MultistAge noise SHaping (MASH) DDSMs; Part II focuses on single-quantizer DDSMs.
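As a hedged behavioural illustration of the MASH architecture discussed in Part I, the Python sketch below models a MASH 1-1-1 DDSM built from three first-order accumulator stages with the usual noise-cancellation network. The word length and input value are arbitrary, and the paper's per-stage word-length selection rules are not reproduced.

    # Behavioural MASH 1-1-1 sketch; word length and input are illustrative only.
    def mash_111(x, n_bits=8, n_samples=256):
        """The quantisation noise of the final stage is shaped by (1 - z^-1)^2."""
        M = 1 << n_bits
        s1 = s2 = s3 = 0                    # stage accumulators
        c2_prev = c3_prev1 = c3_prev2 = 0   # delayed carries for the combination network
        out = []
        for _ in range(n_samples):
            s1 += x
            c1, s1 = s1 // M, s1 % M
            s2 += s1
            c2, s2 = s2 // M, s2 % M
            s3 += s2
            c3, s3 = s3 // M, s3 % M
            # noise-cancellation network: y = c1 + (1 - z^-1)c2 + (1 - z^-1)^2 c3
            y = c1 + (c2 - c2_prev) + (c3 - 2 * c3_prev1 + c3_prev2)
            out.append(y)
            c2_prev = c2
            c3_prev1, c3_prev2 = c3, c3_prev1
        return out

    seq = mash_111(x=100)
    print(sum(seq) / len(seq))   # long-run average approaches x / 2**n_bits = 100/256 ~ 0.39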
Abstract:
This work considers the effect of hardware constraints that typically arise in practical power-aware wireless sensor network systems. A rigorous methodology is presented that quantifies the effect of output power limit and quantization constraints on bit error rate performance. The approach uses a novel, intuitively appealing means of addressing the output power constraint, wherein the attendant saturation block is mapped from the output of the plant to its input and compensation is then achieved using a robust anti-windup scheme. A priori levels of system performance are attained using a quantitative feedback theory approach on the initial, linear stage of the design paradigm. This hybrid design is assessed experimentally using a fully compliant 802.15.4 testbed where mobility is introduced through the use of autonomous robots. A benchmark comparison between the new approach and a number of existing strategies is also presented.
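Purely to illustrate the type of constraint being addressed (an output power limit modelled as a saturation, compensated by an anti-windup term), the toy Python sketch below uses an invented first-order plant, gains and limits; it does not reproduce the paper's quantitative feedback theory design or its 802.15.4 testbed.

    # Toy saturation + back-calculation anti-windup loop; all values are hypothetical.
    def simulate(setpoint=1.0, kp=2.0, ki=1.0, kaw=0.5, u_max=0.3, dt=0.01, steps=2000):
        y, integ = 0.0, 0.0
        for _ in range(steps):
            e = setpoint - y
            u_unsat = kp * e + integ
            u = max(-u_max, min(u_max, u_unsat))          # output power limit (saturation)
            integ += dt * (ki * e + kaw * (u - u_unsat))  # back-calculation anti-windup term
            y += dt * 5.0 * (-y + u)                      # invented first-order plant, tau = 0.2 s
        return y

    # The setpoint is unreachable under the power limit, so the output settles near the
    # saturation level instead of winding up the integrator.
    print(simulate())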
Abstract:
Buried heat sources can be investigated by examining thermal infrared images and comparing these with the results of theoretical models which predict the thermal anomaly a given heat source may generate. Key factors influencing surface temperature include the geometry and temperature of the heat source, the surface meteorological environment, and the thermal conductivity and anisotropy of the rock. In general, a geothermal heat flux of greater than 2% of solar insolation is required to produce a detectable thermal anomaly in a thermal infrared image. A heat source of, for example, 2-300 K greater than the average surface temperature must be at a depth shallower than 50 m for the anomaly to be detectable in a thermal infrared image, for typical terrestrial conditions. Atmospheric factors are of critical importance. While the mean atmospheric temperature has little significance, convection is a dominant factor and can act to swamp the thermal signature entirely. Given a steady state heat source that produces a detectable thermal anomaly, it is possible to loosely constrain the physical properties of the heat source and surrounding rock, using the surface thermal anomaly as a basis. The success of this technique is highly dependent on the degree to which the physical properties of the host rock are known. Important parameters include the surface thermal properties and thermal conductivity of the rock. Modelling of transient thermal situations was carried out to assess the effect of time-dependent thermal fluxes. One-dimensional finite element models can be readily and accurately applied to the investigation of diurnal heat flow, as with thermal inertia models. Diurnal thermal models of environments on Earth, the Moon and Mars were constructed using finite elements and found to be consistent with published measurements. The heat flow from an injection of hot lava into a near-surface lava tube was also considered. While this approach was useful for study and for long-term monitoring in inhospitable areas, it was found to have little hazard-warning utility, as the time taken for the thermal energy to propagate to the surface in dry rock (several months) is very long. The resolution of the thermal infrared imaging system is an important factor. Presently available satellite-based systems such as Landsat (resolution of 120 m) are inadequate for detailed study of geothermal anomalies. Airborne systems, such as TIMS (variable resolution of 3-6 m), are much more useful for discriminating small buried heat sources. Planned improvements in the resolution of satellite-based systems will broaden the potential for application of the techniques developed in this thesis. It is important to note, however, that adequate spatial resolution is a necessary but not sufficient condition for successful application of these techniques.
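The thesis uses one-dimensional finite element models for diurnal heat flow; as a rough illustration of the same idea only, the explicit finite-difference sketch below applies a sinusoidal diurnal surface temperature to a column of rock with guessed, basalt-like thermal properties. All values are assumptions for illustration and do not come from the thesis.

    # 1-D explicit finite-difference diurnal heat flow; properties and forcing are assumed.
    import math

    def diurnal_profile(depth_m=2.0, n_cells=100, days=5.0, kappa=1e-6,
                        t_mean=280.0, t_amp=20.0):
        """kappa: thermal diffusivity (m^2/s); surface temperature follows a 24 h sine."""
        dz = depth_m / n_cells
        dt = 0.4 * dz * dz / kappa            # stable explicit time step
        T = [t_mean] * (n_cells + 1)
        steps = int(days * 86400.0 / dt)
        for s in range(steps):
            t = s * dt
            T[0] = t_mean + t_amp * math.sin(2.0 * math.pi * t / 86400.0)  # surface boundary
            T_new = T[:]
            for i in range(1, n_cells):
                T_new[i] = T[i] + kappa * dt / dz**2 * (T[i+1] - 2*T[i] + T[i-1])
            T_new[n_cells] = T_new[n_cells - 1]   # insulated lower boundary
            T = T_new
        return T

    profile = diurnal_profile()
    print(profile[0], profile[50], profile[100])  # the diurnal wave decays rapidly with depth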
Abstract:
The technological role of handheld devices is fundamentally changing. Portable computers were traditionally application specific. They were designed and optimised to deliver a specific task. However, it is now commonly acknowledged that future handheld devices need to be multi-functional and need to be capable of executing a range of high-performance applications. This thesis coins the term pervasive handheld computing systems to refer to this type of mobile device. Portable computers are faced with a number of constraints in trying to meet these objectives. They are physically constrained by their size, their computational power, their memory resources, their power usage, and their networking ability. These constraints challenge pervasive handheld computing systems in achieving their multi-functional and high-performance requirements. This thesis proposes a two-pronged methodology to enable pervasive handheld computing systems to meet their future objectives. The methodology is a fusion of two independent and yet complementary concepts. The first step utilises reconfigurable technology to enhance the physical hardware resources within the environment of a handheld device. This approach recognises that reconfigurable computing has the potential to dynamically increase the system functionality and versatility of a handheld device without major loss in performance. The second step of the methodology incorporates agent-based middleware protocols to support handheld devices in effectively managing and utilising these reconfigurable hardware resources within their environment. The thesis asserts that the combined characteristics of reconfigurable computing and agent technology can meet the objectives of pervasive handheld computing systems.
Abstract:
The topic of this thesis is impulsivity. The meaning and measurement of impulse control is explored, with a particular focus on forensic settings. Impulsivity is central to many areas of psychology; it is one of the most common diagnostic criteria of mental disorders and is fundamental to the understanding of forensic personalities. Despite this widespread importance there is little agreement as to the definition or structure of impulsivity, and its measurement is fraught with difficulty owing to a reliance on self-report methods. This research aims to address this problem by investigating the viability of using simple computerised cognitive performance tasks as complementary components of a multi-method assessment strategy for impulse control. Ultimately, the usefulness of this measurement strategy for a forensic sample is assessed. Impulsivity is found to be a multifaceted construct comprised of a constellation of distinct sub-dimensions. Computerised cognitive performance tasks are valid and reliable measures that can assess impulsivity at a neuronal level. Self-report and performance task methods assess distinct components of impulse control and, for the optimal assessment of impulse control, a multi-method battery of self-report and performance task measures is advocated. Such a battery is shown to have demonstrated utility in a forensic sample, and recommendations for forensic assessment in the Irish context are discussed.
Abstract:
Traditionally, attacks on cryptographic algorithms looked for mathematical weaknesses in the underlying structure of a cipher. Side-channel attacks, however, look to extract secret key information based on the leakage from the device on which the cipher is implemented, be it smart card, microprocessor, dedicated hardware or personal computer. Attacks based on the power consumption, electromagnetic emanations and execution time have all been practically demonstrated on a range of devices to reveal partial secret-key information from which the full key can be reconstructed. The focus of this thesis is power analysis, more specifically a class of attacks known as profiling attacks. These attacks assume a potential attacker has access to, or can control, an identical device to that which is under attack, which allows them to profile the power consumption of operations or data flow during encryption. This assumes a stronger adversary than traditional non-profiling attacks such as differential or correlation power analysis; however, the ability to model a device allows templates to be used post-profiling to extract key information from many different target devices using the power consumption of very few encryptions. This allows an adversary to overcome protocols intended to prevent secret key recovery by restricting the number of available traces. In this thesis a detailed investigation of template attacks is conducted, examining how the selection of various attack parameters practically affects the efficiency of secret key recovery, as well as the underlying assumption of profiling attacks, namely that the power consumption of one device can be used to extract secret keys from another. Trace-only attacks, where the corresponding plaintext or ciphertext data is unavailable, are then investigated against both symmetric and asymmetric algorithms with the goal of key recovery from a single trace. This allows an adversary to bypass many of the currently proposed countermeasures, particularly in the asymmetric domain. An investigation into machine-learning methods for side-channel analysis as an alternative to template or stochastic methods is also conducted, with support vector machines, logistic regression and neural networks investigated from a side-channel viewpoint. Both binary and multi-class classification attack scenarios are examined in order to explore the relative strengths of each algorithm. Finally, these machine-learning based alternatives are empirically compared with template attacks, and their respective merits are examined with regard to attack efficiency.
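To illustrate the profiling idea behind template attacks, the toy Python sketch below builds one Gaussian template per Hamming-weight class on a controlled "device" and then classifies samples from a target trace by maximum likelihood. Real attacks use multivariate templates over several points of interest; the leakage model and all numbers here are invented for illustration.

    # Univariate Gaussian templates over a toy Hamming-weight leakage model.
    import math, random, statistics

    random.seed(1)

    def leak(value, noise=0.5):
        """Toy leakage: one power sample proportional to the Hamming weight, plus noise."""
        return bin(value).count("1") + random.gauss(0.0, noise)

    # Profiling phase (on the controlled device): one template per Hamming-weight class.
    templates = {}
    for hw in range(9):
        samples = [hw + random.gauss(0.0, 0.5) for _ in range(200)]
        templates[hw] = (statistics.mean(samples), statistics.stdev(samples))

    def classify(trace_samples):
        """Attack phase: choose the class whose template best explains the observed samples."""
        def log_likelihood(hw):
            mu, sigma = templates[hw]
            return sum(-((x - mu) ** 2) / (2 * sigma ** 2) - math.log(sigma)
                       for x in trace_samples)
        return max(templates, key=log_likelihood)

    secret = 0x3C                                   # Hamming weight 4
    attack_traces = [leak(secret) for _ in range(25)]
    print(classify(attack_traces))                  # expected: 4, narrowing the key space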
Abstract:
The retrofitting of existing buildings for decreased energy usage, through increased energy efficiency, and for minimum carbon dioxide emissions throughout their remaining lifetime is a major area of research. This research area requires development to provide building professionals with more efficient tools for determining building retrofit solutions. The overarching objective of this research is to develop such a tool through the implementation of a prescribed methodology. This has been achieved in three distinct steps. Firstly, the concept of using the degree-days modelling method as an adequate basis for retrofit decisions was analysed, and the results illustrated that the concept had merit. Secondly, the combination of the degree-days modelling method and the Genetic Algorithms optimisation method was investigated as a method of determining optimal thermal energy retrofit solutions. Thirdly, the degree-days modelling method and the Genetic Algorithms optimisation method were packaged into a building retrofit decision-support tool named BRaSS (Building Retrofit Support Software). The results demonstrate clearly that fundamental building information, simplified occupancy profiles and weather data used in a static simulation modelling method are a sufficient and adequate basis for retrofitting decisions. The results also show that basing retrofit decisions on energy analysis results is the best means to guide a retrofit project and to achieve results which are optimal for a particular building. The results also indicate that the building retrofit decision-support tool, BRaSS, is an effective method to determine optimum thermal energy retrofit solutions.
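As a hedged sketch of the kind of degree-day calculation a tool like BRaSS could be built on (the actual BRaSS implementation is not described here), the Python fragment below accumulates heating degree-days from mean temperatures and scales them by a building's fabric heat-loss coefficient before and after a hypothetical retrofit. All temperatures, U-values and areas are invented for illustration.

    # Degree-day heating estimate; every input value below is hypothetical.
    def heating_degree_days(daily_mean_temps_c, base_c=15.5):
        return sum(max(0.0, base_c - t) for t in daily_mean_temps_c)

    def annual_heating_kwh(degree_days, elements):
        """elements: list of (U-value W/m2K, area m2) for the fabric being retrofitted."""
        heat_loss_w_per_k = sum(u * area for u, area in elements)
        return heat_loss_w_per_k * degree_days * 24.0 / 1000.0   # W/K * K*day * h/day -> kWh

    temps = [4.0, 6.5, 9.0, 12.0, 15.0, 17.0, 18.0, 16.5, 13.5, 10.0, 6.5, 4.5]  # monthly means
    hdd = heating_degree_days(t for t in temps for _ in range(30))               # crude 30-day months
    before = annual_heating_kwh(hdd, [(2.1, 120.0), (2.8, 18.0)])   # uninsulated wall + single glazing
    after  = annual_heating_kwh(hdd, [(0.27, 120.0), (1.4, 18.0)])  # retrofitted wall + double glazing
    print(round(before), round(after))   # the kWh saving is the signal a GA optimiser could rank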
Abstract:
The influence of communication technology on group decision-making has been examined in many studies, but the findings are inconsistent. Some studies showed a positive effect on decision quality, while other studies have shown that communication technology makes decisions worse. One possible explanation for these different findings could be the use of different Group Decision Support Systems (GDSS) in these studies, with some GDSS fitting the given task better than others and offering different sets of functions. This paper outlines an approach using an information system designed solely to examine the effect of (1) anonymity, (2) voting and (3) blind picking on decision quality, discussion quality and perceived quality of information.