6 results for Error in essence

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance:

90.00%

Publisher:

Abstract:

Ecosystem goods and services provided by estuarine and near-coastal regions are increasingly recognised for their immense value, as is the biodiversity of these areas, and these near-coastal communities have also been identified as sentinels of climate change. The population structure and reproductive biology of two bivalve molluscs, Cerastoderma edule and Mytilus edulis, were assessed at two study sites over a 16-month study period. Following an anomalously harsh winter, an advancement of spawning time was observed in both species. Throughout Ireland and Europe the cockle has experienced mass surfacings in geographically distinct regions, and a concurrent study of cockles was undertaken to explore this phenomenon. Surfaced and buried cockles were collected on a monthly basis and their health compared. Age was highlighted as a source of variation between dying and healthy animals, with a parasite threshold possibly being reached around age three. Local factors dominated when examining the cause of surfacing at each site. The health of mussels was also explored on a temporal and seasonal basis in an attempt to assess what constitutes a healthy organism. In essence, external drivers can tip the balance between “acceptable” levels of infection, where the mussel can still function physiologically, and “unacceptable” levels, where the prevalence and intensity of infection can result in physiological impairment at the individual and population level. Synecological studies of intertidal ecosystems are lacking, so all bivalves encountered during the sampling were assessed in terms of population structure, reproduction, and health. It became clear that some parasites may specialise on one host species while others are less specific in their host choice. Furthermore, the population genetics of the cockle, its parasite Meiogymnophallus minutus, and its hyperparasite Unikaryon legeri were also examined. A single nucleotide polymorphism was detected upon comparison of samples from Ireland and Morocco.

Relevance:

90.00%

Publisher:

Abstract:

Popular medieval English romances were composed and received within the social consciousness of a distinctly patriarchal culture. This study examines the way in which the dynamic of these texts is significantly influenced by the consequences of female endeavour, in the context of an autonomous feminine presence in both the real and imagined worlds of medieval England, and the authority with which this is presented in various narratives, with a particular focus on Sir Thomas Malory’s Morte Darthur. Chapter One of this study establishes the social and economic positioning of the female in fifteenth-century England, and her capacity for literary engagement; I will then apply this model of female autonomy and authority to a wider discussion of texts contemporary with Malory in Chapters Two and Three, in anticipation of a more detailed study of Le Morte Darthur in Chapters Four and Five. My research explores the female presence and influence in these texts according to certain types: namely the lover, the victim, the ruler, and the temptress. In the case of Malory, the crux of my observations centres on the paradox of power found in perceived vulnerability: women in this patriarchal culture are presented as vulnerable and in need of protection, while simultaneously posing a significant threat to chivalric society by manipulating this apparent fragility, to the detriment of the chivalric knight. In this sense, women can be perceived as architects of the romance world, while simultaneously acting as its saboteurs. In essence, this study offers an innovative interpretation of female autonomy and authority in medieval romance, presenting an exploration of the physical, intellectual, and emotional placement of women in both the historical and literary worlds of fifteenth-century England, while examining the implications of female conduct on Malory’s Arthurian society.

Relevance:

90.00%

Publisher:

Abstract:

Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores to run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial in satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is power estimation. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of a specific software application is key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code that repeat during execution, and then building the power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup in comparison to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during its execution. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders of magnitude speedup over the simulation-based method.
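To illustrate the instruction-counting idea behind the first approach, the sketch below estimates ALU energy purely from an instruction trace, relying on the data independence that the thesis attributes to ACSL circuits. The per-instruction energy values, the instruction names and the function estimate_alu_energy are illustrative assumptions for exposition, not measured figures or code from the thesis.

```python
# Hedged sketch: estimating ALU energy from an instruction trace, assuming
# (as argued for ACSL circuits) that per-instruction ALU energy is independent
# of the operand data. All numbers and instruction names are illustrative
# placeholders, not measured values from the thesis.

# Hypothetical per-instruction ALU energy costs in nanojoules.
ALU_ENERGY_NJ = {
    "ADD": 0.12,
    "SUB": 0.12,
    "ANL": 0.09,   # 8051 logical AND
    "ORL": 0.09,   # 8051 logical OR
    "XRL": 0.09,   # 8051 logical XOR
}

def estimate_alu_energy(instruction_trace):
    """Sum the (data-independent) energy of every ALU-related instruction.

    Non-ALU instructions contribute nothing to this particular estimate.
    """
    return sum(ALU_ENERGY_NJ.get(op, 0.0) for op in instruction_trace)

# Example: a short trace extracted from a compiled 8051 program.
trace = ["MOV", "ADD", "ADD", "ANL", "MOV", "SUB"]
print(f"Estimated ALU energy: {estimate_alu_energy(trace):.2f} nJ")
```

Because each instruction's cost is a constant, the estimate requires only a static or traced instruction count rather than a cycle-accurate power simulation, which is where the speedup over simulation-based methods comes from.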

Relevance:

80.00%

Publisher:

Abstract:

Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) was developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this property is the key to systematic timing. Based on original MOQA research, we discuss the design and implementation of a new domain-specific scripting language built on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The notion of a labelled partial order (LPO) is the basic data type in the language. The programmer uses built-in MOQA operations together with restricted control-flow statements to design MOQA programs. The MOQA language is formally specified, both syntactically and semantically, in this thesis. A practical language interpreter implementation is provided and discussed. By analysing new algorithms and data-restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of other domains besides average-case analysis, showing the strong connection between MOQA and parallel computing, reversible computing and data entropy analysis.
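As a purely illustrative aside, the sketch below shows one way a labelled partial order, the basic data type named above, might be represented and checked. It is an assumption for exposition only; it is not the MOQA language's actual LPO implementation and it does not implement any of MOQA's randomness-preserving operations.

```python
# Toy representation of a labelled partial order (LPO): a set of labelled
# nodes plus an order relation, with a check of the partial-order axioms.
from itertools import product

class LPO:
    def __init__(self, labels, order_pairs):
        # labels: dict node -> label; order_pairs: set of (a, b) meaning a <= b
        self.labels = dict(labels)
        self.le = set(order_pairs) | {(n, n) for n in labels}  # reflexive closure

    def is_partial_order(self):
        nodes = self.labels.keys()
        # Antisymmetry: a <= b and b <= a implies a == b.
        antisym = all(not (a != b and (a, b) in self.le and (b, a) in self.le)
                      for a, b in product(nodes, repeat=2))
        # Transitivity: a <= b and b <= c implies a <= c.
        trans = all((a, c) in self.le
                    for a, b in product(nodes, repeat=2) if (a, b) in self.le
                    for c in nodes if (b, c) in self.le)
        return antisym and trans

# A three-node LPO shaped like a "V": x <= z and y <= z.
lpo = LPO({"x": 3, "y": 7, "z": 9}, {("x", "z"), ("y", "z")})
print(lpo.is_partial_order())  # True
```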

Relevance:

80.00%

Publisher:

Abstract:

Comfort is, in essence, satisfaction with the environment; with respect to the indoor environment, it is primarily satisfaction with the thermal conditions and air quality. Improving comfort has social, health and economic benefits, and is more financially significant than any other building cost. Despite this, comfort is not strictly managed throughout the building life-cycle. This is mainly due to the lack of an appropriate system to adequately manage comfort knowledge through the construction process into operation. Previous proposals to improve knowledge management have not been successfully adopted by the construction industry. To address this, the BabySteps approach was devised. BabySteps is an approach, proposed by this research, which states that for an innovation to be adopted by the industry it must be implementable through a number of small changes. This research proposes that improving the management of comfort knowledge will improve comfort. ComMet is a new methodology, proposed by this research, that manages comfort knowledge. It enables comfort knowledge to be captured, stored and accessed throughout the building life-cycle, allowing it to be re-used in future stages of the building project and in future projects. It does this using the following components. Comfort Performances are simplified numerical representations of the comfort of the indoor environment; they quantify the comfort at each stage of the building life-cycle using standard comfort metrics. Comfort Ratings are a means of classifying the comfort conditions of the indoor environment according to an appropriate standard; they are generated by comparing different Comfort Performances, and provide additional information about the comfort conditions of the indoor environment that is not readily determined from the individual Comfort Performances. The Comfort History is a continuous descriptive record of the comfort throughout the project, with a focus on documenting the items and activities, proposed and implemented, which could potentially affect comfort; each aspect of the Comfort History is linked to the relevant comfort entity it references. These three components create a comprehensive record of the comfort throughout the building life-cycle. They are stored and made available in a common format in a central location, which allows them to be re-used ad infinitum. The LCMS system was developed to implement the ComMet methodology. It uses current and emerging technologies to capture, store and allow easy access to comfort knowledge as specified by ComMet. LCMS is an IT system that combines the following six components: Building Standards; Modelling & Simulation; Physical Measurement through the specially developed Egg-Whisk (Wireless Sensor) Network; Data Manipulation; Information Recording; and Knowledge Storage and Access. Results from a test-case application of the LCMS system - an existing office room at a research facility - highlighted that while some aspects of comfort were being maintained, the building’s environment was not in compliance with the acceptable levels stipulated by the relevant building standards. The implementation of ComMet, through LCMS, demonstrates how comfort, typically considered only during early design, can be measured and managed through systematic application of the methodology as a means of ensuring a healthy internal environment in the building.
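The sketch below illustrates the relationship between a Comfort Performance and a Comfort Rating described above. The chosen metric (mean operative temperature), the setpoint and the category thresholds are illustrative assumptions loosely modelled on the category bands found in indoor-environment standards; they are not values taken from the thesis or from LCMS.

```python
# Hedged sketch of the Comfort Performance / Comfort Rating idea. All
# thresholds and names are illustrative assumptions, not LCMS code.

def comfort_performance(readings):
    """A simplified numerical Comfort Performance: mean operative temperature."""
    return sum(readings) / len(readings)

def comfort_rating(performance, setpoint=22.0):
    """Classify a Comfort Performance by its deviation from a design setpoint."""
    deviation = abs(performance - setpoint)
    if deviation <= 1.0:
        return "Category I"
    if deviation <= 2.0:
        return "Category II"
    if deviation <= 3.0:
        return "Category III"
    return "Outside acceptable range"

# Example: hourly operative temperatures logged by a wireless sensor node.
measured = [21.4, 22.1, 23.0, 23.8, 24.2]
perf = comfort_performance(measured)
print(perf, comfort_rating(perf))
```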

Relevance:

80.00%

Publisher:

Abstract:

New compensation methods are presented that can greatly reduce the slit errors (i.e. transition location errors) and interval errors induced by non-idealities in square-wave optical incremental encoders. An M/T-type, constant sample-time digital tachometer (CSDT) is selected for measuring the velocity of the sensor drives. Using this data, three encoder compensation techniques (two pseudoinverse-based methods and an iterative method) are presented that improve velocity measurement accuracy. The methods do not require precise knowledge of shaft velocity. During the initial learning stage of the compensation algorithm (possibly performed in situ), slit errors/interval errors are calculated through pseudoinverse-based solutions of simple approximate linear equations, which can provide fast solutions, or through an iterative method that requires very little memory storage. Subsequent operation of the motion system uses the adjusted slit positions for more accurate velocity calculation. In the theoretical analysis of the compensation of encoder errors, error sources such as random electrical noise and error in the estimated reference velocity are considered. Initially, the proposed learning compensation techniques are validated by implementing the algorithms in MATLAB, showing a 95% to 99% improvement in velocity measurement. However, it is also observed that the efficiency of the algorithm decreases with a higher presence of non-repetitive random noise and/or with errors in the reference velocity calculations. The performance improvement in velocity measurement is also demonstrated experimentally using motor-drive systems, each of which includes a field-programmable gate array (FPGA) for CSDT counting/timing purposes and a digital signal processor (DSP). Results from open-loop velocity measurement and closed-loop servo-control applications, on three optical incremental square-wave encoders and two motor drives, are compiled. While implementing these algorithms experimentally on different drives (with and without a flywheel) and on encoders of different resolutions, slit error reductions of 60% to 86% are obtained (typically approximately 80%).
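As a rough illustration of the pseudoinverse-based idea, the sketch below estimates per-slit interval errors by least squares from measured transition times, assuming the shaft speed is approximately constant within each revolution. The function name, the sample data and the simple design matrix are assumptions for exposition; the thesis' actual approximate linear equations and learning procedure are not reproduced here.

```python
# Illustrative least-squares (pseudoinverse) estimate of per-slit interval
# errors for a square-wave incremental encoder. All symbols and data are
# hypothetical; this is not the thesis' exact algorithm.
import numpy as np

def estimate_interval_errors(interval_times, slits_per_rev):
    """interval_times: flat array of measured times between successive
    transitions, covering an integer number of revolutions."""
    t = np.asarray(interval_times, dtype=float).reshape(-1, slits_per_rev)
    # Within each revolution, the fraction of the revolution time spent in
    # slit i approximates that slit's angular width relative to nominal.
    fractions = t / t.sum(axis=1, keepdims=True) * slits_per_rev
    # Least-squares fit: one unknown width per slit, observed once per
    # revolution. With this simple design matrix the solution reduces to the
    # per-slit mean, but np.linalg.lstsq shows the pseudoinverse formulation.
    n_revs = fractions.shape[0]
    A = np.tile(np.eye(slits_per_rev), (n_revs, 1))
    widths, *_ = np.linalg.lstsq(A, fractions.ravel(), rcond=None)
    return widths - 1.0   # deviation of each slit from its nominal width

# Example with a 4-slit encoder, slit 2 slightly wide, over 3 revolutions.
times = [1.0, 1.1, 0.95, 0.95,  1.01, 1.12, 0.94, 0.96,  0.99, 1.09, 0.96, 0.94]
print(estimate_interval_errors(times, 4))
```

Once the deviations are learned, subsequent velocity calculations can divide each measured interval by the corrected slit width instead of the nominal one, mirroring how the thesis reuses adjusted slit positions during normal operation.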