5 results for average gains
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
The purpose of this preliminary study is to identify signs of fatigue in specific muscle groups that directly influence accuracy in professional darts. Electromyography (EMG) sensors are employed to monitor the electrical activity produced by skeletal muscles of the trunk and upper limb during the throw. The Flexor Pollicis Brevis muscle, which controls the critical release action of the throw, is noted to show signs of fatigue. This is accompanied by an increase in mean integral EMG amplitude for a number of other throw-related muscles, indicating an attempt to maintain a constant applied throwing force. A strong correlation is shown to exist between average score and the decrease in mean integral EMG amplitude for the Flexor Pollicis Brevis.
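The underlying measure can be made concrete. The sketch below is a minimal illustration, not the study's actual pipeline: it computes a windowed integrated EMG (iEMG) amplitude and a Pearson correlation against average score, with the signal model, sampling rate and per-throw values all invented for the example.

```python
import numpy as np

def mean_integrated_emg(signal, fs, window_s=1.0):
    """Integrated EMG (iEMG) per window: rectify the signal, then integrate
    over fixed-length windows (rectangular rule), one value per window."""
    n = int(window_s * fs)
    k = len(signal) // n
    rectified = np.abs(np.asarray(signal)[:k * n]).reshape(k, n)
    return rectified.sum(axis=1) / fs

# Hypothetical per-throw data: a declining iEMG amplitude for the
# Flexor Pollicis Brevis alongside a declining average score.
rng = np.random.default_rng(0)
iemg  = 0.90 - 0.02 * np.arange(20) + rng.normal(0, 0.01, 20)
score = 55.0 - 0.80 * np.arange(20) + rng.normal(0, 1.00, 20)

r = np.corrcoef(iemg, score)[0, 1]  # Pearson correlation coefficient
print(f"correlation between iEMG amplitude and average score: r = {r:.2f}")
```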
Abstract:
Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) is developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this property is the key to systematic timing. Based on original MOQA research, we discuss the design and implementation of a new domain-specific scripting language built on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The labelled partial order (LPO) is the basic data type of the language. The programmer uses built-in MOQA operations together with restricted control-flow statements to design MOQA programs. The MOQA language is formally specified, both syntactically and semantically, in this thesis, and a practical language interpreter implementation is provided and discussed. By analysing new algorithms and data-restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of domains beyond average-case analysis, showing strong connections between MOQA and parallel computing, reversible computing and data entropy analysis.
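The LPO data type and the "random structure" it gives rise to can be illustrated concretely. The sketch below is a minimal interpretation under stated assumptions, not the MOQA implementation: it represents an LPO as nodes with direct-predecessor edges and enumerates the order-consistent labelings, the set over which a random structure's uniform distribution is defined. The class and method names are hypothetical.

```python
from itertools import permutations

class LPO:
    """A labelled partial order: nodes plus a strict order relation
    given as direct predecessor edges (a DAG)."""
    def __init__(self, nodes, edges):
        self.nodes = list(nodes)
        self.edges = set(edges)  # (a, b) means label(a) < label(b)

    def consistent_labelings(self, labels):
        """All assignments of distinct labels that respect the order.
        A random structure, in the MOQA sense, is the uniform
        distribution over exactly this set."""
        for perm in permutations(labels):
            lab = dict(zip(self.nodes, perm))
            if all(lab[a] < lab[b] for a, b in self.edges):
                yield lab

# A three-node V-shape: the root must hold the smallest label.
v = LPO(nodes="abc", edges=[("a", "b"), ("a", "c")])
print(list(v.consistent_labelings([1, 2, 3])))
# [{'a': 1, 'b': 2, 'c': 3}, {'a': 1, 'b': 3, 'c': 2}]
```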
Abstract:
This work considers the static calculation of a program’s average-case time. The number of systems that currently tackle this research problem is quite small, owing to the difficulties inherent in average-case analysis. While each of these systems makes a pertinent contribution, and each is discussed individually in this work, only one of them forms the basis of this research: MOQA. The MOQA system consists of the MOQA language and the MOQA static analysis tool. Its technique for statically determining average-case behaviour centres on maintaining strict control over both the data structure type and the labelling distribution. This research develops and evaluates the MOQA language implementation and adds to the functions already available in the language. Furthermore, the theory that backs MOQA is generalised, and the range of data structures for which the MOQA static analysis tool can determine average-case behaviour is increased. Some of the MOQA applications and extensions suggested in other works are also examined here: for example, the accuracy of classifying the MOQA language as reversible is investigated, along with the feasibility of incorporating duplicate labels into the MOQA theory. Finally, the analyses carried out during the course of this research reveal some of MOQA’s strengths and weaknesses. This thesis aims to be pragmatic when evaluating the current MOQA theory, the advancements set forth in the following work, and the benefits of MOQA compared to similar systems. Succinctly, this work’s significant expansion of the MOQA theory is accompanied by a realistic assessment of MOQA’s accomplishments and a serious deliberation of the opportunities available to MOQA in the future.
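The reversibility question can be illustrated in miniature. The sketch below is an illustrative analogy rather than MOQA's actual operations: a hypothetical delete-maximum step on a list of labels is irreversible on its own, but recording the deleted position, in the spirit of reversible computing, makes it invertible.

```python
def delete_max(labels):
    """Forward step: remove the largest label. Returning the rest alone is
    lossy; also returning the position makes the step invertible."""
    i = max(range(len(labels)), key=labels.__getitem__)
    return labels[:i] + labels[i + 1:], i, labels[i]

def undo_delete_max(rest, i, value):
    """Inverse step: reinsert the removed label at its recorded position."""
    return rest[:i] + [value] + rest[i:]

xs = [2, 7, 3, 5]
rest, i, v = delete_max(xs)               # ([2, 3, 5], 1, 7)
assert undo_delete_max(rest, i, v) == xs  # the recorded index restores xs
```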
Abstract:
This thesis investigates the optimisation of Coarse-Fine (CF) spectrum sensing architectures under a distribution of SNRs for Dynamic Spectrum Access (DSA). Three detector architectures are investigated: the Coarse-Sorting Fine Detector (CSFD), the Coarse-Deciding Fine Detector (CDFD) and the Hybrid Coarse-Fine Detector (HCFD). To date, the majority of the work on coarse-fine spectrum sensing for cognitive radio has focused on a single value of the SNR. This approach overlooks the key advantage that CF sensing has to offer, namely that high-powered signals can be detected easily without extra signal processing. By considering a range of SNR values, the detector can be optimised more effectively and greater performance gains realised. This work considers the optimisation of CF spectrum sensing schemes in which security and performance are treated separately. Instead of optimising system performance at a single, constant, low SNR value, the system is optimised for the average operating conditions; security is still provided, in that the safety specifications are met at low SNR values. By decoupling security and performance, the system’s average performance increases whilst licensed users remain protected from harmful interference. The architectures considered in this thesis are investigated in theory, in simulation and in physical implementation to provide a complete overview of the performance of each system. This thesis provides a method for estimating SNR distributions that is quick, accurate and relatively low cost. The CSFD is modelled, and the characteristic equations are found for the CDFD scheme. The HCFD is introduced, and optimisation schemes for all three architectures are proposed. Finally, using the Implementing Radio In Software (IRIS) test-bed to confirm simulation results, CF spectrum sensing is shown to be significantly quicker than naive methods, whilst still meeting the required interference probability rates and without requiring substantial increases in receiver complexity.
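The intuition behind decoupling can be sketched with a toy Monte Carlo model. The code below is an illustrative two-stage energy detector evaluated under an assumed uniform SNR distribution; none of the thresholds, sample counts or signal models are taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(n, snr_lin):
    """Energy-detector test statistic: mean power of n samples of a
    Gaussian signal (variance snr_lin) in unit-variance Gaussian noise."""
    x = rng.normal(0, np.sqrt(snr_lin), n) + rng.normal(0, 1, n)
    return np.mean(x**2)

def coarse_fine_detect(snr_lin, n_coarse=32, n_fine=512,
                       t_coarse=2.0, t_fine=1.1):
    """Two-stage detection: a cheap coarse stage catches high-powered
    signals early; only ambiguous bands pay for the long fine stage."""
    if energy(n_coarse, snr_lin) > t_coarse:
        return True, n_coarse                      # decided early
    return energy(n_fine, snr_lin) > t_fine, n_coarse + n_fine

# Optimise/evaluate over a distribution of SNRs (illustrative: -10..10 dB)
# rather than a single worst-case value.
snrs_db = rng.uniform(-10, 10, 5000)
results = [coarse_fine_detect(10 ** (s / 10)) for s in snrs_db]
pd = np.mean([detected for detected, _ in results])
cost = np.mean([samples for _, samples in results])
print(f"average detection probability: {pd:.2f}, average samples: {cost:.0f}")
```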
Abstract:
The International Energy Agency has repeatedly identified increased end-use energy efficiency as the quickest, least costly method of greenhouse gas mitigation, most recently in the 2012 World Energy Outlook, and urges all governing bodies to increase efforts to promote energy efficiency policies and technologies. The residential sector is recognised as a major potential source of cost-effective energy efficiency gains. Within the EU, this relative importance can be seen from a review of the National Energy Efficiency Action Plans (NEEAP) submitted by member states, which in all cases place a large emphasis on the residential sector. This is particularly true for Ireland, whose residential sector has historically had higher energy consumption and CO2 emissions than the EU average, and whose first NEEAP targeted 44% of the energy savings to be achieved in 2020 from this sector. This thesis develops a bottom-up engineering archetype modelling approach to analyse the Irish residential sector and to estimate the technical energy savings potential of a number of policy measures. First, a model of space and water heating energy demand for new dwellings is built and used to estimate the technical energy savings potential of the 2008 and 2010 changes to Part L of the building regulations governing energy efficiency in new dwellings. Next, the author makes use of a valuable new dataset of Building Energy Rating (BER) survey results, first to characterise the highly heterogeneous stock of existing dwellings and then to estimate the technical energy savings potential of an ambitious national retrofit programme targeting up to 1 million residential dwellings. This thesis also presents work carried out by the author as part of a collaboration to produce a bottom-up, multi-sector LEAP model for Ireland. Overall, this work highlights the challenges faced in successfully implementing both sets of policy measures. It points to the wide range of final savings possible from particular policy measures, and the resulting high degree of uncertainty as to whether particular targets will be met, and it identifies the key factors on which the success of these policies will depend. It makes recommendations on further modelling work and on the improvements necessary in the data available to researchers and policy makers alike, in order to develop increasingly sophisticated residential energy demand models and better inform policy.
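At its simplest, a bottom-up archetype calculation reduces to summing dwelling counts times unit demands and scaling the unit demand of the dwellings a measure reaches. The sketch below illustrates this with invented archetypes and figures; it is not the thesis's model.

```python
# Toy bottom-up archetype model: stock demand is the sum of
# (dwelling count) x (unit heat demand); a retrofit scales the unit
# demand of the fraction of dwellings it reaches. Figures are invented.
archetypes = {
    # name: (dwellings, space + water heat demand per dwelling, kWh/yr)
    "pre-1980 detached":  (300_000, 22_000),
    "pre-1980 apartment": (150_000,  9_000),
    "post-2008 (Part L)": (100_000,  7_000),
}

def stock_demand_gwh(stock):
    """Total stock heat demand, converted from kWh/yr to GWh/yr."""
    return sum(n * d for n, d in stock.values()) / 1e6

def retrofit(stock, target, uptake, saving):
    """Apply a retrofit to a fraction `uptake` of one archetype, cutting
    its unit demand by `saving` (both fractions in [0, 1])."""
    n, d = stock[target]
    reached = int(n * uptake)
    out = dict(stock)
    out[target] = (n - reached, d)
    out[target + " (retrofitted)"] = (reached, d * (1 - saving))
    return out

before = stock_demand_gwh(archetypes)
after = stock_demand_gwh(retrofit(archetypes, "pre-1980 detached", 0.5, 0.4))
print(f"technical savings potential: {before - after:.0f} GWh/yr")
```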