989 results for Emanuel, Mike
Abstract:
The GENESI project has the ambitious goal of bringing WSN technology to the level where it can provide the core of the next generation of systems for structural health monitoring that are long-lasting, pervasive, and totally distributed and autonomous. This goal requires embracing engineering and scientific challenges never successfully tackled before. Sensor nodes will be redesigned to overcome their current limitations, especially concerning energy storage and provisioning (we need devices with virtually infinite lifetime) and resilience to faults and interference (for reliability and robustness). New software and protocols will be defined to take full advantage of the new hardware, providing new paradigms for cross-layer interaction at all layers of the protocol stack and satisfying the requirements of a new concept of Quality of Service (QoS) that is application-driven, truly reflecting the end-user perspective and expectations. The GENESI project will develop long-lasting sensor nodes by combining cutting-edge technologies for energy generation from the environment (energy harvesting) and green energy supply (small-form-factor fuel cells); GENESI will define models for energy harvesting, energy conservation in super-capacitors and supplemental energy availability through fuel cells, in addition to designing new algorithms and protocols for dynamic allocation of sensing and communication tasks to the sensors. The project team will design communication protocols for large-scale heterogeneous wireless sensor/actuator networks with energy-harvesting capabilities and define distributed mechanisms for context assessment and situation awareness. This paper presents an analysis of the GENESI system requirements needed to achieve the ambitious goals of the project. Extending from the requirements presented, the emergent system specification is discussed with respect to the selection and integration of relevant system components. The resulting integrated system will be evaluated and characterised to ensure that it is capable of satisfying the functional requirements of the project.
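Where the abstract discusses balancing harvested energy against consumption, the core budgeting idea can be illustrated with a short sketch. This is a minimal energy-neutrality calculation under illustrative parameter values; it is not taken from the GENESI design documents.

```python
# Hypothetical energy-neutrality check for a harvesting-powered sensor node.
# Parameter names and values are illustrative assumptions, not GENESI specifics.

def max_duty_cycle(p_harvest_mw, p_active_mw, p_sleep_mw):
    """Largest duty cycle D keeping average draw within harvested power:
    D * p_active + (1 - D) * p_sleep <= p_harvest."""
    if p_active_mw <= p_sleep_mw:
        return 1.0
    d = (p_harvest_mw - p_sleep_mw) / (p_active_mw - p_sleep_mw)
    return max(0.0, min(1.0, d))

if __name__ == "__main__":
    # e.g. 5 mW harvested on average, 60 mW with radio on, 0.05 mW asleep
    print(f"sustainable duty cycle: {max_duty_cycle(5.0, 60.0, 0.05):.2%}")
```

Under these example numbers the node could sustain roughly an 8% duty cycle indefinitely, which is the kind of constraint that dynamic allocation of sensing and communication tasks must respect.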
Abstract:
Science Foundation Ireland (CSET - Centre for Science, Engineering and Technology, Grant No. 07/CE/11147)
Abstract:
Body Sensor Network (BSN) technology is seeing a rapid emergence in application areas such as health, fitness and sports monitoring. Current BSN wireless sensors typically operate on a single frequency band (e.g. utilizing the IEEE 802.15.4 standard that operates at 2.45 GHz) employing a single radio transceiver for wireless communications. This allows a simple wireless architecture to be realized with low cost and power consumption. However, network congestion or failure can create potential issues in terms of reliability of data transfer, quality of service (QoS) and data throughput for the sensor. These issues can be especially critical in healthcare monitoring applications where data availability and integrity are crucial. The addition of more than one radio has the potential to address some of the above issues. For example, multi-radio implementations can allow access to more than one network, providing increased coverage and data processing as well as improved interoperability between networks. A small number of multi-radio wireless sensor solutions exist at present, but they require more than one radio transceiver device to achieve multi-band operation. This paper presents the design of a novel prototype multi-radio hardware platform that uses a single radio transceiver. The proposed design allows multi-band operation in the 433/868 MHz ISM bands and this, together with its low complexity and small form factor, makes it suitable for a wide range of BSN applications.
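The reliability benefit of dual-band operation with one transceiver comes down to simple failover logic. Below is a minimal sketch of such logic; the radio API (set_band, send) is hypothetical and stands in for whatever driver the actual platform exposes.

```python
# Illustrative band-failover for a single-transceiver, dual-band (433/868 MHz) node.
BANDS_MHZ = (433, 868)

class DualBandRadio:
    def __init__(self, radio, max_retries=3):
        self.radio = radio            # assumed API: set_band(mhz), send(pkt) -> bool
        self.max_retries = max_retries
        self.band = BANDS_MHZ[0]

    def send_with_failover(self, packet):
        """Retry on the current band, then hop to the other ISM band."""
        others = [b for b in BANDS_MHZ if b != self.band]
        for band in (self.band, *others):
            self.radio.set_band(band)
            for _ in range(self.max_retries):
                if self.radio.send(packet):
                    self.band = band  # stay on the band that worked
                    return True
        return False
```

Because only one transceiver exists, band changes are serialised through it, which is exactly the low-complexity trade-off the proposed design makes.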
Abstract:
With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation one often employs heuristics that are characteristic of a specific design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate into this flow academic tools and methodologies. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular we address the implications of a systematic power optimisation methodology and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation. Finally, the third question which this thesis attempts to answer is: is there a systematic approach for multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is to obtain multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under both zero-delay and non-zero-delay models. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for the global optimisation problem of locating a good approximation to the global optimum of a given function in a large search space. We used SA to probabilistically decide between moving from one optimised solution to another, such that the dynamic power is optimised under given delay constraints and the delay is optimised under given power constraints.
A good approximation to the global optimum solution under the energy constraint is obtained. Uniform Cost Search (UCS) is a search algorithm used for traversing or searching a weighted tree or graph. We have used UCS to search within the AIG network for a specific AIG node order in which to apply the reordering rules. After the reordering rules have been applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. Reductions of 23% in power and 15% in delay with minimal overhead are achieved, compared to the best known ABC results. Our approach is also applied to a number of processors with combinational and sequential components, and significant savings are achieved.
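To make the delay-constrained acceptance rule concrete, here is a minimal simulated-annealing sketch in the spirit described above. The cost and neighbour functions are illustrative stand-ins for the thesis's AIG reordering moves, not its actual ABC-based implementation.

```python
# Minimal simulated annealing for power minimisation under a delay budget.
import math
import random

def anneal(initial, power, delay, neighbour, delay_budget,
           t0=1.0, cooling=0.95, steps=1000):
    """Minimise power(state) subject to delay(state) <= delay_budget."""
    state = best = initial
    t = t0
    for _ in range(steps):
        cand = neighbour(state)
        if delay(cand) > delay_budget:
            continue                      # reject constraint-violating moves
        dp = power(cand) - power(state)
        # accept improvements always; worsenings with Boltzmann probability
        if dp < 0 or random.random() < math.exp(-dp / t):
            state = cand
            if power(state) < power(best):
                best = state
        t *= cooling
    return best
```

The power-constrained delay optimisation is the symmetric case: swap the roles of the power and delay callables.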
Abstract:
The central research question that this thesis addresses is whether there is a significant gap between fishery stakeholder values and the principles and policy goals implicit in an Ecosystem Approach to Fisheries Management (EAFM). The implications of such a gap for fisheries governance are explored. Furthermore, an assessment is made of what may be practically achievable in the implementation of an EAFM in fisheries in general and in a case study fishery in particular. The research was mainly focused on a particular case study, the Celtic Sea Herring fishery and its management committee, the Celtic Sea Herring Management Advisory Committee (CSHMAC). The Celtic Sea Herring fishery exhibits many aspects of an EAFM and the fish stock has successfully recovered to healthy levels in the past five years. However, there are increasing levels of governance-related conflict within the fishery which threaten the future sustainability of the stock. Previous research on EAFM governance has tended to focus either on higher levels of EAFM governance or on individual behaviour, but very little research has attempted to link the two spheres or explore the relationship between them. Two main themes within this study aimed to address this gap. The first was what role governance could play in facilitating EAFM implementation. The second theme concerned the degree of convergence between high-level EAFM goals and stakeholder values. The first method applied was governance benchmarking to analyse systemic risks to EAFM implementation. This found that there are no real EU or national-level policies which provide stakeholders or managers with clear targets for EAFM implementation. The second method applied was the use of cognitive mapping to explore stakeholders' understandings of the main ecological, economic and institutional driving forces in the Celtic Sea Herring fishery. The main finding from this was that a long-term outlook can be, and has been, incentivised through a combination of policy drivers and participatory management. However, the fundamental principle of an EAFM, accounting for ecosystem linkages rather than only target stocks, was not reflected in stakeholders' cognitive maps. This was confirmed by a prioritisation of stakeholders' management priorities using the Analytic Hierarchy Process (AHP), which found that the overriding concern is for protection of target stock status, but that wider ecosystem health was not a priority for most management participants. The conclusion reached is that moving to sustainable fisheries may be a more complex process than envisioned in much of the literature and may consist of two phases. The first phase is a transition to a long-term but still target-stock-focused approach. This achievable transition is mainly a strategic change, which can be incentivised by policies and supported by stakeholders. In the Celtic Sea Herring fishery, and in an increasing number of global and European fisheries, such transitions have contributed to successful stock recoveries. The second phase, however, implementation of an ecosystem approach, may present a greater challenge in terms of governability, as this research highlights some fundamental conflicts between stakeholder perceptions and values and those inherent in an EAFM. This phase may involve the setting aside of fish for non-valued ecosystem elements and will require either a pronounced mind-set and value change or some strong top-down policy incentives in order to succeed.
Fisheries governance frameworks will need to carefully explore the most effective balance between such endogenous and exogenous solutions. This finding of low prioritisation of wider ecosystem elements has implications for rights based management within an ecosystem approach, regardless of whether those rights are individual or collective.
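For readers unfamiliar with AHP, the prioritisation step reduces to deriving weights from a pairwise-comparison matrix. The sketch below shows the standard principal-eigenvector calculation; the matrix values and criteria are illustrative, not the study's actual stakeholder judgements.

```python
# Standard AHP weight derivation via the principal eigenvector.
import numpy as np

def ahp_weights(pairwise):
    """Return priorities: the principal right eigenvector, normalised to sum to 1."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

# Illustrative criteria: target-stock status, wider ecosystem health, economic returns
A = np.array([[1.0, 5.0, 3.0],
              [1 / 5, 1.0, 1 / 2],
              [1 / 3, 2.0, 1.0]])
print(ahp_weights(A))  # dominant weight lands on target-stock status
```

A pattern like this, with most of the weight on the target stock, is the kind of result the thesis reports for its management participants.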
Abstract:
The great demand for power-optimised devices shows promising economic potential and has drawn considerable attention in industry and research. Due to the continuously shrinking CMOS process, not only dynamic power but also static power has emerged as a major concern in power reduction. Beyond power optimisation, average-case power estimation is significant for power budget allocation, but it is also challenging in terms of time and effort. In this thesis, we will introduce a methodology to support modular quantitative analysis in order to estimate the average power of circuits, on the basis of two concepts named Random Bag Preserving and Linear Compositionality. It shortens simulation time while sustaining high accuracy, increasing the feasibility of power estimation for large systems. For power saving, we first take advantage of the low-power characteristics of adiabatic logic and asynchronous logic to achieve ultra-low dynamic and static power. We will propose two memory cells, which can run in adiabatic and non-adiabatic modes. About 90% of dynamic power can be saved in adiabatic mode when compared to other up-to-date designs, and about 90% of leakage power is saved. Secondly, a novel logic, named Asynchronous Charge Sharing Logic (ACSL), will be introduced. The realisation of completion detection is simplified considerably. Beyond the power reduction, ACSL brings another promising feature for average power estimation, data independence, a characteristic that makes power estimation effortless and meaningful for modular quantitative average-case analysis. Finally, a new asynchronous Arithmetic Logic Unit (ALU) with a ripple-carry adder, implemented using the logically reversible/bidirectional characteristic and exhibiting ultra-low power dissipation at a sub-threshold operating point, will be presented. The proposed adder is able to operate multi-functionally.
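The scale of the claimed adiabatic savings can be sanity-checked with the standard first-order switching-energy formulas. The sketch below compares conventional CMOS and adiabatic dissipation per switching event; the component values are illustrative assumptions, not measurements from the thesis designs.

```python
# First-order switching-energy comparison: static CMOS vs adiabatic charging.

def conventional_energy(c_farads, vdd):
    """Static CMOS dissipates E = 1/2 * C * V^2 per switching event."""
    return 0.5 * c_farads * vdd ** 2

def adiabatic_energy(c_farads, vdd, r_ohms, t_ramp):
    """Linear-ramp adiabatic charging dissipates roughly E = (RC/T) * C * V^2,
    dropping below the conventional figure once T exceeds 2*R*C."""
    return (r_ohms * c_farads / t_ramp) * c_farads * vdd ** 2

C, V, R = 100e-15, 1.0, 10e3              # 100 fF load, 1 V supply, 10 kOhm path
for t_ramp in (1e-9, 10e-9, 100e-9):      # slower ramps dissipate less
    ratio = adiabatic_energy(C, V, R, t_ramp) / conventional_energy(C, V)
    print(f"T = {t_ramp * 1e9:5.0f} ns -> adiabatic/conventional = {ratio:.3f}")
```

With these numbers a 100 ns ramp dissipates about 2% of the conventional energy, which is the regime in which savings of the order of 90% become plausible.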
Abstract:
Error-correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are used ubiquitously in communication, data storage and elsewhere. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error-correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from an algorithmic as well as an architectural standpoint. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is shown to perform better than Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating into the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) for multi-hop wireless sensor network (WSN) applications.
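The evaluation view of RS encoding that the thesis builds on is easy to state: a k-symbol message defines a polynomial of degree less than k, and the codeword is that polynomial evaluated at n distinct field points. A toy sketch over a prime field follows; real deployments use GF(2^m), and this prime-field version is only for illustration.

```python
# Toy evaluation-based Reed-Solomon encoder over the prime field GF(P).
P = 929  # small prime field, chosen purely to keep the arithmetic simple

def rs_encode(message, n):
    """Encode k message symbols as evaluations of the degree-<k message
    polynomial at the n distinct points 0, 1, ..., n-1."""
    k = len(message)
    assert k <= n <= P, "need k <= n distinct evaluation points in the field"

    def poly_eval(x):
        acc = 0
        for coeff in reversed(message):   # Horner's rule, all arithmetic mod P
            acc = (acc * x + coeff) % P
        return acc

    return [poly_eval(x) for x in range(n)]

codeword = rs_encode([3, 2, 1], n=7)  # k=3, n=7, minimum distance d = n-k+1 = 5
print(codeword)
```

Any k of the n evaluations determine the polynomial, which is why such a code corrects up to (n-k)/2 symbol errors.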
Abstract:
This study explores the experiences of stress and burnout in Irish second-level teachers and examines the contribution of a number of individual, environmental and health factors to burnout development. As no such study has previously been carried out with this sample, a mixed-methods approach was adopted in order to comprehensively investigate the subject matter. Teaching has consistently been identified as a particularly stressful occupation, and research investigating the development of burnout is of great importance in devising measures to address the problem. The first phase of the study involved the use of focus groups conducted with a total of 20 second-level teachers from 11 different schools in the greater Cork city area. Findings suggest that teachers experience a variety of stressors: in class, in the staff room and outside of school. The second phase of the study employed a survey to examine the factors associated with burnout. Analysis of 192 responses suggested that burnout results from a combination of demographic, personality, environmental and coping factors. Burnout was also found to be associated with a number of physical symptoms, particularly trouble sleeping and fatigue. Findings suggest that interventions designed to reduce burnout must reflect the complexity of the problem and its development. Based on the research findings, interventions that combine individual and organisational approaches should provide the optimal chance of effectively tackling burnout.
Abstract:
The present research examines the issue of universal interventions designed to enhance wellbeing among a community-based adolescent population. The first phase saw a cross-sectional survey conducted among Transition Year students in 13 secondary schools in Cork city and county, Republic of Ireland, with a view to identifying dimensions linked with wellbeing (operationalised as subjective happiness, life satisfaction, and depressive symptoms) which might prove effective in informing intervention approaches. Arising from this, mindfulness, gratitude, and cognitive-behavioural dimensions emerged as predictors of wellbeing, and short interventions (four sessions over four weeks) informed by each were conducted with participant groups in three secondary schools, one intervention in each school. Results from statistical analysis showed that the mindfulness and cognitive-behavioural interventions facilitated significant reductions in depressive symptoms among active-condition participants at post-test, but these benefits were not sustained over time, while no statistically significant changes were detected in subjective happiness or life satisfaction. The gratitude intervention was found to have had no effect on the three outcome variables. The findings are discussed in the context of theory and past research, while limitations, implications, and possible future directions are also addressed.
Abstract:
A search result provided by existing digital library and web search systems typically comprises only a prioritised list of possible publications or web pages that meet the search criteria, possibly with excerpts and possibly with search terms highlighted. The research in progress reported in this poster contributes to a larger research effort to provide a readable summary of search results that synthesises relevant publications or web pages into results that meet four C's: comprehensive, concise, coherent, and correct, as a more useful alternative to un-synthesised result lists. The scope of this research is limited to searching for and synthesising Design Science Research (DSR) publications that present the results of DSR, as an example problem domain.
Abstract:
We discuss a general approach to dynamic sparsity modeling in multivariate time series analysis. Time-varying parameters are linked to latent processes that are thresholded to induce zero values adaptively, providing natural mechanisms for dynamic variable inclusion/selection. We discuss Bayesian model specification, analysis and prediction in dynamic regressions, time-varying vector autoregressions, and multivariate volatility models using latent thresholding. Application to a topical macroeconomic time series problem illustrates some of the benefits of the approach in terms of statistical and economic interpretations as well as improved predictions. Supplementary materials for this article are available online.
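In a dynamic regression, the latent-thresholding mechanism can be written compactly. The sketch below follows the general form of latent threshold models in the literature (Nakajima and West, 2013); the notation is illustrative of the idea rather than copied from the paper.

```latex
% Latent threshold dynamic regression: a coefficient is zeroed whenever its
% underlying latent AR(1) process falls inside the threshold band.
\begin{align}
  y_t &= \mathbf{x}_t' \boldsymbol{\beta}_t + \varepsilon_t,
        & \varepsilon_t &\sim N(0, \sigma^2), \\
  \beta_{jt} &= b_{jt}\, \mathbf{1}\!\left( |b_{jt}| \ge d_j \right), \\
  b_{j,t+1} &= \mu_j + \phi_j (b_{jt} - \mu_j) + \eta_{jt},
        & \eta_{jt} &\sim N(0, \sigma_{\eta j}^2).
\end{align}
```

When the latent state b_{jt} drifts inside (-d_j, d_j), variable j drops out of the regression at time t, giving the adaptive, time-varying sparsity the abstract describes.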
Abstract:
The parallelization of existing/industrial electromagnetic software using the bulk synchronous parallel (BSP) computation model is presented. The software employs the finite element method with a preconditioned conjugate gradient-type solution for the resulting linear systems of equations. A geometric mesh-partitioning approach is applied within the BSP framework for the assembly and solution phases of the finite element computation. This is combined with a nongeometric, data-driven parallel quadrature procedure for the evaluation of right-hand-side terms in applications involving coil fields. A similar parallel decomposition is applied to the parallel calculation of electron beam trajectories required for the design of tube devices. The BSP parallelization approach adopted is fully portable, conceptually simple, and cost-effective, and it can be applied to a wide range of finite element applications not necessarily related to electromagnetics.
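A BSP superstep pairs local computation with a global communication and synchronisation step. As a minimal, hedged illustration of how the solver's inner products would look in that model (not the paper's actual FEM code), here is a distributed dot product using mpi4py:

```python
# BSP-style distributed dot product: local compute, then a collective that
# doubles as the superstep's communication and barrier synchronisation.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def bsp_dot(u_local, v_local):
    """Each rank computes a partial dot product over its mesh partition,
    then allreduce combines the partials across all ranks."""
    partial = float(np.dot(u_local, v_local))
    return comm.allreduce(partial, op=MPI.SUM)

if __name__ == "__main__":
    n_local = 1000                    # this rank's share of the partitioned vectors
    rng = np.random.default_rng(comm.Get_rank())
    u, v = rng.random(n_local), rng.random(n_local)
    print(comm.Get_rank(), bsp_dot(u, v))
```

In the paper's setting, the matrix assembly and each conjugate-gradient iteration decompose into exactly this pattern over the geometric mesh partitions.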
Abstract:
Flip-chip assembly, developed in the early 1960s, is now being positioned as a key joining technology to achieve high-density mounting of electronic components on to printed circuit boards for high-volume, low-cost products. Computer models are now being used early within the product design stage to ensure that optimal process conditions are used. These models capture the governing physics taking place during the assembly process and they can also predict relevant defects that may occur. Describes the application of computational modelling techniques that have the ability to predict a range of interacting physical phenomena associated with the manufacturing process. For example, in the flip-chip assembly process we have solder paste deposition, solder joint shape formation, heat transfer, solidification and thermal stress. Illustrates the application of modelling technology being used as part of a larger UK study aiming to establish a process route for high-volume, low-cost, sub-100-micron pitch flip-chip assembly.
Abstract:
This paper presents preliminary studies in electroplating using megasonic agitation to avoid the formation of voids within the high-aspect-ratio microvias that are used for the redistribution of interconnects in high-density interconnection technology in printed circuit boards. Through this technique, uniform deposition of metal on the side walls of the vias is possible. High-frequency acoustic streaming at megasonic frequencies enables the Nernst diffusion layer to be reduced to the sub-micron range, thereby allowing conformal electrodeposition in deep grooves. This effect enables the normally convection-free liquid near the surface to be agitated. Higher throughput and better control of the material properties of the deposits can be achieved for the manufacturing of embedded interconnections and metal-based MEMS. For optimal filling performance, a full design of experiments (DOE) and a multi-physics numerical simulation have been conducted to analyse the influence of megasonic agitation on the plating quality of the microvias. Megasonic-based deposition has been found to increase the deposition rate as well as to improve the quality of the metal deposits.
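The link between a thinner diffusion layer and a higher deposition rate follows from the textbook mass-transport-limited current density; the relation below is standard electrochemistry, quoted here only to make the reasoning explicit.

```latex
% Mass-transport-limited current density at the cathode surface.
\begin{equation}
  i_{\lim} = \frac{n F D C_b}{\delta}
\end{equation}
```

Here n is the number of electrons transferred per ion, F the Faraday constant, D the ion diffusivity, C_b the bulk concentration, and \delta the Nernst diffusion-layer thickness: halving \delta doubles the attainable deposition current, which is why driving \delta into the sub-micron range raises both the rate and the conformality of via filling.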