9 results for 100602 Input Output and Data Devices

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance: 100.00%

Abstract:

In 1966, Roy Geary, Director of the ESRI, noted “the absence of any kind of import and export statistics for regions is a grave lacuna” and further noted that if regional analyses were to be developed then regional Input-Output Tables must be put on the “regular statistical assembly line”. Forty-five years later, the lacuna lamented by Geary still exists and remains the most significant challenge to the construction of regional Input-Output Tables in Ireland. The continued paucity of sufficient regional data to compile effective regional Supply and Use and Input-Output Tables has hampered the construction of sound regional economic models and of a robust evidence base with which to formulate and assess regional policy. This study makes a first step towards addressing this gap by presenting the first set of fully integrated, symmetric Supply and Use and domestic Input-Output Tables compiled for the NUTS 2 regions in Ireland: the Border, Midland and Western region and the Southern and Eastern region. These tables are general purpose in nature and are fully consistent with the official national Supply and Use and Input-Output Tables and with the regional accounts. The tables are constructed using a survey-based, bottom-up approach rather than modelling techniques, yielding more robust and credible tables. These tables are used to present a descriptive statistical analysis of the two administrative NUTS 2 regions in Ireland, drawing particular attention to the underlying structural differences in regional trade balances and in the composition of Gross Value Added in those regions. By deriving regional employment multipliers, Domestic Demand Employment matrices are constructed to quantify and illustrate the supply-chain impact on employment. In the final part of the study, the predictive capability of the Input-Output framework is tested over two time periods.
For both periods, the static Leontief production function assumptions are relaxed to allow for labour productivity. Comparative results from this experiment are presented.
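The Leontief mechanics behind output and employment multipliers of this kind can be sketched as follows; the coefficient matrix, final demand and employment coefficients below are illustrative values, not figures from the study:

```python
import numpy as np

# Hypothetical 2-sector technical coefficients matrix A (illustrative):
# A[i, j] = input required from sector i per unit of output of sector j.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])

# Final demand vector d for the two sectors.
d = np.array([100.0, 50.0])

# Leontief inverse: total (direct + indirect) output requirements.
L = np.linalg.inv(np.eye(2) - A)

# Gross output x needed to satisfy final demand d: x = (I - A)^-1 d.
x = L @ d

# Type I output multipliers: column sums of the Leontief inverse.
multipliers = L.sum(axis=0)

# Employment multipliers: weight the Leontief inverse by illustrative
# employment coefficients e (jobs per unit of output) and normalise.
e = np.array([0.05, 0.08])
emp_multipliers = (e[:, None] * L).sum(axis=0) / e
```

Relaxing the static Leontief assumption for labour productivity, as the study does, would amount to letting the employment coefficients `e` vary between the two time periods rather than holding them fixed.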

Relevance: 100.00%

Abstract:

The work presented in this thesis describes the development of low-cost sensing and separation devices with electrochemical detection for health applications. The research employs macro, micro and nano technology. The first sensing device developed was a toner-based micro-device. The initial development of microfluidic devices was based on glass or quartz, which are often expensive to fabricate; however, the introduction of new materials, such as plastics, offered a route to fast prototyping and the development of disposable devices. One such microfluidic device is based on the lamination of laser-printed polyester films using a computer, printer and laminator. The resulting toner-based microchips demonstrated potential viability for chemical assays coupled with several detection methods, particularly Chip-Electrophoresis-Chemiluminescence (CE-CL) detection, which had not previously been reported in the literature. Following on from the toner-based microchip, a three-electrode micro-configuration was developed on an acetate substrate. This is the first time that a micro-electrode configuration made from gold, silver and platinum has been fabricated onto acetate by means of patterning and deposition techniques, using the central fabrication facilities in the Tyndall National Institute. These electrodes were designed to facilitate the integration of a three-electrode configuration as part of the fabrication process. Since the electrodes are on acetate, the dicing step can be eliminated. The stability of these sensors was investigated using electrochemical techniques, with excellent outcomes. Following on from the generalised testing of the electrodes, these sensors were coupled with capillary electrophoresis. The final sensing devices were on a macro scale and involved the modification of screen-printed electrodes.
Screen-printed electrodes (SPEs) are generally far less sensitive than more expensive electrodes such as gold, boron-doped diamond and glassy carbon electrodes. To enhance their sensitivity, the electrodes were treated with metal nanoparticles of gold and palladium. Following on from this, a further modification was introduced: the carbonaceous material carbon monolith was drop-cast onto the SPE, and the metal nanoparticles were then electrodeposited onto the monolith material.

Relevance: 100.00%

Abstract:

This thesis explores the drivers of innovation in Irish high-technology businesses and estimates, in particular, the relative importance of interaction with external businesses and other organisations as a source of knowledge for innovation at the business level. The thesis also examines the extent to which interaction for innovation in these businesses occurs on a local or regional basis. The study uses original survey data on 184 businesses in the Chemical and Pharmaceutical, Information and Communications Technology, and Engineering and Electronic Devices sectors. The study considers both product and process innovation at the level of the business and develops new measures of innovation output. For the first time in an Irish study, the incidence and frequency of interaction is measured for each of a range of agents: other group companies, suppliers, customers, competitors, academic-based researchers and innovation-supporting agencies. The geographic proximity between the business and the most important agent in each category is measured using average one-way driving distance, the first time such a measure has been used in an Irish study of innovation. Using econometric estimation techniques, it is found that interaction with customers, suppliers and innovation-supporting agencies is positively associated with innovation in Irish high-technology businesses. Surprisingly, however, interaction with academic-based researchers is found to have a negative effect on innovation output at the business level. While interaction generally emerges as a positive influence on business innovation, there is little evidence that this occurs at a local or regional level. Furthermore, there is little support for the presence of localisation economies in high-technology sectors, though there is some tentative evidence of urbanisation economies.
This has important implications for Irish regional, enterprise and innovation policy, which has emphasised the development of clusters of internationally competitive businesses. The thesis calls into question the suitability of a cluster-driven, network-based approach to business development and competitiveness in an Irish context.
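The shape of such an estimation can be sketched with a simple least-squares regression of an innovation-output measure on interaction dummies. The data below is synthetic and the coefficients are chosen only to echo the direction of the reported findings; the study's actual variables, estimator and results are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 184  # the study's sample size

# Hypothetical interaction dummies: does the business interact with...
customers = rng.integers(0, 2, n)
suppliers = rng.integers(0, 2, n)
academics = rng.integers(0, 2, n)

# Synthetic innovation-output measure with positive customer/supplier
# effects and a negative academic effect, plus noise (illustrative only).
innovation = (0.5 + 0.8 * customers + 0.6 * suppliers
              - 0.4 * academics + rng.normal(0, 0.3, n))

# OLS via least squares: columns are intercept + the three dummies.
X = np.column_stack([np.ones(n), customers, suppliers, academics])
beta, *_ = np.linalg.lstsq(X, innovation, rcond=None)
```

With the synthetic data constructed this way, the estimated coefficients on customer and supplier interaction come out positive and the academic coefficient negative, mirroring the sign pattern the thesis reports.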

Relevance: 100.00%

Abstract:

With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods for optimising either area or timing, while power optimisation often relies on heuristics specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question is: how can a design flow be built which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into it. The proposed design flow is used as a platform for analysing novel algorithms and methodologies for optimisation in the context of digital circuits. The second question is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which allows optimisation algorithms to be applied. In particular, we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or delay. Finally, the third question this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power optimisation and a power-driven delay optimisation are proposed in order to obtain balanced delay and power values.
This implies that each power optimisation step is constrained not only by the decrease in power but also by the resulting increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the resulting increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under a zero-delay and a non-zero-delay model. We then introduce several reordering rules which are applied to the AIG nodes to minimise switching power or longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We use SA to decide probabilistically between moving from one optimised solution to another, such that dynamic power is optimised under given delay constraints and delay is optimised under given power constraints. A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is a search algorithm for traversing a weighted tree or graph. We have used Uniform Cost Search to find, within the AIG network, a specific AIG node order for the application of the reordering rules.
After the reordering rules are applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. A reduction of 23% in power and 15% in delay with minimal overhead is achieved, compared to the best known ABC results. Our approach is also applied to a number of processors with combinational and sequential components, and significant savings are achieved.
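The delay-constrained power step described above can be illustrated with a toy simulated-annealing loop. Here a "solution" is just a permutation of node indices, and the power and delay evaluators are placeholder cost functions standing in for the thesis's annotated-AIG machinery (switching activity and arrival times computed via ABC); every name and constant below is an assumption for illustration:

```python
import math
import random

random.seed(0)
N = 8  # toy node count

def power(order):
    # Hypothetical switching-power proxy: distance between adjacent nodes.
    return sum(abs(a - b) for a, b in zip(order, order[1:]))

def delay(order):
    # Hypothetical longest-path proxy over the even-indexed "critical" nodes.
    return max(order.index(i) + 1 for i in range(N) if i % 2 == 0)

DELAY_BUDGET = 7

def cost(order):
    # Power optimisation constrained by delay: penalise budget violations.
    penalty = 100 * max(0, delay(order) - DELAY_BUDGET)
    return power(order) + penalty

def anneal(order, t0=10.0, cooling=0.95, steps=2000):
    best = cur = list(order)
    t = t0
    for _ in range(steps):
        nxt = cur[:]
        i, j = random.sample(range(N), 2)  # move: swap two nodes
        nxt[i], nxt[j] = nxt[j], nxt[i]
        d = cost(nxt) - cost(cur)
        # Accept improvements always; worsenings with probability exp(-d/t).
        if d < 0 or random.random() < math.exp(-d / t):
            cur = nxt
            if cost(cur) < cost(best):
                best = cur[:]
        t *= cooling  # geometric cooling schedule
    return best

initial = list(range(N))
optimised = anneal(initial)
```

The power-driven delay optimisation is the mirror image: swap the roles of `power` and `delay` in `cost`, constraining delay moves by a power budget instead.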

Relevance: 100.00%

Abstract:

It has been suggested that the less than optimal levels of students’ immersion language “persist in part because immersion teachers lack systematic approaches for integrating language into their content instruction” (Tedick, Christian and Fortune, 2011, p.7). I argue that our current lack of knowledge regarding what immersion teachers think, know and believe, and what immersion teachers’ actual ‘lived’ experiences are in relation to form-focused instruction (FFI), prevents us from fully understanding the key issues at the core of experiential immersion pedagogy and form-focused integration. FFI refers to “any planned or incidental instructional activity that is intended to induce language learners to pay attention to linguistic form” (Ellis, 2001b, p.1). The central aim of this research study is to critically examine the perspectives and practices of Irish-medium immersion (IMI) teachers in relation to FFI. The study ‘taps’ into the lived experiences of three IMI teachers in three different IMI school contexts and explores FFI from a classroom-based, teacher-informed perspective. Philosophical underpinnings of the interpretive paradigm and critical hermeneutical principles inform and guide the study. A multi-case study approach was adopted and data was gathered through classroom observation, video-stimulated recall and semi-structured interviews. Findings revealed that the journey of ‘becoming’ an IMI teacher is shaped by a vast array of intricate variables. IMI teacher identity, implicit theories, stated beliefs, educational biographies and experiences, IMI school cultures and contexts, as well as teacher knowledge and competence, impacted on IMI teachers’ FFI perspectives and practices. An IMI content-teacher identity reflected the teachers’ priorities as shaped by pedagogical challenges and their educational backgrounds.
While research participants had clearly defined instructional beliefs and goals, their roadmap of how to actually accomplish these goals was far from clear. IMI teachers described the multitude of choices and pedagogical dilemmas they faced in integrating FFI into experiential pedagogy. Significant gaps in IMI teachers’ declarative knowledge about and competence in the immersion language were also reported. This research study increases our understanding of the complexity of the processes underlying and shaping FFI pedagogy in IMI education. Innovative FFI opportunities for professional development across the continuum of teacher education are outlined, a comprehensive evaluation of IMI is called for and areas for further research are delineated.

Relevance: 100.00%

Abstract:

Body Sensor Network (BSN) technology is seeing rapid emergence in application areas such as health, fitness and sports monitoring. Current BSN wireless sensors typically operate on a single frequency band (e.g. utilizing the IEEE 802.15.4 standard, which operates at 2.45 GHz), employing a single radio transceiver for wireless communications. This allows a simple wireless architecture to be realized with low cost and power consumption. However, network congestion or failure can create potential issues in terms of reliability of data transfer, quality of service (QoS) and data throughput for the sensor. These issues can be especially critical in healthcare monitoring applications, where data availability and integrity are crucial. The addition of more than one radio has the potential to address some of the above issues. For example, multi-radio implementations can allow access to more than one network, providing increased coverage and data processing as well as improved interoperability between networks. A small number of multi-radio wireless sensor solutions exist at present, but they require more than one radio transceiver device to achieve multi-band operation. This paper presents the design of a novel prototype multi-radio hardware platform that uses a single radio transceiver. The proposed design allows multi-band operation in the 433/868 MHz ISM bands and this, together with its low complexity and small form factor, makes it suitable for a wide range of BSN applications.

Relevance: 100.00%

Abstract:

Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) has been developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this is key to systematic timing. Based on original MOQA research, we discuss the design and implementation of a new domain-specific scripting language based on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The notion of a labelled partial order (LPO) is the basic data type in the language. The programmer uses built-in MOQA operations together with restricted control-flow statements to design MOQA programs. The MOQA language is formally specified, both syntactically and semantically, in this thesis. A practical language interpreter implementation is provided and discussed. By analysing new algorithms and data-restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of other domains besides average-case analysis, showing strong connections between MOQA and parallel computing, reversible computing and data entropy analysis.
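The quantity MOQA derives compositionally from code can be illustrated by brute force on a tiny example: the exact average comparison count of insertion sort over all equally likely input orders. This sketch only computes the number MOQA would track symbolically; it is not the MOQA method itself:

```python
import itertools

def insertion_sort_comparisons(seq):
    """Count the comparisons insertion sort performs on seq."""
    a = list(seq)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break  # element has found its place
    return comparisons

def average_comparisons(n):
    """Exact average cost over all n! equally likely input permutations."""
    perms = list(itertools.permutations(range(n)))
    return sum(insertion_sort_comparisons(p) for p in perms) / len(perms)

avg4 = average_comparisons(4)  # exact average for inputs of size 4
```

Enumerating all n! inputs is only feasible for tiny n; the point of randomness preservation is that the distribution of intermediate structures stays tractable, so the same average can be computed compositionally without enumeration.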

Relevance: 100.00%

Abstract:

The aim of this project is to integrate neuronal cell culture with commercial or in-house built micro-electrode arrays and MEMS devices. The resulting device is intended to support neuronal cell culture on its surface, expose specific portions of a neuronal population to different environments using microfluidic gradients, and stimulate/record neuronal electrical activity using micro-electrode arrays. Additionally, through the integration of chemical surface patterning, such a device can be used to build neuronal cell networks of specific size, conformation and composition. The design of this device takes inspiration from the nervous system, whose development and regeneration are heavily influenced by surface chemistry and fluidic gradients. Hence, this device is intended to be a step forward in neuroscience research because it utilizes concepts similar to those found in nature. A large part of this research revolved around solving technical issues associated with the integration of biology, surface chemistry, electrophysiology and microfluidics. Commercially available micro-electrode arrays (MEAs) are mechanically and chemically brittle, making them unsuitable for certain surface modification and microfluidic integration techniques described in the literature. In order to integrate all the aspects into one device successfully, some techniques were heavily modified to ensure that their effects on the MEA were minimal. In terms of experimental work, this thesis consists of three parts. The first part dealt with the characterization and optimization of surface patterning and microfluidic perfusion. Through extensive image analysis, the optimal conditions required for micro-contact printing and microfluidic perfusion were determined. The second part applied a number of optimized techniques to culturing patterned neural cells on a range of substrates including Pyrex, cyclo-olefin and SiN-coated Pyrex.
The second part also described culturing neurons on MEAs and recording electrophysiological activity. The third part of the thesis described the integration of MEAs with patterned neuronal culture and microfluidic devices. Although integration of all methodologies proved difficult, a large amount of data relating to biocompatibility, neuronal patterning, electrophysiology and integration was collected. Original solutions were successfully applied to a number of issues relating to the consistency of micro-contact printing and microfluidic integration, leading to successful integration of the techniques and device components.

Relevance: 100.00%

Abstract:

The mobile cloud computing model promises to address the resource limitations of mobile devices, but effectively implementing this model is difficult. Previous work on mobile cloud computing has required the user to have a continuous, high-quality connection to the cloud infrastructure. This is undesirable and possibly infeasible, as the energy required on the mobile device to maintain a connection and transfer sizeable amounts of data is large, and bandwidth tends to be quite variable and low on cellular networks. The cloud deployment itself also needs to allocate scalable resources to the user efficiently. In this paper, we formulate best practices for efficiently managing the resources required by the mobile cloud model, namely energy, bandwidth and cloud computing resources. These practices can be realised with our mobile cloud middleware project, featuring the Cloud Personal Assistant (CPA). We compare this with other approaches in the area to highlight the importance of minimising the usage of these resources, and therefore ensuring successful adoption of the model by end users. Based on results from experiments performed with mobile devices, we develop a no-overhead decision model for task and data offloading to the CPA of a user, which provides efficient management of mobile cloud resources.
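The core trade-off such an offloading decision weighs can be sketched as a simple energy comparison: offload a task only when transferring its data costs less energy than computing it locally. All parameters below (energy per cycle, radio power, bandwidths) are illustrative assumptions, not the paper's measured values or its actual decision model:

```python
def local_energy_j(cycles, joules_per_cycle=1e-9):
    """Energy (J) to execute the task on the mobile device (assumed rate)."""
    return cycles * joules_per_cycle

def transfer_energy_j(data_bytes, bandwidth_bps, radio_watts=1.0):
    """Energy (J) to upload the task's data at the current bandwidth."""
    transfer_seconds = (data_bytes * 8) / bandwidth_bps
    return radio_watts * transfer_seconds

def should_offload(cycles, data_bytes, bandwidth_bps):
    """Offload iff the radio costs less energy than local computation."""
    return transfer_energy_j(data_bytes, bandwidth_bps) < local_energy_j(cycles)

# Heavy computation over little data on a decent link: offloading pays off.
heavy = should_offload(cycles=5e9, data_bytes=100_000, bandwidth_bps=10e6)
# Light computation over bulky data on a slow cellular link: stay local.
light = should_offload(cycles=1e7, data_bytes=5_000_000, bandwidth_bps=1e6)
```

The variable-bandwidth problem the paper highlights shows up directly here: the same task flips between offload and local as `bandwidth_bps` changes, which is why the decision must be re-evaluated against current network conditions.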