6 results for full implementation

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance:

20.00%

Publisher:

Abstract:

Flip-chip technology is a high-density packaging solution that meets the demands of very-large-scale-integration design. For wireless sensor nodes and similar RF applications, the growing requirements of wearable and implantable devices make flip-chip a leading technology for integration and miniaturization. In this paper, the flip-chip package is treated as part of the whole system, since it directly affects RF performance. A simulation-based design flow is presented for migrating a surface-mount PCB design to a flip-chip die package for RF applications. Models are built in Q3D Extractor to extract equivalent circuits from the parasitic parameters of the interconnections, for both bare-die and wire-bonded technologies. These parameters, together with the PCB layout and stack-up, are then modelled in the design of the essential parts of the flip-chip RF circuit. Through simulation and optimization, the flip-chip package is redesigned using parameters obtained from a simulation sweep. Experimental results agree well with the simulation when comparing the return-loss performance of the bare-die package before and after optimization. This design method can be applied generally, either to migrate any surface-mount PCB to a flip-chip package for RF systems or to predict the RF specifications of an RF system that uses flip-chip technology.
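
As a rough illustration of the kind of equivalent-circuit evaluation such a simulation sweep performs, the Python sketch below computes the return loss of a single series-L / shunt-C interconnect model in a 50 Ω system. The element values, sweep range, and 2.4 GHz evaluation point are illustrative assumptions, not parameters from the paper.

```python
# Hypothetical sketch: return-loss sweep of a flip-chip interconnect modelled
# as a series-L / shunt-C equivalent circuit of the kind a parasitic extractor
# reports. All element values below are illustrative assumptions.
import numpy as np

def s11_series_l_shunt_c(L, C, freqs, Z0=50.0):
    # ABCD matrix of a series impedance jwL followed by a shunt admittance jwC:
    # A = 1 + Z*Y, B = Z, C = Y, D = 1
    w = 2j * np.pi * freqs
    Zs, Yp = w * L, w * C
    A, B, Cp, D = 1 + Zs * Yp, Zs, Yp, np.ones_like(freqs)
    # Standard ABCD -> S11 conversion for a matched source and load
    return (A + B / Z0 - Cp * Z0 - D) / (A + B / Z0 + Cp * Z0 + D)

freqs = np.linspace(0.1e9, 6e9, 600)
idx = np.argmin(np.abs(freqs - 2.4e9))        # evaluate near 2.4 GHz
for L in (0.1e-9, 0.5e-9, 1.0e-9):            # candidate bump/trace inductances
    s11 = s11_series_l_shunt_c(L, 150e-15, freqs)
    rl = -20 * np.log10(np.abs(s11[idx]))
    print(f"L = {L * 1e9:.1f} nH -> return loss {rl:.1f} dB at 2.4 GHz")
```

Sweeping the inductance in this way mirrors, in miniature, how a parameter sweep identifies the interconnect geometry with the best return-loss behaviour.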

Relevance:

20.00%

Publisher:

Abstract:

In this paper, an embedded capacitance material (ECM) is fabricated between the power and ground layers of wireless sensor nodes, forming an integrated capacitance that replaces the large number of decoupling capacitors on the board. The ECM, whose dielectric constant is 16, has the same 3 cm × 3 cm footprint as the wireless sensor nodes, with a thickness of only 14 μm. Although the capacitance of a single ECM layer is only around 8 nF, the ECM layers can still replace the high-frequency decoupling capacitors (100 nF in our case) on the board, for two reasons. First, the parasitic inductance of the ECM layer is much lower than that of the surface-mount capacitors, so a smaller ECM capacitance can achieve the same resonant frequency as the surface-mount decoupling capacitors; simulation and measurement confirm this. Second, several ECM layers are used in the design and connected in parallel, yielding a larger total capacitance and a smaller parasitic inductance. Characterization of the ECM is carried out with an LCR meter. To evaluate the behaviour of the ECM layer, time- and frequency-domain measurements are performed on the power-bus decoupling of the wireless sensor nodes. Comparisons with measurements of a bare PCB and of a conventional decoupling-capacitor solution show the improvement provided by the ECM layer. The measurements show that the ECM layer not only saves the space occupied by the surface-mount decoupling capacitors but also provides better power-bus decoupling for the nodes.
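
As a back-of-the-envelope check on the first argument, a capacitor with parasitic series inductance L resonates at f0 = 1/(2π√(LC)), so a smaller capacitance paired with a much smaller inductance pushes the resonance upward. The sketch below plugs in illustrative inductance values; they are assumptions, not the paper's measured figures.

```python
# Self-resonant frequency of a capacitor with parasitic inductance L.
# The inductance values are illustrative assumptions, not measured data.
import math

def f_res(L, C):
    # Series L-C resonance: f0 = 1 / (2*pi*sqrt(L*C)), in Hz
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

smt = f_res(1e-9,   100e-9)    # 100 nF SMT capacitor with ~1 nH ESL (assumed)
ecm = f_res(50e-12, 8e-9)      # single 8 nF ECM layer with ~50 pH (assumed)
print(f"SMT 100 nF resonates at {smt / 1e6:.0f} MHz")   # ~16 MHz
print(f"ECM 8 nF resonates at {ecm / 1e6:.0f} MHz")     # ~252 MHz

# Stacking n ECM layers in parallel multiplies C by n and divides L by n:
# the resonant frequency is unchanged, but the impedance around resonance
# drops by a factor of n, which is the second argument made above.
```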

Relevance:

20.00%

Publisher:

Abstract:

The development of ultra-high-speed (~20 Gsample/s) analogue-to-digital converters (ADCs), together with the delayed deployment of 40 Gbit/s transmission due to the economic downturn, has stimulated the investigation of digital signal processing (DSP) techniques for the compensation of optical transmission impairments. In the future, DSP will offer an entire suite of tools to compensate for optical impairments and facilitate the use of advanced modulation formats. Chromatic dispersion is a very significant impairment for high-speed optical transmission. This thesis investigates a novel electronic method of dispersion compensation which allows cost-effective, accurate detection of the amplitude and phase of the optical field in the radio-frequency domain. The first electronic dispersion compensation (EDC) schemes accessed only the amplitude information, using square-law detection, and achieved an increase in transmission distances. This thesis presents a method that uses a frequency-sensitive filter to estimate the phase of the received optical field; in conjunction with the amplitude information, the entire field can then be digitised using ADCs. This allows DSP technologies to take the next step in optical communications without requiring complex coherent detection, which is of particular interest in metropolitan area networks. The full-field receiver investigated requires only an additional asymmetric Mach-Zehnder interferometer and a balanced photodiode, and achieves a 50% increase in EDC reach compared with amplitude-only detection.
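
Once the full complex field has been digitised, chromatic dispersion can be undone with a standard frequency-domain equaliser, since dispersion is an all-pass filter with quadratic phase. The sketch below shows this textbook operation; the fibre parameters and the toy input signal are assumptions for illustration, not results from the thesis.

```python
# Textbook frequency-domain chromatic dispersion (CD) equaliser, assuming the
# complex field E(t) is available (amplitude plus the filter-estimated phase).
# Fibre parameters below are typical illustrative values, not the thesis's.
import numpy as np

def apply_cd(field, fs, beta2, length, sign=+1):
    # Apply (+1) or undo (-1) dispersion via H(w) = exp(j*sign*beta2/2 * w^2 * L)
    w = 2 * np.pi * np.fft.fftfreq(field.size, d=1.0 / fs)
    H = np.exp(1j * sign * 0.5 * beta2 * w**2 * length)
    return np.fft.ifft(np.fft.fft(field) * H)

fs = 20e9                       # ~20 GS/s ADC rate, as mentioned above
beta2 = -21e-27                 # s^2/m, typical SMF at 1550 nm (assumed)
span = 80e3                     # 80 km of fibre (assumed)

rng = np.random.default_rng(0)
tx = np.repeat(rng.integers(0, 2, 256) * 2.0 - 1.0, 16)  # toy NRZ-like field
rx = apply_cd(tx, fs, beta2, span, sign=+1)   # dispersed field at the receiver
eq = apply_cd(rx, fs, beta2, span, sign=-1)   # full-field EDC
print(np.allclose(tx, eq))                    # True: dispersion fully undone
```

Because the equaliser has access to phase as well as amplitude, the inversion is exact in principle, which is what distinguishes full-field EDC from amplitude-only schemes.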

Relevance:

20.00%

Publisher:

Abstract:

Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) has been developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this property is the key to systematic timing. Building on the original MOQA research, we discuss the design and implementation of a new domain-specific scripting language based on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The labelled partial order (LPO) is the basic data type of the language. The programmer uses built-in MOQA operations together with restricted control-flow statements to design MOQA programs. The MOQA language is formally specified, both syntactically and semantically, in this thesis, and a practical language interpreter implementation is provided and discussed. By analysing new algorithms and data-restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of domains beyond average-case analysis, showing strong connections between MOQA and parallel computing, reversible computing, and data-entropy analysis.
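
To give a flavour of the random-structure idea, a random structure over a partial order can be viewed, roughly, as the set of all order-consistent placements of distinct labels on its nodes. The brute-force sketch below enumerates such labellings for a toy four-node LPO; it is purely illustrative and is not the MOQA implementation.

```python
# Toy sketch of the labelled-partial-order (LPO) idea: enumerate all ways to
# place distinct labels on the nodes so that every order constraint is
# respected. Purely illustrative; not the MOQA language or its interpreter.
from itertools import permutations

def consistent_labellings(nodes, edges, labels):
    # Yield assignments where every edge (a, b) satisfies label[a] < label[b]
    for perm in permutations(labels):
        assign = dict(zip(nodes, perm))
        if all(assign[a] < assign[b] for a, b in edges):
            yield assign

# A four-node "diamond" order: bot < {m1, m2} < top.
nodes = ["bot", "m1", "m2", "top"]
edges = [("bot", "m1"), ("bot", "m2"), ("m1", "top"), ("m2", "top")]
count = sum(1 for _ in consistent_labellings(nodes, edges, [1, 2, 3, 4]))
print(count)  # 2: bot must take label 1, top label 4, and m1/m2 share {2, 3}
```

Randomness preservation then means, informally, that an operation maps the uniform distribution over such labellings to a uniform distribution over the labellings of the output structures, so average costs compose.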

Relevance:

20.00%

Publisher:

Abstract:

The healthcare industry is beginning to appreciate the benefits that can be obtained from using Mobile Health Systems (MHS) at the point of care. As a result, healthcare organisations are investing heavily in mobile health initiatives, with the expectation that users will employ the system to enhance performance. Despite widespread endorsement and support for the implementation of MHS, empirical evidence surrounding its benefits remains to be fully established. For MHS to be truly valuable, it is argued, the technology must be infused within healthcare practitioners' work practices and used to its full potential in post-adoptive scenarios. Yet there is a paucity of research focusing on the infusion of MHS by healthcare practitioners. To address this gap in the literature, the objective of this study is to explore the determinants and outcomes of MHS infusion by healthcare practitioners. The study adopts a post-positivist theory-building approach to MHS infusion. Existing literature is used to develop a conceptual model through which the research objective is explored. Employing a mixed-method approach, this conceptual model is first advanced through a case study in the UK, whereby propositions established from the literature are refined into testable hypotheses. The final phase of the study involves the collection of empirical data from a Canadian hospital, which supports the refined model and its associated hypotheses. The results from both phases of data collection are used to develop a model of MHS infusion. The study contributes to IS theory and practice by: (1) developing a model with six determinants (Availability, MHS Self-Efficacy, Time-Criticality, Habit, Technology Trust, and Task Behaviour) and individual performance-related outcomes of MHS infusion (Effectiveness, Efficiency, and Learning); (2) examining previously undocumented determinants and relationships; (3) identifying prerequisite conditions that both healthcare practitioners and organisations can employ to assist with MHS infusion; (4) developing a taxonomy that provides conceptual refinement of IT infusion; and (5) informing healthcare organisations and vendors about the performance of MHS in post-adoptive scenarios.

Relevance:

20.00%

Publisher:

Abstract:

In the field of embedded systems design, coprocessors play an important role as components that increase performance. Many embedded systems are built around a small general-purpose processor (GPP). If the GPP cannot meet the performance requirements for a certain operation, a coprocessor can be included in the design; the GPP can then offload the computationally intensive operation to the coprocessor, increasing the performance of the overall system. A common application of coprocessors is the acceleration of cryptographic algorithms. The work presented in this thesis discusses coprocessor architectures for various cryptographic algorithms found in many cryptographic protocols, and analyses their performance on a Field Programmable Gate Array (FPGA) platform. Firstly, the acceleration of Elliptic Curve Cryptography (ECC) algorithms is investigated through instruction-set extension of a GPP. The performance of these algorithms in a full hardware implementation is then investigated, and an architecture for the acceleration of the ECC-based digital signature algorithm is developed. Hash functions are also an important component of a cryptographic system. FPGA implementations of recent hash function designs from the SHA-3 competition are discussed, and a fair comparison methodology for hash functions is presented. Many cryptographic protocols involve the generation of random data for keys or nonces, which requires a True Random Number Generator (TRNG) in the system. Various TRNG designs are discussed and a secure implementation, including post-processing and failure detection, is introduced. Finally, a coprocessor for the acceleration of operations at the protocol level is discussed, where a novel aspect of the design is the secure method in which private-key data is handled.
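
As one concrete example of TRNG post-processing, the classic von Neumann corrector removes bias from independent raw bits by mapping the pair 01 to 0, 10 to 1, and discarding 00 and 11. The sketch below is a textbook illustration, not necessarily the post-processing scheme used in the thesis.

```python
# Von Neumann corrector: a classic debiasing step for raw TRNG output.
# Assumes the raw bits are independent; output rate is at most 1/4 of input.
def von_neumann(raw_bits):
    out = []
    it = iter(raw_bits)
    for a, b in zip(it, it):      # consume the stream two bits at a time
        if a != b:
            out.append(a)         # 01 -> 0, 10 -> 1; drop 00 and 11
    return out

print(von_neumann([0, 1, 1, 1, 1, 0, 0, 0, 0, 1]))  # -> [0, 1, 0]
```

A hardware implementation would pair this kind of corrector with online health tests, in the spirit of the failure detection mentioned above, so that a stuck or biased entropy source is flagged rather than silently debiased.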