879 results for DIGITAL SYSTEMS
Abstract:
Tedd, L.A. & Large, A. (2005). Digital libraries: principles and practice in a global environment. Munich: K.G. Saur.
Abstract:
Keynote presentation at ETHICOMP 2001.
Abstract:
Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Dental Medicine (Mestre em Medicina Dentária).
Abstract:
A new family of neural network architectures is presented. This family of architectures solves the problem of constructing and training minimal neural network classification expert systems by using switching theory. The primary insight that leads to the use of switching theory is that the problem of minimizing the number of rules and the number of IF statements (antecedents) per rule in a neural network expert system can be recast into the problem of minimizing the number of digital gates and the number of connections between digital gates in a Very Large Scale Integrated (VLSI) circuit. The rules that the neural network generates to perform a task are readily extractable from the network's weights and topology. Analysis and simulations on the Mushroom database illustrate the system's performance.
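The recasting described above can be illustrated with standard two-level logic minimisation: the Quine-McCluskey-style sketch below (a generic illustration, not the paper's actual network-construction algorithm; all function names are invented here) merges minterms into prime implicants and then picks a minimal cover, which corresponds to extracting the fewest rules with the fewest antecedents.

```python
from itertools import combinations

def combine(a, b):
    """Merge two implicants (strings over '0', '1', '-') differing in one bit."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        return a[:diff[0]] + '-' + a[diff[0] + 1:]
    return None

def prime_implicants(minterms, n):
    """Repeatedly merge implicants; unmerged survivors are prime."""
    terms = {format(m, f'0{n}b') for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            c = combine(a, b)
            if c:
                merged.add(c)
                used.update({a, b})
        primes |= terms - used
        terms = merged
    return primes

def covers(imp, m, n):
    bits = format(m, f'0{n}b')
    return all(c == '-' or c == b for c, b in zip(imp, bits))

def minimal_cover(minterms, n):
    """Smallest set of prime implicants covering all minterms:
    analogous to the minimum number of rules/IF antecedents."""
    primes = prime_implicants(minterms, n)
    for k in range(1, len(primes) + 1):
        for subset in combinations(sorted(primes), k):
            if all(any(covers(p, m, n) for p in subset) for m in minterms):
                return subset

# Five training patterns labelled "class 1" collapse to two rules:
# IF c THEN class1, and IF a AND b THEN class1.
print(minimal_cover([1, 3, 5, 6, 7], 3))  # -> ('--1', '11-')
```

Each surviving implicant corresponds to one rule, and each non-'-' position to one antecedent, which is the gate/connection count being minimised in the VLSI analogy.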
Abstract:
Two classes of techniques have been developed to whiten the quantization noise in digital delta-sigma modulators (DDSMs): deterministic and stochastic. In this two-part paper, a design methodology for reduced-complexity DDSMs is presented. The design methodology is based on error masking. Rules for selecting the word lengths of the stages in multistage architectures are presented. We show that the hardware requirement can be reduced by up to 20% compared with a conventional design, without sacrificing performance. Simulation and experimental results confirm theoretical predictions. Part I addresses MultistAge noise SHaping (MASH) DDSMs; Part II focuses on single-quantizer DDSMs.
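The MultistAge noise SHaping structure that Part I addresses can be sketched in a few lines. The model below is a generic textbook MASH 1-1-1 (three cascaded first-order accumulators with an error-cancellation network), not the reduced-complexity design of the paper; the function name and interface are assumptions for illustration.

```python
def mash111(x, n_bits, n_samples):
    """Toy MASH 1-1-1 DDSM: constant input x, n_bits accumulator word length.
    Returns the output sequence; its mean approximates x / 2**n_bits."""
    M = 1 << n_bits          # accumulator modulus
    s1 = s2 = s3 = 0         # accumulator states
    c2d = 0                  # one-sample delay of stage-2 carry
    c3d = c3dd = 0           # one- and two-sample delays of stage-3 carry
    ys = []
    for _ in range(n_samples):
        s1 += x;  c1 = s1 // M;  s1 %= M   # stage 1; residue s1 = its error
        s2 += s1; c2 = s2 // M;  s2 %= M   # stage 2 requantises stage-1 error
        s3 += s2; c3 = s3 // M;  s3 %= M   # stage 3 requantises stage-2 error
        # error-cancellation network: y = c1 + (1 - z^-1) c2 + (1 - z^-1)^2 c3
        y = c1 + (c2 - c2d) + (c3 - 2 * c3d + c3dd)
        c3dd, c3d, c2d = c3d, c3, c2
        ys.append(y)
    return ys

ys = mash111(3, 4, 1600)
print(sum(ys) / len(ys))   # close to 3/16 = 0.1875
```

The differencing in the cancellation network pushes each stage's quantization error to high frequencies (third-order shaping), which is the noise-whitening context the paper builds on.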
Abstract:
For pt. I see ibid., vol. 44, p. 927-36 (1997). In a digital communications system, data are transmitted from one location to another by mapping bit sequences to symbols, and symbols to sample functions of analog waveforms. The analog waveform passes through a bandlimited (possibly time-varying) analog channel, where the signal is distorted and noise is added. In a conventional system the analog sample functions sent through the channel are weighted sums of one or more sinusoids; in a chaotic communications system the sample functions are segments of chaotic waveforms. At the receiver, the symbol may be recovered by means of coherent detection, where all possible sample functions are known, or by noncoherent detection, where one or more characteristics of the sample functions are estimated. In a coherent receiver, synchronization is the most commonly used technique for recovering the sample functions from the received waveform. These sample functions are then used as reference signals for a correlator. Synchronization-based coherent receivers have advantages over noncoherent receivers in terms of noise performance, bandwidth efficiency (in narrow-band systems) and/or data rate (in chaotic systems). These advantages are lost if synchronization cannot be maintained, for example, under poor propagation conditions. In these circumstances, communication without synchronization may be preferable. The theory of conventional telecommunications is extended to chaotic communications, chaotic modulation techniques and receiver configurations are surveyed, and chaotic synchronization schemes are described.
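The coherent-correlator idea summarised above can be sketched for antipodal chaos shift keying. This toy model assumes a particular chaotic map, perfect synchronization (the receiver regenerates each reference segment exactly), and invented function names; it is an illustration of the detection principle, not the survey's specific schemes.

```python
import random

def chaotic_segment(x, n):
    """Iterate the map x -> 1 - 2x^2 (a Chebyshev-type chaotic map on [-1, 1])."""
    seg = []
    for _ in range(n):
        x = 1 - 2 * x * x
        seg.append(x)
    return seg, x

def transmit(bits, seg_len, noise_std=0.0, seed=7, x0=0.3):
    """Antipodal CSK: bit 1 -> chaotic segment, bit 0 -> its negation,
    plus additive Gaussian channel noise."""
    rng = random.Random(seed)
    x = x0
    refs, rx = [], []
    for b in bits:
        seg, x = chaotic_segment(x, seg_len)
        refs.append(seg)                         # reference known to receiver
        s = 1 if b else -1
        rx.append([s * v + rng.gauss(0.0, noise_std) for v in seg])
    return refs, rx

def coherent_detect(refs, rx):
    """Correlator receiver: sign of the correlation with the known reference."""
    return [1 if sum(r * y for r, y in zip(ref, seg)) > 0 else 0
            for ref, seg in zip(refs, rx)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
refs, rx = transmit(bits, 64, noise_std=0.3)     # moderate channel noise
print(coherent_detect(refs, rx))
```

With the reference segments unavailable (synchronization lost), the correlator has nothing to correlate against, which is exactly the failure mode motivating the noncoherent alternatives surveyed in the paper.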
Abstract:
The overall objective of this thesis is to integrate a number of micro/nanotechnologies into integrated cartridge-type systems that implement biochemical protocols. Instrumentation and systems were developed to interface with such cartridge systems: (i) implementing microfluidic handling, (ii) executing thermal control during biochemical protocols and (iii) detecting biomolecules associated with inherited or infectious disease. This system implements biochemical protocols for DNA extraction, amplification and detection. A digital microfluidic chip (ElectroWetting on Dielectric) manipulated droplets of sample and reagent, implementing sample preparation protocols. The cartridge system also integrated a planar magnetic microcoil device to generate local magnetic field gradients for manipulating magnetic beads. For hybridisation detection, a fluorescence microarray screening for mutations associated with the CFTR gene is printed on a waveguide surface and integrated within the cartridge. A second cartridge system was developed to implement amplification and detection, screening for DNA associated with disease-causing pathogens, e.g. Escherichia coli. This system incorporates (i) elastomeric pinch valves isolating liquids during biochemical protocols and (ii) a silver nanoparticle microarray for fluorescent signal enhancement using localized surface plasmon resonance. The microfluidic structures allowed the sample and reagent to be loaded and moved between chambers, with external heaters implementing thermal steps for nucleic acid amplification and detection. In a technique allowing probe DNA to be immobilised within a microfluidic system using three-dimensional (3D) hydrogel structures, a prepolymer solution containing probe DNA was formulated and introduced into the microfluidic channel. Photo-polymerisation was then undertaken, forming 3D hydrogel structures attached to the microfluidic channel surface.
The prepolymer material, poly-ethyleneglycol (PEG), was used to form hydrogel structures containing probe DNA. This hydrogel formulation process was fast compared to conventional biomolecule immobilization techniques and was also biocompatible with the immobilised biomolecules, as verified by on-chip hybridisation assays. This process allowed control over hydrogel height growth at the micron scale.
Abstract:
In the field of embedded systems design, coprocessors play an important role as a component to increase performance. Many embedded systems are built around a small General Purpose Processor (GPP). If the GPP cannot meet the performance requirements for a certain operation, a coprocessor can be included in the design. The GPP can then offload the computationally intensive operation to the coprocessor, thus increasing the performance of the overall system. A common application of coprocessors is the acceleration of cryptographic algorithms. The work presented in this thesis discusses coprocessor architectures for various cryptographic algorithms that are found in many cryptographic protocols. Their performance is then analysed on a Field Programmable Gate Array (FPGA) platform. Firstly, the acceleration of Elliptic Curve Cryptography (ECC) algorithms is investigated through the use of instruction set extensions of a GPP. The performance of these algorithms in a full hardware implementation is then investigated, and an architecture for the acceleration of the ECC-based digital signature algorithm is developed. Hash functions are also an important component of a cryptographic system. The FPGA implementations of recent hash function designs from the SHA-3 competition are discussed and a fair comparison methodology for hash functions is presented. Many cryptographic protocols involve the generation of random data, for keys or nonces. This requires a True Random Number Generator (TRNG) to be present in the system. Various TRNG designs are discussed and a secure implementation, including post-processing and failure detection, is introduced. Finally, a coprocessor for the acceleration of operations at the protocol level is discussed, where a novel aspect of the design is the secure method in which private-key data is handled.
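One common TRNG post-processing step of the kind mentioned above is debiasing. The von Neumann corrector below is a standard textbook choice, shown only as an illustration; the thesis does not specify its exact post-processing scheme here.

```python
def von_neumann(bits):
    """Von Neumann corrector: consume raw bits in non-overlapping pairs.
    (0,1) -> emit 0; (1,0) -> emit 1; equal pairs are discarded.
    Removes bias from independent but biased raw bits, at reduced rate."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# Raw pairs (0,0) (0,1) (1,0) (1,1) yield the debiased stream [0, 1].
print(von_neumann([0, 0, 0, 1, 1, 0, 1, 1]))
```

The throughput cost (at best one output bit per two input bits) is one reason hardware TRNGs pair such correctors with online failure detection, as the thesis's secure implementation does.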
Abstract:
Phase-locked loops (PLLs) are a crucial component in modern communications systems. Comprising a phase detector, linear filter, and controllable oscillator, they are widely used in radio receivers to retrieve the information content from remote signals. As such, they are capable of signal demodulation, phase and carrier recovery, frequency synthesis, and clock synchronization. Continuous-time PLLs are a mature area of study, and have been covered in the literature since the early classical work by Viterbi [1] in the 1950s. With the rise of computing in recent decades, discrete-time digital PLLs (DPLLs) are a more recent discipline; most of the literature published dates from the 1990s onwards, with Gardner [2] a pioneer in this area. Our aim in this work is to address the difficulties encountered by Gardner [3] in his investigation of DPLL output phase jitter, where additive noise on the input signal is combined with frequency quantization in the local oscillator. The model we use in our novel analysis of the system is also applicable to another of the cases examined by Gardner, namely the DPLL with a delay element integrated in the loop. This gives us the opportunity to look at that system in more detail, our analysis providing some unique insights into the variance `dip' seen by Gardner in [3]. We initially provide background on probability theory and stochastic processes, the branches of mathematics underpinning the study of noisy analogue and digital PLLs. We give an overview of classical analogue PLL theory as well as background on both the digital PLL and the circle map, referencing the model proposed by Teplinsky et al. [4, 5]. For our novel work, the case of combined frequency quantization and noisy input from [3] is investigated first numerically, and then analytically as a Markov chain via its Chapman-Kolmogorov equation.
The resulting delay equation for the steady-state jitter distribution is treated using two separate asymptotic analyses to obtain approximate solutions. It is shown that the variance obtained in each case matches the numerical results well. Other properties of the output jitter, such as the mean, are also investigated. In this way, we arrive at a more complete understanding of the interaction between quantization and input noise in the first-order DPLL than is possible using simulation alone. We also carry out an asymptotic analysis of a particular case of the noisy first-order DPLL with delay, previously investigated by Gardner [3]. We show that a distinctive feature of the simulation results, namely the variance `dip' seen for certain levels of input noise, is explained by this analysis. Finally, we look at the second-order DPLL with additive noise, using numerical simulations to see the effects of low levels of noise on the limit cycles. We show how these effects are similar to those seen in the noise-free loop with non-zero initial conditions.
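The kind of numerical jitter experiment described above can be sketched as a toy first-order DPLL whose frequency correction is quantised to discrete oscillator steps, with additive Gaussian noise on the measured phase error. The model structure, parameter names, and wrap convention below are assumptions for illustration, not the thesis's exact loop equations.

```python
import random

def dpll_jitter(n, step, gain, noise_std, seed=1):
    """Estimate output phase-jitter variance for a toy first-order DPLL.
    step      : frequency-quantization granularity of the local oscillator
    gain      : loop gain applied to the phase error
    noise_std : std dev of additive noise on the phase-detector input"""
    rng = random.Random(seed)
    phi = 0.0                                   # output phase error (cycles)
    samples = []
    for _ in range(n):
        err = -phi + rng.gauss(0.0, noise_std)  # noisy phase-detector output
        corr = step * round(gain * err / step)  # correction snapped to grid
        phi = (phi + corr + 0.5) % 1.0 - 0.5    # wrap phase to [-0.5, 0.5)
        samples.append(phi)
    mean = sum(samples) / n
    return sum((s - mean) ** 2 for s in samples) / n

# Sweeping noise_std at fixed step reproduces the kind of variance-vs-noise
# curves studied numerically before the Markov-chain analysis.
print(dpll_jitter(2000, 0.01, 0.5, 0.05))
```

Treating the wrapped phase as the state of a Markov chain, with the transition kernel induced by the Gaussian noise and the quantiser, is the step that leads to the Chapman-Kolmogorov formulation mentioned above.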
Abstract:
With the growing demand for high-speed and high-quality short-range communication, multi-band orthogonal frequency division multiplexing ultra-wideband (MB-OFDM UWB) systems have recently garnered considerable interest in industry and in academia. To achieve a low-cost solution, highly integrated transceivers with small die area and minimum power consumption are required. The key building block of the transceiver is the frequency synthesizer. A frequency synthesizer comprising two PLLs and one multiplexer is presented in this thesis. Ring oscillators are adopted for the PLL implementation in order to drastically reduce the die area of the frequency synthesizer. The poor spectral purity that appears in frequency synthesizers involving mixers is greatly improved in this design. Based on specifications derived from application standards, a design methodology is presented to obtain the parameters of the building blocks. Simulation results are also provided to verify the performance of the proposed design.
Abstract:
Droplet-based digital microfluidics technology has now come of age, and software-controlled biochips for healthcare applications are starting to emerge. However, today's digital microfluidic biochips suffer from the drawback that there is no feedback to the control software from the underlying hardware platform. Due to the lack of precision inherent in biochemical experiments, errors are likely during droplet manipulation; error recovery based on the repetition of experiments leads to wastage of expensive reagents and hard-to-prepare samples. By exploiting recent advances in the integration of optical detectors (sensors) into a digital microfluidics biochip, we present a physical-aware system reconfiguration technique that uses sensor data at intermediate checkpoints to dynamically reconfigure the biochip. A cyberphysical resynthesis technique is used to recompute electrode-actuation sequences, thereby deriving new schedules, module placement, and droplet routing pathways, with minimum impact on the time-to-response. © 2012 IEEE.
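The checkpoint-driven recovery idea can be sketched as a small control loop. The structure below (a hypothetical `run_protocol`/`sense` API, with local re-execution of the failed segment rather than full resynthesis of schedules, placement, and routing) is a simplification of the cyberphysical resynthesis the paper describes.

```python
def run_protocol(segments, sense, max_retries=2):
    """Execute droplet-operation segments; at each checkpoint an optical
    sensor reading decides whether to continue or re-execute the segment.
    segments : list of segments, each a list of droplet operations
    sense    : sense(seg_id) -> bool, True if the checkpoint reading is OK"""
    log = []
    for seg_id, ops in enumerate(segments):
        for attempt in range(max_retries + 1):
            for op in ops:
                log.append(op)            # actuate electrodes for this op
            if sense(seg_id):             # detector at the checkpoint
                break                     # droplet state OK; move on
            if attempt == max_retries:
                raise RuntimeError(f"segment {seg_id} failed")
            # otherwise: re-execute only this segment, avoiding a full
            # restart and the associated reagent/sample wastage
    return log

def flaky_sense(seg_id, _state={'failed': False}):
    """Stub sensor: segment 1 fails exactly once, then succeeds."""
    if seg_id == 1 and not _state['failed']:
        _state['failed'] = True
        return False
    return True

print(run_protocol([['dispense', 'mix'], ['split', 'route'], ['detect']],
                   flaky_sense))
```

In the paper's full technique the recovery action is a resynthesis step (new schedule, module placement, and routing) rather than simple repetition, chosen to minimise the impact on time-to-response.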
Abstract:
Gemstone Team SHINE (Students Helping to Implement Natural Energy)
Abstract:
Gemstone Team FASTR (Finding Alternative Specialized Travel Routes)
Abstract:
Many food production methods are both economically and environmentally unsustainable. Our project investigated aquaponics, an alternative method of agriculture that could address these issues. Aquaponics combines fish and plant crop production in a symbiotic, closed-loop system. We aimed to reduce the initial and operating costs of current aquaponic systems by utilizing alternative feeds. These improvements may allow for sustainable implementation of the system in rural or developing regions. We conducted a multi-phase process to determine the most affordable and effective feed alternatives for use in an aquaponic system. At the end of two preliminary phases, soybean meal was identified as the most effective potential feed supplement. In our final phase, we constructed and tested six full-scale aquaponic systems of our own design. Data showed that soybean meal can be used to reduce operating costs and reliance on fishmeal. However, a more targeted investigation is needed to identify the optimal formulation of alternative feed blends.
Abstract:
Abstract: Companies expanding their product offerings to new markets face new product design challenges related to customer needs, product usage, and operating environments. Some of the main challenges are the lack of quantifiable information, product experience, and field data. Designing reliable products under such challenges requires flexible reliability assessment processes that can capture the variables and parameters affecting overall product reliability and allow different design scenarios to be assessed. These challenges also suggest that a mechanistic (Physics of Failure, PoF) reliability approach would be a suitable framework for reliability assessment. Mechanistic reliability recognizes the primary factors affecting design reliability. This research views the designed entity as a "system of components required to deliver specific operations" and addresses the above-mentioned challenges by, first, developing a design synthesis that allows descriptive operations/system-component relationships to be realized; second, developing components' mathematical damage models that evaluate component Time-to-Failure (TTF) distributions given (1) the descriptive design model, (2) customer usage knowledge and (3) design material properties; and, last, developing a procedure that integrates the components' damage models to assess the mechanical system's reliability over time. Analytical and numerical simulation models were developed to capture the relationships between operations and components, the mathematical damage models, and the assessment of system reliability. The process was able to affect the design form during the conceptual design phase by providing stress goals to meet component reliability targets. The process was also able to numerically assess the reliability of a system based on components' mechanistic TTF distributions, besides affecting the design of the component during the design embodiment phase.
The process was used to assess the reliability of an internal combustion engine manifold during the design phase; the results were compared with reliability field data and found to be conservative. The research focused on mechanical systems affected by independent mechanical failure mechanisms that are influenced by the design process. The influences of assembly and manufacturing stresses and defects are not a focus of this research.
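The integration step described above, combining component TTF distributions into a system-level reliability over time, can be sketched for independent failure mechanisms, where a series-system assumption makes R_sys(t) the product of the component reliabilities. The Weibull form and the parameter values below are illustrative assumptions, not the research's actual damage models.

```python
import math

def weibull_reliability(t, beta, eta):
    """Weibull component reliability R(t) = exp(-(t/eta)^beta);
    beta is the shape parameter, eta the characteristic life."""
    return math.exp(-((t / eta) ** beta))

def system_reliability(t, components):
    """Series system of independent failure mechanisms:
    R_sys(t) is the product of the component reliabilities.
    components : iterable of (beta, eta) pairs, one per mechanism."""
    r = 1.0
    for beta, eta in components:
        r *= weibull_reliability(t, beta, eta)
    return r

# Two hypothetical mechanisms (e.g. fatigue and creep) on one component:
print(system_reliability(50.0, [(2.0, 100.0), (1.5, 300.0)]))
```

Inverting a target R_sys(t) through this product is one way stress goals per component can be derived during conceptual design, as described above.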