980 results for design of experiments
Abstract:
A Teflon bridge/edge-eliminator is designed to connect a glass container and a light-transparent gold-minigrid NaCl thin-layer cell to form a vertically configured in-situ FTIR spectroelectrochemical cell. The bridge/edge-eliminator sets an internal reference point for accurate potential control. The thin-layer chamber measures 5 × 5 × 0.11 mm. A formal resistance of 1900 Ω was measured for the thin-layer cell in CH2Cl2/0.1 M TBAP solution. Well-defined thin-layer cyclic voltammograms and IR spectral changes were obtained for ferrocene oxidation.
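For scale, the measured formal resistance translates directly into the ohmic (iR) potential error across the thin layer. A worked example, using a hypothetical cell current that is not taken from the abstract:

\Delta E_{iR} = i \, R_{\text{cell}} = (1\ \mu\text{A}) \times (1900\ \Omega) \approx 1.9\ \text{mV}

At currents of this order, the internal reference point set by the bridge/edge-eliminator keeps the potential-control error in the low-millivolt range.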
Abstract:
We describe the automatic synthesis of a global nonlinear controller for stabilizing a magnetic levitation system. The synthesized control system can stabilize the maglev vehicle with large initial displacements from an equilibrium, and possesses a much larger operating region than the classical linear feedback design for the same system. The controller is automatically synthesized by a suite of computational tools. This work demonstrates that the difficult control synthesis task can be automated, using programs that actively exploit knowledge of nonlinear dynamics and state space and combine powerful numerical and symbolic computations with spatial-reasoning techniques.
Abstract:
This thesis describes a mechanical assembly system called LAMA (Language for Automatic Mechanical Assembly). The goal of the work was to create a mechanical assembly system that transforms a high-level description of an automatic assembly operation into a program for execution by a computer-controlled manipulator. This system allows the initial description of the assembly to be in terms of the desired effects on the parts being assembled. Languages such as WAVE [Bolles & Paul] and MINI [Silver] fail to meet this goal by requiring the assembly operation to be described in terms of manipulator motions. This research concentrates on the spatial complexity of mechanical assembly operations. The assembly problem is seen as the problem of achieving a certain set of geometrical constraints between basic objects while avoiding unwanted collisions. The thesis explores how these two facets, desired constraints and unwanted collisions, affect the primitive operations of the domain.
Abstract:
Conference paper on a CD-ROM.
Abstract:
This paper investigates the effects of antenna detuning on wireless devices caused by the presence of the human body, particularly the wrist. To facilitate repeatable and consistent antenna impedance measurements, an accurate and low-cost human phantom arm, which simulates human tissue at 433 MHz, has been developed and characterized. An accurate and low-cost hardware prototype system has been developed to measure antenna return loss at 433 MHz, and its design, fabrication and measured results are presented. This system provides a flexible means of evaluating closed-loop reconfigurable antenna tuning circuits for use in wireless mote applications.
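As background to the return-loss measurements described above, a minimal sketch of the standard conversion from a measured antenna impedance to return loss; the impedance values are hypothetical illustrations, not results from the paper:

import math

def return_loss_db(z_load: complex, z0: float = 50.0) -> float:
    # Standard relations: Gamma = (Z_L - Z_0) / (Z_L + Z_0),
    # RL(dB) = -20 * log10(|Gamma|).
    gamma = (z_load - z0) / (z_load + z0)
    return -20.0 * math.log10(abs(gamma))

# Hypothetical values: body proximity detunes the antenna away from the
# 50-ohm reference, degrading the match and lowering the return loss.
print(return_loss_db(48 + 2j))    # well matched: ~30.8 dB
print(return_loss_db(38 - 12j))   # detuned by the wrist: ~14.4 dB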
Abstract:
With the rapid growth of the Internet and digital communications, the volume of sensitive electronic transactions being transferred and stored over and on insecure media has increased dramatically in recent years. The growing demand for cryptographic systems to secure this data, across a multitude of platforms ranging from large servers to small mobile devices and smart cards, has necessitated research into low-cost, flexible and secure solutions. As constraints on architectures such as area, speed and power become key factors in choosing a cryptosystem, methods for speeding up the development and evaluation process are necessary. This thesis investigates flexible hardware architectures for the main components of a cryptographic system. Dedicated hardware accelerators can provide significant performance improvements over implementations on general-purpose processors. Each of the proposed designs is analysed in terms of speed, area, power, energy and efficiency. Field Programmable Gate Arrays (FPGAs) are chosen as the development platform due to their fast development time and reconfigurable nature. Firstly, a reconfigurable architecture for performing elliptic curve point scalar multiplication on an FPGA is presented. Elliptic curve cryptography is one such method of securing data, offering security levels similar to traditional systems such as RSA but with smaller key sizes, translating into lower memory and bandwidth requirements. The architecture is implemented using different underlying algorithms and coordinates for dedicated Double-and-Add algorithms, twisted Edwards algorithms and SPA-secure algorithms, and its power consumption and energy on an FPGA are measured. Hardware implementation results for these new algorithms are compared against their software counterparts, and the best choices for minimum area-time and area-energy circuits are then identified and examined for larger key and field sizes. Secondly, implementation methods for another component of a cryptographic system, namely the hash functions developed in the recently concluded SHA-3 competition, are presented. Various designs from the three rounds of the NIST-run competition are implemented on FPGA, along with an interface that allows fair comparison of the different hash functions when operating in a standardised and constrained environment. Different implementation methods for the designs and their subsequent performance are examined in terms of throughput, area and energy costs under various constraint metrics. Comparing many different implementation methods and algorithms is nontrivial; another aim of this thesis is therefore the development of generic interfaces, used both to reduce implementation and test time and to enable fair baseline comparisons of different algorithms operating in a standardised and constrained environment. Finally, a hardware-software co-design cryptographic architecture is presented. This architecture is capable of supporting multiple types of cryptographic algorithms and is described through an application performing public key cryptography, namely the Elliptic Curve Digital Signature Algorithm (ECDSA). It makes use of the elliptic curve architecture and the hash functions described previously. These components, along with a random number generator, provide hardware acceleration for a MicroBlaze-based cryptographic system. The trade-off between performance and flexibility is discussed using dedicated software and hardware-software co-design implementations of the elliptic curve point scalar multiplication block. Results are then presented in terms of the overall cryptographic system.
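As an illustration of the Double-and-Add point scalar multiplication underlying the elliptic curve architecture, here is a minimal software sketch; the curve operations are abstracted behind caller-supplied helpers (ec_add, ec_double, identity), since the thesis evaluates several coordinate systems, and these names are placeholders rather than identifiers from the work:

def scalar_multiply(k: int, P, ec_add, ec_double, identity):
    # Left-to-right double-and-add: computes k*P by scanning the bits of
    # k from most to least significant, doubling at every step and adding
    # P whenever the current bit is 1.
    result = identity
    for bit in bin(k)[2:]:
        result = ec_double(result)
        if bit == '1':
            result = ec_add(result, P)
    return result

Note the key-dependent branch on each bit: this is precisely the data-dependent behaviour that the SPA-secure variants mentioned in the abstract are designed to eliminate, typically at some cost in area or speed.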
Abstract:
A model for understanding the formation and propagation of modes in curved optical waveguides is developed. A numerical method for the calculation of curved-waveguide mode profiles and propagation constants in two-dimensional waveguides is developed, implemented and tested. A numerical method for the analysis of mode propagation in three-dimensional curved optical waveguides is developed, implemented and tested. A technique for the design of curved waveguides with reduced transition loss is presented, together with a scheme for drawing these new waveguides and ensuring that they have constant width. Claims about the waveguide design technique are substantiated through numerical simulations.
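A standard way to quantify the transition loss mentioned above is the overlap integral between the straight-guide and curved-guide mode profiles. The sketch below is illustrative only, using sampled one-dimensional Gaussian fields rather than the thesis's numerical method:

import numpy as np

def transition_loss_db(e1, e2, dx):
    # Normalized power coupling between two sampled mode fields:
    # eta = |integral(conj(E1) * E2)|^2 / (||E1||^2 * ||E2||^2),
    # approximated here by Riemann sums; loss in dB is -10*log10(eta).
    overlap = np.sum(np.conj(e1) * e2) * dx
    p1 = np.sum(np.abs(e1) ** 2) * dx
    p2 = np.sum(np.abs(e2) ** 2) * dx
    return -10.0 * np.log10(np.abs(overlap) ** 2 / (p1 * p2))

# Hypothetical profiles: the curved-guide mode is shifted outward relative
# to the straight-guide mode, so the junction couples imperfectly.
x = np.linspace(-10.0, 10.0, 2001)
straight = np.exp(-x**2 / 4.0)
curved = np.exp(-(x - 0.5)**2 / 4.0)
print(transition_loss_db(straight, curved, x[1] - x[0]))  # ~0.27 dB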
Abstract:
For at least two millennia and probably much longer, the traditional vehicle for communicating geographical information to end-users has been the map. With the advent of computers, the means of both producing and consuming maps have been radically transformed, while the inherent nature of the information product has also expanded and diversified rapidly. This has given rise in recent years to the new concept of geovisualisation (GVIS), which draws on the skills of the traditional cartographer but extends them into three spatial dimensions, and may also add temporality, photorealistic representations and/or interactivity. Demand for GVIS technologies and their applications has increased significantly in recent years, driven by the need to study complex geographical events, in particular their associated consequences, and to communicate the results of these studies to a diversity of audiences and stakeholder groups. GVIS involves data integration, multi-dimensional spatial display, advanced modelling techniques, dynamic design and development environments, and field-specific application needs. To meet these needs, GVIS tools should be both powerful and inherently usable, in order to facilitate their role in helping to interpret and communicate geographic problems. However, no framework currently exists for ensuring this usability. The research presented here seeks to fill this gap by addressing the challenges of incorporating user requirements in GVIS tool design. It starts from the premise that usability in GVIS should be incorporated and implemented throughout the whole design and development process. To facilitate this, Subject Technology Matching (STM) is proposed as a new approach to assessing and interpreting user requirements. Based on STM, a new design framework called Usability Enhanced Coordination Design (UECD) is then presented, with the purpose of improving the overall usability of the design outputs. UECD places GVIS experts in a new key role in the design process, to form a more coordinated and integrated workflow and more focused and interactive usability testing. To prove the concept, these theoretical elements of the framework have been implemented in two test projects: one is the creation of a coastal inundation simulation for Whitegate, Cork, Ireland; the other is a flood-mapping tool for Zhushan Town, Jiangsu, China. The two case studies successfully demonstrate the potential merits of the UECD approach when GVIS techniques are applied to geographic problem solving and decision making. The thesis delivers a comprehensive understanding of the development and challenges of GVIS technology, its usability concerns and the associated user-centred design (UCD); it explores the possibility of applying a UCD framework to GVIS design; it constructs a new theoretical design framework, UECD, which aims to make the whole design process usability-driven; and it develops the key concept of STM into a template set to improve the performance of a GVIS design. These key conceptual and procedural foundations can be built on by future research aimed at further refining and developing UECD as a useful design methodology for GVIS scholars and practitioners.
Abstract:
In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media and e-government. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred metres) optical interconnect. Today, transceivers for these applications achieve up to 100 Gb/s by multiplexing 10 x 10 Gb/s or 4 x 25 Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400 Gb/s up to 1 Tb/s. The crucial challenge is to achieve this in the same footprint (same transceiver module) and with similar power consumption as today's technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: indeed, a 1 Tb/s transceiver would require integration of 40 VCSELs (vertical cavity surface emitting laser diodes, widely used for short-reach optical interconnect), 40 photodiodes and electronics operating at 25 Gb/s in the same module as today's 100 Gb/s transceiver. Pushing the bit rate on such links beyond today's commercially available 100 Gb/s per fibre will require new generations of VCSELs and their driver and receiver electronics. This work looks into a number of state-of-the-art technologies, investigates their performance constraints and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep submicron (65 nm and 28 nm) CMOS technology are explored in this work, while also maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis of the rising and falling edges of the signal, and bandwidth extension by inductive peaking and different local feedback techniques. These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format can increase the throughput per individual channel, which helps overcome the challenges mentioned above in realizing 400 Gb/s to 1 Tb/s transceivers.
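To make the PAM-4 signalling concrete, here is a minimal sketch of the two-bits-per-symbol level mapping; the Gray-coded level assignment shown is a common convention assumed here, not a detail taken from the thesis:

# Gray-coded PAM-4: adjacent amplitude levels differ by one bit, so a
# single-level decision error corrupts only one of the two bits.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_modulate(bits):
    # Consume bits in pairs; each pair selects one of four amplitude
    # levels, doubling the bit rate per channel relative to two-level
    # NRZ at the same symbol rate.
    assert len(bits) % 2 == 0, "PAM-4 consumes bits in pairs"
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_modulate([0, 0, 0, 1, 1, 1, 1, 0]))  # [-3, -1, 1, 3]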
Abstract:
A digital differentiator computes the derivative of an input signal. This work presents first-degree and second-degree differentiators, designed as both infinite-impulse-response (IIR) filters and finite-impulse-response (FIR) filters. The proposed differentiators have low-pass magnitude response characteristics, thereby rejecting noise at frequencies above the cut-off frequency. Both steady-state frequency-domain characteristics and time-domain analyses are given for the proposed differentiators. It is shown that the proposed differentiators perform well when compared to previously proposed filters. When considering the time-domain characteristics of the differentiators, the processing of quantized signals proved especially enlightening in terms of the filtering effects of the proposed differentiators. The coefficients of the proposed differentiators are obtained using an optimization algorithm whose objectives include the magnitude and phase responses. The low-pass characteristic of the proposed differentiators is achieved by minimizing the filter variance. The low-pass differentiators designed show steep roll-off as well as highly accurate magnitude response in the pass-band. While having a history of over three hundred years, the design of fractional differentiators has become a 'hot topic' in recent decades. One challenging problem in this area is that there are many different definitions of the fractional model, such as the Riemann-Liouville and Caputo definitions. Through use of a feedback structure based on the Riemann-Liouville definition, it is shown that the performance of the fractional differentiator can be improved in both the frequency domain and the time domain. Two applications based on the proposed differentiators are described in the thesis: the first involves the application of second-degree differentiators to the estimation of the frequency components of a power system; the second concerns an image-processing edge-detection application.
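As a point of reference for the low-pass behaviour described above, a minimal sketch of the simplest first-degree FIR differentiator with a low-pass characteristic, the textbook central-difference design; it is shown for illustration and is not one of the thesis's optimized filters:

import numpy as np

# Central-difference differentiator: y[n] = (x[n] - x[n-2]) / 2.
h = np.array([0.5, 0.0, -0.5])

# Its magnitude response is |H(e^jw)| = sin(w): it tracks the ideal
# differentiator |H| = w at low frequencies but rolls off toward the
# Nyquist frequency, so high-frequency noise is attenuated rather than
# amplified.
w = np.linspace(0.0, np.pi, 512)
H = sum(h[k] * np.exp(-1j * w * k) for k in range(len(h)))
print(np.max(np.abs(np.abs(H) - np.sin(w))))  # ~1e-16: equals sin(w)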
Towards a situation-awareness-driven design of operational business intelligence & analytics systems
Abstract:
With the flood and time-sensitivity of data in the organizational context, the decision maker's choice of an appropriate decision alternative in a given situation is challenged. In particular, operational actors face the challenge of making business-critical decisions in a short time and at high frequency. The construct of Situation Awareness (SA) has been established in cognitive psychology as a valid basis for understanding the behavior and decision making of human beings in complex and dynamic systems. SA gives decision makers the possibility to make informed, time-critical decisions and thereby improve the performance of the respective business process. This research paper leverages SA as the starting point for a design science project for Operational Business Intelligence and Analytics systems and suggests a first version of design principles.
Abstract:
The development of a new bioprocess requires several steps from initial concept to a practical and feasible application. Industrial application of fungal pigments will depend on: (i) safety of consumption; (ii) stability of the pigments under the food-processing conditions required by the products in which they will be incorporated; and (iii) production yields high enough for production costs to be reasonable. Of these requirements, the first involves the highest research costs, and the practical application of this type of process may face several hurdles before final regulatory approval as a new food ingredient. Therefore, before going through expensive research to have them accepted as new products, the process potential should be assessed early on, and this brings forward pigment stability studies and process optimisation goals. Only ingredients that are usable in economically feasible conditions should progress to regulatory approval. This thesis covers these two aspects, stability and process optimisation, for a potential new ingredient: a natural red colour produced by microbial fermentation. The main goal was to design, optimise and scale up the production process of red pigments by Penicillium purpurogenum GH2. The approach followed to reach this objective was first to establish that the pigments produced by Penicillium purpurogenum GH2 are sufficiently stable under the different processing conditions (thermal and non-thermal) found in the food and textile industries. Once the pigments were shown to be stable enough, the work progressed towards process optimisation, aiming for the highest productivity using submerged fermentation as the production culture. The optimum production conditions defined at flask scale were used to scale up the pigment production process to pilot-reactor scale. Finally, the potential applications of the pigments were assessed. Based on this sequence of specific targets, the thesis was structured in six parts, containing a total of nine chapters. Engineering design of a bioprocess for the production of natural red colourants by submerged fermentation of the thermophilic fungus Penicillium purpurogenum GH2.
Abstract:
BACKGROUND: The Exercise Intensity Trial (EXcITe) is a randomized trial to compare the efficacy of supervised moderate-intensity aerobic training to moderate to high-intensity aerobic training, relative to attention control, on aerobic capacity, physiologic mechanisms, patient-reported outcomes, and biomarkers in women with operable breast cancer following the completion of definitive adjuvant therapy. METHODS/DESIGN: Using a single-center, randomized design, 174 postmenopausal women (58 patients/study arm) with histologically confirmed, operable breast cancer presenting to Duke University Medical Center (DUMC) will be enrolled in this trial following completion of primary therapy (including surgery, radiation therapy, and chemotherapy). After baseline assessments, eligible participants will be randomized to one of two supervised aerobic training interventions (moderate-intensity or moderate/high-intensity aerobic training) or an attention-control group (progressive stretching). The aerobic training interventions will include 150 min·wk⁻¹ of supervised treadmill walking at an intensity of 60%-70% (moderate-intensity) or 60%-100% (moderate to high-intensity) of the individually determined peak oxygen consumption (VO₂peak), in sessions of 20-45 minutes, for 16 weeks. The progressive stretching program will be matched to the exercise interventions in terms of program length (16 weeks), social interaction (participants will receive one-on-one instruction), and session duration (20-45 mins/session). The primary study endpoint is VO₂peak, as measured by an incremental cardiopulmonary exercise test. Secondary endpoints include the physiologic determinants that govern VO₂peak, patient-reported outcomes, and biomarkers associated with breast cancer recurrence/mortality. All endpoints will be assessed at baseline and after the intervention (16 weeks). DISCUSSION: EXcITe is designed to investigate the intensity of aerobic training required to induce optimal improvements in VO₂peak and other pertinent outcomes in women who have completed definitive adjuvant therapy for operable breast cancer. Overall, this trial will inform and refine exercise guidelines to optimize recovery in breast and other cancer survivors following the completion of primary cytotoxic therapy. TRIAL REGISTRATION: NCT01186367.
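For illustration, the per-participant exercise prescription reduces to simple fractions of the individually measured VO₂peak; the sketch below uses a hypothetical VO₂peak value, not trial data:

def target_vo2_band(vo2_peak, low_frac, high_frac):
    # Training band expressed as fractions of the measured VO2peak.
    return (low_frac * vo2_peak, high_frac * vo2_peak)

vo2_peak = 24.0  # hypothetical participant VO2peak in mL/kg/min
print(target_vo2_band(vo2_peak, 0.60, 0.70))  # moderate-intensity arm
print(target_vo2_band(vo2_peak, 0.60, 1.00))  # moderate/high-intensity arm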