934 results for ROBUST DESIGN
Abstract:
There is an increasing demand for optimising complete systems and the devices within them, including capturing the interactions between the various multi-disciplinary (MD) components involved. Furthermore, confidence in robust solutions is essential. As a consequence, the computational cost rapidly increases and in many cases it becomes infeasible to perform such conceptual designs. A coherent design methodology is proposed, with the aim of improving the design process by effectively exploiting the potential of computational synthesis, search and optimisation, and conventional simulation, while reducing the computational cost. The optimisation framework consists of a hybrid optimisation algorithm that handles multi-fidelity simulations. Simultaneously, to handle uncertainty without recasting the model and at affordable computational cost, a stochastic modelling method known as non-intrusive polynomial chaos is introduced. The effectiveness of the design methodology is demonstrated with the optimisation of a submarine propulsion system.
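A minimal sketch of the non-intrusive idea, assuming a single standard-normal uncertain input and a placeholder black-box model (the model, distribution and expansion order below are illustrative, not taken from the paper): the chaos coefficients are obtained purely from evaluations of the unmodified simulation at quadrature nodes, and the output mean and variance follow directly from the coefficients.

    import numpy as np
    from math import factorial
    from numpy.polynomial.hermite_e import hermegauss, hermeval

    # Placeholder black-box model of one standard-normal uncertain input.
    def model(xi):
        return np.exp(0.3 * xi)

    order = 4                                # highest retained Hermite degree
    nodes, weights = hermegauss(order + 1)   # probabilists' Gauss-Hermite rule
    weights = weights / np.sqrt(2 * np.pi)   # normalise to the N(0, 1) density

    # Non-intrusive spectral projection: c_k = E[model(xi) He_k(xi)] / k!
    coeffs = np.empty(order + 1)
    for k in range(order + 1):
        basis = np.zeros(order + 1)
        basis[k] = 1.0
        Hk = hermeval(nodes, basis)          # k-th probabilists' Hermite poly
        coeffs[k] = np.sum(weights * model(nodes) * Hk) / factorial(k)

    mean = coeffs[0]                         # zeroth coefficient is E[model]
    variance = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, order + 1))
    print(f"mean ~ {mean:.4f}, variance ~ {variance:.4f}")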
Abstract:
Vertically aligned carbon nanotube (CNT) 'forest' microstructures fabricated by chemical vapor deposition (CVD) using patterned catalyst films typically have a low CNT density per unit area. As a result, CNT forests have poor bulk properties and are too fragile for integration with microfabrication processing. We introduce a new self-directed capillary densification method where a liquid is controllably condensed onto and evaporated from the CNT forests. Compared to prior approaches, where the substrate with CNTs is immersed in a liquid, our condensation approach gives significantly more uniform structures and enables precise control of the CNT packing density. We present a set of design rules and parametric studies of CNT micropillar densification by self-directed capillary action, and show that self-directed capillary densification enhances Young's modulus and electrical conductivity of CNT micropillars by more than three orders of magnitude. Owing to the outstanding properties of CNTs, this scalable process will be useful for the integration of CNTs as a functional material in microfabricated devices for mechanical, electrical, thermal and biomedical applications. © 2011 IOP Publishing Ltd.
Abstract:
Access to robust and information-rich human cardiac tissue models would accelerate drug-based strategies for treating heart disease. Despite significant effort, the generation of high-fidelity adult-like human cardiac tissue analogs remains challenging. We used computational modeling of tissue contraction and assembly mechanics in conjunction with microfabricated constraints to guide the design of aligned and functional 3D human pluripotent stem cell (hPSC)-derived cardiac microtissues that we term cardiac microwires (CMWs). Miniaturization of the platform circumvented the need for tissue vascularization and enabled higher-throughput image-based analysis of CMW drug responsiveness. CMW tissue properties could be tuned using electromechanical stimuli and cell composition. Specifically, controlling self-assembly of 3D tissues in aligned collagen, and pacing with point stimulation electrodes, were found to promote cardiac maturation-associated gene expression and in vivo-like electrical signal propagation. Furthermore, screening a range of hPSC-derived cardiac cell ratios identified that 75% NKX2 Homeobox 5 (NKX2-5)+ cardiomyocytes and 25% Cluster of Differentiation 90 (CD90)+ nonmyocytes optimized tissue remodeling dynamics and yielded enhanced structural and functional properties. Finally, we demonstrate the utility of the optimized platform in a tachycardic model of arrhythmogenesis, an aspect of cardiac electrophysiology not previously recapitulated in 3D in vitro hPSC-derived cardiac microtissue models. The design criteria identified with our CMW platform should accelerate the development of predictive in vitro assays of human heart tissue function.
Abstract:
Self-excited oscillation is becoming a major issue in low-emission, lean partially premixed combustion systems, and active control has been shown to be a feasible method to suppress such instabilities. A number of robust control methods are employed to obtain a feedback controller, and it is observed that the robustness to system uncertainty is significantly better for a low-complexity controller in spite of the norms being similar. Moreover, we demonstrate that closed-loop stability for such a complex system can be proved via the integral quadratic constraint method. Open- and closed-loop nonlinear simulations are provided. © 2013 Copyright Taylor and Francis Group, LLC.
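For context, the integral quadratic constraint (IQC) method referenced above certifies closed-loop stability when the uncertain or nonlinear block \Delta satisfies, for some multiplier \Pi, a frequency-domain condition of the standard textbook form (this is the generic statement, not the paper's specific multiplier):

    \int_{-\infty}^{\infty}
    \begin{bmatrix} \hat{v}(j\omega) \\ \hat{w}(j\omega) \end{bmatrix}^{*}
    \Pi(j\omega)
    \begin{bmatrix} \hat{v}(j\omega) \\ \hat{w}(j\omega) \end{bmatrix} d\omega \;\ge\; 0,
    \qquad w = \Delta(v),

with stability then following from a complementary frequency-domain inequality on the nominal plant.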
Abstract:
Robust climbing in unstructured environments has been one of the long-standing challenges in robotics research. Among others, the control of large adhesion forces is still an important problem that significantly restricts the locomotion performance of climbing robots. The main contribution of this paper is to propose a novel approach to autonomous robot climbing which makes use of hot melt adhesion (HMA). The HMA material is known as an economical solution to achieve large adhesion forces, which can be varied by controlling the material temperature. For locomotion on both inclined and vertical walls, this paper investigates the basic characteristics of HMA material, and proposes the design and control of a climbing robot that uses the HMA material to attach its body to, and detach it from, the environment. The robot is equipped with servomotors and thermal control units to actively vary the temperature of the material, and the coordination of these components enables the robot to walk against the gravitational forces even with a relatively large body weight. A real-world platform is used to demonstrate locomotion on a vertical wall, and the experimental results show the feasibility and overall performance of this approach. © 2013 Elsevier B.V. All rights reserved.
Abstract:
Robust climbing in unstructured environments is a long-standing challenge in robotics research. Recently there has been increasing interest in using adhesive materials for this purpose. For example, a climbing robot using hot melt adhesives (HMAs) has demonstrated advantages in high attachment strength, reasonable operating costs, and applicability to different surfaces. Despite these advantages, several problems remain with the attachment and detachment operations, which prevent this approach from being used in a broader range of applications. Among others, one of the main problems lies in the fact that the adhesive characteristics of this material were not fully understood in the context of robotic climbing locomotion. As a result, the previous robot often could not achieve the expected locomotion performance and "contaminated" the environment with HMAs left behind. In order to improve locomotion performance, this paper focuses on the attachment and detachment operations in robot climbing with HMAs. By systematically analyzing the adhesive properties and bonding strength of HMAs to different materials, we propose a novel detachment mechanism that substantially improves climbing performance without leaving HMA traces. © 2012 IEEE.
Abstract:
An optimisation scheme for optical materials for low-loss, high-strength tellurite glass optical fibres
Abstract:
With the aim of designing and developing new functionalized green triplet light emitters that possess distinctive electronic properties for robust and highly efficient phosphorescent organic light-emitting diodes (PHOLEDs), a series of bluish-green to yellow-green phosphorescent tris-cyclometalated homoleptic iridium(III) complexes [Ir(ppy-X)3] (X = SiPh3, GePh3, NPh2, POPh2, OPh, SPh, SO2Ph; Hppy = 2-phenylpyridine) has been synthesized and fully characterized by spectroscopic, redox, and photophysical methods.
Abstract:
A new robust guaranteed-cost controller design method with an adaptive mechanism is proposed for linear systems with time-varying uncertainty whose uncertainty bound is an ellipsoid. First, a target model is introduced containing a tunable parameter that can be adjusted online by an adaptive law; this parameter guarantees that the error system formed from the target model and the controlled plant is asymptotically stable. Combined with a design that guarantees the stability of the target model, the result is a robust guaranteed-cost tracking controller that ensures closed-loop stability and whose gain depends affinely on the tunable parameter. The method is applied to the heading control of a small helicopter mounted on a test platform, and simulation experiments demonstrate its effectiveness.
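Schematically, the structure described is as follows (generic symbols assumed for illustration, not the paper's notation):

    \dot{x}(t) = \bigl(A + \Delta A(t)\bigr)\,x(t) + B\,u(t), \qquad \Delta A(t) \in \mathcal{E} \ \text{(an ellipsoidal set)},
    u(t) = K(\theta)\,x(t), \qquad K(\theta) = K_0 + \theta K_1,

where the tunable parameter \theta is adjusted online by the adaptive law so that the error system between the target model and the controlled plant remains asymptotically stable.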
Abstract:
We propose a new characterization of protein structure based on the natural tetrahedral geometry of the β carbon and a new geometric measure of structural similarity, called visible volume. In our model, the side-chains are replaced by an ideal tetrahedron, the orientation of which is fixed with respect to the backbone and corresponds to the preferred rotamer directions. Visible volume is a measure of the non-occluded empty space surrounding each residue position after the side-chains have been removed. It is a robust, parameter-free, locally-computed quantity that accounts for many of the spatial constraints that are of relevance to the corresponding position in the native structure. When computing visible volume, we ignore the nature of both the residue observed at each site and the ones surrounding it. We focus instead on the space that, together, these residues could occupy. By doing so, we are able to quantify a new kind of invariance beyond the apparent variations in protein families, namely, the conservation of the physical space available at structurally equivalent positions for side-chain packing. Corresponding positions in native structures are likely to be of interest in protein structure prediction, protein design, and homology modeling. Visible volume is related to the degree of exposure of a residue position and to the actual rotamers in native proteins. In this article, we discuss the properties of this new measure, namely, its robustness with respect to both crystallographic uncertainties and naturally occurring variations in atomic coordinates, and the remarkable fact that it is essentially independent of the choice of the parameters used in calculating it. We also show how visible volume can be used to align protein structures, to identify structurally equivalent positions that are conserved in a family of proteins, and to single out positions in a protein that are likely to be of biological interest. These properties qualify visible volume as a powerful tool in a variety of applications, from the detailed analysis of protein structure to homology modeling, protein structural alignment, and the definition of better scoring functions for threading purposes.
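A rough Monte Carlo sketch of the idea behind such a measure, assuming hypothetical radii and a simple line-of-sight occlusion test (none of these choices are the authors' algorithm): sample points in a ball around a residue position and keep those whose segment back to the position does not pass near any occluding neighbour.

    import numpy as np

    def visible_volume(center, occluders, r_shell=8.0, r_occ=3.0, n=20000, seed=0):
        # Estimate the non-occluded volume in a ball of radius r_shell around
        # `center`; each occluder blocks line of sight within r_occ of the
        # segment from center to a sample point. All parameters illustrative.
        rng = np.random.default_rng(seed)
        dirs = rng.normal(size=(n, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        radii = r_shell * rng.random(n) ** (1.0 / 3.0)   # uniform in the ball
        d = dirs * radii[:, None]                        # offsets from center
        visible = np.ones(n, dtype=bool)
        for o in occluders:
            v = np.asarray(o, dtype=float) - center
            t = np.clip(d @ v / (np.einsum("ij,ij->i", d, d) + 1e-12), 0.0, 1.0)
            dist = np.linalg.norm(v[None, :] - t[:, None] * d, axis=1)
            visible &= dist > r_occ                      # line of sight is clear
        return (4.0 / 3.0) * np.pi * r_shell ** 3 * visible.mean()

    # Toy usage: a single neighbour 4 A away occludes part of the shell.
    print(visible_volume(np.zeros(3), [np.array([4.0, 0.0, 0.0])]))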
Abstract:
Drug delivery systems influence the processes of release, absorption, distribution and elimination of a drug. Conventional delivery methods administer drugs through the mouth, the skin, transmucosal areas, inhalation or injection. However, one of the current challenges is the lack of effective and targeted oral drug administration. Developing sophisticated strategies, such as micro- and nanotechnology, that can integrate the design and synthesis of drug delivery systems in a one-step, scalable process is fundamental to advancing beyond the limitations of conventional processing techniques. Thus, the objective of this thesis is to evaluate novel microencapsulation technologies for the production of size-specific and target-specific drug-loaded particles. The first part of this thesis describes the utility of PDMS and silicon microfluidic flow focusing devices (MFFDs) to produce PLGA-based microparticles. The formation of uniform droplets depended on the surface of the PDMS remaining hydrophilic. However, the durability of PDMS was limited to no more than 1 hour before wetting of the microchannel walls with dichloromethane, and subsequent swelling, occurred. Critically, silicon MFFDs showed very good solvent compatibility and were sufficiently robust to withstand elevated fluid flow rates. Silicon MFFDs allowed experiments to run over days with continuous use and re-use of the device, and gave a narrower microparticle size distribution relative to conventional production techniques. The second part of this thesis demonstrates an alternative microencapsulation technology, SmPill® minispheres, to target CsA delivery to the colon. Characterisation of CsA release in vitro and in vivo was performed. By modulating the ethylcellulose:pectin coating thickness, release of CsA in vivo was more effectively controlled than with current commercial CsA formulations, and demonstrated a linear in vitro-in vivo relationship. Coated minispheres were shown to limit CsA release in the upper small intestine and enhance localised CsA delivery to the colon.
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of the data transiting through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With the above formulation, we introduce a mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information, similar to the positive charges in electrostatics; the destinations are sinks of information, similar to negative charges; and the network is similar to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). In one application of our mathematical model based on vector fields, we offer a scheme for energy-efficient routing. Our routing scheme is based on raising the permittivity coefficient in the places of the network where nodes have high residual energy, and lowering it in the places where the nodes do not have much energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest path and weighted shortest path schemes. Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case of multiple destinations. With multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The complexity of the optimization problem in this case lies in how to define the regions of attraction of the destinations and how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). Then we show that in the optimal assignment of the communication load of the network to the destinations, the value of that potential field should be equal at the locations of all the destinations. Another application of our vector field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost function. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
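In compact form (generic notation assumed for illustration): with D(x) the information-flow vector field and \rho(x) the net source density (sensors positive, destinations negative), the routing problem sketched above reads

    \min_{D}\ J[D] = \int_{A} \frac{\lVert D(x)\rVert^{2}}{2\,\epsilon(x)}\, dA
    \quad \text{subject to} \quad \nabla \cdot D = \rho,

whose optimality condition makes D a gradient field, D = -\epsilon(x)\,\nabla\phi, exactly as the electric displacement in a dielectric of permittivity \epsilon(x); raising \epsilon where residual energy is high therefore attracts routes through those regions.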
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network as a response to packet drops. We define metrics that describe the responsiveness of TCP aggregates, and suggest two methods for determining the values of these quantities. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates, and call this approach the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets sent at the start of the TCP 3-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We use the analogy between our problem and multiple-access communication to find signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of this orthogonality, performance does not degrade because of cross-interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
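A toy sketch of the orthogonal-signature idea, assuming Walsh-Hadamard codes and a linear superposition of responses (the code length, rates and response model are illustrative): each router modulates its drop-rate perturbation with its own +/-1 signature, and correlation against each signature separates the simultaneous tests.

    import numpy as np

    L = 8                              # signature length (measurement slots)
    H = np.array([[1]])
    while H.shape[0] < L:              # Sylvester construction of Walsh codes
        H = np.block([[H, H], [H, -H]])

    true_resp = np.array([0.9, 0.0, 0.4])   # hypothetical per-router values

    # Perturbations from three routers superimpose in the shared aggregate,
    # plus noise standing in for unrelated traffic fluctuations.
    rng = np.random.default_rng(1)
    observed = sum(r * H[i] for i, r in enumerate(true_resp))
    observed = observed + 0.05 * rng.normal(size=L)

    # Orthogonality (H[i] . H[j] = L if i == j, else 0) separates the tests,
    # so simultaneously testing routers do not interfere with each other.
    for i in range(len(true_resp)):
        print(f"router {i}: estimated responsiveness ~ {observed @ H[i] / L:.2f}")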
Abstract:
The paper describes the design of an efficient and robust genetic algorithm for the nuclear fuel loading problem (i.e., refuellings: the in-core fuel management problem), a complex combinatorial, multimodal optimisation problem. Evolutionary computation as performed by FUELGEN replaces heuristic search of the kind performed by the FUELCON expert system (CAI 12/4) to solve the same problem. In contrast to the traditional genetic algorithm, which makes strong requirements on the representation used and its parameter settings in order to be efficient, recent research on new, robust genetic algorithms shows that representations unsuitable for the traditional genetic algorithm can still be used to good effect with little parameter adjustment. The representation presented here is a simple symbolic one with no linkage attributes, making the genetic algorithm particularly easy to apply to fuel loading problems with differing core structures and assembly inventories. A nonlinear fitness function has been constructed to direct the search efficiently in the presence of the many local optima that result from the constraint on solutions.
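A minimal sketch of a genetic algorithm over a simple symbolic representation of this kind, here a permutation of assembly identifiers with swap mutation and a nonlinearly penalised fitness (the objective, constraint and parameters are placeholders, not FUELGEN's):

    import random

    ASSEMBLIES = list(range(20))      # symbolic inventory of assembly ids
    POP, GENS, MUT_P = 50, 200, 0.3

    def fitness(loading):
        # Placeholder: a smoothness objective plus a nonlinear penalty that
        # de-emphasises the many constraint-violating local optima.
        smoothness = -sum(abs(a - b) for a, b in zip(loading, loading[1:]))
        violations = sum(1 for a, b in zip(loading, loading[1:]) if abs(a - b) > 15)
        return smoothness - 100.0 * violations ** 2

    def mutate(loading):
        child = loading[:]
        if random.random() < MUT_P:   # swap two core positions; no linkage used
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]
        return child

    population = [random.sample(ASSEMBLIES, len(ASSEMBLIES)) for _ in range(POP)]
    for _ in range(GENS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP // 2]          # truncation selection
        population = parents + [mutate(random.choice(parents)) for _ in parents]

    best = max(population, key=fitness)
    print("best loading:", best, "fitness:", fitness(best))

Because the representation is a plain permutation with no linkage attributes, a different core structure or assembly inventory only changes ASSEMBLIES and the fitness function.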
Abstract:
Existing election algorithms suffer from limited scalability. This limit stems from their communication design, which in turn stems from their fundamentally two-state behaviour. This paper presents a new election algorithm specifically designed to be highly scalable in broadcast networks whilst allowing any processing node to become coordinator with initially equal probability. To achieve this, careful attention has been paid to the communication design, and an additional state has been introduced. The design of the tri-state election algorithm has been motivated by the requirements analysis of a major research project to deliver robust, scalable distributed applications, including load sharing, in hostile computing environments in which it is common for processing nodes to be rebooted frequently without notice. The new election algorithm is based in part on a simple 'emergent' design. The science of emergence is of great relevance to developers of distributed applications because it describes how higher-level self-regulatory behaviour can arise from many participants following a small set of simple rules. The tri-state election algorithm is shown to have very low communication complexity, with the number of messages generated remaining loosely bounded regardless of scale for large systems; to be highly scalable, because nodes in the idle state do not transmit any messages; and, because of its self-organising characteristics, to be very stable.
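A schematic sketch of a tri-state election of the kind described, with an idle state that transmits nothing (the states, timers, message types and tie-break rule are illustrative assumptions, not the published protocol):

    import random

    IDLE, CANDIDATE, COORDINATOR = "idle", "candidate", "coordinator"

    class Node:
        def __init__(self, node_id):
            self.id = node_id
            self.state = IDLE      # idle nodes never transmit anything
            self.silence = 0       # slots since a heartbeat was last heard

        def on_broadcast(self, sender_id, msg):
            if msg == "heartbeat":
                self.silence = 0
                if self.state == COORDINATOR and sender_id < self.id:
                    self.state = IDLE        # tie-break: lower id keeps the role
                elif self.state != COORDINATOR:
                    self.state = IDLE
            elif msg == "candidacy" and self.state == CANDIDATE:
                self.state = IDLE            # back off; retry randomly later

        def tick(self, net):
            self.silence += 1
            if self.state == IDLE and self.silence > 3 and random.random() < 0.1:
                self.state = CANDIDATE       # equal-probability self-promotion
                net.broadcast(self, "candidacy")
            elif self.state in (CANDIDATE, COORDINATOR):
                self.state = COORDINATOR
                net.broadcast(self, "heartbeat")

    class Network:
        def __init__(self, nodes):
            self.nodes, self.queue = nodes, []

        def broadcast(self, sender, msg):
            self.queue.append((sender.id, msg))

        def deliver(self):                   # one broadcast round per slot
            for sender_id, msg in self.queue:
                for n in self.nodes:
                    if n.id != sender_id:
                        n.on_broadcast(sender_id, msg)
            self.queue.clear()

    nodes = [Node(i) for i in range(5)]
    net = Network(nodes)
    for _ in range(30):
        for n in nodes:
            n.tick(net)
        net.deliver()
    print({n.id: n.state for n in nodes})

Message cost stays loosely bounded because only candidates and the coordinator ever transmit; the idle majority is silent regardless of system size.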
Abstract:
Natural distributed systems are adaptive, scalable and fault-tolerant. Emergence science describes how higher-level self-regulatory behaviour arises in natural systems from many participants following simple rulesets. Emergence advocates simple communication models, autonomy and independence, enhancing robustness and self-stabilization. High-quality distributed applications such as autonomic systems must satisfy the appropriate nonfunctional requirements, which include scalability, efficiency, robustness, low latency and stability. However, the traditional design of distributed applications, especially in terms of the communication strategies employed, can introduce compromises between these characteristics. This paper discusses ways in which emergence science can be applied to distributed computing, avoiding some of the compromises associated with traditionally designed applications. To demonstrate the effectiveness of this paradigm, an emergent election algorithm is described and its performance evaluated. The design incorporates nondeterministic behaviour. The resulting algorithm has very low communication complexity, and is simultaneously very stable, scalable and robust.