Abstract:
Resonant-based vibration harvesters have conventionally relied upon accessing the fundamental mode of directly excited resonance to maximize the conversion efficiency of mechanical-to-electrical power transduction. This paper explores the use of parametric resonance, whose resonance-induced amplitude growth, unlike that of direct resonance, is not limited by linear damping and can therefore potentially offer higher and broader nonlinear peaks. A numerical model has been constructed to demonstrate the potential improvements over the convention. Despite this promising potential, a damping-dependent initiation threshold amplitude must be attained before this alternative resonant phenomenon can be accessed. Design approaches have been explored to passively reduce this initiation threshold. Furthermore, three representative MEMS designs were fabricated in both 25 and 10 μm thick device silicon. The devices include electrostatic cantilever-based harvesters, with and without the additional design modification to overcome the initiation threshold amplitude. The optimum performance was recorded for the 25 μm thick threshold-aided MEMS prototype with a device volume of ∼0.147 mm³. When driven at 4.2 m s⁻², this prototype demonstrated a peak power output of 10.7 nW at the fundamental mode of resonance and 156 nW at the principal parametric resonance, as well as a 23-fold decrease in initiation threshold over the purely parametric prototype. An approximate doubling of the half-power bandwidth was also observed for the parametrically excited scenario. © 2013 IOP Publishing Ltd.
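As an illustration of the mechanism described in this abstract, the damped Mathieu equation is the standard model of a parametrically excited oscillator. The following minimal sketch (all parameter values are illustrative, not taken from the paper) shows numerically that the parametric drive must exceed a damping-dependent threshold before the resonant amplitude growth appears.

```python
# Minimal sketch of parametric resonance via the damped Mathieu equation:
#   x'' + 2*zeta*w0*x' + w0^2 * (1 + delta*cos(2*w0*t)) * x = 0
# Principal parametric resonance occurs when the stiffness is modulated at
# twice the natural frequency. All values below are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

w0 = 2 * np.pi * 100.0   # natural frequency (rad/s), illustrative
zeta = 0.01              # linear damping ratio, illustrative

def mathieu(t, y, delta):
    x, v = y
    a = -2 * zeta * w0 * v - w0**2 * (1 + delta * np.cos(2 * w0 * t)) * x
    return [v, a]

y0 = [1e-6, 0.0]  # small initial displacement to seed the instability

for delta in (0.02, 0.08):  # below and above the approximate 4*zeta threshold
    sol = solve_ivp(mathieu, (0.0, 1.0), y0, args=(delta,), max_step=1e-4)
    print(f"delta={delta}: final |x| = {abs(sol.y[0, -1]):.3e}")
# Below threshold the response decays; above it the amplitude grows until
# nonlinearities (not modelled here) would limit it.
```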
Abstract:
The delivery of integrated product and service solutions is growing in the aerospace industry, driven by the potential for increased profits. Such solutions require a life-cycle view at the design phase in order to support the delivery of the equipment. The influence of uncertainty associated with design for services is an increasing challenge due to information and knowledge constraints. There is a lack of frameworks that aim to define and quantify the relationship between information and knowledge on the one hand and uncertainty on the other. Driven by this gap, the paper presents a framework to illustrate the link between uncertainty and knowledge within the design context for services in the aerospace industry. The paper combines industrial interaction and a literature review to initially define the design attributes, the associated knowledge requirements, and the uncertainties experienced. The framework is then applied in three cases through the development of causal loop models (CLMs), which are validated by industrial and academic experts. The concepts and inter-linkages are developed with the intention of developing a software prototype. Future recommendations are also included. © 2014 CIRP.
Abstract:
During the past decades, large-scale national neutron sources have been developed in Asia, Europe, and North America. Complementing such efforts, compact hadron beam complexes and neutron sources intended to serve primarily universities and industrial institutes have been proposed, and some have recently been established. Responding to the demand in China for pulsed neutron/proton-beam platforms dedicated to fundamental and applied research for users in multiple disciplines, from materials characterization to hadron therapy and radiography to accelerator-driven sub-critical reactor systems (ADS) for nuclear waste transmutation, we have initiated the construction of a compact, yet expandable, accelerator complex: the Compact Pulsed Hadron Source (CPHS). It consists of an accelerator front-end (a high-intensity ion source, a 3-MeV radio-frequency quadrupole linac (RFQ), and a 13-MeV drift-tube linac (DTL)), a neutron target station (a beryllium target with solid-methane and room-temperature water moderators/reflector), and experimental stations for neutron imaging/radiography, small-angle scattering, and proton irradiation. In the future, the CPHS may also serve as an injector to a ring for proton therapy and radiography, or as the front end of an ADS test facility. In this paper, we describe the design of the CPHS technical systems and its intended operation.
Abstract:
Parallel shared-memory machines with hundreds or thousands of processor-memory nodes have been built; in the future we will see machines with millions or even billions of nodes. Associated with such large systems is a new set of design challenges. Many problems must be addressed by an architecture in order for it to be successful; of these, we focus on three in particular. First, a scalable memory system is required. Second, the network messaging protocol must be fault-tolerant. Third, the overheads of thread creation, thread management and synchronization must be extremely low. This thesis presents the complete system design for Hamal, a shared-memory architecture which addresses these concerns and is directly scalable to one million nodes. Virtual memory and distributed objects are implemented in a manner that requires neither inter-node synchronization nor the storage of globally coherent translations at each node. We develop a lightweight fault-tolerant messaging protocol that guarantees message delivery and idempotence across a discarding network. A number of hardware mechanisms provide efficient support for massive multithreading and fine-grained synchronization. Experiments are conducted in simulation, using a trace-driven network simulator to investigate the messaging protocol and a cycle-accurate simulator to evaluate the Hamal architecture. We determine implementation parameters for the messaging protocol which optimize performance. A discarding network is easier to design and can be clocked at a higher rate, and we find that with this protocol its performance can approach that of a non-discarding network. Our simulations of Hamal demonstrate the effectiveness of its thread management and synchronization primitives. In particular, we find register-based synchronization to be an extremely efficient mechanism which can be used to implement a software barrier with a latency of only 523 cycles on a 512 node machine.
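The guarantees described in this abstract (message delivery and idempotence over a network that may silently discard packets) are conventionally obtained with per-message sequence numbers, sender-side retransmission, and receiver-side duplicate suppression. The sketch below illustrates that general pattern only; it is not the Hamal protocol itself, whose mechanisms are hardware-level, and all names are invented for illustration.

```python
# General pattern: retransmit until acknowledged, suppress duplicates at
# the receiver. Together these yield exactly-once delivery semantics over
# a discarding network. Illustrative only; not the actual Hamal protocol.
import random

class Sender:
    def __init__(self):
        self.next_seq = 0
        self.unacked = {}  # seq -> payload awaiting acknowledgement

    def send(self, payload):
        self.unacked[self.next_seq] = payload
        self.next_seq += 1

    def pending(self):
        # Everything unacked is (re)transmitted; the network may drop any of it.
        return list(self.unacked.items())

    def on_ack(self, seq):
        self.unacked.pop(seq, None)

class Receiver:
    def __init__(self):
        self.seen = set()
        self.log = []

    def on_message(self, seq, payload):
        if seq not in self.seen:           # duplicate suppression => idempotence
            self.seen.add(seq)
            self.log.append(payload)
        return seq                         # always (re)acknowledge

# Simulation: a network that discards half of everything, in both directions.
random.seed(0)
s, r = Sender(), Receiver()
for p in ["a", "b", "c"]:
    s.send(p)
while s.unacked:                           # retransmit until all acks arrive
    for seq, payload in s.pending():
        if random.random() < 0.5:          # message survives the network
            ack = r.on_message(seq, payload)
            if random.random() < 0.5:      # ack survives the return path
                s.on_ack(ack)
print(r.log)                               # each payload delivered exactly once
```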
Abstract:
Previous research in force control has focused on the choice of appropriate servo implementation without corresponding regard to the choice of mechanical hardware. This report analyzes the effect of mechanical properties such as contact compliance, actuator-to-joint compliance, torque ripple, and highly nonlinear dry friction in the transmission mechanisms of a manipulator. A set of requisites for high performance then guides the development of mechanical-design and servo strategies for improved performance. A single-degree-of-freedom transmission testbed was constructed that confirms the predicted effect of Coulomb friction on robustness; design and construction of a cable-driven, four-degree-of-freedom, "whole-arm" manipulator illustrates the recommended design strategies.
Abstract:
With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods for optimising either area or timing, while for power optimisation one often employs heuristics that are specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into it. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power, and area, and which then allows optimisation algorithms to be applied. In particular, we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as the area or delay of the implementation. Finally, the third question which this thesis attempts to answer is: is there a systematic approach to the multi-objective optimisation of delay and power? Delay-driven power optimisation and power-driven delay optimisation are proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under zero-delay and non-zero-delay models. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We used SA to decide probabilistically whether to move from one optimised solution to another, such that the dynamic power is optimised under given delay constraints and the delay is optimised under given power constraints.
A good approximation to the global optimum of the energy-constrained problem is obtained. Uniform Cost Search (UCS) is a search algorithm for traversing a weighted tree or graph. We have used UCS to search the AIG network for a specific node order in which to apply the reordering rules. After the reordering rules have been applied, the AIG network is mapped to a netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts in ABC. A reduction of 23% in power and 15% in delay with minimal overhead is achieved, compared to the best known ABC results. Our approach has also been applied to a number of processors with combinational and sequential components, and significant savings were achieved.
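The delay-constrained annealing loop described in this abstract can be sketched as follows. The AIG, the reordering move, and the power/delay estimators are reduced here to toy stand-ins (a list of nodes and two invented metrics), since the thesis implements them on top of ABC; this is a minimal sketch of the technique, not the thesis's code.

```python
# Simulated annealing that minimises a power proxy under a hard delay budget.
# Toy stand-ins for the thesis's ABC-based machinery: an "AIG" is faked as a
# list of node values, a reordering move swaps two nodes, and power/delay are
# invented metrics of the ordering.
import math
import random

def random_reorder_move(aig):
    a = list(aig)
    i, j = random.sample(range(len(a)), 2)
    a[i], a[j] = a[j], a[i]
    return a

def estimate_power(aig):   # toy switching-power proxy
    return sum(i * v for i, v in enumerate(aig))

def estimate_delay(aig):   # toy longest-path proxy
    return sum(abs(a - b) for a, b in zip(aig, aig[1:]))

def anneal(aig, delay_budget, t0=1.0, cooling=0.95, steps_per_t=50, t_min=1e-3):
    """Minimise power subject to a hard delay budget."""
    current, best = aig, aig
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            candidate = random_reorder_move(current)
            if estimate_delay(candidate) > delay_budget:
                continue                   # hard delay constraint
            dp = estimate_power(candidate) - estimate_power(current)
            # Accept improvements; accept regressions with Boltzmann probability.
            if dp < 0 or random.random() < math.exp(-dp / t):
                current = candidate
                if estimate_power(current) < estimate_power(best):
                    best = current
        t *= cooling
    return best

random.seed(1)
aig0 = list(range(10))
best = anneal(aig0, delay_budget=2 * estimate_delay(aig0))
print(estimate_power(aig0), "->", estimate_power(best))
```

The power-constrained delay optimisation described above is the mirror image: swap the roles of the two metrics, constraining power and annealing on delay.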
Abstract:
For at least two millennia and probably much longer, the traditional vehicle for communicating geographical information to end-users has been the map. With the advent of computers, the means of both producing and consuming maps have been radically transformed, while the inherent nature of the information product has also expanded and diversified rapidly. This has given rise in recent years to the new concept of geovisualisation (GVIS), which draws on the skills of the traditional cartographer but extends them into three spatial dimensions and may also add temporality, photorealistic representations, and/or interactivity. Demand for GVIS technologies and their applications has increased significantly in recent years, driven by the need to study complex geographical events, in particular their associated consequences, and to communicate the results of these studies to a diversity of audiences and stakeholder groups. GVIS involves data integration, multi-dimensional spatial display, advanced modelling techniques, dynamic design and development environments, and field-specific application needs. To meet these needs, GVIS tools should be both powerful and inherently usable, in order to facilitate their role in helping interpret and communicate geographic problems. However, no framework currently exists for ensuring this usability. The research presented here seeks to fill this gap by addressing the challenges of incorporating user requirements in GVIS tool design. It starts from the premise that usability in GVIS should be incorporated and implemented throughout the whole design and development process. To facilitate this, Subject Technology Matching (STM) is proposed as a new approach to assessing and interpreting user requirements. Based on STM, a new design framework called Usability Enhanced Coordination Design (UECD) is then presented, with the purpose of improving the overall usability of the design outputs. UECD places GVIS experts in a new key role in the design process, to form a more coordinated and integrated workflow and more focused and interactive usability testing. To prove the concept, these theoretical elements of the framework have been implemented in two test projects: one is the creation of a coastal inundation simulation for Whitegate, Cork, Ireland; the other is a flood-mapping tool for Zhushan Town, Jiangsu, China. The two case studies successfully demonstrated the potential merits of the UECD approach when GVIS techniques are applied to geographic problem solving and decision making. The thesis delivers a comprehensive understanding of the development and challenges of GVIS technology, its usability concerns, and the associated user-centred design (UCD); it explores the possibility of applying a UCD framework to GVIS design; it constructs a new theoretical design framework, UECD, which aims to make the whole design process usability-driven; and it develops the key concept of STM into a template set to improve the performance of a GVIS design. These key conceptual and procedural foundations can be built on by future research aimed at further refining and developing UECD as a useful design methodology for GVIS scholars and practitioners.
Abstract:
Optimisation in wireless sensor networks is necessary due to the resource constraints of individual devices, bandwidth limits of the communication channel, the relatively high probability of sensor failure, and the requirement constraints of the deployed applications in potentially highly volatile environments. This paper presents BioANS, a protocol designed to optimise a wireless sensor network for resource efficiency as well as to meet a requirement common to a whole class of WSN applications, namely that the sensor nodes are dynamically selected on some qualitative basis, for example the quality with which they can provide the required context information. The design of BioANS has been inspired by the communication mechanisms that have evolved in natural systems. The protocol tolerates randomness in its environment, including random message loss, and incorporates a non-deterministic 'delayed-bids' mechanism. A simulation model is used to explore the protocol's performance in a wide range of WSN configurations. Characteristics evaluated include tolerance to sensor node density and message loss, communication efficiency, and negotiation latency.
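The 'delayed-bids' idea can be paraphrased as follows: rather than every candidate node replying immediately to a service request, each node waits a random delay biased by its own suitability, so good candidates tend to answer first and the flood of redundant replies is damped. The sketch below is one plausible rendering of that idea; the names and the exact delay law are invented for illustration, not taken from the BioANS specification.

```python
# Plausible rendering of a non-deterministic 'delayed-bids' mechanism: each
# node draws a random bid delay from a range that shrinks as its quality
# score rises, so higher-quality nodes tend to reply sooner.
import random

MAX_DELAY = 1.0  # seconds, illustrative

def bid_delay(quality, max_delay=MAX_DELAY):
    """quality in (0, 1]; better nodes draw from a shorter delay range."""
    return random.uniform(0.0, max_delay * (1.0 - quality))

# A requester can simply take the earliest bid to arrive:
nodes = {"n1": 0.9, "n2": 0.4, "n3": 0.7}   # node -> context quality
bids = sorted((bid_delay(q), name) for name, q in nodes.items())
winner = bids[0][1]
print("selected node:", winner)
```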
Abstract:
This paper discusses the Design for Reliability modelling of several System-in-Package (SiP) structures developed by NXP and advanced on the basis of Wafer Level Packaging (WLP). Two different types of Wafer Level SiP (WLSiP) are presented and discussed. The main focus is on the modelling approach that has been adopted to investigate and analyse the board-level reliability of the presented SiP configurations. Thermo-mechanical non-linear Finite Element Analysis (FEA) is used to analyse the effect of various package design parameters on the reliability of the structures and to identify design trends towards package optimisation. FEA is also used to gain insight into moulded-wafer shrinkage and related issues during wafer-level fabrication. The paper provides a brief outline and demonstration of a design methodology for reliability-driven design optimisation of SiP. The study emphasises the advantages of applying the methodology to address complex design problems where several requirements may exist and where uncertainties and interactions between design parameters are common.
Abstract:
Product knowledge support needs are compared in two companies with different production volumes and product complexity. Knowledge support requirements identified include: function, performance data, requirements data, common parts, regulatory guidelines, and layout data. A process-based, data-driven knowledge reuse method is evaluated in light of the identified product knowledge needs. The evaluation takes place through developing a pilot case with each company. It is found that the method provides more benefit to the high-complexity design domain, in which a significant amount of work takes place at the conceptual design stages, relying on a conceptual product representation. The value proposition is less clear in a design environment whose main challenge is layout design and the application of standard parts and features. The method supports the requirement for conceptual product representation but does not fully support a standard parts library.
Abstract:
Traditional planning processes in the UK and elsewhere take too long to develop, are demanding of scarce resources, and often tend to be unrelated to the needs and demands of society. They segregate plan making from decision making, with consultants planning, politicians deciding, and the community receiving without being integrated into the planning and decision-making process. The Scottish planning system is undergoing radical changes, as evidenced by the publication of the Planning Advice Note (PAN) by the Scottish Executive in July 2006, with the aim of enabling community engagement that allows for openness and accountability in the decision-making process. Public engagement is a process driven by physical, social, and economic systems research, aimed at improving the process at the level of the community through problem solving and at the level of the city region through strategic planning. There are several methods available to engage the community in large-scale projects. The two well-known ones are the Enquiry by Design and Charrette approaches, used in the UK and US respectively. This paper is an independent and rigorous analysis of the Charrette process as observed in the proposed Tornagrain settlement in the Highlands area of Scotland. It attempts to gauge and analyse the attitudes and perceptions of the participants in the Charrette, as well as its mechanics and structure. The study analyses the Charrette approach as a method of future public engagement and its effectiveness within the Scottish planning system in view of PAN 2005. The analysis revealed that the Charrette as a method of engagement could be effective in changing the attitudes of the community to the design process under certain conditions, as discussed in the paper.
Abstract:
The paper focuses on the development of an aircraft design optimization methodology that models uncertainty and sensitivity analysis in the tradeoff between manufacturing cost, structural requirements, and aircraft direct operating cost. Specifically, rather than only looking at manufacturing cost, direct operating cost is also considered in terms of the impact of weight on fuel burn, in addition to the acquisition cost to be borne by the operator. Ultimately, there is a tradeoff between driving design according to minimal weight and driving it according to reduced manufacturing cost. The analysis of cost is facilitated with a genetic-causal cost-modeling methodology, and the structural analysis is driven by numerical expressions of appropriate failure modes that use ESDU International reference data. However, a key contribution of the paper is to investigate the modeling of uncertainty and to perform a sensitivity analysis to investigate the robustness of the optimization methodology. Stochastic distributions are used to characterize manufacturing cost distributions, and Monte Carlo analysis is performed in modeling the impact of uncertainty on the cost modeling. The results are then used in a sensitivity analysis that incorporates the optimization methodology. In addition to investigating manufacturing cost variance, the sensitivity of the optimization to fuel burn cost and structural loading is also investigated. It is found that the consideration of manufacturing cost does make an impact and results in a different optimal design configuration from that delivered by the minimal-weight method. However, it was shown that at lower applied loads there is a threshold fuel burn cost at which the optimization process needs to reduce weight, and this threshold decreases with increasing load. The new optimal solution results in lower direct operating cost, with a predicted saving of $640/m² of fuselage skin over the life, relating to a rough order-of-magnitude direct operating cost saving of $500,000 for the fuselage alone of a small regional jet. Moreover, it was found through the uncertainty analysis that the principle was not sensitive to cost variance, although the margins do change.
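To make the Monte Carlo step concrete, the sketch below samples a manufacturing-cost distribution and propagates it through a toy direct-operating-cost trade. The distributions and the cost relation are invented placeholders, not the paper's genetic-causal model.

```python
# Toy Monte Carlo propagation of manufacturing-cost uncertainty into a
# direct-operating-cost (DOC) figure. The lognormal spread and the linear
# DOC relation are invented placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

mfg_cost = rng.lognormal(mean=np.log(1000.0), sigma=0.15, size=N)  # $/m^2, illustrative
weight = rng.normal(loc=50.0, scale=2.0, size=N)                   # kg/m^2, illustrative
fuel_burn_cost_per_kg = 12.0                                       # $ per kg over life, illustrative

doc = mfg_cost + fuel_burn_cost_per_kg * weight   # toy DOC per unit skin area
print(f"DOC per m^2: mean={doc.mean():.0f}, 5-95% = "
      f"[{np.percentile(doc, 5):.0f}, {np.percentile(doc, 95):.0f}]")
```

A sensitivity analysis of the kind the paper describes would then repeat the optimization across such sampled distributions and observe whether the optimal configuration changes.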
Abstract:
This research investigated seepage under hydraulic structures, taking into account flow through the banks of the canal. A computer model utilizing the finite element method was used. Different configurations of sheetpile driven under the floor of the structure were studied. Results showed that the transverse extension of sheetpile, driven at the middle of the floor, into the banks of the canal had very little effect on seepage losses, uplift force, and the exit gradient at the downstream end of the floor. Likewise, confining the downstream floor with sheetpile from three sides was not found effective. When the downstream floor was confined with sheetpile from all sides, however, the exit gradient was significantly reduced. Furthermore, all the different configurations of the sheetpile had an insignificant effect on seepage losses. The most effective configuration was that in which two rows of sheetpiles were driven at the middle and at the downstream end of the floor, with the latter sheetpile extended a few meters into the banks of the canal. This case significantly reduced the exit gradient and caused only a slight increase in the uplift force when compared to other sheetpile configurations. The present study suggests that two-dimensional analysis of seepage problems underestimates the exit gradient and uplift force on hydraulic structures.
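For context on the quantity being compared above: the exit gradient is checked against the critical hydraulic gradient of the soil to give a factor of safety against piping. The snippet below shows that standard check with invented values; the critical-gradient formula is the usual geotechnical relation, while the exit gradient itself would come from the finite element seepage solution.

```python
# Standard exit-gradient safety check used to judge sheetpile configurations.
# All numeric values are invented for illustration.
G_s = 2.65   # specific gravity of soil grains
e = 0.7      # void ratio
i_critical = (G_s - 1.0) / (1.0 + e)   # critical (flotation) gradient, ~0.97

i_exit = 0.25                          # exit gradient from the FE seepage solution
factor_of_safety = i_critical / i_exit
print(f"FS against piping = {factor_of_safety:.1f}")
```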
Evaluation of a Foam Buffer Target Design for Spatially Uniform Ablation of Laser-Irradiated Plasmas
Abstract:
Experimental observations are presented demonstrating that the use of a gold-coated foam layer on the surface of a laser-driven target substantially reduces its hydrodynamic breakup during the acceleration phase. The data suggest that this results from enhanced thermal smoothing during the early-time imprint stage of the interaction. The target's kinetic energy and the level of parametric instability growth are shown to remain essentially unchanged from that of a conventionally driven target.