883 results for constraint based design
Abstract:
Software applications built on the service-oriented architecture (SOA) are increasingly popular, but testing them remains a challenge. This paper presents TASSA, a framework for testing the functional and non-functional behaviour of service-based applications. The paper focuses on the concept of design-time testing, the corresponding testing approach and the architectural integration of the constituent TASSA tools. The individual TASSA tools, with sample validation scenarios, have already been presented along with a general view of how they relate. This paper's contribution is a structured testing approach based on the integrated use of the tools and their architectural integration. The framework follows SOA principles and is composable according to user requirements.
Abstract:
ACM Computing Classification System (1998): K.3.1, K.3.2.
Abstract:
Knitwear design is a creative activity that is hard to automate using the computer. The production of the associated knitting pattern, however, is repetitive, time-consuming and error-prone, and therefore calls for automation. Our objectives are twofold: to facilitate the design and to ease the burden of calculations and checks in pattern production. We conduct a feasibility study of applying case-based reasoning to knitwear design: we describe appropriate methods and show how they can be implemented. © Cranfield University 2009.
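The retrieval step at the heart of case-based reasoning can be illustrated with a minimal sketch: a weighted nearest-neighbour match of a new design query against stored design cases. The garment attributes and weights below are invented for illustration and are not taken from the paper.

```python
# Minimal case-based reasoning retrieval sketch: find the stored design
# case most similar to a new query, using a weighted attribute match.
# Attribute names, weights and cases are illustrative, not from the paper.

def similarity(query, case, weights):
    """Weighted fraction of matching attributes (1.0 = identical)."""
    total = sum(weights.values())
    score = sum(w for attr, w in weights.items()
                if query.get(attr) == case.get(attr))
    return score / total

def retrieve(query, case_base, weights):
    """Return (similarity, case) for the most similar past case."""
    return max(((similarity(query, c, weights), c) for c in case_base),
               key=lambda pair: pair[0])

case_base = [
    {"garment": "sweater", "yarn": "wool", "stitch": "cable", "size": "M"},
    {"garment": "sweater", "yarn": "cotton", "stitch": "plain", "size": "M"},
    {"garment": "scarf", "yarn": "wool", "stitch": "rib", "size": "one"},
]
weights = {"garment": 3, "yarn": 2, "stitch": 2, "size": 1}
query = {"garment": "sweater", "yarn": "wool", "stitch": "plain", "size": "M"}

score, best = retrieve(query, case_base, weights)
```

In a full CBR cycle the retrieved case would then be adapted (e.g. its stitch counts recalculated for the new size) before being reused, which is where the automated pattern calculations come in.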
Abstract:
Design verification in the digital domain, using model-based principles, is a key research objective to address the industrial requirement for reduced physical testing and prototyping. For complex assemblies, the verification of design and the associated production methods is currently fragmented, prolonged and sub-optimal, as it uses digital and physical verification stages that are deployed in a sequential manner using multiple systems. This paper describes a novel, hybrid design verification methodology that integrates model-based variability analysis with measurement data of assemblies, in order to reduce simulation uncertainty and allow early design verification from the perspective of satisfying key assembly criteria.
Abstract:
The manufacturing industry faces many challenges, such as reducing time-to-market and cutting costs. To meet these increasing demands, effective methods are needed to support the early product development stages by bridging the gap between communicating early design ideas and evaluating manufacturing performance. This paper introduces methods of linking the design and manufacturing domains using disparate technologies. The combined technologies include knowledge management support for product lifecycle management systems, Enterprise Resource Planning (ERP) systems, aggregate process planning systems, workflow management and data exchange formats. A case study demonstrates the use of these technologies, illustrated by adding manufacturing knowledge to generate alternative early process plans, which are in turn used by an ERP system to obtain and optimise a rough-cut capacity plan. Copyright © 2010 Inderscience Enterprises Ltd.
Abstract:
This study identifies and investigates the potential use of in-eye trigger mechanisms to supplement the widely available information on the release of ophthalmic drugs from contact lenses under passive release conditions. Ophthalmic dyes and surrogates have been successfully employed to investigate how these factors can be drawn together to make a successful system. Storing a drug-containing lens at a pH lower than that of the ocular environment can establish an equilibrium that favours retention of the drug in the lens prior to ocular insertion. Although release under passive conditions does not result in complete dye elution, the use of mechanical agitation techniques that mimic the eyelid blink action, in conjunction with ocular tear chemistry, promotes further release. In this way, a differentiation between passive and triggered in vitro release characteristics can be established. Investigation of the role of individual tear proteins revealed significant differences in their ability to alter the equilibrium between matrix-held and eluate-held dye or drug. These individual experiments were then investigated in vivo using ophthalmic dyes. Complete elution was found to be achievable in-eye; this demonstrated the importance of the fraction of the drug retained under passive conditions and the triggering effect of in-eye conditions on the release process. Understanding both the structure-property relationship between drug and material and the in-eye trigger mechanisms, using ophthalmic dyes as a surrogate, provides the knowledge base necessary to design ocular drug delivery vehicles for controllable in-eye release.
Abstract:
This study explores the ongoing pedagogical development of a number of undergraduate design and engineering programmes in the United Kingdom. Observations and data have been collected over several cohorts to bring a valuable perspective to the approaches piloted across two similar university departments while trialling a number of innovative learning strategies. In addition to the concurrent institutional studies, the work explores curriculum design that applies the principles of co-design and multidisciplinary and transdisciplinary learning, with engineering and product design students working alongside each other through a practical problem-solving learning approach known as the CDIO (Conceive, Design, Implement and Operate) learning initiative [1]. The study builds on previous work presented at the 2010 EPDE conference: The Effect of Personality on the Design Team: Lessons from Industry for Design Education [2]. The work presented in this paper applies those findings to mixed design and engineering team-based learning, building on the insight gained through a number of industrial process case studies carried out in current design practice. Developments in delivery also align the CDIO principle of learning through doing with a practice-based, collaborative learning experience, and include elements of the TRIZ creative problem-solving technique [3]. The paper outlines case studies involving a number of mixed engineering and design student projects that highlight the CDIO principles, combined with an external industrial design brief. It compares and contrasts this learning experience with that of a KTP-derived student project, to examine an industry-based model for student projects. In addition, key areas of best practice are presented, and student work from each mode will be discussed at the conference.
Abstract:
Purpose: To develop and validate a classification system for focal vitreomacular traction (VMT), with and without macular hole, based on spectral-domain optical coherence tomography (SD-OCT), intended to aid decision-making and prognostication. Methods: A panel of retinal specialists convened to develop this system. A literature review, followed by discussion of a wide range of cases, formed the basis for the proposed classification. Key features on OCT were identified and analysed for their utility in clinical practice. A final classification was devised based on two sequential, independent validation exercises to improve interobserver variability. Results: This classification tool pertains to idiopathic focal VMT assessed by a horizontal line scan using SD-OCT. The system uses width (W), interface features (I), foveal shape (S), retinal pigment epithelial changes (P), elevation of vitreous attachment (E), and inner and outer retinal changes (R), giving the acronym WISPERR. Each category is scored hierarchically. Results from the second independent validation exercise indicated a high level of agreement between graders: intraclass correlation ranged from 0.84 to 0.99 for continuous variables, and Fleiss' kappa values ranged from 0.76 to 0.95 for categorical variables. Conclusions: We present an OCT-based classification system for focal VMT that allows anatomical detail to be scrutinised and scored qualitatively and quantitatively using a simple, pragmatic algorithm, which may be of value in clinical practice as well as in future research studies.
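Fleiss' kappa, the agreement statistic quoted for the categorical WISPERR variables, can be computed directly from a subjects-by-categories count table. A minimal sketch follows; the rating matrix is invented for illustration, not taken from the study.

```python
# Fleiss' kappa for agreement among multiple graders on categorical ratings.
# ratings[i][j] = number of graders who assigned subject i to category j.
# The matrix below is invented for illustration.

def fleiss_kappa(ratings):
    N = len(ratings)            # number of subjects
    n = sum(ratings[0])         # graders per subject (assumed constant)
    k = len(ratings[0])         # number of categories
    # Observed per-subject agreement
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N
    # Chance agreement from the marginal category proportions
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# 4 subjects, 5 graders, 3 categories
ratings = [
    [5, 0, 0],   # perfect agreement on subject 1
    [4, 1, 0],
    [0, 5, 0],
    [1, 1, 3],
]
kappa = fleiss_kappa(ratings)
```

Values above roughly 0.75, as reported in the abstract, are conventionally read as excellent agreement beyond chance.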
Abstract:
The automotive industry combines a multitude of professionals to develop a modern car successfully. Within the design and development teams, the collaboration and interface between engineers and designers is critical to ensure that design intent is communicated and maintained throughout the development process. This study highlights recent industry practice with the emergence of Concept Engineers in design teams at the Jaguar Land Rover Automotive group. The role of the Concept Engineer emphasises the importance of the engineering and design/styling interface, with the Concept Engineer able to interact with and understand the challenges and specific languages of each specialist area, thereby improving efficiency and communication within the design team. Automotive education tends to approach design from two distinct directions: engineering design through BSc courses, or a more styling-oriented design approach through BA and BDes routes. The educational challenge for both types of course is to develop engineers and stylists who have a greater understanding and experience of each other's specialist perspective on design and development. The study gives examples of two such courses in the UK that are developing programmes to help students widen their understanding of the engineering and design spectrum. Initial results suggest that the practical approach has been well received by students and encouraged by industry, which seeks graduates with specialist knowledge but also a wider appreciation of their role within the design process.
Abstract:
This paper presents a surrogate-model-based optimization of a doubly-fed induction generator (DFIG) winding design for maximizing power yield. Based on site-specific wind profile data and the machine's previous operational performance, the DFIG's stator and rotor windings are optimized for rewinding purposes so that maximum efficiency matches the operating conditions. Particle swarm optimization-based surrogate optimization techniques are used in conjunction with the finite element method to optimize the machine design, utilizing the limited available information on the site-specific wind profile and generator operating conditions. A response surface method is developed within the surrogate model to formulate the design objectives and constraints. In addition, the machine tests and efficiency calculations follow IEEE Standard 112-B. Numerical and experimental results validate the effectiveness of the proposed techniques.
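The optimization loop described here can be sketched in miniature: a cheap response-surface model stands in for expensive finite-element evaluations, and particle swarm optimization (PSO) searches it. The quadratic surrogate, its coefficients and the design bounds below are invented placeholders, not the paper's actual model.

```python
import random

# Sketch of surrogate-based design optimization with PSO (1-D case).
# surrogate_loss is an invented quadratic response surface standing in
# for FEM evaluations; 1.8 plays the role of the optimal design variable.

def surrogate_loss(x):
    """Invented response surface: loss (e.g. 1 - efficiency) vs. design x."""
    return 0.04 + 0.5 * (x - 1.8) ** 2

def pso(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest, pval = xs[:], [f(x) for x in xs]     # personal bests
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]             # global best
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # clamp to bounds
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v < gval:
                    gbest, gval = xs[i], v
    return gbest, gval

best_x, best_loss = pso(surrogate_loss, 1.0, 3.0)
```

In the paper's setting the surrogate would be refit from FEM samples and the design vector would cover multiple winding parameters, but the search structure is the same.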
Abstract:
Heat sinks are widely used for cooling electronic devices and systems. Their thermal performance is usually determined by the material, shape, and size of the heat sink. With the assistance of computational fluid dynamics (CFD) and surrogate-based optimization, heat sinks can be designed and optimized to achieve a high level of performance. In this paper, the design and optimization of a plate-fin heat sink cooled by an impingement jet is presented. The flow and thermal fields are simulated using CFD, and the thermal resistance of the heat sink is then estimated. A Kriging surrogate model is developed to approximate the objective function (thermal resistance) as a function of the design variables. Surrogate-based optimization is implemented by adaptively adding infill points using an integrated strategy combining the minimum-value, maximum mean square error, and expected improvement approaches. The results show the influence of the design variables on the thermal resistance and yield the optimal heat sink with the lowest thermal resistance for the given jet impingement conditions.
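The expected improvement (EI) infill criterion mentioned here has a standard closed form for minimisation: given the Kriging predictor's mean and standard deviation at a candidate point and the best objective value found so far, EI balances exploitation (low predicted mean) against exploration (high uncertainty). A stdlib sketch with illustrative numbers:

```python
import math

# Expected improvement (EI) infill criterion for Kriging-based
# surrogate optimization (minimisation form). mu and sigma would come
# from the Kriging predictor; the numbers below are illustrative.

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_improvement(mu, sigma, f_min):
    """EI at a point with Kriging mean mu and std sigma, given incumbent f_min."""
    if sigma == 0.0:
        return 0.0                      # no uncertainty, no improvement
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm_cdf(z) + sigma * norm_pdf(z)

# A point predicted slightly worse than the incumbent but with high
# uncertainty still has substantial EI, which drives exploration.
ei_uncertain = expected_improvement(mu=1.05, sigma=0.30, f_min=1.00)
ei_certain = expected_improvement(mu=1.05, sigma=0.01, f_min=1.00)
```

The paper's integrated strategy would alternate this criterion with minimum-value and maximum mean-square-error infill, each picking the next CFD run from a different perspective.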
Abstract:
Most pavement design procedures incorporate reliability to account for the effect of design-input uncertainty and variability on predicted performance. The load and resistance factor design (LRFD) procedure, which delivers an economical section while considering the variability of each design input separately, has been recognised as an effective tool for incorporating reliability into design procedures. This paper presents a new reliability-based calibration in LRFD format for a mechanics-based fatigue cracking analysis framework. It employs a two-component reliability analysis methodology that combines a central composite design-based response surface approach with a first-order reliability method. The reliability calibration was performed on a number of field pavement sections that have well-documented performance histories and high-quality field and laboratory data. The effectiveness of the developed LRFD procedure was evaluated by performing pavement designs for various target reliabilities and design conditions. The results show excellent agreement between the target and actual reliabilities. They also indicate that more design features need to be included in the reliability calibration to minimise the deviation of the actual reliability from the target reliability.
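In its simplest setting, the first-order reliability method (FORM) used here reduces to a closed form: for a linear limit state g = R − S with independent normal resistance R and load S, the reliability index is β = (μR − μS)/√(σR² + σS²) and the reliability is Φ(β). The means and standard deviations below are invented for illustration, not calibration values from the paper.

```python
import math

# First-order reliability method (FORM) sketch for the simplest case:
# a linear limit state g = R - S with independent normal resistance R
# and load S. The means and standard deviations are invented.

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def form_linear(mu_R, sd_R, mu_S, sd_S):
    """Reliability index beta and reliability P(g > 0) for g = R - S."""
    beta = (mu_R - mu_S) / math.sqrt(sd_R ** 2 + sd_S ** 2)
    return beta, norm_cdf(beta)

# Fatigue resistance vs. traffic-induced demand (illustrative units)
beta, reliability = form_linear(mu_R=12.0, sd_R=2.0, mu_S=6.0, sd_S=1.5)
```

For a nonlinear, mechanics-based limit state as in the paper, FORM instead searches for the most probable failure point on the response surface, but β keeps the same interpretation; a 95% target reliability corresponds to β ≈ 1.645.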
Abstract:
Traffic incidents are non-recurring events that can cause a temporary reduction in roadway capacity. They have been recognized as a major contributor to traffic congestion on our nation’s highway systems. To alleviate their impacts on capacity, automatic incident detection (AID) has been applied as an incident management strategy to reduce the total incident duration. AID relies on an algorithm to identify the occurrence of incidents by analyzing real-time traffic data collected from surveillance detectors. Significant research has been performed to develop AID algorithms for incident detection on freeways; however, similar research on major arterial streets remains largely at the initial stage of development and testing. This dissertation research aims to identify design strategies for the deployment of an Artificial Neural Network (ANN) based AID algorithm for major arterial streets. A section of the US-1 corridor in Miami-Dade County, Florida was coded in the CORSIM microscopic simulation model to generate data for both model calibration and validation. To better capture the relationship between the traffic data and the corresponding incident status, Discrete Wavelet Transform (DWT) and data normalization were applied to the simulated data. Multiple ANN models were then developed for different detector configurations, historical data usage, and the selection of traffic flow parameters. To assess the performance of different design alternatives, the model outputs were compared based on both detection rate (DR) and false alarm rate (FAR). The results show that the best models were able to achieve a high DR of between 90% and 95%, a mean time to detect (MTTD) of 55-85 seconds, and a FAR below 4%. The results also show that a detector configuration including only the mid-block and upstream detectors performs almost as well as one that also includes a downstream detector. 
In addition, DWT was found to improve model performance, and the use of historical data from previous time cycles improved the detection rate. Speed was found to have the most significant impact on the detection rate, while volume contributed the least. The results from this research provide useful insights into the design of AID systems for arterial street applications.
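The three evaluation metrics used to compare the design alternatives (DR, FAR, MTTD) can be computed from incident and alarm logs as follows. The incident intervals and alarm times below are invented for illustration.

```python
# Sketch of the detection-performance metrics used to compare AID models:
# detection rate (DR), false alarm rate (FAR) and mean time to detect
# (MTTD). The incident and alarm records below are invented.

def evaluate(incidents, alarms, n_intervals):
    """
    incidents:   list of (start, end) times of true incidents.
    alarms:      list of alarm times raised by the algorithm.
    n_intervals: total number of decision intervals examined.
    """
    detect_delays = []
    for start, end in incidents:
        hits = [t for t in alarms if start <= t <= end]
        if hits:
            detect_delays.append(min(hits) - start)   # first valid alarm
    false_alarms = [t for t in alarms
                    if not any(s <= t <= e for s, e in incidents)]
    dr = len(detect_delays) / len(incidents)
    far = len(false_alarms) / n_intervals
    mttd = sum(detect_delays) / len(detect_delays) if detect_delays else None
    return dr, far, mttd

incidents = [(100, 400), (900, 1200)]   # seconds
alarms = [160, 170, 975, 2000]          # 2000 s falls outside any incident
dr, far, mttd = evaluate(incidents, alarms, n_intervals=500)
```

With these invented records both incidents are detected (DR = 1.0), one alarm is false, and the mean detection delay is 67.5 s, i.e. the same quantities reported as 90-95%, below 4%, and 55-85 s for the best models above.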
Abstract:
The main objective of physics-based modeling of power converter components is to design the whole converter with respect to physical and operational constraints. To this end, all the elements and components of the energy conversion system are modeled numerically and combined to form a behavioral model of the whole system. Previously proposed high-frequency (HF) models of power converters are circuit models that account only for the parasitic inner parameters of the power devices and the connections between components. This dissertation aims to obtain physics-based models for power conversion systems that not only represent the steady-state behavior of the components but also predict their high-frequency characteristics. The developed physics-based model represents the physical device with a high level of accuracy in predicting its operating condition, and enables the accurate design of elements such as effective EMI filters, switching algorithms and circuit topologies [7]. One application of the developed modeling technique is the design of new topologies for high-frequency, high-efficiency converters for variable-speed drives. The main advantage of the modeling method presented in this dissertation is the practical design of an inverter for high-power applications that can overcome the blocking-voltage limitations of available power semiconductor devices. Another advantage is the selection of the best-matching topology, with an inherent reduction of switching losses that can be exploited to improve overall efficiency. The physics-based modeling approach in this dissertation makes it possible to design any power electronic conversion system to meet electromagnetic standards and design constraints.
This includes physical characteristics such as decreasing the size and weight of the package, optimizing interactions with neighboring components and achieving higher power density. In addition, the electromagnetic behaviors and signatures can be evaluated, including the study of conducted and radiated EMI interactions as well as the design of attenuation measures and enclosures.
Abstract:
Today, modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can bring significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if hardware acceleration is applied to the elements that incur performance overheads. The concepts discussed in this study can be applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture.
(3) System specifications, such as performance, energy consumption and resource costs, are measured and analyzed, and the trade-offs among these three factors are compared and balanced. Different hardware accelerators are implemented and evaluated against system requirements. (4) A system verification platform is designed based on the Integrated Circuit (IC) workflow, and hardware optimization techniques are used to achieve higher performance at lower resource cost. Experimental results show that the proposed hardware acceleration workflow is an efficient technique: the system achieves a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves a 7.9X performance improvement and saves 75.85% of energy consumption.
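The relationship between the quoted speedups and energy savings follows from E = P · t: an accelerator that finishes k times sooner can save energy even while drawing more power. The sketch below uses the speedups quoted in the text with an invented accelerator power draw (1.9x the software baseline) chosen purely to show how such saving percentages can arise; it is not a measurement from the dissertation.

```python
# Relating speedup to energy saving: E = P * t, so an accelerator that is
# k times faster but draws higher power can still save energy overall.
# Power figures are invented; speedups are the ones quoted in the text.

def energy_saving(speedup, p_base, p_acc):
    """Fractional energy saved when runtime drops by `speedup` at power p_acc."""
    t_base, t_acc = 1.0, 1.0 / speedup
    e_base = p_base * t_base
    e_acc = p_acc * t_acc
    return 1.0 - e_acc / e_base

# Bus-IP design: 2.8X faster; saves energy as long as power stays < 2.8x.
saving_bus = energy_saving(speedup=2.8, p_base=1.0, p_acc=1.9)
# Co-processor design: 7.9X faster at the same assumed power draw.
saving_cop = energy_saving(speedup=7.9, p_base=1.0, p_acc=1.9)
```

Under this assumed power ratio the model gives roughly 32% and 76% savings, close to the reported 31.84% and 75.85%, illustrating why the much faster co-processor design also saves far more energy.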