863 results for set based design
Abstract:
Motivation: Influenza A viral heterogeneity remains a significant threat due to unpredictable antigenic drift in seasonal influenza and antigenic shifts caused by the emergence of novel subtypes. The annual reformulation of multivalent influenza vaccines targets the influenza A and B strains likely to predominate in future influenza seasons, but does not induce broad, cross-protective immunity against emergent subtypes. Better strategies are needed to prevent future pandemics. Cross-protection can be achieved by activating CD8+ and CD4+ T cells against highly-conserved regions of the influenza genome. We combine available experimental data with informatics-based immunological predictions to help design vaccines potentially able to induce cross-protective T cells against multiple influenza subtypes. Results: To exemplify our approach we designed two epitope ensemble vaccines comprising highly-conserved and experimentally-verified immunogenic influenza A epitopes as putative non-seasonal influenza vaccines; one specifically targets the US population and the other is a universal vaccine. The USA-specific vaccine comprised six CD8+ T cell epitopes (GILGFVFTL, FMYSDFHFI, GMDPRMCSL, SVKEKDMTK, FYIQMCTEL, DTVNRTHQY) and three CD4+ epitopes (KGILGFVFTLTVPSE, EYIMKGVYINTALLN, ILGFVFTLTVPSERG). The universal vaccine comprised eight CD8+ epitopes (FMYSDFHFI, GILGFVFTL, ILRGSVAHK, FYIQMCTEL, ILKGKFQTA, YYLEKANKI, VSDGGPNLY, YSHGTGTGY) and the same three CD4+ epitopes. Our USA-specific vaccine has a population protection coverage (PPC: the proportion of the population potentially responsive to one or more component epitopes of the vaccine) of over 96% and covers 95% of observed influenza subtypes. The universal vaccine has a PPC of over 97% and covers 88% of observed subtypes.
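The PPC metric above can be made concrete with a small sketch. Assuming Hardy-Weinberg equilibrium and illustrative, made-up HLA allele frequencies (the study itself uses observed frequencies for each target population), the probability that an individual carries no allele restricted by any vaccine epitope is the product over HLA loci of the squared uncovered frequency, and PPC is its complement:

```python
# Minimal PPC sketch under Hardy-Weinberg equilibrium; the allele
# frequencies below are hypothetical, for illustration only.

def ppc(covered_allele_freqs_by_locus):
    """For each HLA locus, pass the summed frequency of alleles
    restricted by at least one vaccine epitope."""
    p_unprotected = 1.0
    for f_covered in covered_allele_freqs_by_locus:
        # Probability an individual carries neither copy of a covered allele.
        p_unprotected *= (1.0 - f_covered) ** 2
    return 1.0 - p_unprotected

# Hypothetical covered-allele frequencies at HLA-A and HLA-B:
print(ppc([0.55, 0.40]))  # ~0.93
```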
Abstract:
Modern manufacturing systems should satisfy emerging needs related to sustainable development. The design of sustainable manufacturing systems can be valuably supported by simulation, traditionally employed mainly for time and cost reduction. In this paper, a multi-purpose digital simulation approach is proposed to deal with sustainable manufacturing system design through Discrete Event Simulation (DES) and 3D digital human modelling. DES models integrated with data on the power consumption of the manufacturing equipment are used to simulate different scenarios with the aim of improving productivity as well as energy efficiency, avoiding resource and energy waste. 3D simulation based on digital human modelling is employed to assess human factors issues related to the ergonomics and safety of manufacturing systems. The approach is applied to enhance the sustainability of a real manufacturing cell in the aerospace industry, automated with robotic deburring. Alternative scenarios are proposed and simulated, yielding a significant improvement in energy efficiency (an 87% reduction in energy consumption) for the new deburring cell and a reduction in energy consumption of around 69% for the coordinate measuring machine, with high potential annual energy cost savings. Moreover, the simulation-based ergonomic assessment of human operator postures yields a 25% improvement in the workcell ergonomic index.
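As a hedged illustration of the core idea, coupling DES with equipment power data (this is not the authors' model), the sketch below uses SimPy to accumulate busy- and idle-state energy for a single cell over one shift; power draws and cycle times are assumptions:

```python
# Sketch of energy-aware discrete event simulation with SimPy; the power
# draws and timings are illustrative, not values from the paper.
import simpy

BUSY_KW, IDLE_KW = 12.0, 3.0       # assumed power draw of the cell
CYCLE_MIN, ARRIVAL_MIN = 8.0, 10.0  # assumed process/arrival times (minutes)

energy_kwh = {"busy": 0.0, "idle": 0.0}

def cell(env):
    while True:
        # Idle until the next part arrives.
        idle_start = env.now
        yield env.timeout(ARRIVAL_MIN)
        energy_kwh["idle"] += IDLE_KW * (env.now - idle_start) / 60.0
        # Process the part.
        busy_start = env.now
        yield env.timeout(CYCLE_MIN)
        energy_kwh["busy"] += BUSY_KW * (env.now - busy_start) / 60.0

env = simpy.Environment()
env.process(cell(env))
env.run(until=8 * 60)   # one 8-hour shift, in minutes
print(energy_kwh)       # rerun with new parameters to compare scenarios
```

Comparing alternative scenarios then amounts to rerunning the model with modified equipment parameters and contrasting the accumulated energy figures.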
Abstract:
Performance-based design (PBD), in a deterministic approach, characterizes performance objectives with respect to desired performance levels. Performance objectives are then associated with the damage state and the established level of seismic hazard. Despite this rational approach, its application remains difficult. Reliable tools are therefore needed to capture the evolution, distribution and quantification of damage. In addition, all phenomena related to nonlinearity (materials and deformations) must also be taken into account. This research shows how damage mechanics can contribute to solving this problem, through an adaptation of the modified compression field theory and other complementary theories. The proposed formulation, adapted for monotonic, cyclic and pushover loading, makes it possible to consider nonlinear shear effects coupled with flexural and axial load mechanisms. This formulation is specifically applied to the nonlinear analysis of concrete structural elements subjected to non-negligible shear effects. This new approach, implemented in EfiCoS (a damage-mechanics-based finite element program), including the modelling criteria, is also presented here. Calibrations of this new approach, comparing predictions with experimental data, were carried out for reinforced concrete shear walls as well as for bridge beams and piers where shear effects must be taken into account. This new, improved version of EfiCoS proved capable of accurately evaluating the parameters associated with global performance, such as displacements, system strength, effects related to the cyclic response, and the quantification, evolution and distribution of damage. Remarkable results were also obtained with respect to the proper detection of engineering limit states such as cracking, unit strains, cover spalling, core crushing, local yielding of the reinforcement bars and system degradation, among others. As a practical tool for applying PBD, relationships between predicted damage indices and performance levels were obtained and expressed in the form of charts and tables. These charts were developed as a function of drift and displacement ductility. A dedicated table was developed to relate engineering limit states, damage, drift and the traditional performance levels. The results showed excellent agreement with the experimental data, making the proposed formulation and the new version of EfiCoS powerful tools for applying the PBD methodology in a deterministic approach.
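The abstract does not reproduce the formulation itself, but damage mechanics models of this kind rest on a standard scalar relation in which a damage variable degrades the elastic stiffness; a generic one-dimensional form, for orientation only, is:

```latex
% Generic isotropic damage relation (orientation only); the thesis couples
% damage variables of this kind with the modified compression field theory.
\sigma = (1 - d)\, E_0 \,\varepsilon, \qquad 0 \le d \le 1
```

Here d = 0 corresponds to intact material and d approaching 1 to complete loss of stiffness; tracking such indices is what allows the quantification, evolution and distribution of damage mentioned above.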
Abstract:
Traditional engineering design methods are based on Simon's (1969) use of the concept of function, and as such collectively suffer from both theoretical and practical shortcomings. Researchers in the field of affordance-based design have borrowed from ecological psychology in an attempt to address the blind spots of function-based design, developing alternative ontologies and design processes. This dissertation presents function and affordance theory as both compatible and complementary. We first present a hybrid approach to design for technology change, followed by a reconciliation and integration of function and affordance ontologies for use in design. We explore the integration of a standard function-based design method with an affordance-based design method, and demonstrate how affordance theory can guide the early application of function-based design. Finally, we discuss the practical and philosophical ramifications of embracing affordance theory's roots in ecology and ecological psychology, and explore the insights and opportunities made possible by an ecological approach to engineering design. The primary contribution of this research is the development of an integrated ontology for describing and designing technological systems using both function- and affordance-based methods.
Abstract:
In recent years, Business Model Canvas design has evolved from a paper-based activity to one that uses dedicated computer-aided business model design tools. We propose a set of guidelines to help design more coherent business models. When combined with the functionalities offered by CAD tools, these guidelines show great potential to improve business model design as an ongoing activity. However, before building more complex solutions, it is necessary to compare how basic business model design tasks are performed with a CAD system versus its paper-based counterpart. To this end, we carried out an experiment to measure user perceptions of both solutions. Performance was evaluated by applying our guidelines to both solutions and then comparing the resulting business model designs. Although CAD did not outperform paper-based design, the results are very encouraging for the future of computer-aided business model design.
Abstract:
The term "Logic Programming" refers to a variety of computer languages and execution models which are based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, Knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required in order to meet the demands of applications such as those contemplated for next generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears as a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and therefore the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space efficient sequential systems a reality. Therefore, the model herein presented is capable of retaining sequential execution speed similar to that of high performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.
Abstract:
The problem of uncertainty propagation in composite laminate structures is studied. An approach based on the optimal design of composite structures to achieve a target reliability level is proposed. Using the Uniform Design Method (UDM), a set of design points is generated over a design domain centred at the mean values of the random variables, aimed at exploring the variability of the design space. The most critical Tsai number, the structural reliability index and the sensitivities are obtained for each UDM design point, using the maximum load obtained from the optimal design search. Using the UDM design points as input/output patterns, an Artificial Neural Network (ANN) is developed based on supervised evolutionary learning. Finally, using the developed ANN, a Monte Carlo simulation procedure is implemented and the variability of the structural response is studied through global sensitivity analysis (GSA). The GSA is based on the first-order Sobol indices and relative sensitivities. An appropriate GSA algorithm for obtaining the Sobol indices is proposed. The most important sources of uncertainty are identified.
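A minimal sketch of the final step described above, Monte Carlo estimation of first-order Sobol indices on a surrogate, is given below using the Saltelli pick-freeze estimator, with a simple analytic stand-in for the trained ANN; the model, input distributions and sample size are illustrative assumptions:

```python
# First-order Sobol indices via the Saltelli pick-freeze estimator;
# surrogate(), distributions and sample size are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def surrogate(x):
    # Stand-in for the trained ANN mapping random material variables to
    # the critical structural response; replace with the real network.
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

n, d = 100_000, 3
A = rng.normal(1.0, 0.1, size=(n, d))   # assumed input distributions
B = rng.normal(1.0, 0.1, size=(n, d))

yA, yB = surrogate(A), surrogate(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # vary only input x_i
    S_i = np.mean(yB * (surrogate(ABi) - yA)) / var_y
    print(f"S_{i + 1} = {S_i:.3f}")
```

The inputs with the largest indices are the dominant sources of uncertainty, which is exactly the identification step the abstract concludes with.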
Abstract:
Variations in manufacturing process parameters and environmental aspects may affect the quality and performance of composite materials, which consequently affects their structural behaviour. Reliability-based design optimisation (RBDO) and robust design optimisation (RDO) search for safe structural systems with minimal variability of response when subjected to uncertainties in material design parameters. An approach that simultaneously considers reliability and robustness is proposed in this paper. Depending on a given reliability index imposed on composite structures, a trade-off is established between the performance targets and robustness. Robustness is expressed in terms of the coefficient of variation of the constrained structural response weighted by its nominal value. The normed Pareto front is built, and the point nearest the origin is taken as the best solution of the bi-objective optimisation problem.
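The selection rule in the last sentence can be sketched as follows; the sample front values are illustrative, not taken from the paper:

```python
# Selecting the best compromise on a bi-objective Pareto front as the
# point nearest the origin of the normalized front; values are made up.
import numpy as np

# Columns: objective 1 (e.g. performance target) and objective 2
# (robustness: coefficient of variation of the constrained response
# weighted by its nominal value). Each row is one Pareto-optimal design.
front = np.array([
    [12.0, 0.080],
    [13.5, 0.055],
    [15.0, 0.042],
    [18.0, 0.038],
])

# Normalize each objective to [0, 1] so both carry equal weight.
normed = (front - front.min(axis=0)) / (front.max(axis=0) - front.min(axis=0))

best = np.argmin(np.linalg.norm(normed, axis=1))
print("best compromise design:", front[best])
```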
Abstract:
The main objective of this study was to evaluate the hydraulic performance of riprap spurs and weirs in controlling bank erosion along the southern part of the Raccoon River, upstream of the U.S. Highway 169 Bridge, using the commercially available model FESWMS together with field monitoring. Based on two years of monitoring and numerical modeling, the design of the structures was found to be successful overall, including their spacing and stability. The riprap material incorporated into the structures was directly and favorably correlated with the flow transmission through the structure; in other words, it dictated the permeable nature of the structure. The permeable dikes and weirs chosen in this study created a smaller volume of scour in the vicinity of the structure toes and thus carry less risk of collapse than impermeable structures. Because the structures permitted flow transmission, fine sand particles could fill the gaps in the rock interstices and thus cement and better stabilize the structures. During bank-full flows, the maximum scour hole was recorded away from the structure toes, and the scour-hole size was directly related to the protrusion angle of the structure to the flow. It was concluded that the proposed structure inclination with respect to the main flow direction was appropriate, since it provides maximum bank protection while creating the largest volume of local scour away from the structure and towards the center of the channel. Furthermore, the lowest potential for bank erosion also occurs with the present set-up design chosen by the IDOT. About 2 ft of new material was deposited in the area between the structures over the period extending from the construction day to May 2007. Surveys obtained by sonar and the presence of vegetation indicate that new material has been added at the bank toes. Finally, the structures provided higher variability in bed topography, forming resting pools, creating flow shade on the leeward side of the structures, and separating bed substrate due to different flow conditions. Another notable environmental benefit of rock riprap weirs and dikes is the creation of resting pools, especially in 2007 (the second year of the project). The literature indicates that the magnitude of these benefits to aquatic habitat is directly related to the induced scour-hole volume.
Abstract:
A power electronics device is a control and regulation system that converts electricity from its available form into a desired new form while controlling the flow of electrical power from the source to the point of use. This differs from signal electronics, in which electricity is typically used to transfer information by means of different states. Power electronics devices are usually compared in terms of reliability, size, efficiency, control accuracy and, of course, price. Typical power electronics devices include frequency converters, UPS (Uninterruptible Power Supply) devices, welding machines, induction heaters and various power supplies. Traditionally, the control of these devices has been implemented using microprocessors, ASICs (Application Specific Integrated Circuits) or ICs (Integrated Circuits), and analog controllers. This study analyses the suitability of FPGAs (Field Programmable Gate Arrays) for the control of power electronics. The structure of an FPGA consists of various logic elements and the interconnections between them. The logic elements are gates and flip-flops. The interconnections and logic elements are fixed in the chip, and their composition and number cannot be changed afterwards; programmability arises from the connections between the elements. The chip contains numerous switches, up to millions, whose states can be set, so that a countless number of different functional configurations can be formed from the chip's basic elements. FPGAs have long been used in communications products, and their development has therefore been rapid in recent years while prices have fallen. As a result, the FPGA has become an attractive alternative for the control of power electronics devices as well. In this doctoral work, the suitability of FPGAs was studied using two demanding and different practical power electronics devices: a frequency converter and a welding machine. For both test cases, prototypes were built together with Finnish industrial companies in the field, their control electronics were converted to an FPGA-based design, and new types of control methods exploiting this new technology were developed. The operation of the prototypes was compared with corresponding commercial products controlled by traditional methods, revealing the advantages brought by the parallel computation enabled by FPGAs in the operation of both power electronics devices. The work also presents new methods and tools for the development and testing of FPGA-based control systems; with these methods, product development can be made as fast and efficient as possible. In addition, an internal FPGA control and communication bus structure was developed to serve power electronics control applications. The new communication structure also promotes the reuse of existing subsystems in future applications and product generations.
Abstract:
This thesis evaluates methods for obtaining high performance in applications running on the mobile Java platform. Based on the evaluated methods, an optimization was made to a Java extension API running on top of the Symbian operating system. The API provides location-based services for mobile Java applications. As part of this thesis, the JNI implementation in Symbian OS was also benchmarked. A benchmarking tool was implemented in the analysis phase in order to run an extensive set of performance tests. Based on the benchmark results, it was noted that the landmarks implementation of the API performed very slowly with large amounts of data. The existing implementation proved very inconvenient to optimize because the early implementers had not taken performance and design issues into consideration. A completely new architecture was implemented for the API in order to provide scalable landmark initialization and data extraction using lazy initialization methods. Runtime memory consumption was also an important part of the optimization. Measurements taken after the optimization showed the improvement to be very effective: most of the common API use cases performed extremely well compared to the old implementation. Performance is an important quality attribute of any piece of software, especially in embedded mobile devices. Typically, projects get into trouble with performance because there are no clear performance targets and no knowledge of how to achieve them. Well-known guidelines and performance models help to achieve good overall performance in Java applications and programming interfaces.
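A minimal sketch of the lazy-initialization pattern described above, written in Python for brevity (the thesis implementation targets Symbian/Java, and the datastore API here is hypothetical): the expensive record parsing is deferred until a landmark field is actually accessed.

```python
# Lazy initialization sketch: constructing a Landmark is cheap; parsing
# happens only on first field access. Store/record names are illustrative.

class Landmark:
    def __init__(self, store, record_id):
        self._store, self._id = store, record_id
        self._fields = None                 # nothing parsed yet

    def _load(self):
        if self._fields is None:            # parse on first access only
            self._fields = self._store.read_record(self._id)
        return self._fields

    @property
    def name(self):
        return self._load()["name"]
```

Creating thousands of such objects scales well because initialization cost is paid per accessed landmark, not per stored landmark.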
Abstract:
The work described in this thesis aims to support the distributed design of integrated systems and considers specifically the need for collaborative interaction among designers. Particular emphasis was given to issues which were only marginally considered in previous approaches, such as the abstraction of the distribution of design automation resources over the network, the possibility of both synchronous and asynchronous interaction among designers, and the support for extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such a structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based on the joint application of two homonymic technologies: CAD Frameworks and object-oriented frameworks. The former concept was coined in the late 80's within the electronic design automation community and comprises a layered software environment which aims to support CAD tool developers, CAD administrators/integrators and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model for building extensible and reusable object-oriented software subsystems. In this work, we proposed to create an object-oriented framework which includes extensible sets of design data primitives and design tool building blocks. This object-oriented framework is included within a CAD Framework, where it plays important roles in typical CAD Framework services such as design data representation and management, versioning, user interfaces, design management and tool integration. The implemented CAD Framework - named Cave2 - followed the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the possibilities granted by the object-oriented framework foundations allowed a series of improvements which were not available in previous approaches:
- Object-oriented frameworks are extensible by design, and this also holds for the implemented sets of design data primitives and design tool building blocks. Both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and such extensions and adaptations still inherit the architectural and functional aspects implemented in the object-oriented framework foundation.
- The design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows for different visualization strategies for a given design data set, which gives collaborating parties the flexibility to choose individual visualization settings.
- The control of the consistency between semantics and visualization - a particularly important issue in a design environment with multiple views of a single design - is also included in the foundations of the object-oriented framework. This mechanism is generic enough to be used by further extensions of the design data model, as it is based on the inversion of control between view and semantics: the view receives the user input and propagates the event to the semantic model, which evaluates whether a state change is possible; if so, it triggers the change of state of both semantics and view. Our approach took advantage of this inversion of control and included a layer between semantics and view to account for multi-view consistency.
- To optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his/her respective design views. The information about each interaction is encapsulated inside an event object, which may be propagated to the design semantics - and thus to other possible views - according to the consistency policy in use. Furthermore, the use of event pools allows for late synchronization between view and semantics in case of unavailability of a network connection between them.
- The use of proxy objects significantly raised the level of abstraction of the integration of design automation resources, as both remote and local tools and services are accessed through method calls on a local object. The connection to remote tools and services using a look-up protocol also completely abstracts the network location of such resources, allowing for resource addition and removal at runtime.
- The implemented CAD Framework is completely based on Java technology, relying on the Java Virtual Machine as the layer which grants independence between the CAD Framework and the operating system.
All these improvements contributed to a higher abstraction of the distribution of design automation resources and also introduced a new paradigm for remote interaction between designers. The resulting CAD Framework is able to support fine-grained collaboration based on events, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment. This can increase group awareness and allow a richer transfer of experiences among designers, significantly improving the collaboration potential when compared to previously proposed file-based or record-based approaches. Three case studies were conducted to validate the proposed approach, each one focusing on a subset of the contributions of this thesis. The first uses the proxy-based resource distribution architecture to implement a prototyping platform using reconfigurable hardware modules. The second extends the foundations of the implemented object-oriented framework to support interface-based design; these extensions - design representation primitives and tool blocks - are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study concerns the integration of multimedia metadata into the design data model; this possibility is explored in the frame of an online educational and training platform.
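A minimal sketch of the inversion of control between view and semantics described above: the view does not change itself, it wraps user input in an event, the semantic model validates it, and only then are all registered views updated. Class and method names are illustrative, not those of Cave2.

```python
# Inversion of control between view and semantics: views forward events,
# the semantic model decides, and all views refresh consistently.

class SemanticModel:
    def __init__(self):
        self._views, self._state = [], {}

    def attach(self, view):
        self._views.append(view)

    def handle(self, event):
        if self._is_valid(event):                  # semantics decides
            self._state[event["key"]] = event["value"]
            for v in self._views:                  # then every view follows
                v.refresh(event)

    def _is_valid(self, event):
        return event.get("value") is not None      # stand-in design rule

class View:
    def __init__(self, model):
        model.attach(self)
        self._model = model

    def on_user_input(self, key, value):
        # The view only forwards the interaction as an event object.
        self._model.handle({"key": key, "value": value})

    def refresh(self, event):
        print(f"view updated: {event['key']} = {event['value']}")

model = SemanticModel()
a, b = View(model), View(model)   # two collaborating designers' views
a.on_user_input("wire_width", 2)  # both views refresh consistently
```

Queuing the event objects in pools, rather than delivering them immediately, is what allows the late view/semantics synchronization mentioned above when the network connection is unavailable.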