859 results for Design time
Abstract:
This master's thesis deals with the design and productization of the corner car, an elevator car used in special cases. The work was carried out for KONE Oyj. In the thesis, a modular product architecture was created for the corner car and its delivery process was defined. The goal of the work was to cover 48.12% of possible customer requirements and to reduce design time from the previous 24 hours to four hours. Based on the comments of an experienced designer of case-specific corner cars, the goal was achieved: 48.12% of customer requirements were included in the product model as configuration options. The thesis first introduces product design, quality management, parametric modeling, mass customization and product data management. After that, all the variables most relevant to the productization of the corner car are discussed. The product model of the corner car is then designed and modeled systematically using a top-down modeling approach, and manufacturing drawings are created for the parts and assemblies. The main tool used was the Pro/ENGINEER software, with which the parametric product model was built; the Ansys software was used for the structural strength analysis. The goal was reached by analyzing the most essential elements of mass customization and by following an analytical and systematic product development process. With an emphasis on quality, the product architecture was validated by carrying out a limited production run of three corner cars configured with the product model. One of the cars was test-assembled at the Hyvinkää factory.
Abstract:
Developing a modular product strengthens a company's competitiveness and makes it easier to satisfy customer needs. Compared with mass production, a product family offers a wider product range, and compared with mass customization, better producibility. Different product variants and modules can be developed in parallel. A modularized product family eases the company's work in every phase, from product design through maintenance operations and finally to product disassembly. For customers, the important benefits of modularization are better product quality and serviceability. Developing a completely new modular product family requires substantial resources in the design department. In a modular product, design work can be targeted at a single module, which shortens design times. This bachelor's thesis investigated how modularity has been implemented in welding automation applications and discusses areas for improvement, since welding automation applications are largely produced as customer-specific solutions. The product family examined consisted of robotic welding gantries and welding towers.
Abstract:
The development of interactive systems involves several professionals, and the integration between them normally relies on common artifacts, such as models, that drive the development process. In the model-driven development approach, the interaction model is an artifact that captures most of the aspects related to what the user can do, and how, while interacting with the system. Furthermore, the interaction model may be used to identify usability problems at design time. The central problem addressed by this thesis is therefore twofold: first, interaction modeling, from a perspective that helps the designer make explicit, to the developer who will implement the interface, the aspects related to the interaction process; second, the early identification of usability problems, which aims to reduce the final cost of the application. To achieve these goals, this work presents (i) the ALaDIM language, which helps the designer in the conception, representation and validation of interactive message models; (ii) the ALaDIM editor, built using the EMF (Eclipse Modeling Framework) and the technologies standardized by the OMG (Object Management Group); and (iii) the ALaDIM inspection method, which allows the early identification of usability problems using ALaDIM models. The ALaDIM language and editor were respectively specified and implemented using the OMG standards and can be used in MDA (Model Driven Architecture) activities. Beyond that, we evaluated both the ALaDIM language and editor using a CDN (Cognitive Dimensions of Notations) analysis. Finally, this work reports an experiment that validated the ALaDIM inspection method.
Abstract:
A self-adaptive software system is able to change its structure and/or behavior at runtime in response to changes in its requirements, environment or components. One way to achieve self-adaptation is the use of a sequence of actions (known as an adaptation plan), which is typically defined at design time. This is the approach adopted by Cosmos, a framework to support the configuration and management of resources in distributed environments. In order to deal with the variability inherent in self-adaptive systems, such as the appearance of new components that allow configurations which were not envisioned at development time, this dissertation aims to give Cosmos the capability of generating adaptation plans at runtime. To this end, it was necessary to reengineer the Cosmos framework so that it could be integrated with a mechanism for the dynamic generation of adaptation plans. In this context, our work focused on this reengineering of Cosmos. Among the changes made, we highlight the metamodel used to represent components and applications, which was redefined based on an architectural description language. These changes were propagated to the implementation of a new Cosmos prototype, which was then used to develop a case study application as a proof of concept. Another effort was to make Cosmos more attractive by integrating it with another platform; in this dissertation, the OSGi platform was chosen, as it is well known and accepted by industry.
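As a rough illustration of what "generating adaptation plans at runtime" means (this is not Cosmos code; the component names and the stop/start/bind actions are invented for the sketch), a planner can diff the currently running configuration against a target one and emit an ordered list of reconfiguration actions:

# Minimal sketch of runtime adaptation-plan generation (hypothetical names,
# not the Cosmos API): diff the current configuration against a target one
# and emit an ordered list of reconfiguration actions.

def generate_plan(current, target):
    """current/target: dict component_name -> set of components it binds to."""
    plan = []
    for comp in current:                      # stop components no longer wanted
        if comp not in target:
            plan.append(("stop", comp))
    for comp in target:                       # start components that are new
        if comp not in current:
            plan.append(("start", comp))
    for comp, deps in target.items():         # (re)bind changed dependencies
        for dep in sorted(deps - current.get(comp, set())):
            plan.append(("bind", comp, dep))
    return plan

if __name__ == "__main__":
    running = {"http-server": {"logger"}, "logger": set()}
    desired = {"http-server": {"logger", "cache"}, "logger": set(), "cache": set()}
    for action in generate_plan(running, desired):
        print(action)   # ('start', 'cache') then ('bind', 'http-server', 'cache')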
Abstract:
One way to deal with the high complexity of current software systems is through self-adaptive systems. Self-adaptive systems must be able to monitor themselves and their environment, analyze the monitored data to determine the need for adaptation, decide how the adaptation will be performed and, finally, make the necessary adjustments. One way to perform the adaptation of a system is to generate, at runtime, the process that will carry out the adaptation. An advantage of this approach is the possibility of taking into account features that can only be evaluated at runtime, such as the emergence of new components that allow new architectural arrangements which were not foreseen at design time. The main objective of this work is to use a framework for the dynamic generation of processes to generate architectural adaptation plans in an OSGi environment. Our main interest is to evaluate how this framework for the dynamic generation of processes behaves in new environments.
Abstract:
This work presents a design method for building software components, from the software functional model down to the assembly code level, in a rigorous fashion. The method is based on the B method, which was developed with the support and interest of British Petroleum (BP). One goal of this methodology is to contribute to solving an important problem known as the Verifying Compiler. In addition, this work describes a formal model of the Z80 microcontroller and of a real system from the petroleum area. To achieve this goal, the formal model of the Z80 was developed and documented, as it is a key component for verification down to the assembly level. In order to improve the methodology, it was applied to a petroleum production test system, which is presented in this work. Part of the technique is performed manually; however, most of these activities can be automated by a specific compiler. To build such a compiler, the formal modeling of the microcontroller and of the production test system should provide relevant knowledge and experience for the design of a new compiler. In summary, this work should improve the viability of one of the most stringent criteria for formal verification: speeding up the verification process, reducing design time, and increasing the quality and reliability of the final software product. All of these qualities are very important for systems that involve serious risks or require high confidence, which is very common in the petroleum industry.
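The B method works with abstract machines, invariants and refinement proofs; as a loose, language-shifted illustration of that state-transition-plus-invariant style (this is Python, not B, and the registers, opcodes and invariant below are simplified assumptions rather than the thesis's Z80 model), one can write:

# Illustrative sketch (not the B method, not a full Z80 model): a tiny
# state-transition model of a Z80-like CPU with an invariant checked
# after every step, mimicking the machine/invariant style of formal models.

def invariant(state):
    # Assumed invariant: A stays within 8 bits and PC is non-negative.
    return 0 <= state["A"] <= 255 and state["PC"] >= 0

def step(state, program):
    """Execute one instruction; state = {'A': accumulator, 'PC': program counter}."""
    op = program[state["PC"]]
    if op[0] == "LD_A_n":            # load immediate into A
        new = {"A": op[1], "PC": state["PC"] + 1}
    elif op[0] == "INC_A":           # increment A modulo 256 (8-bit register)
        new = {"A": (state["A"] + 1) % 256, "PC": state["PC"] + 1}
    elif op[0] == "NOP":
        new = {"A": state["A"], "PC": state["PC"] + 1}
    else:
        raise ValueError("unknown opcode " + op[0])
    assert invariant(new), "invariant violated"
    return new

if __name__ == "__main__":
    prog = [("LD_A_n", 254), ("INC_A",), ("INC_A",), ("NOP",)]
    s = {"A": 0, "PC": 0}
    while s["PC"] < len(prog):
        s = step(s, prog)
    print(s)   # {'A': 0, 'PC': 4} -- A wrapped from 255 to 0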
Abstract:
Once the relationship between the starter motor components and their functions is defined, it is possible to develop a mathematical model capable of predicting the starter's behavior during operation. One important aspect is the behavior of the engagement system. Developing a mathematical tool capable of predicting it is a valuable step toward reducing design time, cost and engineering effort. A mathematical model, represented by differential equations, can be developed using the laws of physics, evaluating the force balance and the energy flow through the system's degrees of freedom. Another important physical aspect to be considered in this modeling is the impact conditions (particularly at the pinion and ring-gear contact). This work reports the application of those equations in available mathematical software and their solution by the Runge-Kutta numerical integration method, in order to build an accessible engineering tool. Copyright © 2011 SAE International.
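A minimal sketch of that approach, under the assumption of a single-degree-of-freedom force balance m·x″ = F − c·x′ − k·x for the pinion travel (the parameter values are illustrative, not actual starter data), integrated with the classical fourth-order Runge-Kutta method:

# Sketch only: a one-degree-of-freedom force balance for the pinion travel,
# m*x'' = F - c*x' - k*x, integrated with classical 4th-order Runge-Kutta.
# All numerical values are illustrative assumptions, not real starter data.

def deriv(state, t, m=0.05, c=2.0, k=400.0, F=30.0):
    x, v = state
    a = (F - c * v - k * x) / m        # Newton's second law for the pinion
    return (v, a)

def rk4_step(state, t, dt):
    k1 = deriv(state, t)
    k2 = deriv(tuple(s + 0.5 * dt * d for s, d in zip(state, k1)), t + 0.5 * dt)
    k3 = deriv(tuple(s + 0.5 * dt * d for s, d in zip(state, k2)), t + 0.5 * dt)
    k4 = deriv(tuple(s + dt * d for s, d in zip(state, k3)), t + dt)
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c_ + d)
                 for s, a, b, c_, d in zip(state, k1, k2, k3, k4))

if __name__ == "__main__":
    state, t, dt = (0.0, 0.0), 0.0, 1e-4
    for _ in range(2000):                  # simulate 0.2 s of engagement travel
        state = rk4_step(state, t, dt)
        t += dt
    print("displacement %.4f m, velocity %.4f m/s" % state)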
Abstract:
Currently, one of the great concerns of the aeronautical industry is the safety and integrity of the aircraft and its equipment/components under critical flight maneuvers, such as landing, takeoff and emergency maneuvers. Engineers, technicians and scientists are constantly developing new techniques and theories to reduce design and testing time, in order to minimize costs. The Finite Element Method is used more and more in the structural analysis of a project, alongside theories based on experimental results. This work aimed to estimate the critical failure loads in tension, compression and buckling of the Tie-Rod, an aircraft fixture widely used on commercial aircraft. The analysis was performed by the finite element method with the assistance of software, and by analytical calculations. The results showed that the Finite Element Method provides reasonable accuracy and convenience in the calculations, indicating critical load values slightly lower than those found analytically for tension and compression. For buckling, the Finite Element Method indicates a critical load very similar to that found analytically following empirical theories, while Euler's theory results in a slightly higher value. The highest risk is failure by buckling, but the geometric irregularity of Tie-Rod pieces makes the calculations difficult; therefore, a practical test must be done before the results can be validated.
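For reference, the analytical side of such a check is compact; the sketch below uses the classical yield and Euler-buckling formulas with assumed, illustrative dimensions and material data (not the actual Tie-Rod geometry):

# Sketch with assumed data (not the actual Tie-Rod): classical critical loads
# for tension/compression (yield over the cross-section) and Euler buckling,
# P_cr = pi^2 * E * I / (K * L)^2, for a pinned-pinned slender tube.
import math

E = 71.7e9          # Young's modulus of an aluminium alloy [Pa] (assumed)
sigma_y = 503e6     # yield strength [Pa] (assumed)
L = 0.40            # free length [m] (assumed)
K = 1.0             # effective-length factor, pinned-pinned ends
d_o, d_i = 0.020, 0.016   # outer/inner diameters of the tube [m] (assumed)

A = math.pi / 4 * (d_o**2 - d_i**2)          # cross-sectional area
I = math.pi / 64 * (d_o**4 - d_i**4)         # second moment of area

P_yield = sigma_y * A                        # tension/compression limit
P_euler = math.pi**2 * E * I / (K * L)**2    # Euler buckling limit

print(f"yield-limited load : {P_yield/1e3:8.1f} kN")
print(f"Euler buckling load: {P_euler/1e3:8.1f} kN")
print("governing failure mode:",
      "buckling" if P_euler < P_yield else "yield")

For these illustrative numbers the Euler load (about 20 kN) is well below the yield-limited load (about 57 kN), which is consistent with buckling being the governing failure mode.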
Abstract:
This work presents exact, hybrid algorithms for mixed resource allocation and scheduling problems; in general terms, those consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability but are mainly motivated by applications in the field of embedded system design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods while compensating for their weaknesses. In this work, we first consider an allocation and scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search. Next, we face allocation and scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address allocation and scheduling with uncertain, bounded execution times, via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
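In its most stripped-down form, the integrated problem is: choose a resource for every activity (allocation) and an order on each resource (scheduling) so that precedences hold and the makespan is minimized. The decomposition, cut-generation and conflict-based search techniques of the thesis are far beyond a snippet, but a brute-force sketch of that underlying model on an invented four-task instance looks like this:

# Toy instance of integrated allocation & scheduling (illustrative only):
# enumerate all allocations of four tasks to two unary processors, greedily
# list-schedule each allocation respecting precedences, keep the best makespan.
from itertools import product

tasks = {"t1": 4, "t2": 3, "t3": 2, "t4": 5}        # task durations (invented)
prec  = {("t1", "t3"), ("t2", "t4")}                # t1 before t3, t2 before t4
procs = ["p0", "p1"]

def schedule(alloc):
    """Greedy list schedule for a fixed task-to-processor allocation."""
    finish = {}                                     # task -> completion time
    proc_free = {p: 0 for p in procs}
    pending = set(tasks)
    while pending:
        ready = [t for t in pending
                 if all(a not in pending for (a, b) in prec if b == t)]
        t = min(ready)                              # deterministic tie-break
        start = max(proc_free[alloc[t]],
                    max((finish[a] for (a, b) in prec if b == t), default=0))
        finish[t] = start + tasks[t]
        proc_free[alloc[t]] = finish[t]
        pending.remove(t)
    return max(finish.values())

best_span, best_alloc = None, None
for assignment in product(procs, repeat=len(tasks)):
    alloc = dict(zip(tasks, assignment))
    span = schedule(alloc)
    if best_span is None or span < best_span:
        best_span, best_alloc = span, alloc
print("best makespan:", best_span, "allocation:", best_alloc)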
Abstract:
Next generation electronic devices have to guarantee high performance while being less power-consuming and highly reliable for several application domains ranging from the entertainment to the business. In this context, multicore platforms have proven the most efficient design choice but new challenges have to be faced. The ever-increasing miniaturization of the components produces unexpected variations on technological parameters and wear-out characterized by soft and hard errors. Even though hardware techniques, which lend themselves to be applied at design time, have been studied with the objective to mitigate these effects, they are not sufficient; thus software adaptive techniques are necessary. In this thesis we focus on multicore task allocation strategies to minimize the energy consumption while meeting performance constraints. We firstly devise a technique based on an Integer Linear Problem formulation which provides the optimal solution but cannot be applied on-line since the algorithm it needs is time-demanding; then we propose a sub-optimal technique based on two steps which can be applied on-line. We demonstrate the effectiveness of the latter solution through an exhaustive comparison against the optimal solution, state-of-the-art policies, and variability-agnostic task allocations by running multimedia applications on the virtual prototype of a next generation industrial multicore platform. We also face the problem of the performance and lifetime degradation. We firstly focus on embedded multicore platforms and propose an idleness distribution policy that increases core expected lifetimes by duty cycling their activity; then, we investigate the use of micro thermoelectrical coolers in general-purpose multicore processors to control the temperature of the cores at runtime with the objective of meeting lifetime constraints without performance loss.
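A toy version of the allocation problem conveys the shape of the optimal formulation (the numbers are invented): each task's energy and execution time differ per core because of variability, every core must finish its tasks within a deadline, and the objective is minimum total energy. The thesis solves this with an ILP; the sketch below simply enumerates the small search space:

# Toy version of variability-aware task allocation (invented numbers):
# energy[t][c] and time[t][c] differ per core because of process variation;
# minimize total energy subject to a per-core deadline (performance constraint).
from itertools import product

cores = [0, 1]
deadline = 10.0                                   # per-core time budget (assumed)
energy = {"fft": (3.0, 2.2), "dec": (4.0, 5.1), "enc": (2.5, 2.0)}   # [J]
time   = {"fft": (4.0, 6.0), "dec": (5.0, 4.0), "enc": (3.0, 5.0)}   # [ms]

best = None
for assign in product(cores, repeat=len(energy)):
    alloc = dict(zip(energy, assign))
    load = {c: sum(time[t][tc] for t, tc in alloc.items() if tc == c)
            for c in cores}
    if any(load[c] > deadline for c in cores):
        continue                                  # deadline violated: infeasible
    e = sum(energy[t][alloc[t]] for t in alloc)
    if best is None or e < best[0]:
        best = (e, alloc)
print("minimum energy:", best[0], "J  allocation:", best[1])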
Abstract:
Constructing ontology networks typically occurs at design time at the hands of knowledge engineers who assemble their components statically. There are, however, use cases where ontology networks need to be assembled upon request and processed at runtime, without altering the stored ontologies and without tampering with one another. These are what we call "virtual [ontology] networks", and keeping track of how an ontology changes in each virtual network is called "multiplexing". Issues may arise from the connectivity of ontology networks. In many cases, simple flat import schemes will not work, because many ontology managers can cause property assertions to be erroneously interpreted as annotations and ignored by reasoners. Also, multiple virtual networks should optimize their cumulative memory footprint, and where they cannot, this should occur only for very limited periods of time. We claim that these problems should be handled by the software that serves these ontology networks, rather than by ontology engineering methodologies. We propose a method that spreads multiple virtual networks across a 3-tier structure and can reduce the amount of erroneously interpreted axioms under certain raw statement distributions across the ontologies. We assumed OWL as the core language handled by semantic applications in the framework at hand, due to the greater availability of reasoners and rule engines. We also verified that, in common OWL ontology management software, OWL axiom interpretation occurs in the worst-case scenario of a pre-order visit. To measure the effectiveness and space-efficiency of our solution, a Java and RESTful implementation was produced within an Apache project. We verified that a 3-tier structure can accommodate reasonably complex ontology networks better, in terms of the expressivity of OWL axiom interpretation, than flat-tree import schemes can. We measured both the memory overhead of the additional components we put on top of traditional ontology networks and the framework's caching capabilities.
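Independently of the OWL tooling involved, the multiplexing idea can be pictured as copy-on-write layering: stored ontologies are shared and immutable, and each virtual network only records its own additions in a session-scoped delta. The sketch below is an invented, minimal illustration of that layering, not the actual Apache implementation:

# Sketch of "virtual ontology networks" as copy-on-write layers (invented
# names; not the actual implementation): stored ontologies are shared and
# immutable, each virtual network only records its own added statements.

STORED = {                                # shared tier: persisted ontologies
    "core":  {("Person", "subClassOf", "Agent")},
    "staff": {("Employee", "subClassOf", "Person")},
}

class VirtualNetwork:
    def __init__(self, imported):
        self.imported = list(imported)    # which stored ontologies are woven in
        self.delta = set()                # session tier: per-network additions

    def add(self, triple):
        self.delta.add(triple)            # never touches STORED

    def statements(self):
        view = set(self.delta)
        for name in self.imported:
            view |= STORED[name]
        return view

net_a = VirtualNetwork(["core", "staff"])
net_b = VirtualNetwork(["core"])
net_a.add(("Employee", "worksFor", "Organization"))

print(len(net_a.statements()))   # 3: two stored + one session-scoped addition
print(len(net_b.statements()))   # 1: net_b is unaffected by net_a's changes
print(len(STORED["staff"]))      # 1: the stored ontology itself is unchanged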
Abstract:
Objective: To investigate hemodynamic responses to lateral rotation. Design: Time-series within a randomized controlled trial pilot study. Setting: A medical intensive care unit (ICU) and a medical-surgical ICU in two tertiary care hospitals. Patients: Adult patients receiving mechanical ventilation. Interventions: Two-hourly manual or continuous automated lateral rotation. Measurements and Main Results: Heart rate (HR) and arterial pressure were sampled every 6 seconds for > 24 hours, and pulse pressure (PP) was computed. Turn data were obtained from a turning flow sheet (manual turn) or with an angle sensor (automated turn). Within-subject ensemble averages were computed for HR, mean arterial pressure (MAP), and PP across turns. Sixteen patients were randomized to either the manual (n = 8) or automated (n = 8) turn. Three patients did not complete the study due to hemodynamic instability, bed malfunction or extubation, leaving 13 patients (n = 6 manual turn and n = 7 automated turn) for analysis. Seven patients (54%) had an arterial line. Changes in hemodynamic variables were statistically significant increases (p < .05), but few changes were clinically important, defined as ≥ 10 bpm (HR) or ≥ 10 mmHg (MAP and PP), and were observed only in the manual-turn group. All manual-turn patients had prolonged recovery to baseline in HR, MAP and PP of up to 45 minutes (p ≤ .05). No significant turning-related periodicities were found for HR, MAP, or PP. Cross-correlations between variables showed variable lead-lag relations in both groups. A statistically, but not clinically, significant increase in HR of 3 bpm was found for the manual-turn group in the back compared with the right lateral position (F = 14.37, df = 1, 11, p = .003). Conclusions: Mechanically ventilated critically ill patients experience modest hemodynamic changes with manual lateral rotation. A clinically inconsequential increase in HR, MAP, and PP may persist for up to 45 minutes. Automated lateral rotation has negligible hemodynamic effects.
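Computationally, the within-subject ensemble averaging mentioned above amounts to aligning equal-length windows of the 6-second-sampled signal to each turn event and taking a point-wise mean across turns; a minimal sketch with synthetic numbers (not study data) is:

# Minimal sketch of within-subject ensemble averaging (synthetic values, not
# study data): align fixed-length windows of a regularly sampled signal to
# each turn event and average point-wise across turns.

def ensemble_average(signal, turn_indices, window):
    """signal: list of samples; turn_indices: sample index of each turn."""
    segments = [signal[i:i + window] for i in turn_indices
                if i + window <= len(signal)]
    return [sum(seg[k] for seg in segments) / len(segments)
            for k in range(window)]

if __name__ == "__main__":
    hr = [72, 73, 75, 80, 78, 74, 72, 71, 73, 79, 77, 73, 72, 72]
    turns = [2, 8]                      # sample indices at which turns occurred
    print(ensemble_average(hr, turns, window=4))
    # point-wise mean of the two post-turn windows: [74.0, 79.5, 77.5, 73.5]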
Abstract:
A particle accelerator is any device that, using electromagnetic fields, is able to impart energy to charged particles (typically electrons or ionized atoms), accelerating and/or energizing them up to the level required for its purpose. The applications of particle accelerators are countless, beginning with the common TV CRT, passing through medical X-ray devices, and ending in large ion colliders used to uncover the smallest details of matter. Other engineering applications include ion implantation devices for producing better semiconductors and materials with remarkable properties. Materials that must withstand irradiation in future nuclear fusion plants also benefit from particle accelerators. Many devices are required for the correct operation of a particle accelerator. The most important are the particle sources, the guiding, focusing and correcting magnets, the radiofrequency accelerating cavities, the fast deflection devices, the beam diagnostic mechanisms and the particle detectors. Most fast particle deflection devices have historically been built using copper coils and ferrite cores, which could produce a relatively fast magnetic deflection but needed large voltages and currents to counteract the high coil inductance, giving response times in the microsecond range. Beam stability considerations and the new range of energies and sizes of present-day accelerators and their rings require new devices featuring improved wakefield behaviour and faster response (in the nanosecond range). This can only be achieved by an electromagnetic deflection device based on a transmission line. The electromagnetic deflection device (strip-line kicker) produces a transverse displacement of the particle beam travelling close to the speed of light, in order to extract the particles to another experiment or to inject them into a different accelerator. The deflection is carried out by means of two short, opposite-phase pulses. The diversion of the particles is exerted by the integrated Lorentz force of the electromagnetic field travelling along the kicker. This thesis presents a detailed calculation, manufacturing and test methodology for strip-line kicker devices. The methodology is then applied to two real cases which are fully designed, built, tested and finally installed in the CTF3 accelerator facility at CERN (Geneva). Analytical and numerical calculations, both in 2D and 3D, are detailed, starting from the basic specifications, in order to obtain a conceptual design. Time-domain and frequency-domain calculations are developed in the process using different FDM and FEM codes. Among other concepts, scattering parameters, resonating higher-order modes and wakefields are analyzed. Several contributions are presented in the calculation process dealing specifically with strip-line kicker devices fed by electromagnetic pulses. Materials and components typically used for the fabrication of these devices are analyzed in the manufacturing section. Mechanical supports and connections of the electrodes are also detailed, presenting some interesting contributions on these topics. The electromagnetic and vacuum tests are then analyzed; these tests are required to ensure that the manufactured devices fulfil the specifications. Finally, and only from the analytical point of view, the strip-line kickers are studied together with a pulsed power supply based on solid-state power switches (MOSFETs). The solid-state technology applied to pulsed power supplies is introduced, and several circuit topologies are modelled and simulated to obtain fast pulses with a good flat top.
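To give a sense of scale for the integrated-Lorentz-force deflection, a uniform-field, back-of-the-envelope estimate can be written down (fringe fields and coverage factors are ignored, and the voltage, gap, length and beam energy below are illustrative assumptions, not the CTF3 design values): for an ultra-relativistic beam and a pulse travelling against it, the electric and magnetic kicks add, giving θ ≈ 2·V·L / (g·U), with U the beam energy in electron-volts.

# Back-of-the-envelope deflection of an ultra-relativistic beam by a strip-line
# kicker (uniform-field approximation, fringe fields and coverage factor
# ignored; all numbers are illustrative assumptions, not the CTF3 design values).
# For a TEM pulse travelling against the beam, the electric and magnetic
# kicks add:  theta ~= 2 * V * L / (g * U)  with U the beam energy in eV.

V = 10e3      # voltage between the two electrodes [V] (assumed)
g = 0.020     # electrode gap [m] (assumed)
L = 1.0       # active kicker length [m] (assumed)
U = 200e6     # beam energy [eV] (assumed)

E_field = V / g                      # transverse electric field [V/m]
theta = 2 * E_field * L / U          # deflection angle [rad]
print(f"deflection angle ~ {theta*1e3:.2f} mrad")   # ~5 mrad for these values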
Abstract:
A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase. An evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core aimed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system on the final implementation device, whether it is a HW- or SW-based computing device. The architecture has been built in a modular way so that it can be easily extended to adapt other types of image processing cores. Details on this pluggable-component point of view are also given in the paper.
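In software terms, the core of such an Evolution Strategy is a short mutate-and-select loop; the sketch below is a bare (1+1)-ES in which the cost function is only a stand-in for the image-compression fitness that the FPGA prototype would evaluate on its validation images:

# Bare (1+1) Evolution Strategy sketch (software illustration only; the cost
# below is a stand-in for the image-compression fitness evaluated in hardware).
import random

TARGET = [0.48, 0.84, 0.22, -0.13]          # pretend "good" filter coefficients

def cost(coeffs):
    # Stand-in fitness: distance to TARGET. On the FPGA this would be a
    # compression-quality metric measured on the validation images.
    return sum((a - b) ** 2 for a, b in zip(coeffs, TARGET))

def one_plus_one_es(generations=5000, sigma=0.05, seed=1):
    random.seed(seed)
    parent = [random.uniform(-1, 1) for _ in TARGET]
    for _ in range(generations):
        child = [c + random.gauss(0, sigma) for c in parent]   # mutation only
        if cost(child) <= cost(parent):                        # plus-selection
            parent = child
    return parent

if __name__ == "__main__":
    best = one_plus_one_es()
    print([round(c, 3) for c in best], "cost:", round(cost(best), 6))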
Abstract:
Knowledge about the quality characteristics (QoS) of service compositions is crucial for determining their usability and economic value. Service quality is usually regulated using Service Level Agreements (SLA). While end-to-end SLAs are well suited for request-reply interactions, more complex, decentralized, multiparticipant compositions (service choreographies) typically involve multiple message exchanges between stateful parties, and the corresponding SLAs thus encompass several cooperating parties with interdependent QoS. The usual approaches to determining QoS ranges structurally (which are by construction easily composable) are not applicable in this scenario. Additionally, the intervening SLAs may depend on the exchanged data. We present an approach to data-aware QoS assurance in choreographies through the automatic derivation of composable QoS models from participant descriptions. Such models are based on a message typing system with size constraints and are derived using abstract interpretation. The models obtained have multiple uses including run-time prediction, adaptive participant selection, or design-time compliance checking. We also present an experimental evaluation and discuss the benefits of the proposed approach.
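A much-simplified picture of what a composable, data-aware QoS model buys at design time (the participants, cost functions and SLA value below are invented; the thesis derives such size-dependent models automatically by abstract interpretation): each participant exposes latency as a function of message size, the choreography bound composes those functions along the exchanges, and the result can be checked against an SLA before deployment.

# Simplified sketch of data-aware QoS composition in a choreography
# (invented participants and cost functions; the thesis derives such
# size-dependent models automatically via abstract interpretation).

def latency_orders(n_items):        # "Orders" participant: linear in list size
    return 2.0 + 0.10 * n_items     # milliseconds (assumed)

def latency_billing(n_items):       # "Billing" participant
    return 5.0 + 0.02 * n_items

def latency_shipping(n_items):      # "Shipping" participant, size-independent
    return 3.0

def choreography_latency(n_items):
    # Orders -> Billing -> Shipping in sequence: the end-to-end bound is the sum.
    return (latency_orders(n_items) + latency_billing(n_items)
            + latency_shipping(n_items))

# Design-time compliance check against an SLA, for message sizes up to 500 items.
SLA_MS = 80.0
worst_case = choreography_latency(500)
print(f"worst-case latency for <=500 items: {worst_case:.1f} ms "
      f"({'meets' if worst_case <= SLA_MS else 'violates'} the {SLA_MS} ms SLA)")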