935 results for Polygonal faults


Relevance:

10.00%

Publisher:

Abstract:

This thesis examined ethical problem areas in financial administration, using tax audit reports as source material. The aim was to determine whether ethical decisions in financial administration are a matter of the accountant's or the entrepreneur's ethics, whether the errors detected in tax audits can be classified as intentional or unintentional, and whether conclusions about tax compliance can be drawn from these errors. The accountant's and the entrepreneur's ethics were found to manifest themselves in different ways. When ethics was considered in the sense of willingness to pay taxes, it was a question of the entrepreneur's ethics. The errors detected in tax audits were classified into error types, which were then used to analyse tax compliance. The error types used in the thesis included fringe and personnel benefits, reimbursement of expenses, and entertainment expenses. Clear differences in tax compliance were observed between the error types. Based on tax compliance, most of the error types belonged to the group in which laws are obeyed only under supervision. For those committing such errors, the best results are achieved through comprehensive tax supervision.

Relevance:

10.00%

Publisher:

Abstract:

Through advances in technology, System-on-Chip design is moving towards integrating tens to hundreds of intellectual property blocks into a single chip. In such a many-core system, on-chip communication becomes a performance bottleneck for high performance designs. Network-on-Chip (NoC) has emerged as a viable solution to the communication challenges of highly complex chips. The NoC architecture paradigm, based on a modular packet-switched mechanism, can address many on-chip communication challenges such as wiring complexity, communication latency, and bandwidth. Furthermore, the combined benefits of 3D IC and NoC schemes make it possible to design a high performance system in a limited chip area. The major advantages of 3D NoCs are considerable reductions in average latency and power consumption. Several factors degrade the performance of NoCs. In this thesis, we investigate three main performance-limiting factors: network congestion, faults, and the lack of efficient multicast support. We address these issues by means of routing algorithms. Congestion of data packets may lead to increased network latency and power consumption; we therefore propose three different approaches for alleviating congestion in the network. The first approach is based on measuring congestion information in different regions of the network, distributing the information over the network, and utilizing it when making routing decisions. The second approach employs a learning method to dynamically find less congested routes according to the underlying traffic. The third approach is based on a fuzzy-logic technique that makes better routing decisions when traffic information for different routes is available. Faults degrade performance significantly, since packets must take longer paths in order to be routed around the faults, which in turn increases congestion around the faulty regions. We propose four methods to tolerate faults at the link and switch level by using only the shortest paths, as long as such a path exists. What is unique about these methods is that they tolerate faults while maintaining the performance of the NoC. To the best of our knowledge, these algorithms are the first approaches to bypass faults before reaching them while avoiding unnecessary misrouting of packets. Current implementations of multicast communication cause a significant performance loss for unicast traffic, because the routing rules of multicast packets limit the adaptivity of unicast packets. We present an approach in which both unicast and multicast packets can be routed efficiently within the network. While providing more efficient multicast support, the proposed approach does not affect the performance of unicast routing at all. In addition, in order to reduce the overall path length of multicast packets, we present several partitioning methods along with analytical models of their latency. The approach is discussed in the context of 3D mesh networks.
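As a concrete illustration of the first congestion-aware approach, the sketch below routes a packet on a 2D mesh by choosing, among the output ports that stay on a shortest path, the neighbour reporting the lowest congestion. This is a minimal example of the general idea, not the thesis's exact algorithm; all names and the congestion metric are illustrative assumptions.

```python
# Minimal sketch of congestion-aware minimal adaptive routing on a 2D mesh:
# among the output ports that lie on a shortest path to the destination,
# pick the neighbour with the lowest reported congestion.

def minimal_ports(cur, dst):
    """Output ports that keep the packet on a shortest path."""
    (cx, cy), (dx, dy) = cur, dst
    ports = []
    if dx > cx: ports.append('E')
    elif dx < cx: ports.append('W')
    if dy > cy: ports.append('N')
    elif dy < cy: ports.append('S')
    return ports

def route(cur, dst, congestion):
    """congestion: dict port -> reported occupancy of the neighbouring
    router's input buffer (a hypothetical congestion metric)."""
    candidates = minimal_ports(cur, dst)
    if not candidates:
        return None  # packet has arrived
    return min(candidates, key=lambda p: congestion[p])

# Example: two minimal ports ('E' and 'N'); the less congested one wins.
print(route((1, 1), (3, 3), {'E': 5, 'N': 2, 'W': 0, 'S': 0}))  # -> 'N'
```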

Relevance:

10.00%

Publisher:

Abstract:

A 4-year-old female captive-bred snake of the genus Bothrops showed swelling on the left side of the oral cavity, suggesting the development of neoplasia. The mass was removed surgically and sent for pathological examination. Two months later a new increase in volume at the same site was observed, suggesting recurrence. The lesion was completely removed and sent for pathological analysis. Histologically, both samples consisted of a highly cellular mass composed of spindle-shaped anaplastic cells arranged in interwoven bundles, distributed throughout the tissue, and, occasionally, polygonal cells arranged in irregular fascicles. Masson's trichrome staining showed a modest amount of collagen supporting the neoplastic cells. No PAS-positive content was observed in the cytoplasm of the neoplastic cells. The histological and histochemical findings indicated a spindle cell neoplasm, but further classification was not possible. Immunohistochemistry was therefore requested and performed using the streptavidin-biotin-peroxidase method. The markers used were anti-vimentin, anti-PCNA, anti-EMA, anti-melan A, anti-melanosome, anti-desmin, anti-actin, anti-CD68 and anti-S100 protein. The neoplastic cells were immunoreactive for vimentin and PCNA and negative for the other antibodies. The morphological, histochemical and immunohistochemical characterization of the neoplastic cells allowed the definitive diagnosis of oral fibrosarcoma.

Relevance:

10.00%

Publisher:

Abstract:

Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient designs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by every kind of application; for example, high communication bandwidth is of little benefit to applications that are computation- rather than data-intensive, which makes a one-size-fits-all platform impractical. This thesis performs architecture-level design space exploration towards efficient and scalable resource utilization in MPSoC communication architectures. In order to meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform, together with resource-aware partitioning and mapping of the application, plays an important role. To enhance the utilization of communication resources, a variety of techniques can be used, such as resource sharing, multicast to avoid retransmission of identical data, and adaptive routing. For implementation, these techniques should be customized to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels are selected, namely the Segmented bus (SegBus), Network-on-Chip (NoC) and Three-Dimensional NoC (3D-NoC). Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault in one component can render the connected fault-free components inoperative. A resource-sharing approach can utilize the fault-free components to retain system performance by reducing the impact of faults. Design space exploration also helps to narrow down the selection of an MPSoC architecture that can meet the performance requirements within the design constraints.
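The skeleton of such a design-space exploration can be illustrated as below: enumerate candidate platforms, evaluate each against the two evaluation parameters named above, and keep the Pareto-optimal configurations. The platform names match the abstract; the cost models are invented stand-ins, not the thesis's evaluations.

```python
# Toy design-space-exploration loop: evaluate each candidate platform with
# placeholder latency/power models and keep the non-dominated (Pareto) set.

def evaluate(platform, cores):
    # Hypothetical models: latency in cycles, power in mW (illustrative only).
    models = {
        'SegBus': (lambda n: 4.0 * n,          lambda n: 10 + n),
        'NoC':    (lambda n: 6.0 * n ** 0.5 * 2.0, lambda n: 20 + 2 * n),
        '3D-NoC': (lambda n: 6.0 * n ** 0.5 * 1.5, lambda n: 25 + 2 * n),
    }
    lat, pwr = models[platform]
    return lat(cores), pwr(cores)

def pareto(points):
    """Keep (config, latency, power) points not dominated by another point."""
    return [p for p in points
            if not any(q[1] <= p[1] and q[2] <= p[2] and q != p
                       for q in points)]

points = [(plat, *evaluate(plat, 16)) for plat in ('SegBus', 'NoC', '3D-NoC')]
for cfg in pareto(points):
    print(cfg)   # each surviving tuple is a latency/power trade-off
```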

Relevance:

10.00%

Publisher:

Abstract:

The aim of this Master's thesis was to study how maintenance operations at the A4 sheeting plant of UPM's Kymi mill could be developed so as to increase the overall effectiveness of the production facility. The thesis considers, from several theoretical starting points, ways of transforming the current, mainly corrective maintenance into planned maintenance. The central theory on which the results are based is Total Productive Maintenance (TPM). The literature-based study was complemented by several interviews, an equipment-condition survey addressed to the A4 machine operators, and measurement data collected from the production and ERP systems. The most significant results indicate that maintenance at the A4 sheeting plant should be directed more towards operator-driven maintenance, in which production operators take part in maintenance tasks alongside the actual maintenance personnel. Because of the nature of their work, production operators are best placed to monitor the condition of the equipment they use and thus to anticipate possible failures at an early stage. Such anticipation would improve the planning of maintenance, which in turn would raise equipment availability and thereby the overall effectiveness of the production facility.
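For reference, the overall equipment effectiveness (OEE) figure that the thesis seeks to raise is conventionally decomposed in the TPM literature as follows; this is the standard textbook formula, not one specific to this thesis:

```latex
\mathrm{OEE} = A \times P \times Q
  = \frac{\text{operating time}}{\text{planned production time}}
    \times \frac{\text{actual output}}{\text{theoretical output}}
    \times \frac{\text{good units}}{\text{total units}}
```

Availability A is what planned, operator-driven maintenance most directly improves, which is why the thesis ties maintenance development to overall effectiveness.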

Relevance:

10.00%

Publisher:

Abstract:

A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate different tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke the CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource is independent of the previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services, in which a certain sequence of requests must be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior; such systems can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable, stateful REST web services. The main contribution of this thesis is a novel model-driven methodology for designing behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on which methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature, continuously evolving tools. We use the UML class diagram and the UML state machine diagram, with additional design constraints, to provide the resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The design models also contain information about the time and domain requirements of the service, which supports requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is consistency analysis of the behavioral REST interfaces. To overcome inconsistency problems and design errors in our service models, we use semantic technologies: the REST interfaces are represented in the Web Ontology Language, OWL 2, so that they can be part of the semantic web, and are checked with OWL 2 reasoners for unsatisfiable concepts, which would result in implementations that fail. This step is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
The third contribution of this thesis is the verification and validation of REST web services, for which we use model checking techniques with the UPPAAL model checker. Timed automata are generated from the UML-based service design models with our transformation tool and verified for basic characteristics such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach: test cases are generated from the UPPAAL timed automata and, using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace the unfulfilled service goals back to faults in the design models. A final contribution of the thesis is the implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The preconditions constrain the user to invoke the stateful REST service under the right conditions, and the postconditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
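The kind of skeleton described above, with preconditions guarding each stateful method and postconditions asserting the resulting state, might look like the following hand-written sketch. The class, state names and exception are illustrative assumptions, not output of the thesis's generator.

```python
# Sketch of a stateful REST resource skeleton with pre-/post-conditions.

class PreconditionFailed(Exception):
    """Would map naturally to HTTP 409/412 in a real service."""

class BookingResource:
    def __init__(self):
        self.state = 'AVAILABLE'

    def book(self):
        # Precondition: booking is only allowed from AVAILABLE.
        if self.state != 'AVAILABLE':
            raise PreconditionFailed('cannot book in state ' + self.state)
        # --- developer fills in the actual business logic here ---
        self.state = 'BOOKED'
        # Postcondition: the transition must have taken place.
        assert self.state == 'BOOKED'

    def cancel(self):
        # Precondition: only a booked resource can be cancelled.
        if self.state != 'BOOKED':
            raise PreconditionFailed('cannot cancel in state ' + self.state)
        self.state = 'AVAILABLE'
        assert self.state == 'AVAILABLE'

r = BookingResource()
r.book()     # ok: AVAILABLE -> BOOKED
r.cancel()   # ok: BOOKED -> AVAILABLE
```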

Relevance:

10.00%

Publisher:

Abstract:

Identification of low-dimensional structures and of the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. The objective of this thesis is therefore to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust-region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods under only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most earlier approaches are inadequate; examples include the identification of faults from seismic data and of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, a typical problem in visual object tracking.
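For readers unfamiliar with density ridges, the standard definition from the density-ridge literature (the thesis may use a variant) makes the "generalized maxima" notion precise:

```latex
% Let f be the (kernel) density estimate in R^n, H(x) its Hessian with
% eigenvalues \lambda_1(x) \ge \dots \ge \lambda_n(x) and corresponding
% orthonormal eigenvectors v_1(x), \dots, v_n(x). A point x lies on a
% d-dimensional ridge of f when the gradient has no component along the
% n - d most strongly curved downhill directions:
v_i(x)^{\top} \nabla f(x) = 0
  \quad\text{and}\quad
\lambda_i(x) < 0
  \qquad \text{for } i = d+1, \dots, n .
```

For d = 0 this reduces to an ordinary local maximum, which is why projecting a point onto a ridge calls for Newton-type optimization constrained to the relevant eigen-subspace.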

Relevance:

10.00%

Publisher:

Abstract:

This thesis examined electrical testing methods for the preventive condition monitoring of an induction motor, and the suitability of these methods for detecting different faults. The work was carried out at Porvoon Energia's Tolkkinen power plant and also serves as an inspection guideline for the testing methods presented. The methods used were able to detect some of the faults and can be used as part of preventive condition monitoring. Not all faults, however, could be detected with the methods presented in this work.

Relevance:

10.00%

Publisher:

Abstract:

Nowadays, computer-based systems tend to become more complex and to control increasingly critical functions affecting different areas of human activity. Failures of such systems might result in the loss of human lives as well as significant damage to the environment; therefore, their safety needs to be ensured. However, the development of safety-critical systems is not a trivial exercise. Hence, to preclude design faults and guarantee the desired behaviour, various industrial standards prescribe the use of rigorous techniques for the development and verification of such systems. The more critical the system, the more rigorous the approach that should be undertaken. To ensure the safety of a critical computer-based system, satisfaction of the safety requirements imposed on it should be demonstrated. This task involves a number of activities. In particular, the set of safety requirements is usually derived by conducting various safety analysis techniques. Strong assurance that the system satisfies the safety requirements can be provided by formal methods, i.e., mathematically based techniques. At the same time, the evidence that the system under consideration meets the imposed safety requirements may be demonstrated by constructing safety cases. However, the overall safety assurance process of critical computer-based systems remains insufficiently defined, for the following reasons. Firstly, there are semantic differences between safety requirements and formal models: informally represented safety requirements must be translated into the underlying formal language to enable further verification. Secondly, the development of formal models of complex systems can be labour-intensive and time-consuming. Thirdly, there are only a few well-defined methods for integrating formal verification results into safety cases. This thesis proposes an integrated approach to the rigorous development and verification of safety-critical systems that (1) facilitates the elicitation of safety requirements and their incorporation into formal models, (2) simplifies formal modelling and verification by proposing specification and refinement patterns, and (3) assists in the construction of safety cases from the artefacts generated by formal reasoning. Our chosen formal framework is Event-B. It allows us to tackle the complexity of safety-critical systems as well as to structure safety requirements by applying abstraction and stepwise refinement. The Rodin platform, a tool supporting Event-B, assists in automatic model transformations and proof-based verification of the desired system properties. The proposed approach has been validated by several case studies from different application domains.
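Two of the proof obligations that Rodin discharges during such stepwise refinement are standard in Event-B and convey the flavour of the verification involved; the schematic form below is the textbook presentation, not a fragment of the thesis's models:

```latex
% v = abstract variables, w = concrete variables, I = invariant,
% J = gluing invariant, G / H = abstract / concrete event guards,
% BA = before-after predicate of an event.
%
% Invariant preservation: every event re-establishes the invariant.
I(v) \wedge G(v) \wedge \mathit{BA}(v, v') \;\Rightarrow\; I(v')
%
% Guard strengthening: a refined event may fire only when its abstraction may.
I(v) \wedge J(v, w) \wedge H(w) \;\Rightarrow\; G(v)
```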

Relevance:

10.00%

Publisher:

Abstract:

The bedrock of old crystalline cratons is characteristically saturated with brittle structures formed during successive superimposed episodes of deformation under varying stress regimes. As a result, the crust effectively deforms through the reactivation of pre-existing structures rather than through the activation, or generation, of new ones, and is said to be in a state of 'structural maturity'. By combining data from Olkiluoto Island, southwestern Finland, which has been investigated as the potential site of a deep geological repository for high-level nuclear waste, with observations from southern Sweden, it can be concluded that the southern part of the Svecofennian shield had already attained structural maturity during the Mesoproterozoic era. This indicates that the phase of activation of the crust, i.e. the time interval during which new fractures were generated, was brief in comparison with the subsequent reactivation phase. Structural maturity of the bedrock was also attained relatively rapidly in Namaqualand, western South Africa, after the formation of the first brittle structures during Neoproterozoic time. Subsequent brittle deformation in Namaqualand was controlled by the reactivation of pre-existing strike-slip faults. In such settings, seismic events are likely to occur through the reactivation of pre-existing zones that are favourably oriented with respect to the prevailing stresses. In Namaqualand, this is shown for present-day seismicity by slip-tendency analysis, and at Olkiluoto, for a Neoproterozoic earthquake reactivating a Mesoproterozoic fault. By combining detailed field observations with the results of paleostress inversions and relative and absolute time constraints, seven distinct superimposed paleostress regimes have been recognized in the Olkiluoto region. From oldest to youngest these are: (1) NW-SE to NNW-SSE transpression, which prevailed soon after 1.75 Ga, when the crust had cooled sufficiently to allow brittle deformation; during this phase, conjugate NNW-SSE and NE-SW striking strike-slip faults were active simultaneously with the reactivation of SE-dipping low-angle shear zones and foliation planes. This was followed by (2) N-S to NE-SW transpression, which caused partial reactivation of structures formed in the first event; (3) NW-SE extension during the Gothian orogeny, at the time of rapakivi magmatism and the intrusion of diabase dikes; (4) NE-SW transtension between 1.60 and 1.30 Ga, which also formed the NW-SE-trending Satakunta graben located some 20 km north of Olkiluoto, and during which greisen-type veins formed; (5) NE-SW compression that postdates the formation of both the 1.56 Ga rapakivi granites and the 1.27 Ga olivine diabases of the region; (6) E-W transpression during the early stages of the Mesoproterozoic Sveconorwegian orogeny, which predated (7) almost coaxial E-W extension attributed to the collapse of the Sveconorwegian orogeny. The kinematic analysis of fracture systems in crystalline bedrock also provides a robust framework for evaluating fluid-rock interaction in the brittle regime; this is essential in the assessment of bedrock integrity for numerous geo-engineering applications, including groundwater management, transient or permanent CO2 storage, and site investigations for permanent waste disposal.
Investigations at Olkiluoto revealed that fluid flow along fractures is coupled with low normal tractions due to in-situ stresses and thus deviates from the generally accepted critically stressed fracture concept, in which fluid flow is concentrated on fractures on the verge of failure. The difference is linked to the shallow conditions at Olkiluoto: owing to the low differential stresses inherent at shallow depths, fracture activation and fluid flow are controlled by dilation under low normal tractions. In deeper settings, however, fluid flow is controlled by fracture criticality caused by large differential stress, which drives shear deformation instead of dilation.
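The slip-tendency analysis referred to above follows the classical definition of Morris et al. (1996): resolve the in-situ stress tensor onto a fracture plane and take the ratio of shear to normal traction. The sketch below computes this ratio; the stress values are arbitrary illustrations, not Olkiluoto or Namaqualand data.

```python
import numpy as np

def slip_tendency(sigma, normal):
    """Ts = tau / sigma_n for a plane with unit normal `normal` under
    stress tensor `sigma` (compression taken as positive)."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    t = sigma @ n                          # traction vector on the plane
    sigma_n = t @ n                        # normal traction
    tau = np.linalg.norm(t - sigma_n * n)  # shear traction
    return tau / sigma_n

# Principal stresses (MPa) along the coordinate axes, s1 > s2 > s3:
sigma = np.diag([30.0, 20.0, 10.0])
# A plane whose normal bisects the s1 and s3 axes is favourably oriented:
print(slip_tendency(sigma, [1, 0, 1]))   # -> 0.5
```

Reactivation becomes likely where Ts approaches the frictional strength of the fracture (typically a friction coefficient of about 0.6 for cohesionless faults).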

Relevance:

10.00%

Publisher:

Abstract:

Permanent magnet synchronous machines (PMSM) have become widely used because of their high efficiency compared with synchronous machines with an excitation winding or with induction motors. This feature of the PMSM is achieved by using permanent magnets (PM) as the main excitation source. The magnetic properties of the PM have a significant influence on all PMSM characteristics. Recent observations of PM material properties in rotating machines have revealed that the magnets do not necessarily operate in the second quadrant of the demagnetization curve, which makes them prone to hysteresis losses. Moreover, no good analytical approach has yet been derived for the magnetic flux density distribution along the PM during different short-circuit faults. The main task of this thesis is to derive a simple analytical tool that can predict the magnetic flux density distribution along a rotor-surface-mounted PM in two cases: during normal operation, and at the worst moment, from the PM's point of view, of a three-phase symmetrical short circuit. Surface-mounted PMSMs were selected because of their prevalence and relatively simple construction. The proposed model combines two theories: magnetic circuit theory and space vector theory. For the normal operating mode, comparison of results from finite element software with results calculated by the proposed model shows good accuracy in the parts of the PM that are most prone to hysteresis losses. For the three-phase symmetrical short circuit, the comparison revealed significant inaccuracy of the proposed model with respect to the finite element results. The reasons for this inaccuracy were analysed, including the impact of the Carter factor theory and of the assumption that air has the same permeability as the PM. Propositions for further development of the model are presented.
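The magnetic-circuit half of such a model rests on a standard textbook estimate (not the thesis's full derivation) of the no-load operating point of a surface magnet:

```latex
% Surface magnet of thickness l_m facing an effective air gap g'
% (Carter factor included), with remanence B_r and relative recoil
% permeability \mu_r; leakage and stator MMF neglected:
B_m \approx \frac{B_r}{1 + \mu_{r}\, g' / l_m}
```

Armature reaction then shifts the operating point along the recoil line, and a three-phase short circuit drives it furthest toward demagnetization, which is why that fault is the worst case examined.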

Relevance:

10.00%

Publisher:

Abstract:

Due to various advantages such as flexibility, scalability and updatability, software intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to incrementing performance has been the increase of operating frequency of a chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications. With their computational power, these platforms are likely to be used in various application domains: from home use electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration results in the increase of the probability of various faults and creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at the ground level, can cause transient faults. This can eventually induce a faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to design agentbased systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into agents functionality. The use of these mechanisms enhances resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach, where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models into, e.g., a hardware description language, namely VHDL.
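The dynamic-reconfiguration idea can be pictured with a toy agent that remaps tasks away from faulty or overheated cores; the class, temperature threshold and data layout below are illustrative assumptions, not the thesis's design.

```python
# Toy platform agent: when a core is faulty or too hot, move its tasks
# to the coolest healthy core.

FAULTY, THRESHOLD = 'faulty', 85.0  # degrees Celsius (assumed limit)

class PlatformAgent:
    def __init__(self, cores):
        # cores: dict core_id -> {'temp': float, 'status': str, 'tasks': list}
        self.cores = cores

    def healthy(self):
        return {c: d for c, d in self.cores.items()
                if d['status'] != FAULTY and d['temp'] < THRESHOLD}

    def reconfigure(self):
        ok = self.healthy()
        if not ok:
            return  # nothing healthy to migrate onto
        for cid, core in self.cores.items():
            if cid not in ok and core['tasks']:
                target = min(ok, key=lambda c: ok[c]['temp'])
                ok[target]['tasks'].extend(core['tasks'])
                core['tasks'] = []

agent = PlatformAgent({
    0: {'temp': 92.0, 'status': 'ok',   'tasks': ['fft']},   # hotspot
    1: {'temp': 55.0, 'status': FAULTY, 'tasks': ['dct']},   # faulty
    2: {'temp': 48.0, 'status': 'ok',   'tasks': []},
})
agent.reconfigure()
print(agent.cores[2]['tasks'])   # -> ['fft', 'dct']
```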

Relevance:

10.00%

Publisher:

Abstract:

Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, and so on. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the keys to succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, customers are asking for these high-quality software products at an ever-increasing pace, which leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work: testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those that have to be fixed after the product is released. One of the main challenges in software development is therefore reducing the cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to demonstrate that a piece of software is functioning correctly; usually, many other aspects of the software, such as performance, security, scalability, and usability, also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented, because non-functional aspects such as performance or security apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges, and we show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process; requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or missing tool support. The second contribution of this thesis is therefore proper tool support for the proposed approach: we offer standalone tools, tools integrated with leading industry tools, and complete tool chains when necessary. Many model-based testing approaches proposed by the research community also suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
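At its simplest, generating tests from a behavioral model means exploring a state machine until every transition is covered, each explored path becoming one test sequence. The booking-machine model below is an invented example, not one of the thesis's industrial case studies.

```python
# Breadth-first test generation for transition coverage of a state machine.
from collections import deque

transitions = {            # state -> {event: next_state}
    'idle':   {'book': 'booked'},
    'booked': {'pay': 'paid', 'cancel': 'idle'},
    'paid':   {'checkout': 'idle'},
}

def transition_cover(initial):
    uncovered = {(s, e) for s, evs in transitions.items() for e in evs}
    tests, queue = [], deque([(initial, [])])
    while uncovered and queue:
        state, path = queue.popleft()
        for event, nxt in transitions[state].items():
            if (state, event) in uncovered:
                uncovered.remove((state, event))
                tests.append(path + [event])   # this path is a test case
            if len(path) < 5:                  # bound the exploration depth
                queue.append((nxt, path + [event]))
    return tests

for t in transition_cover('idle'):
    print(' -> '.join(t))
# book | book -> pay | book -> cancel | book -> pay -> checkout
```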

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this Master's thesis was to develop the supplier management of an industrial company by means of performance measurement. A supplier performance measurement system is designed and implemented on the basis of theory. As a result of the project, the case company has at its disposal a comprehensive supplier performance measurement system and improved supplier management processes. The theoretical part of the thesis examines various frameworks for developing a performance measurement system, the requirements of a good measurement system, and the recommended areas to monitor. In the empirical part, the measurement system is built step by step on the basis of a case study. The developed system was tested during a simulation phase at three of the case company's factories. Based on user feedback, the system met the most important requirements identified in the theoretical part: it is easy to use and visual, and its reporting provides valuable information for supplier evaluation. Concrete benefits for the case company have included improved detection of quality deviations and better assessment of supplier price competitiveness. The most significant benefit, however, has been the systematic supplier evaluation process that the measurement system enables.

Relevance:

10.00%

Publisher:

Abstract:

In repair services, faults in integrated circuits that are hard to locate make for time-consuming cases. For such troubleshooting, our company had purchased a Polar Fault Locator 780 instrument, which examines the behaviour of integrated circuits using analogue signature analysis. The aim of this Master's thesis was to determine how this measurement technique can be used in repair services. The investigation approached the question from the perspective of some typical components, but the main focus was on integrated circuits. Some circuits were deliberately damaged, after which the measurements were repeated to study how the damage shows up in the results. The research methods were a literature study and empirical experimentation. The thesis concluded that the condition of integrated circuits can indeed be examined with this technique. The problems that emerged were the original assumption about how the instrument's results should be interpreted and the poor availability of background material. The instrument is therefore best suited to situations in which its results are compared directly with measurements from another unit known to be working. When components were damaged, a clear deviation was nevertheless observable.
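The comparison underlying analogue signature analysis, as used with instruments like the Polar Fault Locator 780, can be sketched as follows: the instrument sweeps a current-limited signal across a pin and records the V-I curve, and the suspect unit's signature is compared against a known-good reference. The tolerance and the synthetic "signatures" below are illustrative assumptions, not measurements from the instrument.

```python
import numpy as np

def vi_mismatch(ref, test):
    """Normalised RMS difference between two V-I signatures sampled at
    the same voltage points (arrays of currents, e.g. in mA)."""
    ref, test = np.asarray(ref), np.asarray(test)
    return np.sqrt(np.mean((ref - test) ** 2)) / (np.ptp(ref) or 1.0)

v = np.linspace(-5, 5, 101)        # applied voltage sweep
good = np.clip(v / 1.0, -2, 2)     # resistive pin with current limiting
leaky = np.clip(v / 0.5, -2, 2)    # damaged pin: lower effective resistance

print(vi_mismatch(good, good))           # ~0.0: signatures match
print(vi_mismatch(good, leaky) > 0.05)   # True: flag the pin for inspection
```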