935 results for Polygonal faults
Abstract:
The Southwest Iberian Margin is characterized by intense and diffuse seismic activity due to the convergence between the Eurasian and African plates...
Abstract:
Software faults are expensive and cause serious damage, particularly if discovered late or not at all; some software faults tend to stay hidden. One goal of the thesis is to establish the status quo in the field of software fault elimination, since there are no recent surveys of the whole area. A structural framework is proposed for this unstructured field, with attention to the compatibility of studies and to how studies can be found. Fault elimination means are surveyed, including bug know-how, defect prevention and prediction, analysis, testing, and fault tolerance. The most common research issues for each area are identified and discussed, along with issues that do not receive enough attention. Recommendations are presented for software developers, researchers, and teachers. Only the main lines of research are covered, and the main emphasis is on technical aspects. The survey was done by performing searches in the IEEE, ACM, Elsevier, and Inspec databases. In addition, a systematic search was done for a few well-known related journals over recent time intervals, and some other journals, conference proceedings, books, reports, and Internet articles were investigated as well. The following problems were found and solutions for them discussed. It is a common misunderstanding that quality assurance is testing only, and many checks are done, and some methods applied, only in the late testing phase. Many types of static review are almost forgotten even though they reveal faults that are hard to detect by other means. Other forgotten areas are knowledge of bugs, awareness of continuously repeated bugs, and lightweight means to increase reliability. Compatibility between studies is not always good, which also makes documents harder to understand. Some means, methods, and problems are considered method- or domain-specific when they are not. The field lacks cross-field research.
Abstract:
We've developed a new ambient occlusion technique based on an information-theoretic framework. Essentially, our method computes a weighted visibility from each object polygon to all viewpoints; we then use these visibility values to obtain the information associated with each polygon. So, just as a viewpoint has information about the model's polygons, the polygons gather information on the viewpoints. We therefore have two measures associated with an information channel defined by the set of viewpoints as input and the object's polygons as output, or vice versa. From this polygonal information, we obtain an occlusion map that serves as a classic ambient occlusion technique. Our approach also offers additional applications, including an importance-based viewpoint-selection guide and a means of enhancing object features and producing nonphotorealistic object visualizations.
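The viewpoint-polygon information channel described above can be sketched numerically. This is a generic mutual-information decomposition over a made-up visibility matrix, not the authors' exact formulation:

```python
import numpy as np

# Hypothetical visibility matrix: vis[v, p] = weighted visibility of
# polygon p from viewpoint v (zero where occluded). Values are invented.
vis = np.array([
    [0.4, 0.1, 0.0, 0.5],
    [0.2, 0.3, 0.3, 0.2],
    [0.0, 0.6, 0.4, 0.0],
])

p_vp = vis / vis.sum()            # joint distribution p(v, p)
p_v = p_vp.sum(axis=1)            # marginal over viewpoints
p_p = p_vp.sum(axis=0)            # marginal over polygons

# Per-polygon information: KL divergence between p(v | polygon) and p(v).
# A polygon whose visibility is concentrated in few viewpoints carries
# more information (it is "harder to see"), suggesting stronger occlusion.
with np.errstate(divide="ignore", invalid="ignore"):
    ratio = (p_vp / p_p) / p_v[:, None]
    terms = np.where(p_vp > 0, (p_vp / p_p) * np.log2(ratio), 0.0)
poly_info = terms.sum(axis=0)

# Normalize polygon information into an occlusion-map value in [0, 1].
occlusion = (poly_info - poly_info.min()) / (
    poly_info.max() - poly_info.min() + 1e-12)
print(occlusion)
```

Swapping the channel direction (polygons as input, viewpoints as output) gives the dual measure the abstract mentions, usable for viewpoint selection.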
Abstract:
In this paper we address the problem of extracting representative point samples from polygonal models. The goal of such a sampling algorithm is to find points that are evenly distributed. We propose star discrepancy as a measure of sampling quality and propose new sampling methods based on global line distributions. We investigate several line generation algorithms, including an efficient hardware-based sampling method. Our method contributes to the area of point-based graphics by extracting points that are more evenly distributed than those produced by current sampling algorithms.
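Star discrepancy measures how far a point set deviates from a perfectly uniform distribution. A brute-force estimator over the unit square (an illustration of the measure, not the paper's line-based sampling method) makes the idea concrete:

```python
import numpy as np

def star_discrepancy_2d(points):
    """Brute-force estimate of the star discrepancy of 2-D points in [0,1]^2.

    Evaluates |#points in [0,x) x [0,y) / n  -  x*y| at corners induced
    by the point coordinates. The exact supremum needs open and closed
    box variants, so this is a lower-bound estimate, which is still
    fine for comparing the evenness of two samplings.
    """
    pts = np.asarray(points)
    n = len(pts)
    xs = np.unique(np.append(pts[:, 0], 1.0))
    ys = np.unique(np.append(pts[:, 1], 1.0))
    worst = 0.0
    for x in xs:
        for y in ys:
            inside = np.sum((pts[:, 0] < x) & (pts[:, 1] < y))
            worst = max(worst, abs(inside / n - x * y))
    return worst

rng = np.random.default_rng(0)
random_pts = rng.random((256, 2))
# A regular grid of cell midpoints is a very evenly distributed set.
g = (np.arange(16) + 0.5) / 16
grid_pts = np.array([(x, y) for x in g for y in g])
print(star_discrepancy_2d(random_pts), star_discrepancy_2d(grid_pts))
```

Lower values mean a more even distribution, which is the quality criterion the paper optimizes.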
Abstract:
The influence of natural aging, furthered by atmospheric corrosion, on parts of electric transformers and materials, as well as on concrete poles and cross arms containing corrosion inhibitors, was evaluated in Manaus. For painted materials, the results showed that the loss of specular gloss was more intense in aliphatic polyurethane paints than in acrylic polyurethane ones. No corrosion was observed on the metal and concrete samples up to 400 days of natural aging. Corrosion of the steel reinforcement was noticed in some poles, arising from manufacturing faults such as low cement content, a high water/cement ratio, thin concrete cover, etc. The performance of the corrosion inhibitors was assessed by several techniques after natural aging and after accelerated aging in a 3.5% saline aqueous solution. The results show the need for better selection of the chemical components and their concentration in the concrete mixture.
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for the network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent, and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method at this abstraction level, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented. The introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated.
At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault-tolerant network topologies and routing algorithms. Both approaches are presented in the thesis together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
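The abstract names error control coding as the data link layer defense against transient faults. A textbook single-error-correcting Hamming(7,4) codec, shown here as a generic illustration rather than the thesis's specific codes, demonstrates the mechanism:

```python
# Hamming(7,4): 4 data bits are protected by 3 parity bits so that any
# single flipped bit on the link (a transient fault) can be corrected.
# Standard bit layout: parity at positions 1, 2, 4; data at 3, 5, 6, 7.

def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword."""
    p1 = d[0] ^ d[1] ^ d[3]        # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]        # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]        # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error, 0 if none
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                          # a transient fault flips one wire
assert hamming74_decode(code) == data
```

Against permanent faults such a code fails once the stuck wire consumes the single-error budget, which is exactly why the thesis adds spare wires and split transmissions on top of coding.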
Abstract:
Test case selection is important in testing, because time and budget constraints make it impossible to test every case. There are many methods for selecting test cases; the most prominent are model-based selection, combinatorial selection, and risk-based selection. In all of these methods, test cases are derived from the program's specification. Model-based selection uses models of the program's behavior, from which the most important cases are chosen for testing. In combinatorial testing, test cases are formed as pairs of features, so that testing one pair reveals the joint behavior of two features; combinatorial testing is effective at finding faults caused by one or two factors. Risk-based testing aims to assess the program's risks and select test cases accordingly. In all of these methods, prioritization plays a key role, so that testing achieves sufficient reliability without an increase in costs.
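The pairwise (combinatorial) selection the abstract describes can be sketched as a greedy covering procedure; the parameters and values below are invented for illustration:

```python
from itertools import combinations, product

# Greedy pairwise test selection: keep picking test cases until every
# pair of parameter values appears in at least one selected case.
params = {
    "os":      ["linux", "windows", "mac"],
    "browser": ["firefox", "chrome"],
    "locale":  ["fi", "en"],
}

names = list(params)
all_cases = [dict(zip(names, vals)) for vals in product(*params.values())]

def pairs(case):
    """All (parameter, value) pairs this test case exercises together."""
    return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

uncovered = set().union(*(pairs(c) for c in all_cases))
suite = []
while uncovered:
    # Pick the case covering the most still-uncovered value pairs.
    best = max(all_cases, key=lambda c: len(pairs(c) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

print(len(all_cases), "exhaustive cases vs", len(suite), "pairwise cases")
```

The suite is strictly smaller than exhaustive testing yet still exercises every two-factor combination, which is where most interaction faults surface.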
Abstract:
This thesis was done as part of the FuncMama project, a collaboration between the Technical Research Centre of Finland (VTT), the University of Oulu (OY), Lappeenranta University of Technology (LUT), and Finnish industrial partners. The main goal of the project is to manufacture electrical and mechanical components from mixed materials using laser sintering. The aim of this study was to produce laser-sintered pieces from a ceramic material and to monitor the sintering event with a spectrometer, a device that records the intensity of different wavelengths over time. The monitoring equipment consisted of an Ocean Optics spectrometer, an optical fibre, and an optical lens (detector head). Light from the sintering process first hits the lens system, which guides it into the optical fibre; the fibre transmits the light to the spectrometer, where the intensity level of each wavelength is detected. The lens was rigidly mounted and did not move along with the laser beam. The data collected from the process was converted with the Excel spreadsheet program for evaluation of the results. The laser used was an IPG Photonics pulsed fibre laser. Laser parameters were kept mostly constant during the experimental part, and only the sintering speed was varied; in this way differences in the monitoring results could be found without too many parameters interacting and confusing the conclusions. The sintered parts had one layer and a size of 5 x 5 mm. The material was CT2000 tape manufactured by Heraeus, post-processed into powder. Monitoring of different sintering speeds was tested using the CT2000 reference powder.
In addition, tests on how different materials affect process monitoring were done by adding a foreign powder, Du Pont 951, which had degraded during re-grinding and was more reactive than CT2000. Adding foreign material simulates a situation where two materials are accidentally mixed together, and it was studied whether this can be seen with the spectrometer. The study concluded that the spectrometer can detect changes between different laser sintering speeds: when the sintering speed is lowered, the intensity of the light from the process is higher, a result of the higher temperature at the sintering spot. This indicates that a spectrometer could be used as a process observation tool, and it supports the idea of a system that helps establish the process parameter window. Another important conclusion was how clearly the addition of foreign material could be seen: when the second material was added, a significant rise in intensity was observed in the part where the foreign material was mixed in. This indicates that variations in the material, or the presence of mixed materials, can be detected. Spectrometric monitoring of laser sintering could thus be a useful tool for observing the process window and controlling the temperature of the sintering process, for example once the process window for a specific material has been experimentally determined to give the desired properties and a satisfactory sintering speed. If the data is recorded continuously, the results may also reveal faults in the part texture between layers: deviations between the monitoring data and the experimentally determined values can indicate changes caused by material faults or wrong process parameters. The results of this study show that a spectrometer could be one possible monitoring tool, but much more research is needed before all of this becomes practical.
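The contamination check the thesis motivates, flagging a clear rise in recorded intensity where foreign material is mixed in, can be sketched with a simple threshold rule; the intensity trace below is synthetic, not actual spectrometer output:

```python
import statistics

# Synthetic intensity samples: a stable baseline with a spike where a
# more reactive foreign powder would raise the process temperature.
intensity = [100, 102, 99, 101, 103, 100, 98, 160, 165, 158, 101, 100]

baseline = statistics.median(intensity)      # robust against the spike
spread = statistics.pstdev(intensity)
threshold = baseline + 2 * spread            # ad hoc 2-sigma rule

anomalies = [i for i, v in enumerate(intensity) if v > threshold]
print("suspect samples:", anomalies)
```

A production system would work per wavelength and compare against experimentally determined process-window values rather than a single global threshold, but the detection principle is the same.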
Abstract:
Pressure equipment and chemical piping are common, especially in chemical industry companies. The owner and operator must know the related legislation and be able to apply it in practice. This thesis reviews the essential content of the legislation on pressure equipment and chemical piping. Nowadays, periodic inspections of pressure equipment can be replaced by monitoring of the equipment together with a condition monitoring system. What does this mean in practice, and to what extent has the possibility been exploited in Finland? It is important to know the failure mechanisms and condition monitoring methods of pressure equipment and chemical piping, so that a company can create a condition monitoring system, or a pressure equipment monitoring scheme, suited to its own operating environment. The aim must be to detect problems before damage occurs. A procedure intended for risk-based maintenance and inspection of pressure equipment can be used to create and maintain the condition monitoring system; the thesis reviews the content of this procedure in detail. Cost-effectiveness and the availability of production equipment have become important factors in ensuring competitiveness. The costs arising from pressure equipment inspections and from other preventive maintenance related to pressure equipment and chemical piping are significant, and cost savings can be achieved in many ways without compromising safety. The thesis surveys the target company's pressure equipment and chemical piping, and reviews the current inspection intervals, the most common faults, and the cost history of inspections. The end result is a preliminary study, together with information that facilitates the procurement of chemical piping. With the preliminary study, the target company can, together with a maintenance company, draw up a strategy for the maintenance and inspection of pressure equipment. Significant cost savings can be achieved with the proposed follow-up measures, even if the existing practice for pressure equipment inspections is retained.
Abstract:
The existing electricity distribution system is under pressure, both because the introduction of distributed generation changes the grid configuration and because some customers demand better distribution reliability. In the short term, traditional network planning does not offer techno-economic solutions to these challenges, and therefore the idea of microgrids is introduced. The islanding capability of microgrids is expected to enable better reliability by reducing the effects of faults. The aim of the thesis is to discuss the challenges of integrating microgrids into distribution networks. The study discusses the development of microgrid-related smart grid features and outlines a guideline for microgrid implementation. The thesis also surveys microgrid pilots around the world and introduces the most relevant projects. The analysis reveals that the main focus of the reviewed studies is on low voltage microgrids. This thesis extends the idea to the medium voltage distribution system and introduces the challenges related to medium voltage microgrid implementation. The differences between centralized and distributed microgrid models are analyzed, and the centralized model is found to be the easiest to implement in the existing distribution system. A preliminary plan for a medium voltage microgrid pilot is also drawn up in this thesis.
Abstract:
This study examines horse trading and its special characteristics in relation to the legislation governing the sale of movable goods. The aim is to bring out the special characteristics arising from the horse itself and to consider how well the applicable legislation suits the sale of a horse. The examination focuses in particular on the forms of quality defect typical of a horse, such as illnesses and physical faults. It also examines how these special characteristics are addressed in the model sales contracts intended for horse trading. The same legislation applies to horse trading as to the sale of any other movable goods. Preparing for defects through detailed contractual terms is emphasized in horse trading, because a perfect, faultless horse does not exist. From the seller's perspective, in a successful sales process the possibility of a defect is negotiated as small as possible; the legislation, and the means of contract law in particular, provide the instruments for this. From the buyer's perspective, in a successful sale the buyer has, with the support of experts, been able to identify the horse's faults and other undesirable characteristics in advance, and with this knowledge determines which defects he or she is prepared to accept. In horse trading, the direct and indirect costs of a dispute can become high in relation to the purchase price paid. Often the only written document is the change-of-ownership notification, and drawing up a written sales contract is not a given. Making visible the legal questions specific to horse trading is therefore important. The legislation governing the sale of movable goods does not, however, recognize with sufficient precision the special characteristics of a horse as a unique, individually determined item, so tools such as standard contracts are needed to support the parties to the sale.
Abstract:
Performance measurement has many positive effects on the operation of an entire organization; measurement makes it possible to steer operations in the desired direction. The objective of the study was to determine what kind of performance measurement system the top management of a vehicle inspection company needs in order to manage the technical quality of inspections, and its purpose was to build such a measurement system for top management. The technical quality of inspections is a key question for the existence of inspection companies: it is the foundation of the entire inspection business, on which the business can be built, and without this foundation the business has no continuity. However, the measurement of technical quality is currently not systematic, and no suitable measurement system has been available. The study used inspection statistics of A-Katsastus Oy from the years 2008–2011. By applying statistical process control (SPC), control limits were determined for rejection rates and fault counts per inspection station and per inspector. With these control limits, a performance measurement system for the technical quality of inspections was built at the industry, company, station, and inspector levels. The system makes it possible to set targets for technical quality, follow their realization, and launch corrective actions when needed.
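The SPC procedure the abstract describes corresponds to a classic p-chart: control limits at three standard deviations around a pooled rejection rate. A sketch with invented numbers (not A-Katsastus data):

```python
import math

# Per-station inspection counts and rejected vehicles; values invented.
inspections = [420, 510, 385, 460, 600]
rejected    = [ 97, 128,  80, 101, 190]

# Pooled rejection rate across all stations.
p_bar = sum(rejected) / sum(inspections)

for n, x in zip(inspections, rejected):
    # p-chart limits: p_bar +/- 3 * sqrt(p_bar * (1 - p_bar) / n),
    # so a station with more inspections gets tighter limits.
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(0.0, p_bar - 3 * sigma)
    ucl = min(1.0, p_bar + 3 * sigma)
    flag = "" if lcl <= x / n <= ucl else "  <- out of control"
    print(f"n={n:4d} p={x/n:.3f} limits=({lcl:.3f}, {ucl:.3f}){flag}")
```

With these numbers the last station's rejection rate falls above its upper control limit, which is the kind of signal that would trigger a corrective action at the station or inspector level.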
Abstract:
The rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated in a single chip. An emerging challenge is the implementation of reliable and efficient interconnection between these cores as well as the other components in the system. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to, for instance, congestion and system control. Additionally, faults can cause problems in multiprocessor systems: they can be transient faults or permanent manufacturing faults, or they can appear due to aging. To solve the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes, and it should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long; the system should therefore be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems. The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
Abstract:
Both healthy eyes of 10 six-year-old male and female mongrel dogs were studied. The corneal endothelium was examined with a contact specular microscope, and endothelial cells were analyzed in the central and peripheral cornea. Morphological analysis with regard to polymegathism and pleomorphism was performed; three images of each region, with at least 100 cells, were obtained. The analysis showed that the polygonal cells formed a mosaic-like pattern, uniform in size and shape, and the predominant cell shape was hexagonal. The polymegathism index was 0.22. The study demonstrates that the morphology of the normal corneal endothelial cells of dogs is similar to that found in the human cornea.
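Polymegathism is conventionally quantified as the coefficient of variation of endothelial cell areas, and pleomorphism as the fraction of hexagonal cells. A minimal sketch with invented measurements (it will not reproduce the study's value of 0.22):

```python
import statistics

# Hypothetical endothelial cell areas in um^2; values are invented.
cell_areas = [310, 295, 420, 350, 388, 275, 305, 360, 330, 298]

# Polymegathism index: coefficient of variation (std / mean) of area.
cv = statistics.pstdev(cell_areas) / statistics.mean(cell_areas)

# Pleomorphism: fraction of six-sided cells; side counts also invented.
sides = [6, 6, 5, 6, 7, 6, 6, 5, 6, 6]
hex_fraction = sides.count(6) / len(sides)

print(f"polymegathism (CV of area): {cv:.2f}")
print(f"hexagonality: {hex_fraction:.0%}")
```

In practice the software of a specular microscope computes these figures over the 100+ cells per image that the study describes.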