21 results for Full-width at half mediums


Relevance: 20.00%

Abstract:

The original contributions of this thesis to knowledge are novel digital readout architectures for hybrid pixel readout chips. The thesis presents an asynchronous bus-based architecture, a data-node-based column architecture, and a network-based pixel-matrix architecture for data transportation. The data-node architecture is shown to achieve a readout efficiency of 99% at half the output rate of a bus-based system. The network-based solution avoids “broken” columns caused by manufacturing errors and distributes internal data traffic more evenly across the pixel matrix than column-based architectures do; an efficiency improvement of > 10% is achieved with both uniform and non-uniform hit occupancies.

Architectural design was done using transaction-level modelling (TLM) and sequential high-level design techniques to reduce design and simulation time. These high-level techniques made it possible to simulate tens of column and full-chip architectures, with run-times more than 10 times shorter than with a register-transfer-level (RTL) design technique, and the high-level models required 50% fewer lines of code (LoC) than the RTL description.

Two of the architectures are demonstrated in two hybrid pixel readout chips. The first chip, Timepix3, was designed for the Medipix3 collaboration. According to measurements, it consumes < 1 W/cm^2 and delivers up to 40 Mhits/s/cm^2 with 10-bit time-over-threshold (ToT) and 18-bit time-of-arrival (ToA) at 1.5625 ns resolution. The chip uses a token-arbitrated, asynchronous two-phase-handshake column bus for internal data transfer, and it has been used successfully in a multi-chip particle-tracking telescope. The second chip, VeloPix, is being designed for the upgrade of the Vertex Locator (VELO) of the LHCb experiment at CERN. Based on simulations, it consumes < 1.5 W/cm^2 while delivering up to 320 Mpackets/s/cm^2, each packet containing up to 8 pixels. VeloPix uses a node-based data fabric to achieve a throughput of 13.3 Mpackets/s from a column to the end-of-column (EoC) logic. By combining Monte Carlo physics data with high-level simulations, the architecture is demonstrated to meet the requirements of the VELO (260 Mpackets/s/cm^2 at 99% efficiency).
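As an illustration of the kind of architecture discussed above, the sketch below models a token-arbitrated column bus as a small discrete-event simulation. It is a minimal Python sketch under assumed parameters (group count, hit probability, buffer depth), not the thesis's TLM code, but it shows how readout efficiency emerges from the interplay of hit rate, buffering, and bus arbitration.

```python
import random
from collections import deque

# Minimal sketch of a token-arbitrated column bus (hypothetical model,
# not the thesis's TLM code). A token circulates over the pixel groups
# of one column; only the token holder may push one hit packet per bus
# cycle onto the shared column bus.

N_GROUPS = 8          # pixel groups sharing one column bus (assumed)
SIM_CYCLES = 10_000   # bus cycles to simulate
HIT_PROB = 0.1        # per-group probability of a new hit each cycle
QUEUE_DEPTH = 4       # hits a group can buffer before it drops one

queues = [deque() for _ in range(N_GROUPS)]
token = 0
sent = dropped = 0

for cycle in range(SIM_CYCLES):
    # New hits arrive independently in each group.
    for q in queues:
        if random.random() < HIT_PROB:
            if len(q) < QUEUE_DEPTH:
                q.append(cycle)       # store the hit's timestamp
            else:
                dropped += 1          # buffer overflow -> lost hit

    # The token holder transfers one packet, then passes the token on.
    if queues[token]:
        queues[token].popleft()
        sent += 1
    token = (token + 1) % N_GROUPS

print(f"readout efficiency: {sent / (sent + dropped):.4f}")
```

Varying HIT_PROB and QUEUE_DEPTH in such a model is the kind of architectural exploration that high-level simulation makes cheap compared to an RTL description.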

Relevance: 20.00%

Abstract:

In the field of molecular biology, scientists adopted a reductionist perspective for decades, concerning themselves predominantly with the intricate mechanistic details of subcellular regulatory systems. Integrative thinking had nevertheless been applied on a smaller scale in molecular biology for at least half a century to understand the processes underlying cellular behaviour. It was not until the genomic revolution at the end of the previous century that model building was required to account for the systemic properties of cellular activity. Our system-level understanding of cellular function is to this day hindered by drastic limitations in our ability to predict cellular behaviour from system dynamics and system structure. To this end, systems biology aims for a system-level understanding of functional intra- and inter-cellular activity.

Modern biology produces volumes of data whose comprehension cannot even be attempted without computational support. Computational modelling therefore bridges modern biology and computer science, providing assets that prove invaluable in the analysis of complex biological systems: a rigorous characterization of the system structure, simulation techniques, perturbation analysis, and so on. Computational biomodels have grown considerably in size in recent years, with major contributions made towards the simulation and analysis of large-scale models, starting with signalling pathways and culminating in whole-cell, tissue-level, organ, and full-scale patient models. Simulating and analysing models of such complexity very often requires the integration of various sub-models, entwined at different levels of resolution and organized over several levels of hierarchy.

This thesis revolves around the concept of quantitative model refinement in relation to the process of model building in computational systems biology. It proposes a sound computational framework for the stepwise augmentation of a biomodel: one starts with an abstract, high-level representation of a biological phenomenon, which is materialized into an initial model and validated against a set of existing data; the model is then refined to include more details regarding its species and/or reactions. The framework is employed in the development of two models, one for the heat shock response in eukaryotes and one for the ErbB signalling pathway. The thesis spans several formalisms used in computational systems biology that are inherently quantitative (reaction-network models, rule-based models, and Petri net models), as well as a recent, intrinsically qualitative formalism: reaction systems. The choice of modelling formalism is, however, determined by the nature of the question the modeller aims to answer. Quantitative model refinement turns out to be not only essential in the model development cycle, but also beneficial for the construction of large-scale models, whose development requires the integration of several sub-models across various levels of resolution and underlying formal representations.
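The refinement step described above can be made concrete with a toy example. The following Python sketch (illustrative species names and rate constants, not one of the thesis's models) refines a one-reaction mass-action model by splitting a species into two subspecies and checks that the refined model preserves the dynamics of the original, which is the defining requirement of quantitative model refinement.

```python
import numpy as np
from scipy.integrate import odeint

# Toy mass-action model A -> B with rate constant k. The refinement
# replaces A by two subspecies A1, A2; reusing k for both refined
# reactions preserves the lumped behaviour when A(0) = A1(0) + A2(0).
# All names and numbers are illustrative only.

k = 0.5

def basic(y, t):
    A, B = y
    return [-k * A, k * A]

def refined(y, t):
    A1, A2, B = y
    return [-k * A1, -k * A2, k * (A1 + A2)]

t = np.linspace(0, 10, 101)
y_basic = odeint(basic, [1.0, 0.0], t)
y_ref = odeint(refined, [0.4, 0.6, 0.0], t)   # A(0) split as 0.4 + 0.6

# The refined model reproduces the basic one: A(t) = A1(t) + A2(t).
assert np.allclose(y_basic[:, 0], y_ref[:, 0] + y_ref[:, 1], atol=1e-6)
print("refinement preserves the lumped dynamics")
```

A fit obtained for the initial model thus carries over to the refined model for free, which is what makes stepwise refinement attractive when assembling large models from sub-models.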

Relevance: 20.00%

Abstract:

Full-contour monolithic zirconia restorations have grown in popularity in dentistry in recent years, owing to their good mechanical and acceptable optical properties. However, many features of these restorations have yet to be researched and supported by clinical studies to confirm their place among other indirect restorative materials. This series of in vitro studies aimed at evaluating and comparing the optical and mechanical properties, light-cure irradiance, and cement polymerization of multiple monolithic zirconia materials at varying thicknesses, environments, treatments, and stabilization levels.

Five monolithic zirconia materials, four partially stabilized and one fully stabilized, were investigated. Optical properties, in terms of surface gloss, translucency parameter, and contrast ratio, were determined with a reflection spectrophotometer at varying thicknesses, coloring and sintering methods, and after immersion in an acidic environment. Light-cure irradiance and radiant exposure through the specimens were quantified at varying thicknesses, and the degree of conversion of two dual-cure cements was determined by Fourier-transform infrared spectroscopy. Biaxial flexural strength was evaluated to compare the partially and fully stabilized zirconia prepared using different coloring and sintering methods. Surface characterization was performed with a scanning electron microscope and a spinning-disk confocal microscope.

The surface gloss and translucency of the zirconia investigated were brand- and thickness-dependent, with translucency decreasing as thickness increased. Staining decreased the translucency of the zirconia; it enhanced the surface gloss and flexural strength of the fully stabilized zirconia but had no effect on the partially stabilized zirconia. Immersion in a corrosive acid increased the surface gloss and decreased the translucency of some zirconia brands. Zirconia thickness was inversely related to light irradiance, radiant exposure, and degree of monomer conversion. The type of sintering furnace had no effect on the optical and mechanical properties of the zirconia. Monolithic zirconia may be classified as a semi-translucent material strongly influenced by thickness, which limits its use in esthetic zones. Conventional acid-base-reaction, autopolymerizing, and dual-cure cements are recommended for its cementation. Its desirable mechanical properties give it high potential as a restoration for posterior teeth; however, controlled clinical studies are required before definitive clinical recommendations can be drawn.
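The translucency parameter and contrast ratio mentioned above have standard definitions based on spectrophotometer readings of a specimen over black and white backings. The sketch below computes both; the CIELAB and luminance values are illustrative placeholders, not measurements from the study.

```python
import math

# Standard definitions of the translucency parameter (TP) and contrast
# ratio (CR) used with reflection spectrophotometer data. The numeric
# CIELAB / luminance values below are illustrative, not data from the
# thesis.

def translucency_parameter(lab_black, lab_white):
    """TP: CIELAB colour difference of one specimen measured over a
    black versus a white backing; higher TP = more translucent."""
    dL, da, db = (b - w for b, w in zip(lab_black, lab_white))
    return math.sqrt(dL**2 + da**2 + db**2)

def contrast_ratio(y_black, y_white):
    """CR = Yb / Yw; approaches 0.0 when fully transparent and
    1.0 when fully opaque."""
    return y_black / y_white

tp = translucency_parameter((78.2, -1.1, 4.0), (84.9, -0.8, 6.5))
cr = contrast_ratio(y_black=52.3, y_white=63.8)
print(f"TP = {tp:.2f}, CR = {cr:.2f}")
```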

Relevance: 20.00%

Abstract:

The aim of this Master’s thesis is to find a method for classifying spare-part criticality in the case company. Several approaches to criticality classification of spare parts exist; the practical problem addressed in this thesis is the lack of a generic analysis method for classifying spare parts of the case company’s proprietary equipment. Finding a classification method required a literature review of various analysis methods, and the requirements of the case company were identified by consulting professionals in the company.

The literature review shows that the analytic hierarchy process (AHP) combined with decision-tree models is a common method for classifying spare parts in the academic literature. Most of the literature discusses spare-part criticality from a stock-holding perspective, which is also relevant for a customer-oriented original equipment manufacturer (OEM) such as the case company. A decision-tree model is developed for classifying spare parts into five criticality classes, from non-critical to highly critical, according to five criteria: safety risk, availability risk, functional criticality, predictability of failure, and probability of failure. The method is verified by classifying the spare parts of a full-deposit stripping machine. The classification can be used as a generic model for recognizing critical spare parts of other similar equipment, from which spare-part recommendations can be created. The purchase price of an item and equipment criticality were found to have no effect on spare-part criticality in this context. The decision tree is recognized as the most suitable method for classifying spare-part criticality in the company.
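A decision tree of this shape is straightforward to express in code. The sketch below is a hypothetical Python rendering of a five-criterion, five-class tree; the actual ordering of criteria, thresholds, and class boundaries used in the thesis are not specified here, so every rule in the sketch is an assumption made for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of a five-criterion decision tree in the spirit of
# the one described above. The tree structure and thresholds are
# illustrative assumptions, not the thesis's actual rules.

@dataclass
class SparePart:
    name: str
    safety_risk: bool             # failure endangers people
    availability_risk: bool       # long or uncertain replenishment time
    functional_criticality: bool  # failure stops the equipment
    failure_predictable: bool     # wear is observable before failure
    failure_probability: float    # estimated probability per year

def criticality_class(p: SparePart) -> int:
    """Return a criticality class from 1 (non-critical) to
    5 (highly critical), evaluating the criteria in a fixed order."""
    if p.safety_risk:
        return 5
    if p.functional_criticality and p.availability_risk:
        return 4
    if p.functional_criticality:
        return 2 if p.failure_predictable else 3
    return 2 if p.failure_probability > 0.1 else 1

seal = SparePart("hydraulic seal", False, True, True, True, 0.3)
print(criticality_class(seal))  # -> 4
```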

Relevance: 20.00%

Abstract:

The aim of this report was to examine the road-safety effects of wide centreline markings, to carry out an opinion survey in the area affected by the trial section on main road 54 (kantatie 54), and to present earlier studies and the background of wide centreline markings. The report was written by Eino Lahtinen of Häme University of Applied Sciences (Hämeen ammattikorkeakoulu) as his thesis. As background, the work reviews the history of wide centreline markings in Finland and abroad, as well as Finnish studies on the topic.

For decades, both in Finland and abroad, the traffic safety of high-class roads has been observed to deteriorate as traffic volumes grow. The aftermath of head-on and overtaking accidents often makes for grim viewing and reading, and concern about the number and severity of such accidents creates pressure to find ways of reducing them. For this reason, cost-effective solutions for improving traffic safety have been actively sought since the 1980s. When assessing visual effects, it was observed early on that visually narrowing the lane leads to lower driving speeds, which reduces accidents and their severity. One proposed solution was the wide centreline marking, which has been trialled in various forms abroad since 1996 and in Finland since 2009.

The results indicate that the effects of wide centreline markings on driving behaviour are fairly small, as nearly half of the survey respondents did not notice any change in their own or others' driving behaviour. Based on the work, it is estimated that wide centreline markings reduced injury-causing overtaking, head-on, and run-off-to-the-left accidents by 17-21%. The study indicates that when wide centreline markings are used, a sufficient shoulder width (at least 1 m) must be ensured. Wide centreline markings can be used on sections where the pavement width is at least 10 m, the average daily traffic is at least 1,500 vehicles/day, and the road section is at least 10 km long. The wide centre area itself should be one metre wide.
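The deployment criteria in the conclusions lend themselves to a simple check. The following Python sketch encodes the four stated thresholds (pavement width, traffic volume, section length, shoulder width); the function and argument names are illustrative, while the threshold values come from the report's conclusions.

```python
# Minimal sketch encoding the deployment criteria listed above for wide
# centreline markings. Names are illustrative; thresholds are those
# stated in the report's conclusions.

def suits_wide_centreline(pavement_width_m: float,
                          aadt_veh_per_day: float,
                          section_length_km: float,
                          shoulder_width_m: float) -> bool:
    """True if a road section meets all four stated criteria."""
    return (pavement_width_m >= 10.0
            and aadt_veh_per_day >= 1500
            and section_length_km >= 10.0
            and shoulder_width_m >= 1.0)

print(suits_wide_centreline(10.5, 4200, 25.0, 1.0))  # -> True
print(suits_wide_centreline(9.0, 4200, 25.0, 1.0))   # -> False
```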

Relevance: 20.00%

Abstract:

In this study, an infrared-thermography-based sensor was evaluated with regard to its usability and the accuracy of its data as a weld-penetration signal in gas metal arc welding. The object of the study was a specific sensor type that measures the thermal profile of the solidified weld surface. The purpose was to provide expert data for developing a sensor system for adaptive metal active gas (MAG) welding. Welding experiments, along with the process variables considered and the recorded thermal profiles, were saved to a database for further analysis. To keep the number of experiments reasonable, the process parameters were altered in steps of at least 10%. The effects of the process variables on weld penetration and on the thermal profiles themselves were then analysed. The SFS-EN ISO 5817 (2014) standard was applied to classify the quality of the experiments. As a final step, a neural network was trained on the experimental data. The experiments show that the studied thermography sensor and the neural network can be used for controlling full penetration, though with minor limitations, which are presented in the results and discussion. The results are consistent with previous studies and experiments found in the literature.
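As a rough picture of that final step, the sketch below trains a small neural network to map thermal-profile features to a penetration estimate. The features, synthetic data, and network size are assumptions made for illustration and do not reproduce the thesis's sensor data or model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative sketch only: a small neural network mapping assumed
# thermal-profile features to a penetration estimate, on synthetic data.

rng = np.random.default_rng(0)
n = 400

# Hypothetical per-experiment features: peak surface temperature (K),
# width of the cooling zone (mm), wire feed rate (m/min).
X = np.column_stack([
    rng.uniform(1400, 1900, n),
    rng.uniform(3.0, 9.0, n),
    rng.uniform(6.0, 14.0, n),
])
# Synthetic penetration depth (mm) with noise, standing in for ground
# truth that would in practice come from weld macrographs.
y = 0.002 * X[:, 0] + 0.15 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),  # scale features so the MLP trains reliably
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                 random_state=0),
)
model.fit(X_tr, y_tr)
print(f"R^2 on held-out experiments: {model.score(X_te, y_te):.2f}")
```

In a real penetration-control loop, the held-out score would be replaced by validation against welds classified according to SFS-EN ISO 5817, as described above.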