20 results for Error correction methods


Relevance: 30.00%

Abstract:

Performance standards for positron emission tomography (PET) were developed to allow comparison of systems from different generations and manufacturers. This resulted in the NEMA methodology in North America and the IEC methodology in Europe. In practice, NEMA NU 2-2001 is the method of choice today. These standardized methods allow assessment of the physical performance of new commercial dedicated PET/CT tomographs. The point spread in image formation is one of the factors that blur the image; the phenomenon is often called the partial volume effect. Several methods for correcting for partial volume are under research, but no real agreement exists on how to solve it. The influence of the effect varies in different clinical settings, and it is likely that new methods are needed to solve this problem. Most clinical PET work is done in the field of oncology, where whole-body PET combined with CT is the standard investigation today. Despite the progress in PET imaging technique, visualization and especially quantification of small lesions remain a challenge. In addition to partial volume, movement of the object is a significant source of error; the main causes of movement are respiratory and cardiac motion. Most new commercial scanners are capable of respiratory gating in addition to cardiac gating, and this technique has been used in patients with cancer of the thoracic region and in patients studied for the planning of radiation therapy. For routine cardiac applications such as assessment of viability and perfusion, only cardiac gating has been used. However, new targets such as plaque imaging or molecular imaging of new therapies require better control of cardiac motion, including the component caused by respiratory motion. To overcome these problems in cardiac work, a dual gating approach has been proposed. In this study we investigated the physical performance of a new whole-body PET/CT scanner with the NEMA standard, compared methods for partial volume correction in PET studies of the brain, and developed and tested a new robust method for dual cardiac-respiratory gated PET with phantom, animal and human data. Results from the performance measurements showed the feasibility of the new scanner design in 2D and 3D whole-body studies. Partial volume was corrected, but no single best method emerged among those tested, as the correction also depends on the radiotracer and its distribution; new methods need to be developed for proper correction. The dual gating algorithm generated is shown to handle dual-gated data, preserving quantification and eliminating the majority of contraction and respiration movement.
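For illustration, the sketch below shows one common way dual cardiac-respiratory gating of list-mode PET events can be implemented: each event is assigned a cardiac bin from its phase within the R-R interval and a respiratory bin from an amplitude-binned breathing signal. This is a minimal sketch under those assumptions; the function, bin counts and binning scheme are illustrative and not the thesis implementation.

```python
# Minimal sketch of dual cardiac-respiratory gating of list-mode PET events.
# Assumptions (illustrative): event_times are event timestamps in seconds,
# r_peaks are ECG R-peak times, resp_signal is a respiratory amplitude sampled
# at resp_times. Not the algorithm developed in the thesis.
import numpy as np

def dual_gate(event_times, r_peaks, resp_times, resp_signal,
              n_cardiac_bins=8, n_resp_bins=4):
    """Assign each list-mode event a (cardiac_bin, resp_bin) label."""
    event_times = np.asarray(event_times, dtype=float)
    r_peaks = np.asarray(r_peaks, dtype=float)

    # Cardiac phase: position of the event inside its R-R interval, in [0, 1).
    idx = np.clip(np.searchsorted(r_peaks, event_times, side="right") - 1,
                  0, len(r_peaks) - 2)
    phase = (event_times - r_peaks[idx]) / (r_peaks[idx + 1] - r_peaks[idx])
    cardiac_bin = np.clip((phase * n_cardiac_bins).astype(int), 0, n_cardiac_bins - 1)

    # Respiratory bin: amplitude-based binning of the interpolated breathing signal.
    amp = np.interp(event_times, resp_times, resp_signal)
    edges = np.quantile(amp, np.linspace(0.0, 1.0, n_resp_bins + 1))
    resp_bin = np.clip(np.searchsorted(edges, amp, side="right") - 1,
                       0, n_resp_bins - 1)

    # Each (cardiac, respiratory) bin is then reconstructed separately, so both
    # contraction and breathing motion are largely frozen within a bin.
    return cardiac_bin, resp_bin
```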

Relevance: 30.00%

Abstract:

Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The decrease in reliability is a consequence of, among other things, physical limitations, the relative increase of variations, and decreasing noise margins. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for the network-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains their classification into transient, intermittent and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, a routing protocol, and a flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method at this abstraction level, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented. The introduction of spare wires and split transmissions is shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault-tolerant network topologies and routing algorithms. Both of these approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
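As a concrete illustration of data link layer error control coding, the sketch below implements the classic single-error-correcting Hamming(7,4) code, the textbook form of coding often considered for protecting on-chip links against transient faults. It is a generic example, not the specific codes evaluated in the thesis.

```python
# Minimal sketch of single-error-correcting Hamming(7,4) encoding/decoding,
# a generic illustration of error control coding for on-chip links.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c: 7-bit received word -> corrected 4 data bits (any single-bit error fixed)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1         # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]

# Example: a transient fault flips one wire of the link; the decoder repairs it.
word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1
assert hamming74_decode(word) == [1, 0, 1, 1]
```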

Relevance: 30.00%

Abstract:

Computational model-based simulation methods were developed for the modelling of bioaffinity assays. Bioaffinity-based methods are widely used to quantify a biological substance in biological research, development and routine clinical in vitro diagnostics. Bioaffinity assays are based on the high affinity and structural specificity between the binding biomolecules. The simulation methods developed are based on a mechanistic assay model, which relies on chemical reaction kinetics and describes the formation of the bound component as a function of time from the initial binding interaction. The simulation methods were focused on studying the behaviour and reliability of bioaffinity assays and the possibilities that modelling of binding reaction kinetics provides, such as predicting assay results even before the binding reaction has reached equilibrium. A rapid quantitative result from a clinical bioaffinity assay can be very significant; for example, even the smallest elevation of a heart muscle marker reveals a cardiac injury. The simulation methods were used to identify critical error factors in rapid bioaffinity assays. A new kinetic calibration method was developed to calibrate a measurement system from kinetic measurement data using only one standard concentration. A node-based method was developed to model multi-component binding reactions, which have been a challenge for traditional numerical methods. The node-based method was also used to model protein adsorption as an example of nonspecific binding of biomolecules. These methods have been compared with experimental data and can be utilized in in vitro diagnostics, drug discovery and medical imaging.
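The mechanistic core of such assay models is the one-site binding reaction A + B <-> AB with rate constants k_on and k_off, giving d[AB]/dt = k_on[A][B] - k_off[AB]. The sketch below integrates this equation with simple Euler steps to show how the bound fraction approaches equilibrium; the rate constants and concentrations are arbitrary illustrative values, not those used in the thesis.

```python
# Minimal sketch of a mechanistic one-site binding reaction, A + B <-> AB,
# of the kind underlying model-based bioaffinity assay simulation.
import numpy as np

def simulate_binding(A0, B0, k_on, k_off, t_end, dt=0.1):
    """Integrate d[AB]/dt = k_on*[A]*[B] - k_off*[AB] with explicit Euler steps."""
    steps = int(t_end / dt)
    AB = 0.0
    trace = np.empty(steps + 1)
    trace[0] = AB
    for i in range(1, steps + 1):
        free_A = A0 - AB                       # free analyte concentration
        free_B = B0 - AB                       # free capture reagent concentration
        AB += dt * (k_on * free_A * free_B - k_off * AB)
        trace[i] = AB
    return np.linspace(0.0, t_end, steps + 1), trace

# Example: a low-concentration analyte binding an excess of capture reagent.
t, bound = simulate_binding(A0=1e-9, B0=1e-8, k_on=1e6, k_off=1e-4, t_end=3600)
print(f"bound fraction at t = 600 s:  {bound[6000] / 1e-9:.3f}")
print(f"bound fraction at t = 3600 s: {bound[-1] / 1e-9:.3f}")
```

Reading the curve at an early time point and extrapolating toward equilibrium is the kind of kinetic prediction the abstract refers to, which is what makes rapid quantitative assay results possible before the reaction has settled.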

Relevance: 30.00%

Abstract:

Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis, including survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data: the Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, over-reporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms nor the classical assumptions about measurement errors turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models; low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters up to the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces the bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers developing methods to correct for non-sampling biases in event history data.
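To make the IPCW idea concrete, the sketch below shows a Kaplan-Meier estimator in which each spell is reweighted by the inverse of its estimated probability of remaining uncensored (i.e. not lost to attrition). It is a minimal sketch under the assumption that those probabilities come from a separately fitted censoring model; the function, data and weight construction are illustrative and not the study's actual pipeline.

```python
# Minimal sketch of an IPCW-weighted Kaplan-Meier estimator. The weights
# 1 / P(still uncensored at t | covariates) are assumed to be precomputed
# from a censoring (attrition) model fitted elsewhere.
import numpy as np

def ipcw_kaplan_meier(time, event, uncensored_prob):
    """time: spell duration; event: 1 if the spell ended (0 if censored);
    uncensored_prob: estimated P(uncensored up to `time`) per subject."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    weights = 1.0 / np.asarray(uncensored_prob, dtype=float)

    surv = 1.0
    estimate = []
    for t in np.unique(time[event == 1]):
        at_risk = weights[time >= t].sum()                   # weighted risk set at t
        events = weights[(time == t) & (event == 1)].sum()   # weighted events at t
        surv *= 1.0 - events / at_risk
        estimate.append((t, surv))
    return estimate

# Toy example: three completed spells and one spell censored by attrition.
print(ipcw_kaplan_meier(time=[3, 5, 5, 8], event=[1, 1, 0, 1],
                        uncensored_prob=[0.9, 0.8, 0.8, 0.6]))
```

Subjects who resemble those lost to attrition receive larger weights, which is how the correction counteracts the dependent censoring described above.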

Relevance: 30.00%

Abstract:

Protein engineering aims to improve the properties of enzymes and affinity reagents by genetic changes. Typical engineered properties are affinity, specificity, stability, expression and solubility. Because proteins are complex biomolecules, the effects of specific genetic changes are seldom predictable. Consequently, a popular strategy in protein engineering is to create a library of genetic variants of the target molecule and subject the population to a selection process that sorts the variants by the desired property. This technique, called directed evolution, is a central tool for tailoring protein-based products used in a wide range of applications from laundry detergents to anti-cancer drugs. New methods are continuously needed to generate larger gene repertoires and compatible selection platforms in order to shorten the development timeline for new biochemicals. In the first study of this thesis, primer extension mutagenesis was revisited to establish higher-quality gene variant libraries in Escherichia coli cells. In the second study, recombination was explored as a method to expand the number of screenable enzyme variants. A selection platform was developed to improve antigen-binding fragment (Fab) display on filamentous phages in the third article and, in the fourth study, novel design concepts were tested with two differentially randomized recombinant antibody libraries. Finally, in the last study, the performance of the same antibody repertoire was compared in phage display selections as a genetic fusion to different phage capsid proteins and in different antibody formats, Fab vs. single-chain variable fragment (scFv), in order to find the most suitable display platform for the library at hand. As a result of the studies, a novel gene library construction method, termed selective rolling circle amplification (sRCA), was developed. The method increases the mutagenesis frequency to close to 100% in the final library and the number of transformants over 100-fold compared with traditional primer extension mutagenesis. In the second study, Cre/loxP recombination was found to be an appropriate tool for resolving the DNA concatemer resulting from error-prone RCA (epRCA) mutagenesis into monomeric circular DNA units for higher-efficiency transformation into E. coli. Library selections against antigens of various sizes in the fourth study demonstrated that diversity placed closer to the antigen-binding site of antibodies supports the generation of antibodies against haptens and peptides, whereas diversity at more peripheral locations is better suited for targeting proteins. The conclusion from a comparison of the display formats was that the truncated capsid protein three (p3Δ) of filamentous phage was superior to the full-length p3 and protein nine (p9) in obtaining a high number of uniquely specific clones. Especially for digoxigenin, a difficult hapten target, the antibody repertoire as scFv-p3Δ provided the clones with the highest binding affinity. This thesis on the construction, design and selection of gene variant libraries contributes to the practical know-how in directed evolution and contains useful information for scientists in the field to support their undertakings.