942 results for errors and erasures decoding
Abstract:
Colorectal cancer (CRC) has traditionally been classified into two groups: microsatellite stable/low-level instability (MSS/MSI-L) and high-level MSI (MSI-H) groups on the basis of multiple molecular and clinicopathologic criteria. Using methylated in tumor (MINT) markers 1, 2, 12, and 31, we stratified 77 primary CRCs into three groups: MINT++ (>2), MINT+ (1-2), and MINT- (0 markers methylated). The MSS/MSI-L/MINT++ group was indistinguishable from the MSI-H/MINT++ group with respect to methylation of p16(INK4a), p14(ARF), and RIZ1, and multiple morphological features. The only significant difference between MSI-H and non-MSI-H MINT++ cancers was the higher frequency of K-ras mutation (P < 0.004) and lower frequency of hMLH1 methylation (P < 0.001) in the latter. These data demonstrate that the separation of CRC into two nonoverlapping groups (MSI-H versus MSS/MSI-L) is a misleading oversimplification.
Abstract:
Within the information systems field, the task of conceptual modeling involves building a representation of selected phenomena in some domain. High-quality conceptual-modeling work is important because it facilitates early detection and correction of system development errors. It also plays an increasingly important role in activities like business process reengineering and documentation of best-practice data and process models in enterprise resource planning systems. Yet little research has been undertaken on many aspects of conceptual modeling. In this paper, we propose a framework to motivate research that addresses the following fundamental question: How can we model the world to better facilitate our developing, implementing, using, and maintaining more valuable information systems? The framework comprises four elements: conceptual-modeling grammars, conceptual-modeling methods, conceptual-modeling scripts, and conceptual-modeling contexts. We provide examples of the types of research that have already been undertaken on each element and illustrate research opportunities that exist.
Abstract:
The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelateds in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease haplotype frequency and reconstruction accuracy, and that the ability to detect such errors in large families is essential when the number/complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles), unrelated individuals offer such a high degree of accuracy that there is little reason for less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype but which contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases, and thus do not seem a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
Abstract:
Objectives: To compare the population modelling programs NONMEM and P-PHARM during investigation of the pharmacokinetics of tacrolimus in paediatric liver-transplant recipients. Methods: Population pharmacokinetic analysis was performed using NONMEM and P-PHARM on retrospective data from 35 paediatric liver-transplant patients receiving tacrolimus therapy. The same data were presented to both programs. Maximum likelihood estimates were sought for apparent clearance (CL/F) and apparent volume of distribution (V/F). Covariates screened for influence on these parameters were weight, age, gender, post-operative day, days of tacrolimus therapy, transplant type, biliary reconstructive procedure, liver function tests, creatinine clearance, haematocrit, corticosteroid dose, and potential interacting drugs. Results: A satisfactory model was developed in both programs with a single categorical covariate - transplant type - providing stable parameter estimates and small, normally distributed (weighted) residuals. In NONMEM, the continuous covariates - age and liver function tests - improved modelling further. Mean parameter estimates were CL/F (whole liver) = 16.3 l/h, CL/F (cut-down liver) = 8.5 l/h and V/F = 565 l in NONMEM, and CL/F = 8.3 l/h and V/F = 155 l in P-PHARM. Individual Bayesian parameter estimates were CL/F (whole liver) = 17.9 +/- 8.8 l/h, CL/F (cut-down liver) = 11.6 +/- 18.8 l/h and V/F = 712 +/- 792 l in NONMEM, and CL/F (whole liver) = 12.8 +/- 3.5 l/h, CL/F (cut-down liver) = 8.2 +/- 3.4 l/h and V/F = 221 +/- 164 l in P-PHARM. Marked interindividual kinetic variability (38-108%) and residual random error (approximately 3 ng/ml) were observed. P-PHARM was more user-friendly and readily provided informative graphical presentation of results. NONMEM allowed a wider choice of errors for statistical modelling and coped better with complex covariate data sets.
Conclusion: Results from parametric modelling programs can vary due to different algorithms employed to estimate parameters, alternative methods of covariate analysis and variations and limitations in the software itself.
Abstract:
This study compared an enzyme-linked immunosorbent assay (ELISA) to a liquid chromatography-tandem mass spectrometry (LC/MS/MS) technique for measurement of tacrolimus concentrations in adult kidney and liver transplant recipients, and investigated how assay choice influenced pharmacokinetic parameter estimates and drug dosage decisions. Tacrolimus concentrations measured by both ELISA and LC/MS/MS from 29 kidney (n = 98 samples) and 27 liver (n = 97 samples) transplant recipients were used to evaluate the performance of these methods in the clinical setting. Tacrolimus concentrations measured by the two techniques were compared via regression analysis. Population pharmacokinetic models were developed independently using ELISA and LC/MS/MS data from 76 kidney recipients. Derived kinetic parameters were used to formulate typical dosing regimens for concentration targeting. Dosage recommendations for the two assays were compared. The relationship between LC/MS/MS and ELISA measurements was best described by the regression equation ELISA = 1.02 × (LC/MS/MS) + 0.14 in kidney recipients, and ELISA = 1.12 × (LC/MS/MS) - 0.87 in liver recipients. ELISA displayed less accuracy than LC/MS/MS at lower tacrolimus concentrations. Population pharmacokinetic models based on ELISA and LC/MS/MS data were similar, with residual random errors of 4.1 ng/mL and 3.7 ng/mL, respectively. Assay choice gave rise to dosage prediction differences ranging from 0% to 30%. ELISA measurements of tacrolimus are not automatically interchangeable with LC/MS/MS values. Assay differences were greatest in adult liver recipients, probably reflecting periods of liver dysfunction and impaired biliary secretion of metabolites. While the majority of data collected in this study suggested assay differences in adult kidney recipients were minimal, findings of ELISA dosage underpredictions of up to 25% in the long term must be investigated further.
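Using the organ-specific regression equations reported above, an LC/MS/MS measurement can be converted to its predicted ELISA value directly. A minimal sketch (the function name and interface are illustrative, not from the study):

```python
# Predict the ELISA tacrolimus concentration (ng/mL) from an LC/MS/MS value
# using the regression equations reported in the abstract. The dictionary
# keys and function name are illustrative assumptions.

def predicted_elisa(lcmsms_ng_ml: float, organ: str) -> float:
    """Apply the organ-specific regression: ELISA = a * (LC/MS/MS) + b."""
    coeffs = {
        "kidney": (1.02, 0.14),   # ELISA = 1.02 * (LC/MS/MS) + 0.14
        "liver":  (1.12, -0.87),  # ELISA = 1.12 * (LC/MS/MS) - 0.87
    }
    a, b = coeffs[organ]
    return a * lcmsms_ng_ml + b

print(predicted_elisa(10.0, "kidney"))  # 10.34
print(predicted_elisa(10.0, "liver"))   # 10.33
```

The diverging slopes and intercepts make concrete why the two assays are "not automatically interchangeable": at low concentrations the fixed offsets dominate, matching the reported loss of ELISA accuracy there.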
Abstract:
Fixed-point roundoff noise in digital implementation of linear systems arises due to overflow, quantization of coefficients and input signals, and arithmetical errors. In uniform white-noise models, the last two types of roundoff errors are regarded as uniformly distributed independent random vectors on cubes of suitable size. For input signal quantization errors, the heuristic model is justified by a quantization theorem, which cannot be directly applied to arithmetical errors due to the complicated input-dependence of errors. The complete uniform white-noise model is shown to be valid in the sense of weak convergence of probabilistic measures as the lattice step tends to zero if the matrices of realization of the system in the state space satisfy certain nonresonance conditions and the finite-dimensional distributions of the input signal are absolutely continuous.
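The uniform white-noise model described above can be checked empirically. A minimal sketch, assuming a mid-tread quantizer with lattice step q (for a sufficiently "rich" input, quantization errors behave like independent uniform noise on [-q/2, q/2], whose variance is q²/12):

```python
import random
import statistics

# Quantize a Gaussian signal to a lattice of step q and examine the
# resulting round-off errors. The signal model and sample size are
# illustrative, not from the paper.
q = 0.01  # lattice step
random.seed(0)
signal = [random.gauss(0.0, 1.0) for _ in range(100_000)]
errors = [x - q * round(x / q) for x in signal]

print(statistics.mean(errors))      # near 0
print(statistics.variance(errors))  # near q**2 / 12 ≈ 8.33e-6
```

The empirical mean and variance match the uniform model closely here; the paper's contribution is proving when this heuristic is actually valid (weak convergence as the lattice step tends to zero, under nonresonance conditions on the state-space matrices).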
Abstract:
A sample of recombinant inbred lines (RILs) was derived from a bi-parental cross between Lemont and BK88-BR6, which contrasted in maintenance of leaf water potential (LWP) and expression of osmotic adjustment (OA). Genotypic variation for LWP and OA, and their associations with yield determination under water deficit, was studied in a series of five field experiments. Genotypic variation in the maintenance of high LWP was consistent across water deficit experiments. In the determination of genotypic variation in the maintenance of LWP, rate of water deficit was not an important factor influencing ranking, but degree of water deficit and phenological development stage were important, particularly around heading. Genotypic variation in expression of OA was also observed under water deficits during both vegetative and flowering stages, but ranking was inconsistent across experiments. This was in part because of large experimental errors associated with its measurement, but also because the expression of OA was associated with the extent of decline of LWP. The relationship between OA and LWP was demonstrated when data were combined across experiments for vegetative and flowering stages. Under water-limited conditions around flowering, grain yield reduction was mainly due to increased spikelet sterility. Variation in OA was not related to grain yield or yield components. There were, however, negative phenotypic and genetic correlations between LWP and percentage spikelet sterility measured at flowering stage on panicles at the same development stage during a water deficit treatment. This suggests that traits contributing to the maintenance of high LWP minimized the effects of water deficit on spikelet sterility and consequently grain yield. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Expression of membrane-bound Fas ligand (FasL) by colorectal cancer cells may allow the development of an immune-privileged site by eliminating incoming tumour-infiltrating lymphocytes (TILs) in a Fas-mediated counter-attack. Sporadic colorectal cancer can be subdivided into three groups based on the level of DNA microsatellite instability (MSI). High-level MSI (MSI-High) is characterized by the presence of TILs and a favourable prognosis, while microsatellite-stable (MSS) cancers are TIL-deficient and low-level MSI (MSI-Low) is associated with an intermediate TIL density. The purpose of this study was to establish the relationship between MSI status and FasL expression in primary colorectal adenocarcinoma. Using immunohistochemistry and a selected series of 101 cancers previously classified as 31 MSI-High, 30 MSI-Low, and 40 MSS, the present study sought to confirm the hypothesis that increased TIL density in MSI-High cancers is associated with low or absent membrane-bound FasL expression, while increased FasL in MSS cancers allows the killing of host TILs. TUNEL/CD3 double staining was also used to determine whether MSS cancers contain higher numbers of apoptotic TILs in vivo than MSI-High or MSI-Low cancers. Contrary to the initial hypothesis, it was found that MSI-High cancers were associated with higher FasL expression (p = 0.04) and a stronger intensity of FasL staining (p = 0.007). In addition, mucinous carcinomas were independently characterized by increased FasL expression (p = 0.03) and staining intensity (p = 0.0005). Higher FasL expression and staining intensity did not correlate with reduced TIL density or increased numbers of apoptotic TILs. However, consistent with the hypothesis that curtailment of the host anti-tumour immune response contributes to the poor prognosis in MSS cancers, it was found that apoptotic TILs were most abundant in MSS carcinomas and metastatic Dukes' stage C or D tumours (p = 0.004; p = 0.046 respectively).
This study therefore suggests that MSS colorectal cancers are killing incoming TILs in an effective tumour counter-attack, but apparently not via membrane-bound FasL. Copyright (C) 2003 John Wiley & Sons, Ltd.
Abstract:
Three new peptidomimetics (1-3) have been developed with highly stable and conformationally constrained macrocyclic components that replace tripeptide segments of protease substrates. Each compound inhibits both HIV-1 protease and viral replication (HIV-1, HIV-2) at nanomolar concentrations without cytotoxicity to uninfected cells below 10 μM. Their activities against HIV-1 protease (Ki 1.7 nM (1), 0.6 nM (2), 0.3 nM (3)) are 1-2 orders of magnitude greater than their antiviral potencies against HIV-1-infected primary peripheral blood mononuclear cells (IC50 45 nM (1), 56 nM (2), 95 nM (3)) or HIV-1-infected MT2 cells (IC50 90 nM (1), 60 nM (2)), suggesting suboptimal cellular uptake. However, their antiviral potencies are similar to those of indinavir and amprenavir under identical conditions. There were significant differences in their capacities to inhibit the replication of HIV-1 and HIV-2 in infected MT2 cells, 1 being ineffective against HIV-2 while 2 was equally effective against both virus types. Evidence is presented that 1 and 2 inhibit cleavage of the HIV-1 structural protein precursor Pr55(gag) to p24 in virions derived from chronically infected cells, consistent with inhibition of the viral protease in cells. Crystal structures refined to 1.75 Å (1) and 1.85 Å (2) for two of the macrocyclic inhibitors bound to HIV-1 protease establish structural mimicry of the tripeptides that the cycles were designed to imitate. Structural comparisons between protease-bound macrocyclic inhibitors, VX478 (amprenavir), and L-735,524 (indinavir) show that their common acyclic components share the same space in the active site of the enzyme and make identical interactions with enzyme residues. This substrate-mimicking minimalist approach to drug design could have benefits in the context of viral resistance, since mutations which induce inhibitor resistance may also be those which prevent substrate processing.
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the output of two independent implementations, in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
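The core of back-to-back testing, feeding identical inputs to two independently written implementations and inspecting the residual (their output difference), can be sketched as follows. The model and tolerance are illustrative; the paper's structured-observer machinery for error isolation is not reproduced here:

```python
# Minimal back-to-back test harness: compare two independent
# implementations of the same model on identical inputs. A residual
# exceeding the tolerance flags a coding error in one implementation.

def model_a(x: float, k: float) -> float:
    return k * x * (1.0 - x)      # reference implementation

def model_b(x: float, k: float) -> float:
    return k * x - k * x * x      # independent re-implementation

def residuals(inputs, k=3.7, tol=1e-12):
    r = [model_a(x, k) - model_b(x, k) for x in inputs]
    flagged = [x for x, e in zip(inputs, r) if abs(e) > tol]
    return r, flagged

r, flagged = residuals([i / 10 for i in range(11)])
print(flagged)  # [] : no coding error detected
```

Plain residual comparison only detects that *some* error exists; the paper's contribution is shaping the residuals so that each error maps to a known subspace, which is what allows isolation to specific equations.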
Abstract:
In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors that are deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
Abstract:
We analyze the sequences of round-off errors of the orbits of a discretized planar rotation, from a probabilistic angle. It was shown [Bosio & Vivaldi, 2000] that for a dense set of parameters, the discretized map can be embedded into an expanding p-adic dynamical system, which serves as a source of deterministic randomness. For each parameter value, these systems can generate infinitely many distinct pseudo-random sequences over a finite alphabet, whose average period is conjectured to grow exponentially with the bit-length of the initial condition (the seed). We study some properties of these symbolic sequences, deriving a central limit theorem for the deviations between round-off and exact orbits, and obtain bounds concerning repetitions of words. We also explore some asymptotic problems computationally, verifying, among other things, that the occurrence of words of a given length is consistent with that of an abstract Bernoulli sequence.
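The round-off sequences studied above can be generated directly. A minimal sketch, assuming the common lattice form of the discretized rotation, (x, y) ↦ (⌊λx⌋ − y, x) with λ = 2cos θ (the paper's exact parametrization may differ):

```python
import math

# Iterate a discretized planar rotation on the integer lattice and record
# the round-off error (discarded fractional part) at each step. The
# particular rotation number used here is an illustrative choice.

def discretized_orbit(x0: int, y0: int, lam: float, n: int):
    x, y = x0, y0
    roundoffs = []
    for _ in range(n):
        exact = lam * x
        x, y = math.floor(exact) - y, x
        roundoffs.append(exact - math.floor(exact))  # error in [0, 1)
    return (x, y), roundoffs

lam = 2 * math.cos(2 * math.pi / 5)
(_, errs) = discretized_orbit(1, 0, lam, 1000)
print(min(errs) >= 0.0 and max(errs) < 1.0)  # True
```

The paper's probabilistic analysis concerns exactly such sequences: the deviations between round-off and exact orbits, and the statistics of the induced symbolic words.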
Abstract:
The current level of demand by customers in the electronics industry requires the production of parts with an extremely high level of reliability and quality, to ensure the complete confidence of the end customer. Automatic Optical Inspection (AOI) machines play an important role in monitoring and detecting errors during the manufacturing process for printed circuit boards. These machines present images of products with probable assembly mistakes to an operator, who decides whether the product has a real defect or whether the detection was a false alarm. Operator training is an important factor in obtaining a lower rate of evaluation failures by the operator and, consequently, a lower rate of actual defects that slip through to the following processes. The Gage R&R methodology for attributes is part of a Six Sigma strategy for examining the repeatability and reproducibility of an evaluation system, thus giving important feedback on the suitability of each operator in classifying defects. This methodology has already been applied in several industry sectors, services, and processes, with excellent results in the evaluation of subjective parameters. An application for training operators of AOI machines was developed to assess their fitness and improve future evaluation performance. This application provides a better understanding of the specific training needs of each operator and tracks the evolution of the training program for new components, which in turn present new difficulties for operator evaluation. Its use will help reduce the number of defects misclassified by operators and passed on to the following steps of the production process. This defect reduction will also contribute to the continuous improvement of operator evaluation performance, which is seen as a quality management goal.
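The scoring behind an attribute Gage R&R study can be sketched with toy data: each operator classifies the same boards twice, and we measure within-appraiser repeatability and agreement with a known reference. The classifications and data layout below are illustrative, not from the study:

```python
# Attribute agreement scoring for AOI operators (toy data).
# Each operator rates the same 6 boards in two trials; the reference
# column is the known true classification.

reference = ["defect", "ok", "ok", "defect", "ok", "defect"]
trials = {
    "operator_1": [
        ["defect", "ok", "ok", "defect", "ok", "defect"],      # trial 1
        ["defect", "ok", "defect", "defect", "ok", "defect"],  # trial 2
    ],
}

def repeatability(t1, t2):
    """Fraction of parts on which the operator agrees with themself."""
    return sum(a == b for a, b in zip(t1, t2)) / len(t1)

def accuracy(t1, t2, ref):
    """Fraction of parts rated consistently AND matching the reference."""
    return sum(a == b == r for a, b, r in zip(t1, t2, ref)) / len(ref)

t1, t2 = trials["operator_1"]
print(repeatability(t1, t2))        # 5/6 ≈ 0.833
print(accuracy(t1, t2, reference))  # 5/6 ≈ 0.833
```

Scores like these, computed per operator, are the feedback the methodology provides on each operator's fitness to classify defects.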
Abstract:
The success of dental implant-supported prostheses is directly linked to the accuracy obtained during estimation of the implant's pose (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate, and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34μm and 108μm, and angular misfits of 0.15±0.08º and 1.4º, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
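The "find the optimal rigid transform" idea at the heart of step (3) can be illustrated with a simplified, hypothetical stand-in: a point-based 2-D rigid alignment (the paper actually performs voxel-based 3-D registration on CBCT intensities; none of the code below is from the study):

```python
import math

# Recover the rotation and translation aligning two corresponding 2-D
# point sets (2-D analogue of the Kabsch procedure). This only illustrates
# the rigid-registration concept, not the paper's voxel-based method.

def rigid_align_2d(src, dst):
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Optimal rotation angle from centered correspondences.
    s_cos = s_sin = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy
        bx, by = u - cdx, v - cdy
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, (tx, ty)

# Synthetic check: rotate by 30° and translate by (3, -1), then recover.
src = [(0, 0), (1, 0), (0, 2)]
theta = math.pi / 6
dst = [(x * math.cos(theta) - y * math.sin(theta) + 3,
        x * math.sin(theta) + y * math.cos(theta) - 1) for x, y in src]
est_theta, (tx, ty) = rigid_align_2d(src, dst)
print(est_theta, tx, ty)  # ≈ pi/6, 3, -1
```

In the framework above, the analogous optimization is carried out over voxel intensities of the patient and simulated CBCT volumes, and the implant's pose is read off the optimal transformation.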