849 results for Null values
Abstract:
Dissertation submitted in fulfillment of the requirements for the Master's degree in Informatics Engineering
Abstract:
The log-ratio methodology provides powerful tools for analyzing compositional data. However, this methodology can only be applied to data sets without null values, so data sets in which zeros are present require prior treatment. Recent advances in the treatment of compositional zeros have focused mainly on zeros of a structural nature and on rounded zeros; these tools do not cover the particular case of count compositional data sets with null values. In this work we deal with "count zeros" and introduce a treatment based on a mixed Bayesian-multiplicative estimation: we use the Dirichlet probability distribution as a prior, estimate the posterior probabilities, and then apply a multiplicative modification to the non-zero values. We present a case study where this new methodology is applied. Key words: count data, multiplicative replacement, composition, log-ratio analysis
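For concreteness, the replacement this abstract describes can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the function name, the uniform prior expectation t, and the strength s = 0.5 are assumptions chosen for the example.

    import numpy as np

    def bayes_mult_replace(counts, s=0.5, t=None):
        """Mixed Bayesian-multiplicative zero replacement (sketch).

        Zero parts get their Dirichlet-posterior expected proportion;
        non-zero parts are rescaled multiplicatively so the composition
        still sums to 1, preserving their ratios (and hence log-ratios).
        """
        counts = np.asarray(counts, dtype=float)
        n, D = counts.sum(), counts.size
        if t is None:
            t = np.full(D, 1.0 / D)          # uniform prior expectation (assumed)
        post = (counts + s * t) / (n + s)    # Dirichlet(s*t) posterior mean
        zero = counts == 0
        out = np.where(zero, post, 0.0)      # replacements for the count zeros
        out[~zero] = (counts[~zero] / n) * (1.0 - out[zero].sum())
        return out

For example, bayes_mult_replace([12, 0, 5, 0, 83]) returns a zero-free composition that sums to 1 and keeps the 12:5:83 ratios of the observed parts, so log-ratio analysis can proceed.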
Abstract:
To ensure greater reliability and consistency of the data stored in a database, the data cleaning stage is placed early in the process of Knowledge Discovery in Databases (KDD) and is responsible for eliminating problems and preparing the data for the later stages, especially the data mining stage. Such problems occur at the instance and schema levels and include missing values, null values, duplicate tuples, and values outside the domain, among others. Several algorithms have been developed to perform the cleaning step in databases; some were developed specifically to work with the phonetics of words, since a word can be written in different ways. Within this perspective, the original contribution of this work is an optimized, multithreaded, phonetics-based algorithm for detecting duplicate tuples in databases that requires neither training data nor support for a specific language. © 2011 IEEE.
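The paper's own algorithm is not reproduced in the abstract, but the general idea, phonetic keys computed in parallel threads and then grouped, can be sketched as follows. This is an illustration only: it uses a simplified Soundex encoding (the classic rules for H and W are omitted) and Python's standard thread pool; all names and defaults are assumptions.

    from collections import defaultdict
    from concurrent.futures import ThreadPoolExecutor

    SOUNDEX = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
               **dict.fromkeys("DT", "3"), "L": "4",
               **dict.fromkeys("MN", "5"), "R": "6"}

    def soundex(word):
        """Simplified Soundex: first letter plus three digits, so that
        different spellings of similar-sounding words share one key."""
        word = word.upper()
        code, prev = word[0], SOUNDEX.get(word[0], "")
        for ch in word[1:]:
            digit = SOUNDEX.get(ch, "")
            if digit and digit != prev:
                code += digit
            prev = digit
        return (code + "000")[:4]

    def phonetic_duplicates(values, workers=4):
        """Encode values in parallel threads and group phonetically equal ones."""
        groups = defaultdict(list)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for value, key in zip(values, pool.map(soundex, values)):
                groups[key].append(value)
        return {k: g for k, g in groups.items() if len(g) > 1}

Here phonetic_duplicates(["Meyer", "Maier", "Smith", "Smyth"]) groups the two spellings of each surname under one key, which is the kind of match a purely exact comparison would miss.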
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast acquisition imaging device with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU), with higher HU values representing denser materials. High-density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This problem of increased HU values due to metal presence is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is significant because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This study evaluates the dosimetric effect of metal artefact reduction algorithms on CT images with severe artefacts, using the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.
Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy imaging method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc and single-arc, using the Volumetric Modulated Arc Therapy (VMAT) technique were designed to avoid or minimize influences from high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning treatment volume (PTV) were compared and the homogeneity index (HI) was calculated.
Results: (1) Without the GSI-based MAR application, a percent error between the mean dose and the absolute dose ranging from 3.4-5.7% per fraction was observed. In contrast, the error decreased to a range of 0.09-2.3% per fraction with the GSI-based MAR algorithm, a percent difference of 1.7-4.2% per fraction between the two cases. (2) A difference of 0.1-3.2% was observed for the maximum dose values, 1.5-10.4% for the minimum doses, and 1.4-1.7% for the mean doses. Homogeneity indexes (HI) of 0.068-0.065 for the dual-energy method and 0.063-0.141 with the projection-based MAR algorithm were also calculated.
Conclusion: (1) The percent error without the GSI-based MAR algorithm may deviate by as much as 5.7%. Such an error undermines the goal of radiation therapy to deliver a precise treatment; the GSI-based MAR algorithm is therefore desirable for its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the dual-energy method almost achieved the desirable null value. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than either with or without the GE MAR algorithm.
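As a small illustration of the quantities reported above, the percent error and a homogeneity index can be computed as below. This is a sketch under an assumption: HI has several definitions in the literature, and the one used here, (Dmax - Dmin) / Dmean, is a common choice, not necessarily the one used in this study.

    def percent_error(calculated_dose, measured_dose):
        """Percent error of a calculated dose against the measured dose."""
        return 100.0 * abs(calculated_dose - measured_dose) / measured_dose

    def homogeneity_index(d_max, d_min, d_mean):
        """One common homogeneity index, (Dmax - Dmin) / Dmean;
        values near zero indicate a uniform dose across the PTV."""
        return (d_max - d_min) / d_mean

Under this definition a perfectly uniform dose gives HI = 0, which is why the near-null HI values for the dual-energy method indicate good homogeneity.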
Abstract:
We investigated the impact of GLUT2 gene inactivation on the regulation of hepatic glucose metabolism during the fed-to-fast transition. In control and GLUT2-null mice, fasting was accompanied by an approximately 10-fold increase in the plasma glucagon to insulin ratio, a similar activation of liver glycogen phosphorylase and inhibition of glycogen synthase, and the same elevation in phosphoenolpyruvate carboxykinase and glucose-6-phosphatase mRNAs. In GLUT2-null mice, however, mobilization of glycogen stores was strongly impaired. This correlated with glucose-6-phosphate (G6P) levels, which remained at the fed values, indicating an important allosteric stimulation of glycogen synthase by G6P. These G6P levels were also accompanied by a paradoxical elevation of the mRNAs for L-pyruvate kinase. Re-expression of GLUT2 in liver corrected the abnormal regulation of glycogen and L-pyruvate kinase gene expression. Interestingly, GLUT2-null livers were hyperplasic, as revealed by a 40% increase in liver mass and a 30% increase in liver DNA content. Together, these data indicate that in the absence of GLUT2, G6P levels cannot decrease during a fasting period. This may be because neosynthesized glucose entering the cytosol is unable to diffuse into the extracellular space and is phosphorylated back to G6P. Because hepatic glucose production is nevertheless quantitatively normal, glucose produced in the endoplasmic reticulum may also be exported out of the cell through an alternative, membrane traffic-based pathway, as previously reported (Guillam, M.-T., Burcelin, R., and Thorens, B. (1998) Proc. Natl. Acad. Sci. U. S. A. 95, 12317-12321). Therefore, in fasting, GLUT2 is not required for quantitatively normal glucose output but is necessary to equilibrate cytosolic glucose with the extracellular space. In the absence of this equilibration, the control of hepatic glucose metabolism by G6P is dominant over that by plasma hormone concentrations.
Abstract:
Myostatin is a potent inhibitor of muscle development. Genetic deletion of myostatin in mice results in increased muscle mass, with muscles often weighing three times their normal values. Contracting muscle transfers tension to skeletal elements through an elaborate connective tissue network, so the connective tissue of skeletal muscle is an integral component of the contractile apparatus. Here we examine the connective tissue architecture of myostatin-null muscle. First, we show that the hypertrophic muscle has decreased connective tissue content compared with wild-type muscle. Second, we show that the hypertrophic muscle fails to show the normal increase in muscle connective tissue content during ageing. Genetic deletion of myostatin therefore results in an increase in contractile elements but a decrease in connective tissue content. We propose a model based on the contractile profile of muscle fibres that reconciles this apparently incompatible tissue composition phenotype.
Abstract:
Considering the Wald, score, and likelihood ratio asymptotic test statistics, we analyze a multivariate null intercept errors-in-variables regression model, in which the explanatory and the response variables are subject to measurement errors and a possible dependency structure between the measurements taken on the same individual is incorporated, representing a longitudinal structure. This model was proposed by Aoki et al. (2003b) and analyzed under the Bayesian approach. In this article, taking the classical approach, we analyze the asymptotic test statistics and present a simulation study comparing the behavior of the three test statistics for different sample sizes, parameter values, and nominal levels of the test. Closed-form expressions for the score function and the Fisher information matrix are also presented. We consider two real numerical illustrations: the odontological data set from Hadgu and Koch (1999) and a quality control data set.
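The errors-in-variables likelihood itself is too involved for a short example, but the relationship among the three statistics is easy to show in the simplest setting. The sketch below computes the Wald, score, and likelihood ratio statistics for H0: lambda = lam0 in an i.i.d. Poisson sample, a stand-in model chosen for brevity, not the paper's model; all three are asymptotically chi-square with one degree of freedom, which is the asymptotic equivalence the simulation study probes at finite sample sizes.

    import numpy as np
    from scipy import stats

    def poisson_trio(x, lam0):
        """Wald, score, and LR statistics for H0: lambda = lam0,
        i.i.d. Poisson data (assumes the sample mean is positive)."""
        x = np.asarray(x, dtype=float)
        n, lam_hat = x.size, x.mean()
        wald = n * (lam_hat - lam0) ** 2 / lam_hat   # Fisher info at the MLE
        score = n * (lam_hat - lam0) ** 2 / lam0     # Fisher info at the null
        lr = 2.0 * n * (lam0 - lam_hat + lam_hat * np.log(lam_hat / lam0))
        return {name: (value, stats.chi2.sf(value, df=1))
                for name, value in [("wald", wald), ("score", score), ("lr", lr)]}

In small samples the three statistics can disagree noticeably, which is precisely the behavior such simulation studies compare across sample sizes, parameter values, and nominal levels.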
Abstract:
We discuss the relation between correlation functions of twist-two large spin operators and expectation values of Wilson loops along light-like trajectories. After presenting some heuristic field-theoretical arguments suggesting this relation, we compute the divergent part of the correlator in the limit of large 't Hooft coupling and large spins, using a semi-classical world-sheet which asymptotically looks like a GKP rotating string. We show that it diverges as expected from the expectation value of a null Wilson loop, namely as $(\ln \mu^{-2})^{2}$, where $\mu$ is a cut-off of the theory. (C) 2012 Elsevier B.V. All rights reserved.
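For orientation, the expected divergence structure can be written schematically. This is a standard-form sketch, not the paper's precise result: the double logarithm of a null (light-like) Wilson loop with cusps is governed by the cusp anomalous dimension, and the numerical prefactor, which depends on conventions, is deliberately left unspecified here.

    \ln \langle W \rangle \;\sim\; -\,\Gamma_{\mathrm{cusp}}(\lambda)\,
        \bigl(\ln \mu^{-2}\bigr)^{2} \;+\; \text{subleading terms}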
Abstract:
Following the measurement procedures recommended by the ICNIRP, this communication proposes a measurement procedure based on the maximum peak values of the equivalent plane wave power density. This procedure has been included in a project being developed in Leganés, Spain, which plans to deploy a real-time RF monitoring system to provide the city with a useful tool for adapting the environmental EM conditions to the newly approved regulations. A first stage consisting of 105 measurement points has been completed, and all measured values are below the regulatory threshold.
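The quantity being monitored has a standard conversion from the measured field strength, sketched below. The helper names and the explicit limit parameter are assumptions for the example; the applicable limit depends on frequency and on the local regulation, so it is passed in rather than hard-coded.

    FREE_SPACE_IMPEDANCE = 376.73  # ohms, impedance of free space

    def equivalent_plane_wave_power_density(e_rms_v_per_m):
        """S_eq = E^2 / Z0: the power density (W/m^2) of a plane wave
        with the measured RMS electric field strength E (V/m)."""
        return e_rms_v_per_m ** 2 / FREE_SPACE_IMPEDANCE

    def all_points_compliant(peak_e_fields_v_per_m, limit_w_per_m2):
        """Check the maximum peak reading at every measurement point
        against the regulatory power-density limit."""
        return all(equivalent_plane_wave_power_density(e) <= limit_w_per_m2
                   for e in peak_e_fields_v_per_m)

For instance, a peak reading of 6 V/m corresponds to about 0.096 W/m^2, well below the 10 W/m^2 ICNIRP reference level that applies in the upper RF bands.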
Abstract:
Good results in evaluating material properties using non-destructive testing (NDT) techniques have been achieved for decades. Several studies of the influence of temperature and moisture content on NDT have reached differing conclusions about their effects. In this study, NDT parameters were measured on the principal structural Spanish sawn timber species, Scots pine (Pinus sylvestris L.). NDT measurements were conducted on 216 specimens of nominal dimensions 20 by 20 by 400 mm. Specimens were divided into several groups and studied at six different temperatures and four different moisture contents. The commercial equipment and techniques applied were Sylvatest Duo (ultrasonic wave technique), Steinkamp BPV (ultrasonic wave technique), and Grindo Sonic Mk5 "Industrial" (vibration analysis technique). Differences in NDT values within specimens at different temperatures and moisture contents were obtained. The main results of this study, together with relationships describing how NDT values change with temperature and moisture content, are presented.
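The "relationships that describe changes in NDT values" suggest a correction of readings back to reference conditions. The sketch below shows the shape such a correction typically takes; the linear form and the coefficient values are hypothetical placeholders, not this study's fitted numbers.

    def normalize_velocity(v_measured, temp_c, mc_percent,
                           ref_temp_c=20.0, ref_mc_percent=12.0,
                           k_temp=-0.0007, k_mc=-0.008):
        """Normalize an ultrasonic-velocity reading to reference conditions
        (here 20 degrees C and 12% moisture content) with a linear model.
        k_temp and k_mc are HYPOTHETICAL per-unit coefficients; in practice
        they are fitted per species and device from data such as this study's."""
        factor = (1.0 + k_temp * (temp_c - ref_temp_c)
                      + k_mc * (mc_percent - ref_mc_percent))
        return v_measured / factor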