966 resultados para SLASHED HALF-NORMAL DISTRIBUTION
Resumo:
The overall purpose of this work is the analysis of design and optimization techniques for geodetic networks observed by conventional (non-satellite) survey methods, together with the development and implementation of a software system capable of helping to define the most reliable and precise geometry as a function of the orography of the terrain where the network must be located. First, the methodology of least-squares adjustment and the propagation of variances will be studied, and its dependence on the geometry adopted by the network will then be analyzed. It will be essential to establish that the redundancy matrix (R) is independent of the observations and depends entirely on the geometry, and to examine the influence of its main diagonal (rii), the redundancy numbers, in order to guarantee the maximum internal reliability of the network. The behavior of the redundancy numbers (rii) in the design of a survey network will also be analyzed: the variation of these values as a function of the geometry, their independence from the observations, and the different design levels depending on the parameters and data that are known. It should be noted that optimizing the network according to these criteria is subject to the constraints that the network points must be accessible and that points connected by observations must be mutually visible, conditions that depend essentially on the terrain relief and on any natural or artificial obstacles. This makes it necessary to include in the analysis and design at least a digital terrain model (DTM); the inclusion of a digital surface model (DSM) would be more useful, but this option is not always possible.
Although the design treatment is based on a two-dimensional system, the possibility of incorporating a digital surface model (DSM) will be studied; when choosing the locations of the network points, this will allow the feasibility of the observations to be assessed in terms of the orography and the elements, both natural and artificial, located on it. Such a system would provide, in principle, an optimal design of a constrained network with respect to internal reliability and the final precision of its points, taking the orography into account, which would amount to solving a "two-and-a-half-dimensional" design problem, provided a digital surface or terrain model were available. Since free DSMs of the areas of interest are currently costly to obtain, the possibility of combining a digital terrain model into the design study will be considered. The activities developed in this thesis are described in this document and form part of research with the following overall objectives: 1. To establish a mathematical model of the observation process of a survey network, considering all the factors involved and their influence on the estimates of the unknowns obtained as a result of the adjustment of the observations. 2. To develop a system that optimizes the results of a survey network by applying design and simulation techniques to the previous model. 3. To present an explicit and rigorous formulation of the parameters that assess the reliability of a survey network and of their relations with its design; achieving this objective rests, besides the search and review of sources, on an intensive effort to unify notation and to construct intermediate steps in the mathematical developments. 4. To elaborate an overall view of the influence of network design on the following factors (a posteriori precision, reliability of the observations, their nature and feasibility, instrumentation, and station set-up methodology) as optimization criteria, in order to frame the specific topic addressed here. 5. To elaborate and program the algorithms needed to develop an application capable of handling the variables set out above in the design and simulation of survey networks, taking the digital surface model into account. The following may be regarded as secondary objectives: to develop the algorithms needed to interrelate the digital terrain model with those of the design itself, and to implement in the application the possibility for the user to vary the coverage criteria of the parameters (normal or Student's t distribution) as well as their degrees of reliability.
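As a minimal sketch of the key quantity above, the redundancy matrix R = I − A(AᵀPA)⁻¹AᵀP can be computed for a small hypothetical network; the design matrix, weights, and network layout below are invented for illustration and are not from the thesis.

```python
import numpy as np

# Hypothetical 1D levelling network: 3 unknown heights, 4 height-difference
# observations (invented for illustration). A is the design matrix and P a
# diagonal weight matrix.
A = np.array([
    [ 1.0,  0.0,  0.0],   # benchmark -> point 1
    [-1.0,  1.0,  0.0],   # point 1 -> point 2
    [ 0.0, -1.0,  1.0],   # point 2 -> point 3
    [ 1.0,  0.0, -1.0],   # closing line point 3 -> point 1
])
P = np.diag([1.0, 1.0, 1.0, 0.5])  # assumed observation weights

# Redundancy matrix R = I - A (A^T P A)^{-1} A^T P: it depends only on the
# geometry (A) and the weighting (P), never on the observed values themselves.
N = A.T @ P @ A
R = np.eye(A.shape[0]) - A @ np.linalg.solve(N, A.T @ P)

r = np.diag(R)               # redundancy numbers r_ii, each in [0, 1]
total_redundancy = r.sum()   # trace(R) = m - n = 4 - 3 = 1
```

Observations with r_ii near 0 are barely controlled by the rest of the network (poor internal reliability), which is why the design stage tries to keep the redundancy numbers balanced.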
Resumo:
In this work, a new family of distributions was proposed, which allows survival data to be modeled when the hazard function has unimodal and U (bathtub) shapes. Modifications of the Weibull, Fréchet, generalized half-normal, log-logistic, and lognormal distributions were also considered. Using uncensored and censored data, maximum likelihood estimators were considered for the proposed model in order to verify the flexibility of the new family. In addition, a location-scale regression model was used to assess the influence of covariates on survival times. A residual analysis based on modified deviance residuals was also carried out. Simulation studies, using different parameter settings, censoring percentages, and sample sizes, were conducted in order to examine the empirical distribution of the martingale-type and modified deviance residuals. To detect influential observations, local influence measures were used, which are diagnostic measures based on small perturbations of the data or of the proposed model. Situations may occur in which the assumption of independence between failure and censoring times does not hold; thus, another objective of this work is to consider an informative censoring mechanism, based on the marginal likelihood, using the log-odd log-logistic Weibull distribution in the modeling. Finally, the methodologies described are applied to real data sets.
Resumo:
Final project of the Integrated Master's Degree in Medicine, Faculdade de Medicina, Universidade de Lisboa, 2014.
Resumo:
The objective was to study beta-amyloid (Abeta) deposition in dementia with Lewy bodies (DLB) with Alzheimer's disease (AD) pathology (DLB/AD). The size frequency distributions of the Abeta deposits were studied and fitted by log-normal and power-law models. The patients were ten clinically and pathologically diagnosed DLB/AD cases. Size distributions had a single peak, were positively skewed, and were similar to those described in AD and Down's syndrome, but had smaller means in DLB/AD than in AD. Log-normal and power-law models were fitted to the size distributions of the classic and diffuse deposits, respectively. Size distributions of the diffuse deposits were fitted by a power-law model, suggesting that aggregation/disaggregation of Abeta was the predominant factor, whereas the classic deposits were fitted by a log-normal distribution, suggesting that surface diffusion was important in the pathogenesis of the classic deposits.
Resumo:
The visual evoked magnetic response CIIm component to a pattern onset stimulus presented half field produced a consistent scalp topography in 15 normal subjects. The major response was seen over the contralateral hemisphere, suggesting a dipole with current flowing away from the medial surface of the brain. Full field responses were more unpredictable. The responses of five subjects were studied to the onset of full, left half and right half checkerboard stimuli of 38 × 27 min arc checks appearing for 200 ms. In two subjects the full field CIIm topography was consistent with the mathematical summation of their relevant half field distributions. The remaining subjects had unpredictable full field topographies, showing little or no relationship to their half or summated half fields. In each of these subjects, a distribution matching that of the summated half field CIIm distribution appeared at an earlier latency than that of the predominant full field waveform peak. Examining the topography of the full and half field responses at 5 ms intervals along the waveform for one such subject showed that the CIIm topography of the right hemisphere developed 10 ms before that of the left hemisphere, and was replaced by the following CIIIm component 20 ms earlier. Hence, the large peak seen in full field results from a combination of the CIIm component of the left hemisphere plus the CIIIm of the right. The earlier peak results from the CIIm generated in both hemispheres, at a latency where both show similar amplitudes. As the relative amplitudes of these two peaks alter with check and field size, topographic studies would be required for accurate CIIm identification. In addition, the CIIm-CIIIm complex lasts for 80 ms in the right hemisphere and 135 ms in the left, suggesting hemispheric specialization in the visual processing of the pattern onset response.
Resumo:
The size frequency distributions of discrete β-amyloid (Aβ) deposits were studied in single sections of the temporal lobe from patients with Alzheimer's disease. The size distributions were unimodal and positively skewed. In 18/25 (72%) tissues examined, a log normal distribution was a good fit to the data. This suggests that the abundances of deposit sizes are distributed randomly on a log scale about a mean value. Three hypotheses were proposed to account for the data: (1) sectioning in a single plane, (2) growth and disappearance of Aβ deposits, and (3) the origin of Aβ deposits from clusters of neuronal cell bodies. Size distributions obtained by serial reconstruction through the tissue were similar to those observed in single sections, which would not support the first hypothesis. The log normal distribution of Aβ deposit size suggests a model in which the rate of growth of a deposit is proportional to its volume. However, mean deposit size and the ratio of large to small deposits were not positively correlated with patient age or disease duration. The frequency distribution of Aβ deposits which were closely associated with 0, 1, 2, 3, or more neuronal cell bodies deviated significantly from a log normal distribution, which would not support the neuronal origin hypothesis. On the basis of the present data, growth and resolution of Aβ deposits would appear to be the most likely explanation for the log normal size distributions.
Resumo:
Abnormally enlarged neurons (AEN) occur in many neurodegenerative diseases. To define AEN more objectively, the frequency distribution of the ratio of greatest cell diameter (CD) to greatest nuclear diameter (ND) was studied in populations of cortical neurons in tissue sections of seven cognitively normal brains. The frequency distribution of CD/ND deviated from a normal distribution in 15 out of 18 populations of neurons studied and hence, the 95th percentile (95P) was used to define a limit of the CD/ND ratio excluding the 5% most extreme observations. The 95P of the CD/ND ratio varied from 2.0 to 3.0 in different cases and regions, and a value of 95P = 3.0 was chosen to define the limit for normal neurons under non-pathological conditions. Based on the 95P = 3.0 criterion, the proportion of AEN with a CD/ND ≥ 3 varied from 2.6% in Alzheimer's disease (AD) to 20.3% in Pick's disease (PiD). The data suggest: (1) that a CD/ND ≥ 3.0 may be a useful morphological criterion for defining AEN, and (2) that AEN were most numerous in PiD and corticobasal degeneration (CBD) and least abundant in AD and in dementia with Lewy bodies (DLB). © 2013 Dustri-Verlag Dr. K. Feistle.
Resumo:
In many of the Statnotes described in this series, the statistical tests assume the data are a random sample from a normal distribution. These Statnotes include most of the familiar statistical tests such as the ‘t’ test, analysis of variance (ANOVA), and Pearson’s correlation coefficient (‘r’). Nevertheless, many variables exhibit a more or less ‘skewed’ distribution. A skewed distribution is asymmetrical and the mean is displaced either to the left (positive skew) or to the right (negative skew). If the mean of the distribution is low, the degree of variation large, and values can only be positive, a positively skewed distribution is usually the result. Many distributions potentially have a low mean and high variance, including the abundance of bacterial species on plants, the latent period of an infectious disease, and the sensitivity of certain fungi to fungicides. These positively skewed distributions are often fitted successfully by a variant of the normal distribution called the log-normal distribution. This Statnote describes fitting the log-normal distribution with reference to two scenarios: (1) the frequency distribution of bacterial numbers isolated from cloths in a domestic environment and (2) the sizes of lichenised ‘areolae’ growing on the hypothallus of Rhizocarpon geographicum (L.) DC.
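The fitting procedure the Statnote describes can be sketched as follows. The counts here are simulated stand-ins (the Statnote's real bacterial data are not reproduced), and the use of scipy's `lognorm` is a tooling assumption, not the Statnote's own software: fitting a log-normal is equivalent to fitting a normal distribution to the logarithms of the data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated stand-in for positively skewed data such as bacterial counts
# per cloth (illustrative only).
counts = rng.lognormal(mean=2.0, sigma=0.8, size=200)

# Fitting a log-normal is the same as fitting a normal to log(x):
log_counts = np.log(counts)
mu_hat, sigma_hat = log_counts.mean(), log_counts.std()

# scipy parameterization: shape s = sigma, scale = exp(mu), loc fixed at 0
s, loc, scale = stats.lognorm.fit(counts, floc=0)

# Goodness of fit of the estimated distribution (Kolmogorov-Smirnov)
ks_stat, p_value = stats.kstest(counts, "lognorm", args=(s, loc, scale))
```

A large KS p-value is consistent with the log-normal describing the data, mirroring the "good fit in 18/25 tissues" style of conclusion drawn in these studies.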
Resumo:
In previous Statnotes, many of the statistical tests described rely on the assumption that the data are a random sample from a normal or Gaussian distribution. These include most of the tests in common usage, such as the ‘t’ test, the various types of analysis of variance (ANOVA), and Pearson’s correlation coefficient (‘r’). In microbiology research, however, not all variables can be assumed to follow a normal distribution. Yeast populations, for example, are a notable feature of freshwater habitats, representatives of over 100 genera having been recorded. Most common are the ‘red yeasts’ such as Rhodotorula, Rhodosporidium, and Sporobolomyces and ‘black yeasts’ such as Aureobasidium pullulans, together with species of Candida. Despite the abundance of genera and species, the overall density of an individual species in freshwater is likely to be low and hence, samples taken from such a population will contain very low numbers of cells. A rare organism living in an aquatic environment may be distributed more or less at random in a volume of water and therefore, samples taken from such an environment may result in counts that are more likely to be distributed according to the Poisson than the normal distribution. The Poisson distribution was named after the French mathematician Siméon Poisson (1781-1840) and has many applications in biology, especially in describing rare or randomly distributed events, e.g., the number of mutations in a given sequence of DNA after exposure to a fixed amount of radiation or the number of cells infected by a virus given a fixed level of exposure. This Statnote describes how to fit the Poisson distribution to counts of yeast cells in samples taken from a freshwater lake.
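The fit the Statnote describes can be sketched as follows; the cell counts are simulated stand-ins for the lake samples, and the rate, sample size, and binning are illustrative assumptions. The maximum-likelihood estimate of the Poisson mean is simply the sample mean, and the fit can be checked with a chi-square test on binned frequencies.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated yeast-cell counts per unit volume of lake water (stand-in data).
cells = rng.poisson(lam=0.7, size=100)

# MLE of the Poisson mean = sample mean.
lam_hat = cells.mean()

# Observed vs expected frequencies for counts 0, 1, 2, and 3-or-more,
# with a chi-square goodness-of-fit test (one extra df lost for lambda).
observed = np.array([np.sum(cells == k) for k in range(3)] + [np.sum(cells >= 3)])
probs = np.append(stats.poisson.pmf(range(3), lam_hat), stats.poisson.sf(2, lam_hat))
expected = probs * cells.size
chi2_stat, p_value = stats.chisquare(observed, expected, ddof=1)
```

Pooling the sparse upper tail into a "3 or more" class keeps the expected counts large enough for the chi-square approximation to be reasonable.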
Resumo:
Using a modified deprivation (or poverty) function, in this paper, we theoretically study the changes in poverty with respect to the 'global' mean and variance of the income distribution using Indian survey data. We show that when income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty, while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is no longer tenable once the poverty index is found to follow a Pareto distribution. Here, although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function there is a critical value of the variance below which poverty decreases with increasing variance, while beyond this value poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219], whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy. © 2006 Elsevier B.V. All rights reserved.
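The log-normal part of the argument can be illustrated numerically with the headcount ratio P(income < poverty line); the helper function and all numbers below are illustrative assumptions, not the paper's modified deprivation function or its Indian survey data.

```python
import numpy as np
from scipy import stats

def headcount_lognormal(mean_income, var_income, poverty_line):
    """Fraction of the population below the poverty line when income is
    log-normal with the given arithmetic mean and variance."""
    # Convert the arithmetic mean/variance to log-scale parameters mu, sigma^2.
    sigma2 = np.log(1.0 + var_income / mean_income**2)
    mu = np.log(mean_income) - 0.5 * sigma2
    return stats.lognorm.cdf(poverty_line, s=np.sqrt(sigma2), scale=np.exp(mu))

# Illustrative numbers only: poverty line fixed at 1.0 income unit.
low_mean  = headcount_lognormal(mean_income=2.0, var_income=1.0, poverty_line=1.0)
high_mean = headcount_lognormal(mean_income=4.0, var_income=1.0, poverty_line=1.0)
high_var  = headcount_lognormal(mean_income=2.0, var_income=4.0, poverty_line=1.0)
```

With the mean above the poverty line, raising the mean lowers the headcount while raising the variance increases it, matching the log-normal behavior the abstract describes.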
Resumo:
Evelina Ilieva Veleva - The Wishart distribution arises in practice as the distribution of the sample covariance matrix of observations from a multivariate normal distribution. Some marginal densities are derived by integrating the density of the Wishart distribution. Necessary and sufficient conditions for the positive definiteness of a matrix are proved, which provide the limits needed for the integration.
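The opening fact, that the Wishart distribution arises as the distribution of the scatter (sample covariance) matrix of multivariate normal observations, can be sketched empirically; `Sigma`, `n`, and the use of scipy's `wishart` are illustrative assumptions, not material from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
p, n = 3, 50                          # dimension and sample size (assumed)
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

# The scatter matrix S = X^T X of n iid N(0, Sigma) rows follows Wishart(n, Sigma).
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = X.T @ X

# S is positive definite (with probability 1 for n > p), so the Wishart
# density can be evaluated at it; E[Wishart(n, Sigma)] = n * Sigma, so S/n
# should be close to Sigma.
w = stats.wishart(df=n, scale=Sigma)
log_density_at_S = w.logpdf(S)
sample_cov_scaled = S / n
```

The positive-definiteness requirement on S is exactly what makes the integration limits in the marginal-density derivations nontrivial.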
Resumo:
The lognormal distribution has abundant applications in various fields. In the literature, most inferences on the two parameters of the lognormal distribution are based on Type-I censored sample data. However, exact measurements are not always attainable, especially when an observation lies below or above the detection limits, and only the numbers of measurements falling into predetermined intervals can be recorded instead: the so-called grouped data. In this paper, we show the existence and uniqueness of the maximum likelihood estimators of the two parameters of the underlying lognormal distribution with Type-I censored data and grouped data. The proof is first established for the normal distribution and extended to the lognormal distribution through the invariance property. The results are applied to estimate the median and mean of the lognormal population.
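A hedged sketch of the Type-I censored likelihood discussed above: on the log scale the problem reduces to a censored normal likelihood, with exact observations contributing the density and censored ones the tail probability. The data, detection limit, and optimizer choice are assumptions for illustration, not the paper's own derivation.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
# Simulated lognormal data with a fixed (Type-I) upper detection limit.
x = rng.lognormal(mean=1.0, sigma=0.5, size=300)
limit = 5.0
censored = x > limit                    # only "above the limit" is recorded
y = np.where(censored, limit, x)

def neg_log_lik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)           # parameterize to keep sigma > 0
    z = (np.log(y) - mu) / sigma
    # Exact observations contribute the lognormal density phi(z)/(sigma*y);
    # censored observations contribute the tail mass P(X > limit).
    ll_obs = stats.norm.logpdf(z[~censored]) - np.log(sigma) - np.log(y[~censored])
    ll_cen = stats.norm.logsf(z[censored])
    return -(ll_obs.sum() + ll_cen.sum())

res = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
median_hat = np.exp(mu_hat)             # lognormal median = exp(mu)
```

By the invariance property the paper invokes, the MLE of the median is obtained directly as exp of the MLE of mu.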
Resumo:
Suppose two or more variables are jointly normally distributed. If there is a common relationship between these variables, it is important to quantify it by a parameter, the correlation coefficient, which measures its strength; this parameter can be used to develop a prediction equation and, ultimately, to draw testable conclusions about the parent population. This research focused on the correlation coefficient ρ for the bivariate and trivariate normal distributions when equal variances and equal covariances are assumed. In particular, we derived the maximum likelihood estimators (MLE) of the distribution parameters, assuming all of them are unknown, and studied the properties and asymptotic distribution of the resulting estimator of ρ. Having shown this asymptotic normality, we were able to construct confidence intervals for the correlation coefficient ρ and to test hypotheses about ρ. With a series of simulations, the performance of the new estimators was studied and compared with that of estimators already in the literature. The results indicated that the MLE performs as well as or better than the others.
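Under the equal-variance bivariate normal model described above, the MLE of ρ has the closed form 2Sxy/(Sxx + Syy). The sketch below computes it on simulated data and attaches an approximate confidence interval via the standard Fisher z-transform, a generic large-sample device assumed here for illustration, not necessarily the asymptotic construction derived in the research.

```python
import numpy as np

rng = np.random.default_rng(5)
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])   # equal variances, equal covariances
xy = rng.multivariate_normal([0.0, 0.0], cov, size=400)
x = xy[:, 0] - xy[:, 0].mean()
y = xy[:, 1] - xy[:, 1].mean()

# MLE of rho under the equal-variance structure: pooled cross-product over
# the pooled sums of squares.
rho_mle = 2.0 * np.sum(x * y) / (np.sum(x**2) + np.sum(y**2))

# Approximate 95% CI via the Fisher z-transform (assumed large-sample pivot).
z = np.arctanh(rho_mle)
se = 1.0 / np.sqrt(len(xy) - 3)
ci_low, ci_high = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
```

Replacing the two separate variance estimates with their pooled average is what distinguishes this estimator from the ordinary sample correlation.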
Resumo:
The aim of this study was to evaluate the degree of conversion (DC) and the cytotoxicity of photo-cured experimental resin composites containing 4-(N,N-dimethylamino)phenethyl alcohol (DMPOH) combined with camphorquinone (CQ), compared with ethyl 4-(dimethylamino)benzoate (EDAB). The resin composites were mechanically blended using 35 wt% of an organic matrix and 65 wt% of filler loading. To this matrix were added 0.2 wt% of CQ and 0.2 wt% of one of the reducing agents tested. Samples of 5×1 mm (n=5) were first submitted to DC measurement and then pre-immersed in complete culture medium without 10% (v/v) bovine serum for 1 h or 24 h at 37 °C in a humidified incubator with 5% CO2 and 95% humidity, to evaluate the cytotoxic effects of the experimental resin composites on immortalized human keratinocytes using the MTT assay. Because the data were not normally distributed, cytotoxicity was analyzed with the nonparametric Kruskal-Wallis test, while DC was analyzed with one-way analysis of variance. For multiple comparisons, the cytotoxicity analyses were submitted to the Student-Newman-Keuls test and the DC analysis to Tukey's HSD post-hoc test (α=0.05). No significant difference was found between the DC of DMPOH (49.9%) and EDAB (50.7%). The 1 h outcomes showed no significant difference in cell viability between EDAB (99.26%), DMPOH (94.85%) and the control group (100%). After 24 h, no significant difference was found between EDAB (48.44%) and DMPOH (38.06%), but a significant difference was found compared with the control group (p<0.05). DMPOH presented DC and cytotoxicity similar to EDAB when associated with CQ.
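The statistical decision path described (check normality, then use Kruskal-Wallis for non-normal data versus one-way ANOVA) can be sketched as follows; the group values are simulated stand-ins, since only summary percentages appear in the abstract, and the Shapiro-Wilk screen is an assumed, common way of checking the normality condition.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Illustrative cell-viability percentages for three groups of n=5 samples
# (simulated; not the study's measurements).
control = rng.normal(100.0, 5.0, size=5)
edab = rng.normal(48.0, 8.0, size=5)
dmpoh = rng.normal(38.0, 8.0, size=5)
groups = (control, edab, dmpoh)

# If any group fails a normality check, fall back to Kruskal-Wallis,
# the nonparametric analogue of one-way ANOVA used in the study.
normal_ok = all(stats.shapiro(g).pvalue > 0.05 for g in groups)
if normal_ok:
    stat, p_value = stats.f_oneway(*groups)      # one-way ANOVA
else:
    stat, p_value = stats.kruskal(*groups)       # Kruskal-Wallis H test
```

A significant omnibus result would then be followed by pairwise post-hoc comparisons, as the study does with Student-Newman-Keuls and Tukey's HSD.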
Resumo:
Universidade Estadual de Campinas. Faculdade de Educação Física