965 results for Cumulative Distribution Function
Abstract:
A method to estimate an extreme quantile that requires no distributional assumptions is presented. The approach is based on transformed kernel estimation of the cumulative distribution function (cdf). The proposed method consists of a double-transformation kernel estimation. We derive optimal bandwidth selection methods that have a direct expression for the smoothing parameter. The bandwidth adapts to the given quantile level. The procedure is useful for large data sets and improves quantile estimation relative to other methods for heavy-tailed distributions. Implementation is straightforward, and R programs are available.
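To make the quantile-inversion idea concrete, here is a minimal Python sketch of a plain (untransformed) kernel estimate of the cdf, inverted on a grid; the rule-of-thumb bandwidth stands in for the optimal, quantile-adaptive selectors derived in the paper, and the Pareto sample is only illustrative:

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(x, data, h):
    # Smoothed cdf: average of Gaussian kernel cdfs centred at the data.
    return norm.cdf((np.asarray(x) - data[:, None]) / h).mean(axis=0)

def kernel_quantile(p, data, h, grid_size=2048):
    # Invert the smoothed cdf on a grid to approximate the p-quantile.
    grid = np.linspace(data.min() - 3 * h, data.max() + 3 * h, grid_size)
    return np.interp(p, kernel_cdf(grid, data, h), grid)

rng = np.random.default_rng(0)
sample = 1.0 + rng.pareto(2.5, size=5000)          # heavy-tailed toy data
h = 1.06 * sample.std() * sample.size ** (-1 / 5)  # rule-of-thumb bandwidth
print(kernel_quantile(0.995, sample, h))           # extreme quantile estimate
```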
Abstract:
The use of various IP-based services is constantly increasing while users are becoming ever more mobile. For this reason, the IP protocol will inevitably enter mobile networks as well. This Master's thesis studies the problems that mobility introduces to IP multicasting and simulates them using the Network Simulator. The main focus is on the problem caused by the multicast group joining delay. This problem is simulated in order to determine how the delay, the rate at which mobile users arrive at the service, and the timer settings of the Scalable Reliable Multicast (SRM) protocol affect the number of repair request packets and, consequently, the number of retransmissions performed. To examine the influence of the different parameters, simulation results with varied parameters are presented using CDF curves. According to the results, the most significant factors for the retransmission requests are the protocol timer values and the desired level of service, while the delay is of minor importance. Finally, the suitability of the SRM protocol for mobile networks is examined and alternatives for improving its operation are discussed.
Abstract:
A statistical indentation method has been employed to study the hardness of fire-refined high-conductivity copper, using the nanoindentation technique. The Joslin and Oliver approach was used with the aim of separating the hardness (H) contribution of the copper matrix from that of inclusions and grain boundaries. This approach relies on a large array of imprints (around 400 indentations) performed at an indentation depth of 150 nm. A statistical study using a cumulative distribution function fit and simulated Gaussian distributions shows that H for each phase can be extracted when the indentation depth is much smaller than the size of the secondary phases. It is found that the thermal treatment produces a hardness increase, due to the partial re-dissolution of the inclusions (mainly Pb and Sn) in the matrix.
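The phase-separation step can be illustrated with a hedged sketch: fitting a two-component Gaussian mixture to hypothetical hardness values and reading off per-phase means (the study's actual fit uses a cumulative distribution function together with simulated Gaussian distributions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical hardness values (GPa) from a large indentation grid;
# in the study, ~400 imprints at 150 nm depth.
rng = np.random.default_rng(1)
H = np.concatenate([rng.normal(1.1, 0.08, 300),   # copper matrix
                    rng.normal(1.8, 0.15, 100)])  # inclusions / boundaries

# Fit a two-component Gaussian mixture to separate the phase contributions.
gm = GaussianMixture(n_components=2, random_state=0).fit(H.reshape(-1, 1))
for mean, var, w in zip(gm.means_.ravel(), gm.covariances_.ravel(), gm.weights_):
    print(f"phase H = {mean:.2f} GPa (sd {var**0.5:.2f}), fraction {w:.2f}")
```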
Abstract:
The continuous ranked probability score (CRPS) is a frequently used scoring rule. In contrast with many other scoring rules, the CRPS evaluates cumulative distribution functions. An ensemble of forecasts can easily be converted into a piecewise constant cumulative distribution function with steps at the ensemble members. This renders the CRPS a convenient scoring rule for the evaluation of ‘raw’ ensembles, obviating the need for sophisticated ensemble model output statistics or dressing methods prior to evaluation. In this article, a relation between the CRPS and the quantile score is established. The evaluation of ‘raw’ ensembles using the CRPS is discussed in this light. It is shown that latent in this evaluation is an interpretation of the ensemble as quantiles, but with non-uniform levels. This needs to be taken into account if the ensemble is evaluated further, for example with rank histograms.
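For a raw ensemble, the CRPS of the piecewise constant cdf has a well-known closed form, shown in the sketch below with a toy ensemble (the decomposition into quantile scores discussed in the article is not reproduced here):

```python
import numpy as np

def crps_ensemble(members, obs):
    # CRPS of the step cdf with jumps at the ensemble members:
    # crps = E|X - y| - 0.5 * E|X - X'| with X, X' drawn from the ensemble.
    members = np.asarray(members, dtype=float)
    term1 = np.abs(members - obs).mean()
    term2 = 0.5 * np.abs(members[:, None] - members[None, :]).mean()
    return term1 - term2

print(crps_ensemble([0.8, 1.1, 1.4, 2.0], obs=1.2))
```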
Abstract:
In this paper, a simple relation between the Leimkuhler curve and the mean residual life is established. The result is illustrated with several models commonly used in informetrics, such as exponential, Pareto and lognormal. Finally, relationships with some other reliability concepts are also presented.
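With the standard informetric definitions, a relation of this kind follows by a change of variable and integration by parts; the sketch below is consistent with those definitions but is not necessarily the paper's exact formulation:

```latex
% For a nonnegative X with cdf F, survival S = 1 - F and finite mean \mu,
% the Leimkuhler curve and the mean residual life are
K(p) = \frac{1}{\mu}\int_0^p F^{-1}(1-u)\,du, \qquad
e(x) = \mathbb{E}[X - x \mid X > x] = \frac{1}{S(x)}\int_x^\infty S(t)\,dt .
% Substituting p = S(x) and integrating by parts yields the link
K\bigl(S(x)\bigr) = \frac{S(x)\,\bigl(x + e(x)\bigr)}{\mu}.
```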
Abstract:
The aim of this thesis is to evaluate the quality of public spending on education in the municipalities of the Metropolitan Region of Natal (RMN) in 2009, using two theories: the Theory of Welfare (Welfare State) and the Public Choice Theory (TEP), both important for understanding the relationship between education and economics. The study also uses principles of microeconomics and public-sector economics to get a better idea of the role of education in the economy and in society. It describes the development of educational policy in Brazil from the Federal Constitution of 1988 to 2010, following the major changes in basic education during each government. The characteristics of the RMN municipalities are illustrated with socioeconomic indicators, while educational indicators are used to characterize each municipality with regard to education. The model used in this study was developed by Bertê, Brunet and Borges; the data were collected from the School Census 2009 and the Brazil Exam 2009 and processed quantitatively in the Information System on Public Budgets in Education (SIOPE), using the standardized score of the normal cumulative distribution function. The quality of public spending on education is the result of the relation between the performance indicator and the expense indicator. For the qualitative analysis of the results, the criteria of efficiency, efficacy and effectiveness were used. The study found that municipalities with higher expenses showed a worse quality of spending and failed to convert the expenditure incurred into performance, thus confirming ineffectiveness.
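A plausible form of the standardized score of the normal cumulative distribution function, stated here as an assumption rather than the authors' exact specification, is:

```latex
% For an indicator x_i with sample mean \bar{x} and standard deviation s
% across municipalities, the standardized score maps each value into (0,1):
z_i = \frac{x_i - \bar{x}}{s}, \qquad \text{score}_i = \Phi(z_i),
% where \Phi is the standard normal cumulative distribution function.
```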
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
In the instrumental records of daily precipitation, we often encounter one or more periods in which values below some threshold were not registered. Such periods, besides lacking small values, also have a large number of dry days. Their cumulative distribution function is shifted to the right relative to that of other portions of the record having more reliable observations. Such problems are examined in this work, based mostly on the two-sample Kolmogorov–Smirnov (KS) test, in which the portion of the series with the larger number of dry days is compared with the portion with the smaller number of dry days. Another relatively common problem in daily rainfall data is the prevalence of integers, either throughout the period of record or in some part of it, likely resulting from truncation during data compilation prior to archiving or from coarse rounding of daily readings by observers. This problem is identified by simple calculation of the proportion of integers in the series, taking the expected proportion as 10%. The above two procedures were applied to the daily rainfall data sets from the European Climate Assessment (ECA), Southeast Asian Climate Assessment (SACA), and Brazilian Water Resources Agency (BRA). Taking a KS statistic D > 0.15 with a corresponding p-value < 0.001 as the condition for classifying a given series as suspicious, the proportions of the ECA, SACA, and BRA series falling into this category are, respectively, 34.5%, 54.3%, and 62.5%. With respect to the coarse-rounding problem, the proportions of series exceeding twice the 10% reference level are 3%, 60%, and 43% for the ECA, SACA, and BRA data sets, respectively. A simple way to visualize the two problems addressed here is to plot the time series of daily rainfall over a limited range, for instance 0–10 mm day⁻¹.
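The two screening rules translate directly into code; the sketch below uses scipy's two-sample KS test and the integer-proportion check, with synthetic data and an assumed split of the record into its two portions:

```python
import numpy as np
from scipy.stats import ks_2samp

def screen_series(part_dry, part_wet, rain):
    # part_dry / part_wet: wet-day amounts from the portions with more /
    # fewer dry days (how the record is split is an assumption here).
    ks = ks_2samp(part_dry, part_wet)
    suspicious = ks.statistic > 0.15 and ks.pvalue < 0.001

    wet = rain[rain > 0]
    prop_int = np.mean(wet == np.round(wet))  # expected ~10% of wet days
    coarse = prop_int > 2 * 0.10              # exceeds twice the reference
    return suspicious, coarse

rng = np.random.default_rng(2)
a, b = rng.gamma(0.6, 8, 2000), rng.gamma(0.8, 8, 2000)
series = np.where(rng.random(4000) < 0.6, 0.0, rng.gamma(0.7, 8, 4000))
print(screen_series(a, b, series))
```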
Abstract:
Background: The recent development of semi-automated techniques for staining and analyzing flow cytometry samples has presented new challenges. Quality control and quality assessment are critical when developing new high-throughput technologies and their associated information services. Our experience suggests that significant bottlenecks remain in the development of high-throughput flow cytometry methods for data analysis and display. In particular, data quality control and quality assessment are crucial steps in processing and analyzing high-throughput flow cytometry data. Methods: We propose a variety of graphical exploratory data analytic tools for exploring ungated flow cytometry data. We have implemented a number of specialized functions and methods in the Bioconductor package rflowcyt. We demonstrate the use of these approaches by investigating two independent sets of high-throughput flow cytometry data. Results: We found that graphical representations can reveal substantial non-biological differences in samples. Empirical cumulative distribution function (ECDF) plots and summary scatterplots were especially useful for the rapid identification of problems not identified by manual review. Conclusions: Graphical exploratory data analytic tools are a quick and useful means of assessing data quality. We propose that the described visualizations be used as quality assessment tools and, where possible, for quality control.
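rflowcyt is an R/Bioconductor package; for illustration only, the ECDF-overlay idea it supports can be sketched in Python with hypothetical per-well intensities, where a problem well separates visibly from the rest:

```python
import numpy as np
import matplotlib.pyplot as plt

def ecdf(values):
    # Empirical cdf: sorted values against their plotting positions.
    x = np.sort(values)
    return x, np.arange(1, x.size + 1) / x.size

# Hypothetical fluorescence intensities per well; overlaying the ECDFs
# makes outlier wells (e.g. staining failures) stand out immediately.
rng = np.random.default_rng(3)
wells = {f"A{i}": rng.lognormal(3.0 + 0.02 * i, 0.5, 1000) for i in range(1, 5)}
wells["A5"] = rng.lognormal(2.0, 0.9, 1000)   # simulated problem well

for name, data in wells.items():
    x, F = ecdf(data)
    plt.step(x, F, where="post", label=name)
plt.xscale("log"); plt.xlabel("intensity"); plt.ylabel("ECDF"); plt.legend()
plt.show()
```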
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. The resulting "rotated" residuals are used to construct an empirical cumulative distribution function and pointwise standard errors. The theoretical framework, including conditions and asymptotic properties, involves technical details that are motivated by Lange and Ryan (1989), Pierce (1982), and Randles (1982). Our method appears to work well in a variety of circumstances, including models having independent units of sampling (clustered data) and models in which all observations are correlated (e.g., a single time series). Our method can produce satisfactory results even for models that do not satisfy all of the technical conditions stated in our theory.
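A minimal sketch of the rotation, assuming the marginal covariance is known rather than estimated: with V = LL', premultiplying the residual vector by L⁻¹ yields approximately iid entries whose ECDF can be compared with the model's standard distribution:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular
from scipy.stats import norm

def rotated_residuals(resid, V_hat):
    # With V_hat = L L', the vector L^{-1} resid has unit covariance,
    # so its entries behave like iid draws under a correct model.
    L = cholesky(V_hat, lower=True)
    return solve_triangular(L, resid, lower=True)

# Toy check: AR(1)-correlated residuals are de-correlated by the rotation.
rng = np.random.default_rng(4)
n, rho = 200, 0.7
V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
e = cholesky(V, lower=True) @ rng.standard_normal(n)
r = rotated_residuals(e, V)
x = np.sort(r)
ecdf = np.arange(1, n + 1) / n
print(np.max(np.abs(ecdf - norm.cdf(x))))  # small => close to standard normal
```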
Abstract:
The episodic occurrence of debris flow events in response to stochastic precipitation and wildfire events makes hazard prediction challenging. Previous work has shown that frequency-magnitude distributions of non-fire-related debris flows follow a power law, but less is known about the distribution of post-fire debris flows. As a first step in parameterizing hazard models, we use frequency-magnitude distributions and cumulative distribution functions to compare volumes of post-fire debris flows to non-fire-related debris flows. Due to the large number of events required to parameterize frequency-magnitude distributions, and the relatively small number of post-fire event magnitudes recorded in the literature, we collected data on 73 recent post-fire events in the field. The resulting catalog of 988 debris flow events is presented as an appendix to this article. We found that the empirical cumulative distribution function of post-fire debris flow volumes is composed of smaller events than that of non-fire-related debris flows. In addition, the slope of the frequency-magnitude distribution of post-fire debris flows is steeper than that of non-fire-related debris flows, evidence that differences in the post-fire environment tend to produce a higher proportion of small events. We propose two possible explanations: 1) post-fire events occur on shorter return intervals than debris flows in similar basins that do not experience fire, causing their distribution to shift toward smaller events due to limitations in sediment supply, or 2) fire causes changes in resisting and driving forces on a package of sediment, such that a smaller perturbation of the system is required in order for a debris flow to occur, resulting in smaller event volumes.
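For frequency-magnitude comparisons of this kind, the power-law slope is often estimated by maximum likelihood; the sketch below uses the standard continuous (Hill-type) estimator on hypothetical volume catalogs and is not the authors' fitting procedure:

```python
import numpy as np

def powerlaw_alpha(volumes, v_min):
    # Continuous MLE for p(v) ~ v^(-alpha), v >= v_min (Hill-type estimator).
    v = np.asarray(volumes, dtype=float)
    v = v[v >= v_min]
    return 1.0 + v.size / np.log(v / v_min).sum()

# Hypothetical volume catalogs (m^3); a larger alpha means a steeper
# frequency-magnitude slope, i.e. a higher proportion of small events.
rng = np.random.default_rng(5)
non_fire = 100.0 * rng.random(500) ** (-1 / 1.1)   # Pareto, alpha ~ 2.1
post_fire = 100.0 * rng.random(500) ** (-1 / 1.6)  # steeper, alpha ~ 2.6
print(powerlaw_alpha(non_fire, 100.0), powerlaw_alpha(post_fire, 100.0))
```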
Abstract:
In this thesis, a procedure to evaluate the mechanical strength of crystalline silicon wafers is proposed and applied to several cases relevant to industry. The photovoltaic industry is mainly based on crystalline silicon modules. These modules are composed of solar cells, which are built on silicon wafers. To reduce the cost of solar modules, a clear tendency to use thinner wafers has been observed in recent years. Since the stiffness varies with thickness, handling techniques have had to be modified in order to keep the breakage rate low. To this end, the mechanical strength must be characterized correctly. In the first part of the thesis, silicon wafers are described, including the different ways to produce them and the mechanical properties of interest. The influence of the crystallographic structure on the strength and on the behaviour (the silicon crystal is anisotropic) is shown. In addition, a method to characterize the mechanical strength is proposed. This probabilistic procedure is based on design methods for brittle materials: the strength is characterized by the three parameters of the Weibull cumulative distribution function (cdf). The proposed method combines test campaigns, finite element simulation of the tests, and an iterative fitting algorithm to estimate the parameters of the Weibull cdf. In the second part of the thesis, the different types of tests usually employed with these samples are described. Moreover, different finite element models for the simulation of each test are compared with regard to the information supplied by each model and the computation times. Finally, the characterization method is applied in three practical applications. The first consists in comparing the mechanical strength of silicon wafers as a function of the ingot growth method: conventional monocrystalline wafers grown by the Czochralski method and multicrystalline wafers are compared with the new quasi-monocrystalline substrates obtained by casting. The second application estimates the depth of the cracks caused by the ingot sawing process. An indirect approach is used: several sets of silicon wafers are subjected to chemical etchings of different duration. The etching reduces the thickness of the wafers, removing the most damaged layers. The strength of each set is obtained by the proposed method, and the comparison yields an estimate of the crack depth. Finally, the procedure is applied to wafers with very particular characteristics: wafers prepared for back-contact cells of the EWT type. These wafers are drilled with thousands of tiny holes, which weaken them considerably. The strength of the drilled wafers is obtained and compared with that of a reference set without holes. In addition, a simplified approach based on a stress intensification surface is proposed.
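A hedged sketch of the final fitting step, using scipy's three-parameter Weibull fit on hypothetical fracture stresses (the thesis couples the fit with finite element simulation and an iterative algorithm, which is not reproduced here):

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical fracture stresses (MPa) from a wafer test campaign.
rng = np.random.default_rng(6)
stresses = weibull_min.rvs(c=3.5, loc=60.0, scale=120.0, size=80,
                           random_state=rng)

# Three-parameter Weibull fit: shape m, threshold (location) and scale.
m, loc, scale = weibull_min.fit(stresses)
print(f"shape m = {m:.2f}, threshold = {loc:.1f} MPa, scale = {scale:.1f} MPa")

# Failure probability at a given stress level via the fitted cdf.
print(weibull_min.cdf(150.0, m, loc=loc, scale=scale))
```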
Abstract:
Glass cannot be treated as a conventional structural material from the point of view of mechanical strength. Its brittle nature, together with the inevitable presence of micro-cracks on its surface and the consequences of eventual failures, demands rigorous methods to achieve a safe design for glass elements, whose strength at failure depends strongly on the integrity of the surface, the element size, and the loading pattern. Thus, the design must rely on probabilistic concepts and fracture mechanics criteria, replacing the conventional glass design based on charts derived from experimental programmes and the subsequent application of the admissible-stress concept. In order to analyze and compare the strength characteristics of tempered, heat-strengthened, and annealed glass, a large experimental programme based on four-point bending and small-area coaxial double-ring tests was performed, and the results were fitted using a three-parameter Weibull cumulative distribution function.
Abstract:
Illumination with light-emitting diodes (LEDs) is increasingly replacing traditional light sources. LEDs provide advantages in efficiency, energy consumption, design, size, and light quality. For more than 50 years, researchers have been working on LED improvements, and their relevance for illumination is rapidly increasing. This thesis is focused on one important field of application: spotlights. They are used to focus light on defined areas and outstanding objects under professional conditions. This high-performance illumination requires a defined light quality, including tunable correlated color temperatures (CCT), a high color rendering index (CRI), high efficiencies, and bright, vivid colors. Several differently colored chips (red, blue, phosphor-converted) are combined in the LED package to meet a spectral power distribution with high CRI; tunable white and several light colors, together with secondary optics, are used to collimate the light into the desired narrow spots with a defined angle of emission. The combination of a multi-color LED source and optical elements may cause chromatic inhomogeneities in the spatial and angular light distribution, which need to be resolved in the optical design. However, there is no need for perfect uniformity in the spot, owing to thresholds in the visual perception of the human eye. Therefore, a mathematical description of the color uniformity level with regard to visual perception is required. This thesis is organized in seven chapters. After an initial chapter presenting the motivation that has guided the research, Chapter 2 introduces the scientific basics of color uniformity in spotlights, including: the applied color space CIELAB, visual color perception, spotlight design fundamentals with regard to light engines and nonimaging optics, and the state of the art in the evaluation of color uniformity in the far field of spotlights. Chapter 3 develops different methods for the mathematical description of the spatial color distribution in a defined area: the maximum color difference, the average color deviation, the gradient of the spatial color distribution, and the radial and axial smoothness. Each function refers to different visual influencing factors and requires a different handling of the data, along with weighting functions that pre- and post-process the simulated or measured data: noise reduction, luminance cutoff, luminance weighting, the contrast sensitivity function, and the cumulative distribution function. In Chapter 4, the merit function Usl for the estimation of the perceived color uniformity in spotlights is derived. It is based on the results of two sets of human factor experiments performed to evaluate the visual perception of typical spotlight patterns by subjects. The first experiment yielded a perceived rank order of the spotlights, which was used to correlate the mathematical descriptions of the basic and weighted functions of the spatial color distribution, leading to the Usl function. The second experiment tested the perception of spotlights under varied environmental conditions, with the objective of providing an absolute scale for Usl, so that the subjective personal opinion of individuals could be replaced by a standardized merit function. The validation of the Usl function, concerning the application range and conditions as well as limitations and restrictions, is carried out in Chapter 5: measured and simulated data of several optical systems are compared, and fields of application, validations, and restrictions of the function are discussed. Chapter 6 presents spotlight system design and optimization. An evaluation analyzes reflector-based and TIR lens systems; the simulated optical systems are compared in terms of color uniformity Usl, sensitivity to colored shadows, efficiency, and peak luminous intensity. No single system performed best in all categories, and excellent color uniformity could be reached by two different system assemblies. Finally, Chapter 7 summarizes the conclusions of the thesis and gives an outlook on further research topics.
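The basic descriptors of Chapter 3 can be sketched with illustrative definitions (not the thesis' exact formulas) over a synthetic spot image in CIELAB chromaticity:

```python
import numpy as np

def uniformity_metrics(a, b):
    # a, b: per-pixel CIELAB chromaticity planes of the spot image.
    da = a - a.mean()
    db = b - b.mean()
    delta = np.hypot(da, db)          # chromatic distance to the mean color
    max_diff = delta.max()            # maximum color difference
    avg_dev = delta.mean()            # average color deviation
    gy, gx = np.gradient(delta)
    grad = np.hypot(gx, gy).mean()    # mean spatial gradient
    return max_diff, avg_dev, grad

# Hypothetical 64x64 spot with a mild radial yellow shift.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
r = np.hypot(x, y)
print(uniformity_metrics(a=0.5 * r, b=2.0 * r**2))
```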
Abstract:
Electronic spray controllers aim to minimize the variation of the input application rates in the field. They are part of a control system and compensate for variations in the travel speed of the sprayer during operation. Several types of electronic spray controllers are available on the market, and one way to select the most efficient one under the same conditions, that is, within the same control system, is to quantify the response time of the system for each specific controller. The objective of this work was to estimate the response times to speed changes of an electronic spray system via nonlinear regression models obtained as sums of linear regressions weighted by cumulative distribution functions. The data were obtained at the Application Technology Laboratory, located in the Department of Biosystems Engineering of the Escola Superior de Agricultura "Luiz de Queiroz", Universidade de São Paulo, in Piracicaba, São Paulo, Brazil. The models used were the logistic and Gompertz models, which result from a weighted sum of two constant linear regressions with weights given by the logistic and Gumbel cumulative distribution functions, respectively. Reparametrizations were proposed to include the response time of the control system in the models, in order to improve their interpretation and statistical inference. A biphasic nonlinear regression model was also proposed, resulting from a weighted sum of constant linear regressions with weights given by the exponential hyperbolic-sine Cauchy cumulative distribution function. A simulation study using the Monte Carlo method was carried out to evaluate the maximum likelihood estimates of the model parameters.
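A minimal sketch of the logistic variant, with assumed parameter names: the response is a weighted sum of two constant levels with the weight given by the logistic cdf, fitted here by least squares; a response time can then be read off the fitted transition:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_transition(t, theta1, theta2, tau, s):
    # Weighted sum of two constant regressions, weight = logistic cdf;
    # tau locates the transition between the two flow levels.
    F = 1.0 / (1.0 + np.exp(-(t - tau) / s))
    return theta1 * (1 - F) + theta2 * F

# Hypothetical flow readings around a step change in sprayer speed.
rng = np.random.default_rng(7)
t = np.linspace(0, 20, 200)
y = logistic_transition(t, 2.0, 3.5, 8.0, 0.6) + rng.normal(0, 0.05, t.size)

popt, _ = curve_fit(logistic_transition, t, y, p0=[y[0], y[-1], t.mean(), 1.0])
print(dict(zip(["theta1", "theta2", "tau", "s"], popt)))
# One possible response-time definition: time to traverse from 5% to 95%
# of the transition, which for the logistic cdf equals 2 * s * ln(19).
```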