252 results for histogram


Relevance: 10.00%

Abstract:

The application of forecast ensembles to probabilistic weather prediction has spurred considerable interest in their evaluation. Such ensembles are commonly interpreted as Monte Carlo ensembles, meaning that the ensemble members are perceived as random draws from a distribution. Under this interpretation, a reasonable property to ask for is statistical consistency, which demands that the ensemble members and the verification behave like draws from the same distribution. A widely used technique to assess the statistical consistency of a historical dataset is the rank histogram, which tallies how often the verification falls between each pair of adjacent members of the ordered ensemble. Ensemble evaluation is rendered more specific by stratification, whereby ensembles that satisfy a certain condition (e.g., a certain meteorological regime) are evaluated separately. Fundamental relationships between Monte Carlo ensembles, their rank histograms, and random sampling from the probability simplex according to the Dirichlet distribution are pointed out. Furthermore, the possible benefits and complications of ensemble stratification are discussed. The main conclusion is that a stratified Monte Carlo ensemble might appear inconsistent with the verification even though the original (unstratified) ensemble is consistent; the apparent inconsistency is merely a result of stratification. Stratified rank histograms are thus not necessarily flat. This result is demonstrated by perfect-ensemble simulations and supplemented by mathematical arguments. Possible methods to avoid or remove the artifacts that stratification induces in the rank histogram are suggested.
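
As a concrete illustration of the technique described above, here is a minimal sketch of a rank histogram computation (the function name and the synthetic Gaussian data are illustrative, not from the paper). For a consistent ensemble, the verification's rank among the ordered members is uniformly distributed, so the histogram should come out approximately flat:

    import numpy as np

    def rank_histogram(ensemble, verification, seed=None):
        """Counts of the verification's rank among the ordered members.

        ensemble: (n_cases, n_members) array; verification: (n_cases,).
        Returns a histogram over the n_members + 1 possible ranks.
        """
        rng = np.random.default_rng(seed)
        n_members = ensemble.shape[1]
        below = np.sum(ensemble < verification[:, None], axis=1)
        ties = np.sum(ensemble == verification[:, None], axis=1)
        # Break ties at random so a consistent ensemble stays uniform.
        ranks = below + rng.integers(0, ties + 1)
        return np.bincount(ranks, minlength=n_members + 1)

    # A perfect (consistent) ensemble: members and verification drawn
    # from the same distribution give a near-flat histogram.
    rng = np.random.default_rng(0)
    counts = rank_histogram(rng.normal(size=(10000, 9)),
                            rng.normal(size=10000), seed=1)

Stratifying the cases by some condition and drawing one such histogram per stratum reproduces the setting studied here; as the abstract warns, those stratified histograms need not be flat even for a consistent ensemble.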

Relevance: 10.00%

Abstract:

A comparison of the point forecasts and the probability distributions of inflation and output growth made by individual respondents to the US Survey of Professional Forecasters indicates that the two sets of forecasts are sometimes inconsistent. We evaluate a number of possible explanations, and find that not all forecasters update their histogram forecasts as new information arrives. This is supported by the finding that the point forecasts are more accurate than the histograms in terms of first-moment prediction.

Relevance: 10.00%

Abstract:

Recent literature has suggested that macroeconomic forecasters may have asymmetric loss functions, and that there may be heterogeneity across forecasters in the degree to which they weigh under- and over-predictions. Using an individual-level analysis that exploits the Survey of Professional Forecasters respondents’ histogram forecasts, we find little evidence of asymmetric loss for the inflation forecasters.

Relevance: 10.00%

Abstract:

A potential problem with the ensemble Kalman filter is the implicit Gaussian assumption at analysis times. Here we explore the performance of a recently proposed fully nonlinear particle filter, which makes no Gaussian assumption, on a high-dimensional but simplified ocean model. The model simulates the evolution of the vorticity field in time, described by the barotropic vorticity equation, in a highly nonlinear flow regime. While common knowledge holds that particle filters are inefficient and need large numbers of model runs to avoid degeneracy, the newly developed particle filter needs only on the order of 10-100 particles on large-scale problems. The crucial new ingredient is that the proposal density can be used not only to ensure that all particles end up in high-probability regions of state space as defined by the observations, but also to ensure that most of the particles have similar weights. Using identical-twin experiments, we found that the ensemble mean follows the truth reliably and that the difference from the truth is captured by the ensemble spread. A rank histogram is used to show that the truth run is indistinguishable from any of the particles, demonstrating the statistical consistency of the method.
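
The abstract does not spell out the proposal-density construction, so the sketch below shows only the standard bootstrap analysis step that the new filter improves upon, with the likelihood weighting and resampling whose weight degeneracy motivates the scheme (names and the Gaussian observation-error model are assumptions for illustration):

    import numpy as np

    def bootstrap_analysis(particles, observe, y, obs_std, rng):
        """One analysis step of a plain bootstrap particle filter.

        particles: (n_particles, state_dim) forecast ensemble.
        observe: maps a state vector to its predicted observation.
        The plain filter weights by the likelihood alone; the filter in
        the abstract instead shapes the proposal density so that most
        particles keep similar weights.
        """
        innov = y - np.array([observe(p) for p in particles])
        # Log-weights under Gaussian observation errors.
        logw = -0.5 * np.sum(innov**2, axis=1) / obs_std**2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        ess = 1.0 / np.sum(w**2)  # effective sample size; ~1 means degenerate
        idx = rng.choice(len(particles), size=len(particles), p=w)
        return particles[idx], ess

    rng = np.random.default_rng(0)
    parts = rng.normal(size=(100, 3))
    new_parts, ess = bootstrap_analysis(parts, lambda x: x[:1],
                                        np.array([0.5]), 0.2, rng)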

Relevance: 10.00%

Abstract:

Scene classification based on latent Dirichlet allocation (LDA) builds on a more general modeling approach known as the bag of visual words, in which the construction of a visual vocabulary is a quantization process crucial to the success of the classification. A framework is developed with the following new aspects: Gaussian mixture clustering for the quantization process; the use of an integrated visual vocabulary (IVV), built as the union of all centroids obtained from the separate quantization of each class; and the use of several features, including the edge orientation histogram, CIELab color moments, and the gray-level co-occurrence matrix (GLCM). The experiments are conducted on IKONOS images with six semantic classes (tree, grassland, residential, commercial/industrial, road, and water). The results show that the use of an IVV increases the overall accuracy (OA) by 11 to 12% when implemented on the selected features and by 6% on all features. The selected combination of CIELab color moments and GLCM provides a better OA than either feature used individually; the individual features increase the OA by only ∼2 to 3%. Moreover, the results show that the OA of LDA outperforms that of C4.5 and naive Bayes tree by ∼20%. © 2014 Society of Photo-Optical Instrumentation Engineers (SPIE) [DOI: 10.1117/1.JRS.8.083690]
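
A minimal sketch of the integrated-visual-vocabulary (IVV) idea, assuming local descriptors have already been extracted for each class; scikit-learn's GaussianMixture stands in for the Gaussian mixture clustering, and all sizes and names are illustrative:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def build_ivv(descriptors_by_class, n_words_per_class=20, seed=0):
        """Integrated visual vocabulary: union of per-class centroids."""
        centroids = []
        for descs in descriptors_by_class:  # list of (n_i, d) arrays
            gmm = GaussianMixture(n_components=n_words_per_class,
                                  random_state=seed).fit(descs)
            centroids.append(gmm.means_)
        return np.vstack(centroids)         # (n_classes * n_words, d)

    def bow_histogram(descs, vocabulary):
        """Encode one image as a visual-word count histogram."""
        d2 = ((descs[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
        return np.bincount(d2.argmin(axis=1), minlength=len(vocabulary))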

Relevance: 10.00%

Abstract:

Survey respondents who make point predictions and histogram forecasts of macro-variables reveal both how uncertain they believe the future to be, ex ante, and how they perform ex post. Macroeconomic forecasters tend to be overconfident at horizons of a year or more, but overestimate (i.e., are underconfident about) the uncertainty surrounding their predictions at short horizons. Ex ante uncertainty remains high relative to the ex post measure as the forecast horizon shortens. There is little evidence of a link between individuals’ ex post forecast accuracy and their ex ante subjective assessments.
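
One hedged way to make this comparison operational: measure ex ante uncertainty by the variance of the subjective histogram and ex post uncertainty by the squared errors of the point forecasts. The sketch below assumes bin midpoints and probabilities as inputs and is not the paper's exact procedure:

    import numpy as np

    def confidence_ratio(bin_mids, bin_probs, point_forecasts, outcomes):
        """Ex post MSE over mean ex ante variance for one forecaster.

        bin_mids:  (n_bins,) midpoints of the survey histogram bins.
        bin_probs: (n_surveys, n_bins) probabilities per survey round.
        A ratio above 1 suggests overconfidence (subjective variance
        too small); below 1 suggests underconfidence.
        """
        means = bin_probs @ bin_mids
        ex_ante_var = (bin_probs @ bin_mids**2 - means**2).mean()
        ex_post_mse = np.mean((outcomes - point_forecasts) ** 2)
        return ex_post_mse / ex_ante_var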

Relevance: 10.00%

Abstract:

Application of the Bernhardt et al. (Journal of Financial Economics 2006; 80(3): 657–675) test of herding to calendar-year annual output growth and inflation forecasts suggests that forecasters tend to exaggerate their differences, except at the shortest horizon, where they tend to herd. We consider whether these types of behaviour can help to explain the puzzle that professional forecasters sometimes make point predictions and histogram forecasts that are mutually inconsistent.

Relevance: 10.00%

Abstract:

The objective of this article is to study the problem of pedestrian classification across different light-spectrum domains (visible and far-infrared (FIR)) and modalities (intensity, depth, and motion). In recent years there have been a number of approaches to classifying and detecting pedestrians in both FIR and visible images, but the methods are difficult to compare because either the datasets are not publicly available or they do not offer a comparison between the two domains. Our two primary contributions are the following: (1) we propose a public dataset, named RIFIR, containing both FIR and visible images collected in an urban environment from a moving vehicle during daytime; and (2) we compare state-of-the-art features in a multi-modality setup: intensity, depth, and flow, in the far-infrared and visible domains. The experiments show that the feature families considered (intensity self-similarity (ISS), local binary patterns (LBP), local gradient patterns (LGP), and histogram of oriented gradients (HOG)) computed from the FIR and visible domains are highly complementary, but their relative performance varies across modalities. In our experiments the FIR domain proved superior to the visible one for the task of pedestrian classification, but the overall best results are obtained by a multi-domain, multi-modality, multi-feature fusion.
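
As an illustration, two of the named feature families (HOG and uniform LBP) can be computed identically on FIR and visible crops with scikit-image; the window size and parameters below are common defaults, not necessarily those used in the paper:

    import numpy as np
    from skimage.feature import hog, local_binary_pattern

    def pedestrian_features(window):
        """HOG + LBP descriptor for one grayscale crop (e.g., 128x64)."""
        h = hog(window, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2))
        # Uniform LBP with P=8 takes values 0..9; histogram them.
        lbp = local_binary_pattern(window, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return np.concatenate([h, lbp_hist])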

Relevance: 10.00%

Abstract:

The issue of smoothing in kriging has been addressed either by estimation or by simulation. The solution via estimation calls for postprocessing kriging estimates in order to correct the smoothing effect. Stochastic simulation, by contrast, provides equiprobable images that present no smoothing and reproduce the covariance model; consequently, these images reproduce both the sample histogram and the sample semivariogram. However, simulated images still lack local accuracy. In this paper, a postprocessing algorithm for correcting the smoothing effect of ordinary kriging estimates is compared with sequential Gaussian simulation realizations. Based on samples drawn from exhaustive data sets, the postprocessing algorithm is shown to be superior to any individual simulation realization, albeit at the expense of providing a single deterministic estimate of the random function.
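
The abstract does not detail the postprocessing algorithm, so as a minimal stand-in illustration of smoothing correction, the affine rescaling below restores the sample mean and variance (the first two moments of the histogram) to kriged estimates, at some cost in local accuracy:

    import numpy as np

    def affine_desmoothing(kriged, sample):
        """Rescale kriged estimates to match the sample's spread.

        Kriging estimates are smoother (lower variance) than the data;
        an affine transform restores the sample mean and variance.
        """
        m_k, s_k = kriged.mean(), kriged.std()
        m_s, s_s = sample.mean(), sample.std()
        return m_s + (kriged - m_k) * (s_s / s_k)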

Relevance: 10.00%

Abstract:

In this thesis, a new algorithm is proposed to segment the foreground of a fingerprint image. The algorithm uses three features: mean, variance, and coherence. Based on these features, a rule system is built to help the algorithm segment the image efficiently. In addition, the proposed algorithm combines split-and-merge with a modified Otsu method. Enhancement techniques such as Gaussian filtering and histogram equalization are applied to improve the quality of the image. Finally, a post-processing technique is implemented to counter undesirable effects in the segmented image. Fingerprint recognition is one of the oldest biometric techniques. Everyone has a unique and unchangeable fingerprint, and based on this uniqueness and distinctness, fingerprint identification has long been used in many applications. A fingerprint image is a pattern consisting of two regions, foreground and background. The foreground contains all the information needed by automatic fingerprint recognition systems, whereas the background is a noisy region that contributes to the extraction of false minutiae. To avoid extracting false minutiae, several steps should be followed, such as preprocessing and enhancement. One of these steps is the transformation of the fingerprint image from a gray-scale image to a black-and-white image, called segmentation or binarization. The aim of fingerprint segmentation is to separate the foreground from the background; owing to the nature of fingerprint images, segmentation is an important and challenging task. The proposed algorithm is applied to the FVC2000 database. Manual examination by human experts shows that the proposed algorithm provides efficient segmentation results, as demonstrated in diverse experiments.
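
A hedged sketch of the enhancement and feature steps named above (histogram equalization, then per-block mean, variance, and gradient coherence); the thresholds stand in for the thesis's rule system and are purely illustrative:

    import numpy as np
    from skimage import exposure

    def block_features(block):
        """Mean, variance, and gradient coherence of one image block."""
        gy, gx = np.gradient(block.astype(float))
        gxx, gyy, gxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
        coh = np.sqrt((gxx - gyy) ** 2 + 4 * gxy**2) / (gxx + gyy + 1e-12)
        return block.mean(), block.var(), coh

    def segment(image, block=16, var_thresh=0.01, coh_thresh=0.3):
        """Mark a block as foreground when it shows ridge-like structure."""
        img = exposure.equalize_hist(image)  # histogram equalization
        h, w = img.shape
        mask = np.zeros((h, w), dtype=bool)
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                _, v, c = block_features(img[i:i + block, j:j + block])
                mask[i:i + block, j:j + block] = (v > var_thresh) and (c > coh_thresh)
        return mask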

Relevance: 10.00%

Abstract:

Using the digital signature method, 35 esophageal biopsies from patients in Linxian province, China, were studied, classified by two observers with wide experience in gastrointestinal pathology as normal, dysplasia, or carcinoma (8 normal cases, 6 mild dysplasias, 8 moderate dysplasias, 4 severe dysplasias, 4 carcinomas suspicious for invasion, and 5 invasive carcinomas). The objective of the work was to characterize the nuclei of the cell populations of these cases so as to derive diagnostic information, with possible prognostic implications, from the quantitative study of the nuclear features of each case or diagnostic category. The biopsies were stained by the Feulgen method, and 48 to 50 nuclei from each were then selected and digitized. From each nucleus, 93 karyometric features were extracted and arranged in an arbitrarily ordered histogram designated the nuclear signature. The arithmetic mean of each feature over the nuclei of a given biopsy yielded the digital signature of the case. Discriminant function analysis, based on the 15 karyometric features offering the best discrimination between diagnostic categories, showed that the group classified as normal was clearly distinct from the other categories. Total optical density increased progressively with the classification of the biopsies, from normal to severe dysplasia, with the value for carcinoma similar to that for moderate dysplasia. The run-length matrix presented the same profile; that is, both features offered clear discrimination between the diagnostic categories, with the exception of invasive carcinoma, whose values overlapped those of moderate dysplasia. The study demonstrated the feasibility of quantifying nuclear features through digital nuclear signatures, which showed statistically significant differences between diagnostic categories and a progressive increase of the measured values along the spectrum of lesions, presented as a histogram (the digital nuclear signature).

Relevance: 10.00%

Abstract:

Colorectal cancer is a frequent malignant tumor in the Western world: the third in frequency and the second in mortality in developed countries. In Brazil it is among the six most common malignant neoplasms and the fifth in mortality. Of colorectal tumors, approximately 40% are located in the rectum. Five-year survival of patients operated on for rectal cancer varies between 40% and 50%, and the main prognostic factors used in current clinical practice are based on clinicopathological criteria. Morphometric and densitometric alterations in malignant neoplasms have recently been studied through digital image analysis and have shown potential for diagnostic and prognostic use. The digital signature is a histogram representing sets of chromatin texture features of the cell nucleus obtained from the computerized image. The objective of this study was to characterize the neoplastic cell nuclei of primary rectal adenocarcinoma by the digital signature method and to assess the prognostic value of nuclear chromatin texture alterations in this disease. Using digital image analysis, we evaluated 51 patients operated on at the Hospital de Clínicas de Porto Alegre (HCPA) between 1988 and 1996, who underwent elective resection of primary rectal adenocarcinoma, with five years of postoperative follow-up or until death from the disease before that period, and 22 normal rectal biopsies obtained from patients undergoing endoscopic procedures, used as controls for the digital signature method. From the paraffin blocks of the specimens stored in the Pathology Service of the HCPA, slides stained with hematoxylin and eosin were prepared, from which 3,635 nuclei of the rectal adenocarcinomas and 2,366 control nuclei were selected, totaling 6,001 nuclei studied by digital image analysis. For each of these nuclei, 93 features were measured, and the 11 karyometric features with the greatest power to discriminate between normal and neoplastic cells were identified. In this way, from the nuclear chromatin texture, histograms representing each nucleus or set of nuclei of the groups or subgroups studied, including the modified Dukes staging, were obtained, giving rise to the corresponding digital signatures. Nuclear signatures, signatures of histological pattern or lesion, and the distribution of total optical density were examined. There was a significant difference in features between the normal group and the cancer group, most markedly for three of them: nuclear area, total optical density, and granularity. The mean nuclear signature values were 0.0009 in the normal group and, by stage, 0.9681 in A, 4.6185 in B, 2.3957 in C, and 2.1025 in D, differing with statistical significance (P=0.001). The largest difference from normal occurred in Dukes-Turnbull subgroup B. The nuclear and histological-pattern signatures were distinct between the normal group and adenocarcinoma, as was the distribution of total optical density, which shows a progressive departure from normality in the cancer group. Characterization of rectal adenocarcinoma, which presented specific digital signatures, was thus possible.
Regarding prognosis, total optical density was the variable that performed best, besides staging, as a predictor of outcome.

Relevance: 10.00%

Abstract:

The second satellite of the Brazilian Complete Space Mission (SCD2/MECB) was placed in orbit on October 23, 1998, and carries on board a solar-cell experiment. A silicon solar cell is a semiconductor device that can measure the intensity of visible radiation and part of the infrared radiation (400-1100 nm). The experiment measures simultaneously the direct insolation and the part of the solar radiation that is reflected by the Earth back to space. The data from the solar-cell experiment are transmitted in real time by the satellite telemetry and received by the ground station in Cuiabá, MT, Brazil (16°S, 56°W), which limits the spatial coverage to a circle over South America. The planetary albedo is obtained within this coverage, and its values can be grouped into temporal periods (annual, seasonal, or monthly) or studied for various locations (latitude and longitude) over the satellite's lifetime. The atmospheric transmission coefficient, or clearness index (Kt), measured at meteorological stations on the Earth's surface, together with the simultaneously measured planetary albedo, allows the atmospheric absorption coefficient (Ka) to be calculated. The method developed in this work to evaluate Ka considers the planetary albedo to be composed of two parts: a local reflectivity and a non-local reflectivity. Based on this new concept, an atmospheric absorption rate (denoted Ra) is defined as the ratio between Ka and the net solar irradiance power that did not cross the atmosphere (100% - Kt). The atmospheric absorption rate so defined is independent of cloud cover. The frequency histogram of Ra gives values of 0.86±0.07 and 0.88±0.09 over the cities of Botucatu, SP, and Rio de Janeiro, RJ, for the years 1999 through 2006, respectively.
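
The defining ratio can be written down directly. The closure Ka = 1 - Kt - (reflected fraction) used below is an illustrative energy-balance assumption; the thesis itself derives Ka from the split of the planetary albedo into local and non-local reflectivity:

    def absorption_rate(kt, reflected):
        """Ra = Ka / (1 - Kt), the ratio defined in the abstract.

        kt:        clearness index measured at the surface.
        reflected: fraction of solar irradiance returned to space.
        Assumes the simple closure Ka = 1 - kt - reflected (whatever
        is neither transmitted nor reflected is absorbed).
        """
        ka = 1.0 - kt - reflected
        return ka / (1.0 - kt)

    # Purely illustrative numbers: kt = 0.5 and reflected = 0.07 give
    # Ra = 0.86, the order of magnitude of the values reported above.
    print(absorption_rate(0.5, 0.07))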

Relevance: 10.00%

Abstract:

This work presents a study on the generation of digital masks for detecting edges whose directions are known in advance. This solution is important when the edge direction is available either from a direction histogram or from a prediction based on camera and object models. A modification of the non-maximum suppression thinning method is also presented, enabling the comparison of local maxima for arbitrary edge directions. Results with a synthetic image and with crops of CBERS satellite images are presented, including an example application to road detection where directions are known beforehand.
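
A sketch of one way to build such masks, steering Sobel kernels to a known edge direction and suppressing non-maxima along the quantized normal; this is a generic construction, not necessarily the masks of the paper:

    import numpy as np
    from scipy.ndimage import convolve

    def directional_edge_mask(theta):
        """3x3 derivative mask steered to edge direction theta (radians).

        The derivative across the edge is cos(t)*Gx + sin(t)*Gy, where
        t = theta + pi/2 is the direction normal to the edge.
        """
        gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
        t = theta + np.pi / 2
        return np.cos(t) * gx + np.sin(t) * gx.T

    def nms_along(response, theta):
        """Keep only local maxima across the edge (thinning).

        The unit normal is quantized to the nearest 8-neighbour offset,
        so the comparison works for arbitrary edge directions.
        """
        n = np.array([np.cos(theta + np.pi / 2), np.sin(theta + np.pi / 2)])
        dx, dy = np.rint(n / np.abs(n).max()).astype(int)
        r = np.abs(response)
        fwd = np.roll(r, (-dy, -dx), axis=(0, 1))
        bwd = np.roll(r, (dy, dx), axis=(0, 1))
        return np.where((r >= fwd) & (r >= bwd), r, 0.0)

    # img = ...  (2D float array) with edges of known direction theta:
    # resp = convolve(img, directional_edge_mask(np.pi / 3))
    # thin = nms_along(resp, np.pi / 3)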

Relevance: 10.00%

Abstract:

The complex behavior of a wide variety of phenomena of interest to physicists, chemists, and engineers has been quantitatively characterized using the ideas of fractal and multifractal distributions, which correspond in a unique way to the geometrical shape and dynamical properties of the systems under study. In this thesis we present the space of fractals and the Hausdorff-Besicovitch, box-counting, and scaling methods for calculating the fractal dimension of a set. We also investigate percolation phenomena in multifractal objects that are built in a simple way. The central object of our analysis is a multifractal object that we call Qmf, in which the multifractality comes directly from the geometric tiling. We identify some differences between percolation in the proposed multifractals and in a regular lattice. There are basically two sources of these differences: the first is related to the coordination number, c, which changes along the multifractal; the second comes from the way the weight of each cell in the multifractal affects the percolation cluster. We use many samples of finite-size lattices and draw the histogram of percolating lattices against the site occupation probability p. Depending on a parameter ρ characterizing the multifractal and on the lattice size L, the histogram can have two peaks. We observe that the occupation probability at the percolation threshold, pc, is lower for the multifractal than for the square lattice. We compute the fractal dimension of the percolating cluster and the critical exponent β. Despite the topological differences, we find that percolation on a multifractal support is in the same universality class as standard percolation. The area and the number of neighbors of the blocks of Qmf show non-trivial behavior, and a general view of the object Qmf reveals an anisotropy. The value of pc is a function of ρ, which is related to this anisotropy; we investigate the relation between pc and the average number of neighbors of the blocks, as well as the anisotropy of Qmf. In this thesis we likewise study the distribution of shortest paths in percolation systems at the percolation threshold in two dimensions (2D), considering paths from one given point to multiple other points.
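
For reference, the histogram of percolating lattices versus occupation probability mentioned above can be estimated on an ordinary square lattice in a few lines (the multifractal Qmf itself is not reconstructed here):

    import numpy as np
    from scipy.ndimage import label

    def percolates(occupied):
        """True if an occupied cluster spans the lattice top to bottom."""
        labels, _ = label(occupied)  # 4-connected site clusters
        top = labels[0][labels[0] > 0]
        bottom = labels[-1][labels[-1] > 0]
        return bool(np.intersect1d(top, bottom).size)

    def spanning_probability(L=64, ps=np.linspace(0.5, 0.7, 21),
                             n_samples=200, seed=0):
        """Fraction of percolating samples at each occupation p.

        On the square lattice this rises sharply near pc ~ 0.5927; the
        thesis draws the analogous histogram on the multifractal Qmf.
        """
        rng = np.random.default_rng(seed)
        return [np.mean([percolates(rng.random((L, L)) < p)
                         for _ in range(n_samples)]) for p in ps]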