983 results for Inverse Gaussian Distribution
Abstract:
Endochondral calcification involves the participation of matrix vesicles (MVs), but it remains unclear whether calcification ectopically induced by implants of demineralized bone matrix also proceeds via MVs. Ectopic bone formation was induced by implanting rat demineralized diaphyseal bone matrix into the dorsal subcutaneous tissue of Wistar rats and was examined histologically and biochemically. Budding of MVs from chondrocytes was observed to serve as nucleation sites for mineralization during induced ectopic osteogenesis, the vesicles presenting diameters with a Gaussian distribution and a median of 306 ± 103 nm. While the role of tissue-nonspecific alkaline phosphatase (TNAP) during mineralization involves hydrolysis of inorganic pyrophosphate (PPi), it is unclear how the microenvironment of MVs may affect the ability of TNAP to hydrolyze the variety of substrates present at sites of mineralization. We show that the implants contain high levels of TNAP capable of hydrolyzing p-nitrophenylphosphate (pNPP), ATP and PPi. The catalytic properties of glycosyl phosphatidylinositol-anchored, polidocanol-solubilized and phosphatidylinositol-specific phospholipase C-released TNAP were compared using pNPP, ATP and PPi as substrates. While the enzymatic efficiency (kcat/Km) remained comparable between polidocanol-solubilized and membrane-bound TNAP for all three substrates, the kcat/Km for the phosphatidylinositol-specific phospholipase C-solubilized enzyme increased approximately 108-, 56- and 556-fold for pNPP, ATP and PPi, respectively, compared to the membrane-bound enzyme. Our data are consistent with the involvement of MVs during ectopic calcification and also suggest that the location of TNAP on the membrane of MVs may play a role in determining substrate selectivity in this micro-compartment.
Abstract:
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared to a simulation-based estimate of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995.
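As a loose illustration of the Monte Carlo testing idea described in this abstract (not the authors' exact simple/double/multi-stage procedure), the sketch below computes Mardia's multivariate kurtosis on a matrix of residuals and obtains a simulation-based p-value under a Gaussian null; because the statistic is affine-invariant, standard normal draws suffice. Sample sizes and replication counts are arbitrary.

```python
import numpy as np

def mardia_kurtosis(X):
    """Mardia's multivariate kurtosis statistic b_{2,p}."""
    n, p = X.shape
    Z = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    d = np.einsum('ij,jk,ik->i', Z, S_inv, Z)   # squared Mahalanobis distances
    return np.mean(d ** 2)

def mc_pvalue(X, n_rep=999, seed=0):
    """Two-sided Monte Carlo p-value for Gaussian errors (affine-invariant statistic)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    stat = mardia_kurtosis(X)
    null = np.array([mardia_kurtosis(rng.standard_normal((n, p)))
                     for _ in range(n_rep)])
    # Monte Carlo p-values with the +1 correction for exactness
    p_hi = (np.sum(null >= stat) + 1) / (n_rep + 1)
    p_lo = (np.sum(null <= stat) + 1) / (n_rep + 1)
    return 2 * min(p_hi, p_lo)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    resid = rng.standard_normal((60, 3))        # stand-in for MLR residuals
    print(mc_pvalue(resid))
```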
Abstract:
We propose a novel, simple, efficient and distribution-free re-sampling technique for developing prediction intervals for returns and volatilities following ARCH/GARCH models. In particular, our key idea is to employ a Box–Jenkins linear representation of an ARCH/GARCH equation and then to adapt a sieve bootstrap procedure to the nonlinear GARCH framework. Our simulation studies indicate that the new re-sampling method provides sharp and well calibrated prediction intervals for both returns and volatilities while reducing computational costs by up to 100 times, compared to other available re-sampling techniques for ARCH/GARCH models. The proposed procedure is illustrated by an application to Yen/U.S. dollar daily exchange rate data.
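To make the resampling idea concrete, here is a minimal sketch with simulated data and arbitrary parameters (not the authors' algorithm): it fits an autoregressive sieve to the squared returns as a crude stand-in for the Box–Jenkins linear representation of a GARCH equation, resamples the centred residuals, and reads off a bootstrap prediction interval for the next squared return.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate a GARCH(1,1) series as a stand-in for real returns ---
n, omega, alpha, beta = 1000, 0.05, 0.10, 0.85
r, sig2 = np.zeros(n), np.zeros(n)
sig2[0] = omega / (1 - alpha - beta)
for t in range(1, n):
    sig2[t] = omega + alpha * r[t-1]**2 + beta * sig2[t-1]
    r[t] = np.sqrt(sig2[t]) * rng.standard_normal()

# --- sieve step: fit an AR(p) to the squared returns by least squares ---
p = 5
y = r**2
Y = y[p:]
X = np.column_stack([np.ones(n - p)] + [y[p-k:n-k] for k in range(1, p + 1)])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ coef
resid -= resid.mean()

# --- bootstrap one-step-ahead forecasts of the squared return ---
B, fcasts = 999, []
last = y[-p:][::-1]                         # most recent p squared returns
for _ in range(B):
    eps = rng.choice(resid)                 # i.i.d. resample of centred residuals
    f = coef[0] + coef[1:] @ last + eps
    fcasts.append(max(f, 0.0))              # squared returns cannot be negative
fcasts = np.array(fcasts)

lo, hi = np.percentile(fcasts, [2.5, 97.5])
print(f"95% bootstrap prediction interval for the next squared return: [{lo:.4f}, {hi:.4f}]")
```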
Abstract:
Consumers are becoming increasingly concerned about food quality, especially regarding how, when and where foods are produced (Haglund et al., 1999; Kahl et al., 2004; Alföldi et al., 2006). Therefore, during recent years there has been a growing interest in methods for food quality assessment, especially in picture-development methods as a complement to traditional chemical analysis of single compounds (Kahl et al., 2006). Biocrystallization, one of the picture-developing methods, is based on the crystallographic phenomenon that, when aqueous solutions of CuCl2 dihydrate are crystallized with the addition of organic solutions originating, e.g., from crop samples, biocrystallograms with reproducible crystal patterns are generated (Kleber & Steinike-Hartung, 1959). Its output is a crystal pattern on glass plates from which different variables (numbers) can be calculated using image analysis. However, a standardized evaluation method to quantify the morphological features of the biocrystallogram image is lacking. Therefore, the main aims of this research are (1) to optimize an existing statistical model in order to describe all the effects that contribute to the experiment; (2) to investigate the effect of image parameters on the texture analysis of the biocrystallogram images, i.e., region of interest (ROI), color transformation and histogram matching, on samples from project 020E170/F financed by the Federal Ministry of Food, Agriculture and Consumer Protection (BMELV), the samples being wheat and carrots from controlled field and farm trials; and (3) to relate the strongest texture-parameter effect to the visual evaluation criteria developed by a group of researchers (University of Kassel, Germany; Louis Bolk Institute (LBI), Netherlands; and Biodynamic Research Association Denmark (BRAD), Denmark) in order to clarify the relation between texture parameters and the visual characteristics of an image. The refined statistical model was implemented as an lme model with repeated measurements via crossed effects, programmed in R (version 2.1.0). The validity of the F and P values was checked against the SAS program. While the ANOVA yields the same F values, the P values are larger in R because of its more conservative approach; the refined model yields more significant P values. The optimization of the image analysis deals with the following parameters: ROI (region of interest, i.e., the area around the geometrical center), color transformation (calculation of the one-dimensional gray-level value from the three-dimensional color information of the scanned picture, which is necessary for the texture analysis) and histogram matching (normalization of the histogram of the picture to enhance the contrast and to minimize errors from lighting conditions). The samples were wheat from the DOC trial with 4 field replicates for the years 2003 and 2005, "market samples" (organic and conventional neighbors with the same variety) for 2004 and 2005, carrots obtained from the University of Kassel (2 varieties, 2 nitrogen treatments) for the years 2004, 2005 and 2006, and "market samples" of carrot for the years 2004 and 2005. The criterion for the optimization was the repeatability of the differentiation of the samples over the different harvests (years). Different ROIs were found for different samples, reflecting the different pictures.
The color transformation that most efficiently shows the differentiation relies on the gray scale, i.e., an equal-weight color transformation. A second dimension of the color transformation appeared only in some years as an effect of color wavelength (hue) for carrots treated with different nitrate fertilizer levels. The best histogram matching is matching to a Gaussian distribution. The approach was then to find a connection between the variables from textural image analysis and the different visual criteria. The relation between the texture parameters and the visual evaluation criteria was limited to the carrot samples, especially since these could be well differentiated by the texture analysis. It was possible to connect groups of variables of the texture analysis with groups of criteria from the visual evaluation. These selected variables were able to differentiate the samples but not to classify them according to the treatment. In contrast, with the visual criteria, which describe the picture as a whole, classification is possible in 80% of the sample cases. This clearly shows the limits of the single-variable approach of the image analysis (texture analysis).
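The histogram-matching step mentioned above can be illustrated with a generic quantile-mapping routine that forces a grayscale image's histogram toward a Gaussian target; the target mean and standard deviation below are arbitrary, and the routine is a sketch rather than the project's actual implementation.

```python
import numpy as np
from scipy.stats import norm

def match_to_gaussian(img, mean=128.0, std=40.0):
    """Map the empirical CDF of a grayscale image onto a Gaussian target CDF."""
    flat = img.ravel().astype(float)
    # empirical CDF value of each pixel (ranks rescaled into (0, 1))
    ranks = np.argsort(np.argsort(flat))
    u = (ranks + 0.5) / flat.size
    matched = norm.ppf(u, loc=mean, scale=std)
    return matched.reshape(img.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64))      # stand-in for a scanned biocrystallogram
    out = match_to_gaussian(img)
    print(out.mean(), out.std())                   # close to the Gaussian target parameters
```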
Abstract:
Satellite-based rainfall monitoring is widely used for climatological studies because of its full global coverage, but it is also of great importance for operational purposes, especially in areas such as Africa where there is a lack of ground-based rainfall data. Satellite rainfall estimates have enormous potential benefits as input to hydrological and agricultural models because of their real-time availability, low cost and full spatial coverage. One issue that needs to be addressed is the uncertainty on these estimates. This is particularly important in assessing the likely errors on the output from non-linear models (rainfall-runoff or crop yield) which use the rainfall estimates, aggregated over an area, as input. Correct assessment of the uncertainty on the rainfall is non-trivial, as it must take account of:
• the difference in spatial support of the satellite information and the independent data used for calibration
• uncertainties on the independent calibration data
• the non-Gaussian distribution of rainfall amount
• the spatial intermittency of rainfall
• the spatial correlation of the rainfall field
This paper describes a method for estimating the uncertainty on satellite-based rainfall values taking account of these factors. The method involves, firstly, a stochastic calibration which completely describes the probability of rainfall occurrence and the pdf of rainfall amount for a given satellite value, and, secondly, the generation of an ensemble of rainfall fields based on the stochastic calibration but with the correct spatial correlation structure within each ensemble member. This is achieved by the use of geostatistical sequential simulation. The ensemble generated in this way may be used to estimate uncertainty at larger spatial scales. A case study of daily rainfall monitoring in the Gambia, west Africa, for the purpose of crop yield forecasting is presented to illustrate the method.
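A minimal sketch of the second (ensemble-generation) step is given below, under strong simplifying assumptions: the per-pixel occurrence probabilities and log-normal amount parameters are taken as given (standing in for the stochastic calibration), and spatial correlation is imposed with an unconditional Gaussian field built from an exponential correlogram rather than full geostatistical sequential simulation. All parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm, lognorm

def rainfall_ensemble(p_rain, mu_log, sigma_log, coords,
                      corr_range=25.0, n_members=50, seed=0):
    """Spatially correlated rainfall ensemble from a per-pixel calibration.

    p_rain, mu_log, sigma_log: per-pixel rain probability and log-normal
    parameters, assumed to come from a prior stochastic calibration step.
    coords: (n_pix, 2) array of pixel coordinates in km.
    """
    rng = np.random.default_rng(seed)
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    L = np.linalg.cholesky(np.exp(-d / corr_range) + 1e-8 * np.eye(n))
    members = np.zeros((n_members, n))
    for m in range(n_members):
        u = norm.cdf(L @ rng.standard_normal(n))        # correlated uniforms
        wet = u > (1.0 - p_rain)                        # rainfall occurrence
        # map the remaining probability mass onto the amount distribution
        u_amt = (u[wet] - (1.0 - p_rain[wet])) / p_rain[wet]
        members[m, wet] = lognorm.ppf(u_amt, s=sigma_log[wet],
                                      scale=np.exp(mu_log[wet]))
    return members

# toy usage on a 10 x 10 km grid with uniform calibration parameters
xx, yy = np.meshgrid(np.arange(10.0), np.arange(10.0))
coords = np.column_stack([xx.ravel(), yy.ravel()])
n_pix = len(coords)
ens = rainfall_ensemble(np.full(n_pix, 0.4), np.full(n_pix, 1.0),
                        np.full(n_pix, 0.8), coords)
print(ens.mean(axis=1)[:5])                             # areal means of the first members
```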
Abstract:
A novel framework for multimodal semantic-associative collateral image labelling, aiming at associating image regions with textual keywords, is described. Both the primary image and collateral textual modalities are exploited in a cooperative and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution or Euclidean distance together with a collateral content- and context-driven inference mechanism. Finally, we use Self Organising Maps to examine the classification and retrieval effectiveness of the proposed high-level image feature vector model, which is constructed based on the image labelling results.
Abstract:
A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming at localising the visual semantics to regions of interest in images with textual keywords. Both the primary image and collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution or Euclidean distance together with a collateral content- and context-driven inference mechanism. We introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed based on the CCL labelling results for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date already indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models. (C) 2007 Elsevier B.V. All rights reserved.
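One plausible (hypothetical, not the paper's exact) realisation of such a mapping scheme is sketched below: a region's feature vector is assigned to the visual-vocabulary concept with the highest Gaussian log-likelihood, weighted by a context prior derived from a row of the co-occurrence matrix. All concepts, features and priors are invented for illustration.

```python
import numpy as np

def label_region(feat, vocab_means, vocab_covs, context_prior):
    """Assign a region to the concept with the highest context-weighted
    Gaussian log-likelihood (a sketch of one possible scheme)."""
    scores = []
    for mu, cov, prior in zip(vocab_means, vocab_covs, context_prior):
        diff = feat - mu
        inv = np.linalg.inv(cov)
        logdet = np.linalg.slogdet(cov)[1]
        loglik = -0.5 * (diff @ inv @ diff + logdet + len(feat) * np.log(2 * np.pi))
        scores.append(loglik + np.log(prior))   # context prior biases the mapping
    return int(np.argmax(scores))

# toy vocabulary of two concepts and a normalised co-occurrence row as context prior
means = [np.array([0.2, 0.8]), np.array([0.7, 0.3])]
covs = [0.05 * np.eye(2), 0.05 * np.eye(2)]
prior = np.array([0.3, 0.7])
print(label_region(np.array([0.6, 0.4]), means, covs, prior))
```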
Abstract:
The problem of calculating the probability of error in a DS/SSMA system has been extensively studied for more than two decades. When random sequences are employed, some conditioning must be done before the application of the central limit theorem is attempted, leading to a Gaussian distribution. The authors seek to characterise the multiple-access interference as a random walk with a random number of steps, for random and deterministic sequences. Using results from random-walk theory, they model the interference as a K-distributed random variable and use it to calculate the probability of error in the form of a series, for a DS/SSMA system with a coherent correlation receiver and BPSK modulation under Gaussian noise. The asymptotic properties of the proposed distribution agree with other analyses. This is, to the best of the authors' knowledge, the first attempt to propose a non-Gaussian distribution for the interference. The modelling can be extended to consider multipath fading and general modulation.
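The compound (gamma-mixed Rayleigh) construction of the K distribution can be illustrated with a short Monte Carlo check. The sketch below samples K-distributed interference amplitudes and estimates the BPSK error probability empirically rather than through the series expression derived in the paper; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def k_distributed_interference(n, shape=1.5, mean_power=0.5):
    """Sample K-distributed amplitudes as a gamma-mixed Rayleigh (compound model)."""
    power = rng.gamma(shape, mean_power / shape, size=n)   # gamma-distributed local power
    return rng.rayleigh(np.sqrt(power / 2.0), size=n)      # Rayleigh given that power

# Monte Carlo BER of BPSK with K-distributed interference plus Gaussian noise
n = 1_000_000
bits = rng.integers(0, 2, n) * 2 - 1                       # +/-1 BPSK symbols
mai = k_distributed_interference(n) * np.where(rng.random(n) < 0.5, 1, -1)
noise = 0.5 * rng.standard_normal(n)
decisions = np.sign(bits + mai + noise)                    # coherent correlation decision
print("empirical BER:", np.mean(decisions != bits))
```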
Abstract:
Advances in hardware and software technologies make it possible to capture streaming data. The area of Data Stream Mining (DSM) is concerned with the analysis of these vast amounts of data as they are generated in real time. Data stream classification is one of the most important DSM techniques, allowing previously unseen data instances to be classified. Unlike traditional classifiers for static data, data stream classifiers need to adapt to concept changes (concept drift) in the stream in real time in order to reflect the most recent concept in the data as accurately as possible. A recent addition to the data stream classifier toolbox is eRules, which induces and updates a set of expressive rules that can easily be interpreted by humans. However, like most rule-based data stream classifiers, eRules exhibits poor computational performance when confronted with continuous attributes. In this work, we propose an approach to deal with continuous data effectively and accurately in rule-based classifiers by using the Gaussian distribution as a heuristic for building rule terms on continuous attributes. We show, on the example of eRules, that incorporating our method for continuous attributes indeed speeds up the real-time rule induction process while maintaining a similar level of accuracy compared with the original eRules classifier. We term this new version of eRules with our approach G-eRules.
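A minimal sketch of the Gaussian heuristic for continuous attributes (not necessarily the exact G-eRules mechanism) is shown below: a rule term for a continuous attribute is taken as an interval centred on the class-conditional mean, with a width of k standard deviations. The data, class labels and k are illustrative.

```python
import numpy as np

def gaussian_rule_term(values, labels, target_class, k=1.0):
    """Build a rule term 'attr in [lo, hi]' for a continuous attribute.

    The interval is centred on the class-conditional Gaussian fitted to the
    attribute values of the target class (mean +/- k standard deviations).
    """
    v = values[labels == target_class]
    mu, sigma = v.mean(), v.std(ddof=1)
    return mu - k * sigma, mu + k * sigma

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(2.0, 0.5, 200), rng.normal(5.0, 0.7, 200)])
    y = np.array([0] * 200 + [1] * 200)
    lo, hi = gaussian_rule_term(x, y, target_class=1)
    covered = (x >= lo) & (x <= hi)
    print(f"term: x in [{lo:.2f}, {hi:.2f}]  "
          f"precision={np.mean(y[covered] == 1):.2f}")
```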
Abstract:
FeM2X4 spinels, where M is a transition metal and X is oxygen or sulfur, are candidate materials for spin filters, one of the key devices in spintronics. We present here a computational study of the inversion thermodynamics and the electronic structure of these (thio)spinels for M = Cr, Mn, Co, Ni, using calculations based on the density functional theory with on-site Hubbard corrections (DFT+U). The analysis of the configurational free energies shows that different behaviour is expected for the equilibrium cation distributions in these structures: FeCr2X4 and FeMn2S4 are fully normal, FeNi2X4 and FeCo2S4 are intermediate, and FeCo2O4 and FeMn2O4 are fully inverted. We have analyzed the role played by the size of the ions and by the crystal field stabilization effects in determining the equilibrium inversion degree. We also discuss how the electronic and magnetic structure of these spinels is modified by the degree of inversion, assuming that this could be varied from the equilibrium value. We have obtained electronic densities of states for the completely normal and completely inverse cation distribution of each compound. FeCr2X4, FeMn2X4, FeCo2O4 and FeNi2O4 are half-metals in the ferrimagnetic state when Fe is in tetrahedral positions. When M is filling the tetrahedral positions, the Cr-containing compounds and FeMn2O4 are half-metallic systems, while the Co and Ni spinels are insulators. The Co and Ni sulfide counterparts are metallic for any inversion degree together with the inverse FeMn2S4. Our calculations suggest that the spin filtering properties of the FeM2X4 (thio)spinels could be modified via the control of the cation distribution through variations in the synthesis conditions.
Abstract:
The most significant radiation field nonuniformity is the well-known Heel effect. This nonuniform beam effect has a negative influence on the results of computer-aided diagnosis of mammograms, which is frequently used for early cancer detection. This paper presents a method to correct all pixels in the mammography image according to the excess or lack of radiation to which they have been submitted as a result of this effect. The current simulation method calculates the intensities at all points of the image plane. In the simulated image, the percentage of radiation received by each point takes the center of the field as reference. In the digitized mammogram, the percentages of the optical density of all the pixels of the analyzed image are also calculated. The Heel effect causes a Gaussian distribution around the anode-cathode axis and a logarithmic distribution parallel to this axis. These characteristic distributions are used to determine the center of the radiation field as well as the cathode-anode axis, allowing for the automatic determination of the correlation between these two sets of data. The measurements obtained with our proposed method differ on average by 2.49 mm in the direction perpendicular to the anode-cathode axis and by 2.02 mm parallel to the anode-cathode axis from those of the commercial equipment. The method eliminates around 94% of the Heel effect in the radiological image, so that objects reflect their x-ray absorption. To evaluate this method, experimental data were taken from known objects, but the evaluation could also be performed with clinical and digital images.
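The correction principle can be sketched as follows, with an entirely illustrative intensity model (Gaussian fall-off across an assumed anode-cathode axis, logarithmic trend along it, arbitrary parameters): the image is divided by the simulated intensity map normalised to 100% at the field centre. This is a simplified illustration, not the authors' simulation.

```python
import numpy as np

def heel_intensity_map(shape, center, sigma=300.0, a=0.15):
    """Simplified Heel-effect intensity model (fraction of the centre dose).

    Gaussian fall-off perpendicular to the assumed anode-cathode axis (here the
    row direction) and a logarithmic trend along it; parameters are illustrative.
    """
    rows, cols = np.indices(shape, dtype=float)
    cy, cx = center
    perp = np.exp(-0.5 * ((cols - cx) / sigma) ** 2)           # Gaussian across the axis
    along = 1.0 + a * np.log1p(np.abs(rows - cy) / shape[0])   # log trend along the axis
    m = perp * along
    return m / m[int(cy), int(cx)]                             # 100% at the field centre

def correct_heel(image, center):
    """Divide out the simulated nonuniformity so objects reflect their x-ray absorption."""
    return image / heel_intensity_map(image.shape, center)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.uniform(0.4, 0.6, size=(512, 512))
    img *= heel_intensity_map(img.shape, (256, 256))           # add a synthetic Heel gradient
    flat = correct_heel(img, (256, 256))
    print(img.std(), flat.std())                               # corrected image is more uniform
```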
Abstract:
We consider random generalizations of a quantum model of infinite range introduced by Emch and Radin. The generalizations allow a neat extension from the class $\ell^1$ of absolutely summable lattice potentials to the optimal class $\ell^2$ of square summable potentials first considered by Khanin and Sinai and generalised by van Enter and van Hemmen. The approach to equilibrium in the case of a Gaussian distribution is proved to be faster than for a Bernoulli distribution for both short-range and long-range lattice potentials. While exponential decay to equilibrium is excluded in the nonrandom $\ell^1$ case, it is proved to occur for both short- and long-range potentials for Gaussian distributions, and for potentials of class $\ell^2$ in the Bernoulli case. Open problems are discussed.
Abstract:
We present experimental evidence of the existence of cell variability in terms of threshold light dose for cultured Hep G2 cells (liver cancer cells). Using a theoretical model to describe the effects caused by successive photodynamic therapy (PDT) sessions, and based on the consequences of a partial response, we introduce the concept of a threshold dose distribution within a tumor. The experimental model consists of a stack of flasks and simulates subsequent layers of a tissue exposed to PDT application. The results indicate that cells from the same culture can respond in different ways to similar PDT-induced damage. Moreover, the consequence is a partial killing of the cells submitted to PDT, and the death fraction decreased at each in vitro PDT session. To demonstrate the occurrence of cell population modification as a response to PDT, we constructed a simple theoretical model and assumed that the threshold dose distribution for the cell population of a tumor is represented by a modified Gaussian distribution.
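The population-shift argument can be illustrated with a toy simulation (not the authors' fitted model): cell thresholds are drawn from a Gaussian distribution, each session delivers a fluctuating dose, and the death fraction shrinks from session to session because the survivors carry higher thresholds. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# cell population with Gaussian-distributed threshold light doses;
# the values below are illustrative only, not fitted parameters from the paper
thresholds = rng.normal(loc=10.0, scale=3.0, size=100_000)
thresholds = thresholds[thresholds > 0]              # drop non-physical negatives

nominal_dose = 10.0
for session in range(1, 6):
    # per-cell delivered dose fluctuates around the nominal value each session
    delivered = nominal_dose * rng.normal(1.0, 0.15, size=thresholds.size)
    killed = delivered > thresholds                  # threshold exceeded -> cell dies
    print(f"session {session}: death fraction = {killed.mean():.2f}")
    thresholds = thresholds[~killed]                 # survivors carry higher thresholds
```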
Abstract:
An attractive way for a company to take a long position in its own shares or to launch a share buyback program in the future, without having to use cash or take out a loan, or to hedge against a possible rise in the share price, is to enter into an equity swap. In this swap, the company receives the return on its own stock while paying a fixed or floating interest rate. However, this type of swap carries wrong-way risk, i.e., there is a positive dependence between the swap's underlying stock and the company's default probability, which a bank needs to take into account when pricing this type of swap. In this work, we propose a model to incorporate the dependence between default probabilities and counterparty exposure into the CVA calculation for this type of swap. We use a Cox process to model the time of default, with the stochastic default intensity following a CIR-type model, and assume that the random factor driving the underlying stock and the random factor driving the default intensity are jointly given by a bivariate standard normal distribution. We analyze the impact on the CVA of incorporating wrong-way risk for this type of swap with different counterparties and for different maturities and correlation levels.
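A rough Monte Carlo sketch of the ingredients described above is given below: a CIR-type intensity driven by shocks correlated with the stock's shocks, a Cox-process default time, and a simplified proxy for the bank's swap exposure. The parameters, the exposure proxy and the Euler discretisation are illustrative assumptions, not the thesis' calibrated model.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative parameters (not calibrated to any market data) ---
T, n_steps, n_paths = 1.0, 252, 20_000
dt = T / n_steps
S0, r, sigma_S = 100.0, 0.05, 0.30                     # stock (GBM under Q)
kappa, theta, sigma_l, lam0 = 0.5, 0.03, 0.10, 0.03    # CIR default intensity
rho, recovery, notional = -0.6, 0.4, 1.0               # shock correlation, recovery, notional

S = np.full(n_paths, S0)
lam = np.full(n_paths, lam0)
cum_lam = np.zeros(n_paths)                 # integrated intensity (Cox process)
E_draw = rng.exponential(size=n_paths)      # default when cum_lam first exceeds this
tau = np.full(n_paths, np.inf)
exposure_at_default = np.zeros(n_paths)

for i in range(1, n_steps + 1):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)  # bivariate normal shocks
    S *= np.exp((r - 0.5 * sigma_S**2) * dt + sigma_S * np.sqrt(dt) * z1)
    lam = np.abs(lam + kappa * (theta - lam) * dt
                 + sigma_l * np.sqrt(np.maximum(lam, 0) * dt) * z2)     # reflected CIR step
    cum_lam += lam * dt
    t = i * dt
    newly = (cum_lam >= E_draw) & np.isinf(tau)
    # toy proxy for the bank's swap value: accrued rate leg minus the equity return
    value = notional * (r * t - (S / S0 - 1.0))
    exposure_at_default[newly] = np.maximum(value[newly], 0.0)
    tau[newly] = t

defaulted = np.isfinite(tau)
disc = np.exp(-r * np.where(defaulted, tau, T))
cva = (1 - recovery) * np.mean(defaulted * disc * exposure_at_default)
print(f"default rate = {defaulted.mean():.3f}, CVA = {cva:.5f}")
```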