988 results for log Gaussian Cox process
Abstract:
Changepoint analysis is a well-established area of statistical research, but in the context of spatio-temporal point processes it is as yet relatively unexplored. Some substantial differences from standard changepoint analysis have to be taken into account: firstly, at every time point the datum is an irregular pattern of points; secondly, in real situations issues of spatial dependence between points and temporal dependence within time segments arise. Our motivating example consists of data concerning the monitoring and recovery of radioactive particles from Sandside beach, in the north of Scotland; there have been two major changes in the equipment used to detect the particles, representing known potential changepoints in the number of retrieved particles. In addition, offshore particle retrieval campaigns are believed to reduce the particle intensity onshore with an unknown temporal lag; in this latter case, the problem concerns multiple unknown changepoints. We therefore propose a Bayesian approach for detecting multiple changepoints in the intensity function of a spatio-temporal point process, allowing for spatial and temporal dependence within segments. We use log-Gaussian Cox processes, a very flexible class of models suitable for environmental applications that can be implemented using the integrated nested Laplace approximation (INLA), a computationally efficient alternative to Markov chain Monte Carlo methods for approximating the posterior distribution of the parameters. Once the posterior curve is obtained, we propose a few methods for detecting significant changepoints. We present a simulation study, which consists of generating spatio-temporal point pattern series under several scenarios; the performance of the methods is assessed in terms of type I and type II errors, detected changepoint locations and accuracy of the segment intensity estimates.
We finally apply these methods to the motivating dataset and find sensible results about the presence and nature of changes in the process.
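The simulation study generates point pattern series from a log-Gaussian Cox process; a minimal grid-based sketch of sampling one spatial LGCP realisation follows. The exponential covariance, the unit-square domain and all parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def simulate_lgcp(n=32, mu=4.0, sigma2=1.0, scale=0.2, rng=None):
    """Sample a log-Gaussian Cox process on the unit square via an n x n grid."""
    rng = np.random.default_rng(rng)
    # Grid cell centres and their pairwise distances.
    xs = (np.arange(n) + 0.5) / n
    X, Y = np.meshgrid(xs, xs)
    pts = np.column_stack([X.ravel(), Y.ravel()])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    # Exponential covariance for the latent Gaussian field (jitter for stability).
    K = sigma2 * np.exp(-d / scale) + 1e-8 * np.eye(n * n)
    field = np.linalg.cholesky(K) @ rng.standard_normal(n * n)
    # Intensity = exp(mu + field); Poisson counts per cell of area 1/n^2.
    lam = np.exp(mu + field) / (n * n)
    counts = rng.poisson(lam)
    return counts.reshape(n, n)

counts = simulate_lgcp(rng=0)
```

Conditioning on the latent field, the cell counts are independent Poisson draws, which is what makes this grid approximation simple.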
Abstract:
An attractive way for a company that intends to take a long position in its own shares, or to launch a share buyback programme in the future, without needing to commit cash or take out a loan, while also hedging against a possible rise in the share price, is to enter into an equity swap. In this swap, the company receives the variation of its own stock while paying a fixed or floating interest rate. However, this type of swap carries wrong-way risk, that is, there is a positive dependence between the swap's underlying stock and the company's default probability, which a bank must take into account when pricing such a swap. In this work we propose a model to incorporate the dependence between default probabilities and counterparty exposure into the CVA calculation for this type of swap. We use a Cox process to model the time of default, with the stochastic default intensity following a CIR-type model, and we assume that the random factor driving the underlying stock and the random factor driving the default intensity are jointly given by a bivariate standard normal distribution. We analyse the impact on the CVA of incorporating wrong-way risk for this type of swap with different counterparties, and for different maturities and correlation levels.
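The default-time model described above can be sketched with a few lines of Monte Carlo. Everything below is a stylised illustration: the parameter values, the full-truncation CIR discretisation, and the exposure proxy max(K − S, 0) are assumptions, not the paper's calibration. The negative correlation rho pushes the default intensity up when the stock falls, which is one way to induce the wrong-way dependence between exposure and default.

```python
import numpy as np

def cva_wrong_way(S0=100.0, K=100.0, r=0.05, sig=0.3,
                  lam0=0.02, kappa=0.5, theta=0.03, xi=0.1,
                  rho=-0.5, T=2.0, n_steps=100, n_paths=20000,
                  recovery=0.4, rng=0):
    """Monte Carlo CVA sketch: default arrives via a Cox process whose
    CIR intensity is driven by shocks correlated (rho) with the stock's
    shocks; rho < 0 raises the intensity when the stock falls."""
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    lam = np.full(n_paths, lam0)
    hazard = np.zeros(n_paths)           # integrated intensity
    E = rng.exponential(size=n_paths)    # default-clock thresholds
    alive = np.ones(n_paths, dtype=bool)
    cva = 0.0
    for i in range(1, n_steps + 1):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        S = S * np.exp((r - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * z1)
        # Full-truncation Euler step for the CIR intensity.
        lam = np.maximum(lam + kappa * (theta - lam) * dt
                         + xi * np.sqrt(np.maximum(lam, 0.0) * dt) * z2, 0.0)
        hazard += lam * dt
        newly = alive & (hazard >= E)    # paths defaulting this step
        alive &= ~newly
        # Stylised exposure proxy at default: the bank's claim when the
        # stock has fallen below the reference level K.
        exposure = np.maximum(K - S[newly], 0.0)
        cva += (1 - recovery) * np.exp(-r * i * dt) * exposure.sum()
    return cva / n_paths
```

Setting rho to zero recovers the independent (no wrong-way) case, so the impact of the dependence can be read off by comparing the two runs.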
Abstract:
Objective. The purpose of this study was to estimate the Down syndrome detection and false-positive rates for second-trimester sonographic prenasal thickness (PT) measurement alone and in combination with other markers. Methods. Multivariate log Gaussian modeling was performed using numerical integration. Parameters for the PT distribution, in multiples of the normal gestation-specific median (MoM), were derived from 105 Down syndrome and 1385 unaffected pregnancies scanned at 14 to 27 weeks. The data included a new series of 25 cases and 535 controls combined with 4 previously published series. The means were estimated by the median and the SDs by the 10th to 90th range divided by 2.563. Parameters for other markers were obtained from the literature. Results. A log Gaussian model fitted the distribution of PT values well in Down syndrome and unaffected pregnancies. The distribution parameters were as follows: Down syndrome, mean, 1.334 MoM; log10 SD, 0.0772; unaffected pregnancies, 0.995 and 0.0752, respectively. The model-predicted detection rates for 1%, 3%, and 5% false-positive rates for PT alone were 35%, 51%, and 60%, respectively. The addition of PT to a 4 serum marker protocol increased detection by 14% to 18% compared with serum alone. The simultaneous sonographic measurement of PT and nasal bone length increased detection by 19% to 26%, and with a third sonographic marker, nuchal skin fold, performance was comparable with first-trimester protocols. Conclusions. Second-trimester screening with sonographic PT and serum markers is predicted to have a high detection rate, and further sonographic markers could perform comparably with first-trimester screening protocols.
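Using the reported PT parameters, a single-marker detection rate at a fixed false-positive rate can be computed directly from the two log-Gaussian distributions. This univariate cutoff sketch is illustrative only: the study's multivariate modeling with numerical integration and risk-based screening is not reproduced here, so the resulting numbers will not match the quoted rates exactly.

```python
import numpy as np
from scipy.stats import norm

# Reported prenasal-thickness parameters on the log10 MoM scale.
mu_aff, sd_aff = np.log10(1.334), 0.0772   # Down syndrome pregnancies
mu_un,  sd_un  = np.log10(0.995), 0.0752   # unaffected pregnancies

def detection_rate(fpr):
    """Single-marker detection rate at a fixed false-positive rate:
    place the cutoff in the upper tail of the unaffected distribution,
    then evaluate the affected distribution above that cutoff."""
    cutoff = norm.ppf(1.0 - fpr, loc=mu_un, scale=sd_un)
    return norm.sf(cutoff, loc=mu_aff, scale=sd_aff)
```

Because the affected mean (1.334 MoM) sits well above the unaffected mean (0.995 MoM) with similar spreads, the detection rate rises quickly as the allowed false-positive rate grows.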
Abstract:
We propose a low-complexity technique to generate amplitude-correlated time series with a Nakagami-m distribution and phase-correlated Gaussian-distributed time series, which is useful for the simulation of ionospheric scintillation effects in GNSS signals. To generate a complex scintillation process, the technique requires solely the knowledge of the parameters S4 (scintillation index) and σφ (phase standard deviation), besides the definition of models for the amplitude and phase power spectra. The concatenation of two nonlinear memoryless transformations is used to produce a Nakagami-distributed amplitude signal from a Gaussian autoregressive process.
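The concatenation of two nonlinear memoryless transformations can be sketched as follows: a Gaussian AR(1) process is mapped to uniforms through the Gaussian CDF and then to Nakagami-m amplitudes through the Nakagami quantile function. The AR(1) correlation model and the parameter values are illustrative assumptions; the paper's power-spectrum-based shaping is not reproduced.

```python
import numpy as np
from scipy.stats import norm, nakagami

def nakagami_from_ar1(m=2.0, omega=1.0, a=0.9, n=10000, rng=0):
    """Correlated Nakagami-m amplitude series built from a Gaussian
    AR(1) process by two memoryless maps: Gaussian CDF to uniforms,
    then the Nakagami quantile (inverse CDF) transform."""
    rng = np.random.default_rng(rng)
    g = np.empty(n)
    g[0] = rng.standard_normal()
    w = np.sqrt(1.0 - a**2) * rng.standard_normal(n)
    for i in range(1, n):
        g[i] = a * g[i - 1] + w[i]                   # unit-variance AR(1)
    u = norm.cdf(g)                                   # memoryless map 1
    return nakagami.ppf(u, m, scale=np.sqrt(omega))   # memoryless map 2

amp = nakagami_from_ar1()
```

Because both maps are monotone and memoryless, the temporal correlation of the Gaussian driver carries over to the amplitude series while the marginal distribution becomes exactly Nakagami-m.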
Abstract:
We develop an algorithm to simulate a Gaussian stochastic process that is non-δ-correlated in both space and time coordinates. The colored noise obeys a linear reaction-diffusion Langevin equation with Gaussian white noise. This equation is exactly simulated in a discrete Fourier space.
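The exact Fourier-space simulation works because each spatial mode of a linear reaction-diffusion Langevin equation decouples into an Ornstein-Uhlenbeck process that can be advanced exactly over any time step. A one-dimensional sketch follows; the parameter values are illustrative, and the inverse FFT simply discards the imaginary part rather than enforcing Hermitian symmetry.

```python
import numpy as np

def colored_noise_step(eta_k, k2, dt, tau=1.0, D=1.0, eps=1.0, rng=None):
    """Advance every Fourier mode exactly: each mode of the linear
    reaction-diffusion Langevin equation is an Ornstein-Uhlenbeck
    process with decay rate lam = 1/tau + D*k^2."""
    rng = np.random.default_rng() if rng is None else rng
    lam = 1.0 / tau + D * k2
    decay = np.exp(-lam * dt)
    std = np.sqrt(eps * (1.0 - decay**2) / (2.0 * lam))  # exact OU std over dt
    noise = rng.standard_normal(eta_k.shape) + 1j * rng.standard_normal(eta_k.shape)
    return eta_k * decay + std * noise

# One-dimensional example on a periodic grid.
n, L = 64, 2 * np.pi
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
rng = np.random.default_rng(0)
eta_k = np.zeros(n, dtype=complex)
for _ in range(1000):
    eta_k = colored_noise_step(eta_k, k**2, dt=0.01, rng=rng)
eta = np.fft.ifft(eta_k).real   # imaginary part discarded (see lead-in)
```

Because the per-mode update uses the exact OU transition density, the step size dt affects only the temporal sampling, not the stationary statistics.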
Abstract:
The aim of this work was to study the use of intelligent control systems to improve the fatigue life of a mechatronic machine. Regarding intelligent systems, the work focused mainly on exploring the possibilities of neural networks and fuzzy logic. In addition, a control algorithm based on intelligent systems was developed to increase fatigue life. The control algorithm was integrated into the control of a timber crane. The controller was first developed using simulation models. More extensive tests of the controller were carried out in the laboratory on a physical prototype. As a result, the predicted fatigue life of the timber crane's boom was increased many times over. In addition to the improved fatigue life, the control algorithm also damps the crane's vibrations.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
BACKGROUND Methylenetetrahydrofolate reductase (MTHFR) plays a major role in folate metabolism and consequently could be an important factor for the efficacy of a treatment with 5-fluorouracil. Our aim was to evaluate the prognostic and predictive value of two well-characterized constitutional MTHFR gene polymorphisms for primarily resected and neoadjuvantly treated esophagogastric adenocarcinomas. METHODS 569 patients from two centers were analyzed (gastric cancer: 218, carcinoma of the esophagogastric junction (AEG II, III): 208 and esophagus (AEG I): 143). 369 patients received neoadjuvant chemotherapy followed by surgery; 200 patients were resected without preoperative treatment. The MTHFR C677T and A1298C polymorphisms were determined in DNA from peripheral blood lymphocytes. Associations with prognosis, response and clinicopathological factors were analyzed retrospectively within a prospective database (chi-square, log-rank, Cox regression). RESULTS Only the MTHFR A1298C polymorphism had prognostic relevance in neoadjuvantly treated patients, but it was not a predictor for response to neoadjuvant chemotherapy. The AC genotype of the MTHFR A1298C polymorphism was significantly associated with worse outcome (p = 0.02, HR 1.47 (1.06-2.04)). If neoadjuvantly treated patients were analyzed based on their tumor localization, the AC genotype of the MTHFR A1298C polymorphism was a significant negative prognostic factor in patients with gastric cancer according to UICC 6th edition (gastric cancer including AEG type II, III: HR 2.0, 95% CI 1.3-2.0, p = 0.001) and 7th edition (gastric cancer without AEG II, III: HR 2.8, 95% CI 1.5-5.7, p = 0.003), but not for AEG I. For both definitions of gastric cancer the AC genotype was confirmed as an independent negative prognostic factor in Cox regression analysis. In primarily resected patients neither the MTHFR A1298C nor the MTHFR C677T polymorphism had prognostic impact.
CONCLUSIONS The MTHFR A1298C polymorphism was an independent prognostic factor in patients with neoadjuvantly treated gastric adenocarcinomas (according to both the UICC 6th and 7th definitions for gastric cancer) but not in AEG I nor in primarily resected patients, which confirms the impact of this enzyme on chemotherapy-associated outcome.
Abstract:
Wireless sensor networks (WSNs) consist of thousands of nodes that need to communicate with each other. However, it is possible that some nodes are isolated from other nodes due to limited communication range. This paper focuses on the influence of communication range on the probability that all nodes are connected under two conditions, respectively: (1) all nodes have the same communication range, and (2) the communication range of each node is a random variable. In the former case, this work proves that, for 0 < ε < e^(-1), if the probability of the network being connected is 0.36ε, then by increasing the communication range by a constant C(ε), the probability of the network being connected is at least 1-ε. An explicit function C(ε) is given. It turns out that, once the network is connected, this also makes the WSN resilient against node failures. In the latter case, this paper proposes that the network connection probability is modeled as a Cox process. The change of network connection probability with respect to distribution parameters and resilience performance is presented. Finally, a method to decide the distribution parameters of node communication range in order to satisfy a given network connection probability is developed.
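The dependence of the connection probability on a common communication range can be illustrated by direct Monte Carlo over random geometric graphs. The node count, trial count and unit-square deployment area are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def connection_probability(n=50, radius=0.2, trials=200, rng=0):
    """Monte Carlo estimate of the probability that n uniformly placed
    nodes with a common communication range form a connected network."""
    rng = np.random.default_rng(rng)
    connected = 0
    for _ in range(trials):
        pts = rng.random((n, 2))
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        adj = d <= radius
        # Depth-first search from node 0 over the disc graph.
        seen = np.zeros(n, dtype=bool)
        seen[0] = True
        stack = [0]
        while stack:
            v = stack.pop()
            for u in np.nonzero(adj[v] & ~seen)[0]:
                seen[u] = True
                stack.append(u)
        connected += seen.all()
    return connected / trials
```

Sweeping the radius with this estimator reproduces the sharp transition that motivates the paper's bound: a small constant increase in range moves the connection probability from near zero to near one.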
Abstract:
Projection of a high-dimensional dataset onto a two-dimensional space is a useful tool to visualise structures and relationships in the dataset. However, a single two-dimensional visualisation may not display all the intrinsic structure. Therefore, hierarchical/multi-level visualisation methods have been used to extract more detailed understanding of the data. Here we propose a multi-level Gaussian process latent variable model (MLGPLVM). MLGPLVM works by segmenting data (with e.g. K-means, Gaussian mixture model or interactive clustering) in the visualisation space and then fitting a visualisation model to each subset. To measure the quality of multi-level visualisation (with respect to parent and child models), metrics such as trustworthiness, continuity, mean relative rank errors, visualisation distance distortion and the negative log-likelihood per point are used. We evaluate the MLGPLVM approach on the ‘Oil Flow’ dataset and a dataset of protein electrostatic potentials for the ‘Major Histocompatibility Complex (MHC) class I’ of humans. In both cases, visual observation and the quantitative quality measures have shown better visualisation at lower levels.
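The MLGPLVM workflow described above, segmenting in the visualisation space and then fitting a child model to each subset, can be sketched with K-means for the segmentation and plain PCA standing in for the GPLVM projections. PCA is a deliberate simplification here, and the synthetic blob data is also an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def multi_level_projection(X, n_clusters=3, rng=0):
    """Two-level visualisation: one parent 2-D projection of all the
    data, then a child projection refitted to each cluster."""
    parent = PCA(n_components=2).fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=rng).fit_predict(X)
    children = {c: PCA(n_components=2).fit_transform(X[labels == c])
                for c in range(n_clusters)}
    return parent, labels, children

# Three well-separated synthetic blobs in five dimensions.
X = np.vstack([np.random.default_rng(i).normal(5.0 * i, 1.0, (30, 5))
               for i in range(3)])
parent, labels, children = multi_level_projection(X)
```

The child models get to spend their two dimensions on a single cluster's internal structure, which is the intuition behind the better low-level visualisation quality reported above.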
Abstract:
In this paper we study the possible microscopic origin of heavy-tailed probability density distributions for the price variation of financial instruments. We extend the standard log-normal process to include another random component in the so-called stochastic volatility models. We study these models under an assumption, akin to the Born-Oppenheimer approximation, in which the volatility has already relaxed to its equilibrium distribution and acts as a background to the evolution of the price process. In this approximation, we show that all models of stochastic volatility should exhibit a scaling relation in the time lag of zero-drift modified log-returns. We verify that the Dow Jones Industrial Average index indeed follows this scaling. We then focus on two popular stochastic volatility models, the Heston and Hull-White models. In particular, we show that in the Hull-White model the resulting probability distribution of log-returns in this approximation corresponds to the Tsallis (Student-t) distribution. The Tsallis parameters are given in terms of the microscopic stochastic volatility model. Finally, we show that the log-returns for 30 years of Dow Jones index data are well fitted by a Tsallis distribution, obtaining the relevant parameters. (c) 2007 Elsevier B.V. All rights reserved.
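The final fitting step can be sketched with SciPy's Student-t fit on synthetic heavy-tailed returns; no market data is used here. The mapping from the fitted degrees of freedom ν to the Tsallis q parameter, q = (ν + 3)/(ν + 1), is the standard q-Gaussian/Student-t correspondence, stated here as an assumption about the intended parameterisation.

```python
import numpy as np
from scipy.stats import t

# Synthetic stand-in for zero-drift modified log-returns.
rng = np.random.default_rng(0)
returns = t.rvs(df=4, scale=0.01, size=5000, random_state=rng)
returns = returns - returns.mean()        # enforce zero drift

# Fit the Student-t form of the Tsallis distribution and map the
# fitted degrees of freedom to the Tsallis q parameter.
df, loc, scale = t.fit(returns)
q = (df + 3.0) / (df + 1.0)
```

On real index data the same three-parameter fit would deliver the "relevant parameters" the abstract refers to, with q > 1 quantifying the heaviness of the tails.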
Abstract:
Dissertation submitted for the degree of Doctor in Statistics and Risk Management
Abstract:
Presented is an accurate swimming velocity estimation method using an inertial measurement unit (IMU) by employing a simple biomechanical constraint of motion along with Gaussian process regression to deal with sensor inherent errors. Experimental validation shows a velocity RMS error of 9.0 cm/s and high linear correlation when compared with a commercial tethered reference system. The results confirm the practicality of the presented method to estimate swimming velocity using a single low-cost, body-worn IMU.
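A hedged sketch of the error-correction idea: Gaussian process regression learns a smooth, feature-dependent sensor error, which is then subtracted from the IMU-derived velocity. The one-dimensional feature, the error model and all values below are synthetic stand-ins, not the paper's biomechanical constraint or swimming data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-in data: the IMU-derived velocity deviates from the
# true velocity by a smooth, feature-dependent error plus sensor noise.
rng = np.random.default_rng(0)
feat = rng.uniform(0.0, 10.0, (200, 1))        # hypothetical stroke-cycle feature
true_v = 1.2 + 0.3 * np.sin(feat[:, 0])        # true swimming velocity (m/s)
imu_v = true_v + 0.2 * np.cos(0.5 * feat[:, 0]) + rng.normal(0.0, 0.02, 200)

# GP regression learns the sensor error as a function of the feature;
# subtracting the prediction corrects the velocity estimate.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(feat, imu_v - true_v)
corrected = imu_v - gp.predict(feat)
rmse = np.sqrt(np.mean((corrected - true_v) ** 2))
```

In practice the reference velocities for training would come from a tethered system like the one used for validation in the study.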
Abstract:
A class of identification algorithms is introduced for Gaussian process (GP) models. The fundamental approach is to propose a new kernel function which leads to a covariance matrix with low rank, a property that is consequently exploited for computational efficiency in both model parameter estimation and model prediction. The objective of either maximizing the marginal likelihood or the Kullback-Leibler (K-L) divergence between the estimated output probability density function (pdf) and the true pdf is used as the respective cost function. For each cost function, an efficient coordinate descent algorithm is proposed to estimate the kernel parameters using a one-dimensional derivative-free search, and the noise variance using a fast gradient descent algorithm. Numerical examples are included to demonstrate the effectiveness of the new identification approaches.
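The computational benefit of a low-rank covariance can be made concrete with the Woodbury identity: for K = ΦΦᵀ of rank r, the GP predictive mean needs only an r × r linear solve instead of an n × n one. A minimal sketch, where the explicit feature matrix Φ is an assumed stand-in for the paper's kernel construction:

```python
import numpy as np

def lowrank_gp_predict(Phi, y, Phi_star, noise_var):
    """Predictive mean of GP regression with low-rank kernel
    K = Phi @ Phi.T: the Woodbury identity reduces the n x n solve
    to an r x r solve, where r is the rank of the kernel."""
    n, r = Phi.shape
    A = noise_var * np.eye(r) + Phi.T @ Phi            # r x r system
    alpha = (y - Phi @ np.linalg.solve(A, Phi.T @ y)) / noise_var
    return (Phi_star @ Phi.T) @ alpha

# Small example with an explicit rank-4 feature matrix.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((200, 4))
y = Phi @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.standard_normal(200)
pred = lowrank_gp_predict(Phi, y, Phi[:5], noise_var=0.01)
```

The per-prediction cost drops from O(n³) to O(n r²), which is exactly the kind of saving the low-rank kernel above is designed to deliver for both parameter estimation and prediction.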