916 results for Long memory stochastic process


Relevance: 100.00%

Abstract:

Background: The follow-up care of women with breast cancer requires an understanding of disease recurrence patterns, and the follow-up visit schedule should be determined according to the times when recurrence is most likely to occur, so that preventive measures can be taken to avoid or minimize recurrence. Objective: To model breast cancer recurrence as a stochastic process, with the aim of generating a hazard function for determining a follow-up schedule. Methods: We modeled the process of disease progression as a time-transformed Wiener process, and the first hitting time was used as an approximation of the true failure time. A woman's recurrence-free survival time is modeled as the time it takes the Wiener process to cross a threshold value representing the occurrence of a breast cancer recurrence event. Building on the first-hitting-time model, we explored a threshold regression model that takes into account covariates contributing to the prognosis of breast cancer. Using real data from SEER-Medicare, we proposed follow-up visit schedules based on a constant probability of disease recurrence between consecutive visits. Results: We demonstrated that threshold regression based on the first-hitting-time modeling approach can provide useful predictive information about breast cancer recurrence. Our results suggest that the surveillance and follow-up schedule can be determined for women based on prognostic factors such as tumor stage. Women with early-stage disease may be seen less frequently for follow-up visits than women with locally advanced disease. Our results from SEER-Medicare data support the idea of risk-controlled follow-up strategies for groups of women. Conclusion: The methodology proposed in this study allows one to determine individual follow-up schedules based on a parametric hazard function that incorporates known prognostic factors.
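A minimal sketch of the first-hitting-time construction described above: if a latent health process starts at x0 > 0 and degrades as a Wiener process with negative drift, the recurrence time (first hit of zero) has an inverse Gaussian distribution, and visit times can then be chosen so that the conditional probability of recurrence between consecutive visits is constant. This is an illustration only; the function names, parameter values and the 60-month horizon are invented, not taken from the paper.

import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def fht_cdf(t, x0=5.0, mu=-0.8, sigma=1.5):
    """P(T <= t), where T is the first time x0 + mu*t + sigma*W_t hits 0 (mu < 0)."""
    nu = -mu                                   # equivalent upward drift toward barrier x0
    z1 = (nu * t - x0) / (sigma * np.sqrt(t))
    z2 = (-nu * t - x0) / (sigma * np.sqrt(t))
    return norm.cdf(z1) + np.exp(2.0 * nu * x0 / sigma**2) * norm.cdf(z2)

def constant_risk_schedule(p=0.05, horizon=60.0, max_visits=12, **params):
    """Visit times t1 < t2 < ... with P(T <= t_{k+1} | T > t_k) = p."""
    visits, t_prev, surv_prev = [], 0.0, 1.0
    while len(visits) < max_visits:
        target = 1.0 - surv_prev * (1.0 - p)   # unconditional CDF value at the next visit
        if fht_cdf(horizon, **params) <= target:
            break                              # no further visit needed inside the horizon
        t_next = brentq(lambda t: fht_cdf(t, **params) - target, t_prev + 1e-9, horizon)
        visits.append(round(t_next, 2))
        surv_prev = 1.0 - fht_cdf(t_next, **params)
        t_prev = t_next
    return visits

print(constant_risk_schedule())                # follow-up times (e.g. in months)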

Relevance: 100.00%

Abstract:

The purpose of this paper is to present a program written in Matlab-Octave for the simulation of the time evolution of student curricula, i.e., how students pass their subjects over time until graduation. From the simulations, the program computes the academic performance rates of the subjects of the study plan for each semester, as well as the overall rates, which are a) the efficiency rate, defined as the ratio of the number of students passing the exam to the number of students who registered for it, and b) the success rate, defined as the ratio of the number of students passing the exam to the number of students who not only registered for it but actually took it. Additionally, we compute the rates for the bachelor degree established for Spain by the National Quality Evaluation and Accreditation Agency (ANECA): the graduation rate (the percentage of students who finish as scheduled in the plan or taking one extra year) and the efficiency rate (the percentage of credits that a graduating student has actually taken). The simulation is carried out in terms of the probabilities of passing all the subjects in the study plan. The application of the simulator to Polytech students in Madrid, where passing requirements are especially stiff in first- and second-year subjects, is particularly relevant for analyzing student cohorts and the probabilities of students finishing in the minimum of four years, taking one extra year, two extra years, and so forth. It is a very useful tool when designing new study plans. The probability distribution of the random variable "number of semesters a student takes to complete the curriculum and graduate" is difficult or even unfeasible to obtain analytically, and this is even truer when we incorporate uncertainty in parameter estimation. This is why we apply Monte Carlo simulation, which not only illustrates the stochastic process but also provides a method of computation. The stochastic simulator is proving to be a useful tool for identifying the subjects most critical to the distribution of the number of semesters needed to complete the curriculum, and subsequently for decision making in terms of curriculum planning and passing standards in the University. Simulations are performed through a graphical interface, which also presents the results in appropriate figures. The project has been funded by the Call for Innovation in Education Projects of Universidad Politécnica de Madrid (UPM) through a project of its school Escuela Técnica Superior de Ingenieros Industriales (ETSII) during the period September 2010 to September 2011.
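As a simplified illustration of the Monte Carlo idea (the actual simulator is a Matlab-Octave program with a graphical interface and a much more detailed model of the study plan), the sketch below draws the distribution of the number of semesters needed to graduate from per-subject passing probabilities. The plan, the probabilities and the rule that a student retakes every pending subject each semester are invented simplifications; prerequisites, registration limits and the semester structure of the plan are ignored.

import random
from collections import Counter

# invented study plan: subject -> probability of passing in a single attempt
PLAN = {
    "Calculus I": 0.55, "Physics I": 0.60, "Chemistry": 0.70,
    "Calculus II": 0.50, "Physics II": 0.60, "Programming": 0.80,
    "Mechanics": 0.65, "Thermodynamics": 0.70, "Control": 0.75, "Project": 0.90,
}

def semesters_to_graduate(max_semesters=20):
    pending = set(PLAN)
    for semester in range(1, max_semesters + 1):
        pending -= {s for s in pending if random.random() < PLAN[s]}
        if not pending:
            return semester
    return max_semesters            # censored at the simulation horizon

def semester_distribution(n=100_000):
    counts = Counter(semesters_to_graduate() for _ in range(n))
    return {k: counts[k] / n for k in sorted(counts)}

for semesters, freq in semester_distribution().items():
    print(f"{semesters:2d} semesters: {freq:.4f}")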

Relevance: 100.00%

Abstract:

The modal analysis of a structural system consists of computing its vibrational modes. The experimental way to estimate these modes requires exciting the system with a measured or known input and then measuring the system output at different points using sensors; finally, system inputs and outputs are used to compute the modes of vibration. When the system is a large structure such as a building or a bridge, the tests have to be performed in situ, so it is not possible to measure system inputs such as wind, traffic, etc. Even if a known input is applied, the procedure is usually difficult and expensive, and uncontrolled disturbances still act on the system at the time of the test. These facts led to the idea of computing the modes of vibration using only the measured vibrations, regardless of the inputs that originated them, whether ambient vibrations (wind, earthquakes, etc.) or operational loads (traffic, human loading, etc.). This procedure is usually called Operational Modal Analysis (OMA) and in general consists of fitting a mathematical model to the measured data under the assumption that the unobserved excitations are realizations of a stationary stochastic process (usually white noise); the modes of vibration are then computed from the estimated model. The first issue investigated in this thesis is the performance of the Expectation-Maximization (EM) algorithm for maximum likelihood estimation of the state-space model in the field of OMA. The algorithm is described in detail, and it is analysed how to apply it to vibration data; it is then compared with another well-known method, the Stochastic Subspace Identification algorithm. The maximum likelihood estimate enjoys optimal properties from a statistical point of view, which makes it very attractive in practice, but the most remarkable property of the EM algorithm is that it can be used to address a wide range of situations in OMA. In this work, three additional state-space models are proposed and estimated using the EM algorithm:
• The first model is proposed to estimate the modes of vibration when several tests are performed on the same structural system. Instead of analysing record by record and then computing averages, the EM algorithm is extended for the joint estimation of the proposed state-space model using all the available data.
• The second state-space model is used to estimate the modes of vibration when the number of available sensors is lower than the number of points to be tested. In these cases it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple sensor setups). Here, the proposed state-space model and the EM algorithm are used to estimate the modal parameters taking into account the data of all setups.
• Finally, a state-space model is proposed to estimate the modes of vibration in the presence of unmeasured inputs that cannot be modelled as white noise processes. In these cases, the frequency components of the inputs cannot be separated from the eigenfrequencies of the system, and spurious modes are obtained in the identification process. The idea is to measure the response of the structure corresponding to different inputs; it is then assumed that the parameters common to all the data correspond to the structure (modes of vibration), while the parameters found in a specific test correspond to the input in that test. The problem is solved using the proposed state-space model and the EM algorithm.
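As an illustration of the last step of any such identification, whichever method (EM or subspace identification) produces the discrete-time state-space model, the snippet below shows how natural frequencies, damping ratios and mode shapes are recovered from the eigenvalues of the estimated state matrix. The toy one-degree-of-freedom system and its matrices are invented and are not taken from the thesis.

import numpy as np
from scipy.linalg import expm

def modes_from_state_matrix(A, C, dt):
    """Frequencies (Hz), damping ratios and mode shapes from x_{k+1} = A x_k, y_k = C x_k."""
    lam_d, phi = np.linalg.eig(A)            # discrete-time eigenvalues / eigenvectors
    lam_c = np.log(lam_d) / dt               # map back to continuous time
    freqs = np.abs(lam_c) / (2.0 * np.pi)    # natural frequencies in Hz
    damping = -np.real(lam_c) / np.abs(lam_c)
    shapes = C @ phi                         # mode shapes observed at the sensors
    keep = np.imag(lam_d) > 0                # keep one eigenvalue per complex pair
    return freqs[keep], damping[keep], shapes[:, keep]

# toy single-degree-of-freedom oscillator: f = 2 Hz, 2 % damping, sampled at 100 Hz
f, zeta, dt = 2.0, 0.02, 0.01
wn = 2.0 * np.pi * f
Ac = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])   # continuous-time state matrix
A = expm(Ac * dt)                                         # exact discretisation
C = np.array([[1.0, 0.0]])                                # one displacement sensor
print(modes_from_state_matrix(A, C, dt))                  # ~2 Hz and ~0.02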

Relevance: 100.00%

Abstract:

In this work, we formulate a theory to address simulations of slow transport effects in atomic systems. We first develop this theoretical framework, based on statistical mechanics, in the context of equilibrium atomic ensembles, and then adapt it to model ensembles away from equilibrium. The theory stands on Jaynes' maximum entropy principle, valid for treating both systems in equilibrium and away from equilibrium, and on mean-field approximation theory. It is expressed as a variational principle in which a free entropy, rather than a free energy, is maximized. We define atomistic equivalents of macroscopic variables such as the temperature and the molar fractions, which are not required to be uniform but can vary from particle to particle, so that non-uniform macroscopic fields can be considered. We complement this theory with Monte Carlo summation rules for further approximation, yielding computable models. In addition, we provide a framework for studying transport processes, with the full set of equations driving the evolution of the system. We first derive a dissipation inequality for the entropic production involving discrete thermodynamic forces and fluxes. This discrete dissipation inequality identifies the adequate structure for the discrete kinetic potentials which couple the microscopic field rates to the corresponding driving forces; those kinetic potentials must finally be complemented with a phenomenological relation of the Onsager type. We present several validation cases illustrating equilibrium properties and surface segregation of metallic alloys. We first assess the ability of a simple mean-field model to reproduce thermodynamic equilibrium properties in systems with atomic resolution. We then evaluate the ability of the model to reproduce transport processes in complex systems over times that are long with respect to the characteristic atomic time scales.
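As a toy illustration of the Jaynes maximum-entropy construction the abstract builds on: among all distributions over a small set of discrete states that reproduce a prescribed mean energy, the entropy-maximizing one is the Gibbs distribution, and the Lagrange multiplier enforcing the constraint plays the role of an inverse temperature. The energy levels and target value below are invented, and the non-equilibrium and transport machinery of the thesis is of course not reproduced.

import numpy as np
from scipy.optimize import brentq

energies = np.array([0.0, 0.5, 1.3, 2.0])   # toy single-particle energy levels
target_mean_energy = 0.9                     # constraint <E> = U

def mean_energy(beta):
    w = np.exp(-beta * energies)             # unnormalised Gibbs weights
    return float(energies @ w / w.sum())

# solve <E>_beta = U for the multiplier beta (the "atomistic temperature")
beta = brentq(lambda b: mean_energy(b) - target_mean_energy, -50.0, 50.0)
p = np.exp(-beta * energies); p /= p.sum()
entropy = -(p * np.log(p)).sum()
print(f"beta = {beta:.3f}, p = {p.round(3)}, S = {entropy:.3f}")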

Relevance: 100.00%

Abstract:

Surface displacement at the dykes of La Pedrera reservoir (SE Spain) has been measured by satellite differential Synthetic Aperture Radar (SAR) interferometry. At the main dyke, a displacement of about 13 cm along the satellite line of sight was estimated between August 1995 and May 2010 from a dataset composed of ERS-1, ERS-2 and Envisat-ASAR images. Two independent short-term processing tasks were also carried out with ERS-2/Envisat-ASAR (June 2008 to May 2010) and TerraSAR-X (August 2008 to June 2010) images, which showed similar spatial and temporal displacement patterns. The joint analysis of historical instrument surveys and DInSAR-derived data has allowed the identification of a long-term deformation process which is reflected at the dam's surface and is also clearly recognizable in the inspection gallery. The plausible causes of the displacements measured by DInSAR are also discussed in the paper. Finally, DInSAR data have been used to compute the long-term settlement of La Pedrera dam, showing good agreement with external studies. This work therefore demonstrates that integrating DInSAR with in-situ techniques helps provide a complete spatial picture of the displacements of the dam and thereby helps to differentiate the causal mechanisms.

Relevance: 100.00%

Abstract:

We introduce a genetic programming (GP) approach for evolving genetic networks that exhibit desired dynamics when simulated as a discrete stochastic process. Our representation of genetic networks is based on a biochemical reaction model including key elements such as transcription, translation and post-translational modifications. The stochastic, reaction-based GP system is similar, but not identical, to algorithmic chemistries. We evolved genetic networks with noisy oscillatory dynamics. The results show the practicality of evolving particular dynamics in gene regulatory networks when they are modelled with intrinsic noise.
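One standard way to obtain the kind of discrete stochastic dynamics with intrinsic noise referred to above is Gillespie's stochastic simulation algorithm applied to the reaction network; whether the paper uses exactly this scheme is an assumption, and the two-species transcription/translation network and rate constants below are invented. The evolutionary (GP) layer that searches over networks is not shown.

import random

def gillespie(t_end=200.0, k_tx=0.5, k_tl=2.0, d_m=0.1, d_p=0.05):
    """Simulate transcription, translation, mRNA decay and protein decay."""
    t, mRNA, protein = 0.0, 0, 0
    trajectory = [(t, mRNA, protein)]
    while t < t_end:
        # propensities of: transcription, translation, mRNA decay, protein decay
        a = [k_tx, k_tl * mRNA, d_m * mRNA, d_p * protein]
        a0 = sum(a)
        if a0 == 0:
            break
        t += random.expovariate(a0)            # time to the next reaction
        r, acc = random.uniform(0, a0), 0.0
        for i, ai in enumerate(a):             # pick the reaction that fires
            acc += ai
            if r <= acc:
                break
        if i == 0:   mRNA += 1
        elif i == 1: protein += 1
        elif i == 2: mRNA -= 1
        else:        protein -= 1
        trajectory.append((t, mRNA, protein))
    return trajectory

print(gillespie()[-1])   # final (time, mRNA copies, protein copies)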

Relevance: 100.00%

Abstract:

The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before information processing techniques are applied to real-world problems. In recent years Gaussian processes and Bayesian neural networks have come to the fore, and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. We analyse the effects on the bounds and the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly decreasing behaviour of the curves and investigate the asymptotic behaviour of the upper bounds. The effects of the noise and of the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix defining the distance between input points is often taken to be diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. Numerical simulations confirm the significant superiority of the general distance matrix over the diagonal one. In the thesis we also present an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov chain Monte Carlo method and the evidence framework; the neural networks were trained on the task of labelling segmented outdoor images.
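A minimal sketch of the "general distance matrix" idea: instead of a diagonal matrix of inverse squared lengthscales, the squared-exponential covariance uses a full positive definite matrix M, k(x, x') = s2 * exp(-0.5 (x - x')^T M (x - x')), so that with M = L L^T and a thin L the kernel only responds to a low-dimensional linear projection of the inputs. In this sketch M is fixed by hand rather than learned, and the data, matrices and noise level are invented; the thesis estimates M from the data.

import numpy as np

def se_kernel(X1, X2, M, s2=1.0):
    d = X1[:, None, :] - X2[None, :, :]                   # pairwise differences
    return s2 * np.exp(-0.5 * np.einsum('ijk,kl,ijl->ij', d, M, d))

def gp_predict(Xtr, ytr, Xte, M, noise=0.01):
    K = se_kernel(Xtr, Xtr, M) + noise * np.eye(len(Xtr))
    ks = se_kernel(Xte, Xtr, M)
    return ks @ np.linalg.solve(K, ytr)                   # GP posterior mean

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))                              # 5 manifest input variables
w = np.array([1.0, -2.0, 0.0, 0.0, 0.0])                  # f depends on one direction only
y = np.sin(X @ w) + 0.1 * rng.normal(size=60)

M_full = np.outer(w, w) + 1e-3 * np.eye(5)                # rank-1 plus jitter: effective dim ~1
M_diag = np.eye(5)                                        # conventional diagonal choice

Xte = rng.normal(size=(200, 5))
yte = np.sin(Xte @ w)
for name, M in [("full", M_full), ("diagonal", M_diag)]:
    err = np.mean((gp_predict(X, y, Xte, M) - yte) ** 2)
    print(f"{name:8s} distance matrix: test MSE = {err:.3f}")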

Relevance: 100.00%

Abstract:

Deformation microstructures in two batches of commercially pure copper (A and B) of nearly identical composition have been studied after rolling reductions from 5% to 95%. X-ray diffraction, optical metallography, scanning electron microscopy in back-scattered mode, and transmission electron microscopy have been used to examine the deformation microstructure. At low strains (~10%) the deformation is accommodated by uniform octahedral slip. Microbands, which occur as sheet-like features usually on the {111} slip planes, are formed after 10% reduction. The misorientations between microbands and the matrix are usually small (1–2°), and the dislocations within the bands suggest that a single slip system has been operative. The number of microbands increases with strain; they start to cluster and rotate after 60% reduction and, after 90%, they become almost perfectly aligned with the rolling direction. There were no detectable differences in deformation microstructure between the two materials up to a deformation level of 60%, but subsequently copper B started to develop shear bands, which became very profuse by 90% reduction. By contrast, copper A at this stage of deformation developed a smooth laminated structure. This difference in the deformation microstructures has been attributed to traces of an unknown impurity in copper B which inhibit recovery of work hardening. The preferred orientations of both were typical of deformed copper, although the presence of shear bands was associated with a slightly weaker texture. The effects of rolling temperature and grain size on the deformation microstructure were also investigated. It was concluded that lowering the rolling temperature or increasing the initial grain size encourages the material to develop shear bands after heavy deformation. Recovery and recrystallization have been studied in both materials during annealing. During recrystallization, the growth of new grains showed quite different characteristics in the two cases. Where shear bands were present, they acted as nucleation sites and produced a wide spread of recrystallized grain orientations, and the resulting annealing textures were very weak. In the absence of shear bands, nucleation occurs by a remarkably long-range bulging process which creates the cube orientation and an intensely sharp annealing texture. Cube-oriented regions occur in long bands of highly elongated and well-recovered cells which contain long-range cumulative misorientations. They are transition bands with structural characteristics ideally suited for the nucleation of recrystallization. Shear banding inhibits the cube texture both by creating alternative nuclei and by destroying the microstructural features necessary for cube nucleation.

Relevance: 100.00%

Abstract:

2002 Mathematics Subject Classification: 65C05.

Relevance: 100.00%

Abstract:

2000 Mathematics Subject Classification: Primary 60J80, Secondary 60G99.

Relevance: 100.00%

Abstract:

The paper addresses questions concerning the use of intensity-based modelling in the pricing of credit derivatives. As specifying the distribution of the loss process is a non-trivial exercise, the well-known technique for this task uses inversion of the Laplace transform. A popular model choice is the class of doubly stochastic processes, since their Laplace transforms can be determined easily. Unfortunately, these processes lack several key features supported by empirical observations; for example, they cannot replicate the self-exciting nature of defaults. The aim of the paper is to show that, by using an appropriate change of measure, the Laplace transform of the compound loss and default process can be calculated not only for a doubly stochastic process but for an arbitrary point process with intensity as well. To support the application of the technique, we investigate the effect of the change of measure on the stochastic nature of the underlying process.
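For orientation, the classical identity that makes the doubly stochastic (Cox) case tractable is the following well-known fact: conditioning on the intensity path makes $N_t$ Poisson with mean $\Lambda_t = \int_0^t \lambda_s\,ds$, so that
\[
\mathbb{E}\bigl[e^{-u N_t}\bigr] \;=\; \mathbb{E}\Bigl[\exp\bigl(-(1 - e^{-u})\,\Lambda_t\bigr)\Bigr], \qquad u \ge 0 .
\]
The contribution summarized above is to obtain an analogous transform, via a suitable change of measure, for an arbitrary point process admitting an intensity; the display covers only the classical doubly stochastic case.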

Relevance: 100.00%

Abstract:

The advent of the internet caused a revolution in the way society relates. The consolidation of social media in the digital environment accentuated the power of these changes and forced communication to revisit its paradigms. The immediacy and speed with which information propagates in a symmetric, two-way process – sender and receiver – have changed the way of working, thinking and planning. This work presents a survey of communication professionals and analyses how the time factor has impacted the long-term – traditionally annual – planning of actions aimed at the digital environment. The research was based on a broad theoretical framework from the areas of communication, marketing, social networks and media, technology and management, as well as on research institutes and companies. In order to describe the professionals' experiences, we also carried out qualitative research with in-depth interviews, using a non-probabilistic sample, focused on the disciplines of marketing and advertising and public relations. The results point to a learning process that is still being acquired, day after day, through trial and error, in which the professionals' concern is divided between how far in advance a plan is drawn up and the need to revise it continuously.

Relevance: 100.00%

Abstract:

The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval $[0,1]$ with dependence on a single parameter, $\lambda$. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on $\lambda$ and the behavior of the initial data around $1$. The second scaling leads to a measure-valued Fleming-Viot process, an infinite-dimensional stochastic process that is frequently associated with population genetics.
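The abstract does not spell out the ball-and-urn dynamics, so the toy Wright-Fisher-style simulation below is only meant to illustrate the opposing pressures it describes: within a host the pathogen's replication rate r in [0,1] drifts upward, while hosts carrying high-r strains transmit less and therefore found fewer new infections. Every modelling choice and parameter is an invented stand-in, and nothing here reproduces the paper's scaling limits.

import random

def simulate(hosts=2000, generations=300, within_host_drift=0.05,
             transmission_cost=0.8):
    r = [random.random() for _ in range(hosts)]      # initial replication rates
    for _ in range(generations):
        # within-host scale: replication rate creeps upward during an infection
        r = [min(1.0, ri + within_host_drift * random.random()) for ri in r]
        # between-host scale: new infections are founded by donors sampled with
        # weight decreasing in r, since high-r strains transmit less often
        weights = [1.0 - transmission_cost * ri for ri in r]
        r = random.choices(r, weights=weights, k=hosts)
    return sum(r) / hosts

print(f"mean replication rate after balancing the two scales: {simulate():.3f}")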

Relevance: 100.00%

Abstract:

This dissertation contains four essays that share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of financial asset volatility and correlations. The first two chapters provide tools for univariate applications, while the last two develop multivariate methodologies. In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains of the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models. In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model that allows for a time-varying intercept and is implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is joint work with David Veredas. We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data-generating processes, in the presence or absence of microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of the disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient and easy alternative for measuring integrated covariances from noisy and asynchronous prices. Along these lines, a minimum variance portfolio application shows the superiority of this disentangled realized estimator in terms of numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
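As a sketch of the kind of "realized measure" these models are built on, the snippet below computes a daily realized variance, the sum of squared intraday log-returns, from a synthetic 5-minute price path. The FloGARCH, Realized LGARCH and Realized Beta GARCH specifications themselves are not reproduced, and the simulated prices merely stand in for the high-frequency transaction data used in the dissertation.

import numpy as np

rng = np.random.default_rng(1)

def realized_variance(prices):
    """Sum of squared intraday log-returns for one trading day."""
    r = np.diff(np.log(prices))
    return float(np.sum(r ** 2))

# simulate one day of 5-minute prices (78 intervals) with 1% daily volatility
n_intervals, daily_vol = 78, 0.01
returns = rng.normal(0.0, daily_vol / np.sqrt(n_intervals), size=n_intervals)
prices = 100.0 * np.exp(np.cumsum(np.insert(returns, 0, 0.0)))

rv = realized_variance(prices)
print(f"realized variance: {rv:.6f}  (annualised vol ~ {np.sqrt(252 * rv):.1%})")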

Relevance: 100.00%

Abstract:

This research investigated the dance piece Caldo da Cana, which premiered in João Pessoa-PB in September 1984. It problematizes, in theoretical and practical terms, ways of making dance history, investigating this possibility through dance itself. The study arises from the fact that dance has specific characteristics that cannot be neglected by a historical account. In this sense, the work began by assembling a theoretical framework that addresses these issues and unfolds through a field study, which included collecting testimonies from people who participated in the show, documentary research, and the gathering of material traces; finally, it comprises a practical part that transposes the elaborated historical knowledge into the body through a creative process, resulting, at this stage, in the construction of a dance duet.