947 results for Random coefficient logit (RCL) model


Relevance: 30.00%

Abstract:

Assessing the fit of a model is an important final step in any statistical analysis, but this is not straightforward when complex discrete response models are used. Cross-validation and posterior predictions have been suggested as methods to aid model criticism. In this paper a comparison is made between four methods of model predictive assessment in the context of a three-level logistic regression model for clinical mastitis in dairy cattle: cross-validation, a prediction using the full posterior predictive distribution, and two "mixed" predictive methods that incorporate higher-level random effects simulated from the underlying model distribution. Cross-validation is considered a gold-standard method but is computationally intensive, and a comparison is therefore made between posterior predictive assessments and cross-validation. The analyses revealed that the mixed prediction methods produced results close to cross-validation, whilst the full posterior predictive assessment gave predictions that were over-optimistic (closer to the observed disease rates) compared with cross-validation. A mixed prediction method that simulated random effects from both higher levels was best at identifying the outlying level-two (farm-year) units of interest. It is concluded that this mixed prediction method, simulating random effects from both higher levels, is straightforward and may be of value in model criticism of multilevel logistic regression, a technique commonly used for animal health data with a hierarchical structure.
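
As a rough illustration of the "mixed" idea (not the paper's code), the sketch below scores each higher-level unit by re-simulating its random effect from the fitted random-effect distribution rather than reusing that unit's own posterior effect; the function name, the simple two-level structure and the data are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_predictive_check(y, unit, beta0_draws, sigma_u_draws):
    """For each posterior draw, re-simulate the unit-level random effects from
    N(0, sigma_u^2) instead of reusing the fitted unit effects, then predict
    unit-level disease rates and compare them with the observed rates."""
    n_units = unit.max() + 1
    n_j = np.bincount(unit, minlength=n_units)                   # unit sizes
    obs_rate = np.bincount(unit, weights=y, minlength=n_units) / n_j
    pred = np.zeros((len(beta0_draws), n_units))
    for s, (b0, su) in enumerate(zip(beta0_draws, sigma_u_draws)):
        u_new = rng.normal(0.0, su, size=n_units)                # the "mixed" step
        p = 1.0 / (1.0 + np.exp(-(b0 + u_new)))                  # per-unit probability
        pred[s] = rng.binomial(n_j, p) / n_j                     # simulated unit rates
    # predictive p-value per unit; values near 0 or 1 flag outlying units
    return np.mean(pred >= obs_rate, axis=0), obs_rate

# usage with hypothetical posterior draws:
# pvals, obs = mixed_predictive_check(y, farm_year_id, b0_draws, sigma_draws)
```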

Relevance: 30.00%

Abstract:

This paper is concerned with a stochastic SIR (susceptible-infective-removed) model for the spread of an epidemic amongst a population of individuals that has a random network of social contacts and is also partitioned into households. The behaviour of the model as the population size tends to infinity in an appropriate fashion is investigated. A threshold parameter which determines whether or not an epidemic with few initial infectives can become established and lead to a major outbreak is obtained, as are the probability that a major outbreak occurs and the expected proportion of the population that is ultimately infected by such an outbreak, together with methods for calculating these quantities. Monte Carlo simulations demonstrate that these asymptotic quantities accurately reflect the behaviour of finite populations, even for only moderately sized populations. The model is compared and contrasted with related models previously studied in the literature. The effects of the amount of clustering present in the overall population structure, and of the infectious period distribution, on the outcomes of the model are also explored.
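
A minimal Monte Carlo sketch of this kind of two-level model is given below; the discrete-time dynamics, the Erdős–Rényi global graph and all probabilities are simplifying assumptions for illustration, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)

def final_size(n_households=200, hh_size=4, p_edge=0.002,
               p_hh=0.3, p_net=0.1, p_rec=0.5, n_init=1):
    """Discrete-time SIR with household contacts plus a global random graph;
    returns the fraction of the population ultimately infected."""
    n = n_households * hh_size
    hh = np.repeat(np.arange(n_households), hh_size)         # household labels
    adj = np.triu(rng.random((n, n)) < p_edge, 1)
    adj = adj | adj.T                                         # global contact graph
    status = np.zeros(n, dtype=int)                           # 0=S, 1=I, 2=R
    status[rng.choice(n, n_init, replace=False)] = 1
    while (status == 1).any():
        for i in np.where(status == 1)[0]:
            mates = np.where((hh == hh[i]) & (status == 0))[0]
            status[mates[rng.random(mates.size) < p_hh]] = 1      # household spread
            nbrs = np.where(adj[i] & (status == 0))[0]
            status[nbrs[rng.random(nbrs.size) < p_net]] = 1       # network spread
            if rng.random() < p_rec:                              # recovery
                status[i] = 2
    return (status == 2).mean()

sizes = np.array([final_size() for _ in range(100)])
print((sizes > 0.1).mean(), sizes.mean())   # crude major-outbreak fraction and mean final size
```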

Relevance: 30.00%

Abstract:

This paper considers a stochastic SIR (susceptible-infective-removed) epidemic model in which individuals may make infectious contacts in two ways, both within 'households' (which for ease of exposition are assumed to have equal size) and along the edges of a random graph describing additional social contacts. Heuristically motivated branching process approximations are described, which lead to a threshold parameter for the model and to methods for calculating the probability of a major outbreak, given few initial infectives, and the expected proportion of the population who are ultimately infected by such a major outbreak. These approximate results are shown to be exact as the number of households tends to infinity by proving associated limit theorems. Moreover, simulation studies indicate that these asymptotic results provide good approximations for modestly sized finite populations. The extension to unequal-sized households is discussed briefly.
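
The household-level branching-process machinery is involved, but the basic threshold logic it rests on can be illustrated with a toy single-type process; the offspring distribution below is made up for the example and is not the paper's household-level approximation.

```python
import numpy as np

def extinction_prob(offspring_pmf, tol=1e-12, max_iter=10_000):
    """Smallest fixed point of the pgf g(q) = sum_k p_k q^k, found by iteration;
    1 - q is the probability that the branching process (the outbreak) takes off."""
    p = np.asarray(offspring_pmf, dtype=float)
    q = 0.0
    for _ in range(max_iter):
        q_new = np.polyval(p[::-1], q)      # evaluate g at q
        if abs(q_new - q) < tol:
            break
        q = q_new
    return q

pmf = [0.2, 0.3, 0.3, 0.2]                  # P(0), P(1), P(2), P(3) "offspring"
R_star = sum(k * pk for k, pk in enumerate(pmf))      # threshold parameter (here 1.5 > 1)
print(R_star, 1.0 - extinction_prob(pmf))             # major-outbreak probability
```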

Relevance: 30.00%

Abstract:

Partial funding for open access provided by the UMD Libraries' Open Access Publishing Fund.

Relevance: 30.00%

Abstract:

By Monte Carlo simulations, we study the character of the spin-glass (SG) phase in dense disordered packings of magnetic nanoparticles (NPs). We focus on NPs which have large uniaxial anisotropies and can be well represented as Ising dipoles. Dipoles are placed on SC lattices and point along randomly oriented axes. From the behaviour of an SG correlation length we determine the transition temperature Tc between the paramagnetic and SG phases. For temperatures well below Tc we find distributions of the SG overlap parameter q that are strongly sample-dependent and exhibit several spikes. We find that the average width of the spikes, and the fraction of samples with spikes higher than a certain threshold, do not vary appreciably with the system sizes studied. We compare these results with the ones found previously for 3D site-diluted systems of parallel Ising dipoles and with the behaviour of the Sherrington-Kirkpatrick model.
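
For readers unfamiliar with the overlap parameter, a minimal sketch of how one measurement of q is taken from two replicas of the same disorder sample is given below; the spin configurations are random placeholders rather than output of an actual dipolar Monte Carlo run.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8 ** 3                                    # spins on an 8x8x8 SC lattice
replica_a = rng.choice([-1, 1], size=N)       # Ising dipole orientations, replica 1
replica_b = rng.choice([-1, 1], size=N)       # replica 2 (same couplings in a real run)
q = replica_a @ replica_b / N                 # one measurement of the overlap parameter
# repeating over many equilibrated configurations and disorder samples builds up P(q),
# whose spikes and sample-to-sample variation are what the study examines
print(q)
```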

Relevance: 30.00%

Abstract:

The present study provides a methodology that gives a predictive character to computer simulations based on detailed models of the geometry of a porous medium. We use the FLUENT software to investigate the flow of a viscous Newtonian fluid through a random fractal medium that represents, in simplified form, a two-dimensional disordered porous medium such as a petroleum reservoir. This fractal model is formed by obstacles of various sizes whose size distribution follows a power law, the exponent of which is defined as the fractal fragmentation dimension Dff of the model and characterizes the fragmentation process of the obstacles. The obstacles are randomly placed in a rectangular channel. The modelling process incorporates modern concepts, such as scaling laws, to analyse the influence of the heterogeneity found in the porosity and permeability fields and thus characterize the medium in terms of its fractal properties. This procedure allows us to numerically analyse measurements of the permeability k and the drag coefficient Cd and to propose relationships, such as power laws, for these properties under various modelling schemes. The purpose of this research is to study the variability introduced by these heterogeneities, where the velocity field and other details of the viscous fluid dynamics are obtained by numerically solving the continuity and Navier-Stokes equations at the pore level, and to observe how the fractal fragmentation dimension of the model affects its hydrodynamic properties. Two classes of models were considered: models with constant porosity, MPC, and models with varying porosity, MPV. The results allowed us to find numerical relationships between the permeability, the drag coefficient and the fractal fragmentation dimension of the medium. Based on these numerical results we propose scaling relations and algebraic expressions involving the relevant parameters of the phenomenon. Analytical equations were determined for Dff as a function of the geometrical parameters of the models. We also found that the permeability and the drag coefficient are inversely proportional to one another. The difference in behaviour is most striking in the MPV class of models; that is, the fact that the porosity varies in these models is an additional factor that plays a significant role in the flow analysis. Finally, the results proved satisfactory and consistent, which demonstrates the effectiveness of the methodology for all the applications analysed in this study.
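
As a small, hedged illustration of the kind of size distribution the fragmentation dimension describes (the sampling scheme, parameter values and unit-area channel below are hypothetical, not the thesis's model-generation code):

```python
import numpy as np

rng = np.random.default_rng(3)

def powerlaw_sizes(n, d_ff, r_min, r_max):
    """Inverse-CDF sampling of obstacle sizes with P(R > r) ~ r^(-d_ff) on [r_min, r_max]."""
    u = rng.random(n)
    a, b = r_min ** (-d_ff), r_max ** (-d_ff)
    return (a - u * (a - b)) ** (-1.0 / d_ff)

sizes = powerlaw_sizes(500, d_ff=1.6, r_min=0.01, r_max=0.2)
porosity = 1.0 - np.sum(np.pi * sizes ** 2) / 1.0   # obstacles in a unit-area 2-D channel
print(porosity)
```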

Relevance: 30.00%

Abstract:

Generating sample models for testing a model transformation is no easy task. This paper explores the use of classifying terms and stratified sampling for developing richer test cases for model transformations. Classifying terms are used to define the equivalence classes that characterize the relevant subgroups for the test cases. From each equivalence class of object models, several representative models are chosen depending on the required sample size. We compare our results with test suites developed using random sampling, and conclude that by using an ordered and stratified approach the coverage and effectiveness of the test suite can be significantly improved.
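
A minimal sketch of the stratified step follows; the equivalence classes and model names are placeholders, whereas in the paper the classes are induced by classifying terms over object models.

```python
import random

random.seed(4)

def stratified_sample(equivalence_classes, per_class):
    """Pick `per_class` representative models from every equivalence class,
    instead of sampling the whole pool at random."""
    suite = []
    for name, models in equivalence_classes.items():
        k = min(per_class, len(models))
        suite.extend((name, m) for m in random.sample(models, k))
    return suite

classes = {"empty": ["m0"], "acyclic": ["m1", "m2", "m3"], "cyclic": ["m4", "m5"]}
print(stratified_sample(classes, per_class=2))
```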

Relevance: 30.00%

Abstract:

Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat; it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically weighted regression analysis produced the most accurate result by accommodating non-stationary coefficient behavior, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism. This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality of life.
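
The geographically weighted regression idea, in which each location gets its own coefficient vector from a kernel-weighted local fit, can be sketched as follows; the synthetic data, Gaussian kernel and bandwidth are illustrative assumptions, not the dissertation's actual variables or software.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
coords = rng.random((n, 2)) * 100                       # site coordinates (km)
X = np.column_stack([np.ones(n), rng.random(n)])        # intercept + one covariate
y = 1.0 + (coords[:, 0] / 50.0) * X[:, 1] + rng.normal(0, 0.1, n)   # spatially varying slope

def gwr_coefficients(coords, X, y, bandwidth=20.0):
    """Local weighted least squares around each location: non-stationary coefficients."""
    betas = np.empty((len(y), X.shape[1]))
    for i, c in enumerate(coords):
        d2 = np.sum((coords - c) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))        # Gaussian kernel weights
        W = np.diag(w)
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return betas

local_betas = gwr_coefficients(coords, X, y)
print(local_betas[:5])                                   # slope drifts with location
```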

Relevance: 30.00%

Abstract:

In this paper, the temporal and statistical properties of a Lyot-filter-based multiwavelength random DFB fiber laser with a wide flat spectrum consisting of individual lines were investigated. It was shown that the separate spectral lines forming the laser spectrum have mostly Gaussian statistics and so represent stochastic radiation, but at the same time the entire radiation is not fully stochastic. A simple model taking into account phenomenological correlations of the lines' initial phases was established. The radiation structure in the experiment and in the simulation proved to be different, indicating that interactions between different lines need to be described via an NLSE-based model.
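
A toy version of the phase-correlation idea is sketched below: the total field is a sum of unit-amplitude lines whose initial phases mix a shared random component with independent ones, and the resulting intensity statistics shift away from the fully stochastic case. All quantities are illustrative; this is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(6)
n_lines, n_shots = 16, 50_000

def intensity_stats(corr):
    """Mean and spread of the total intensity for partially correlated line phases."""
    common = rng.uniform(0, 2 * np.pi, (n_shots, 1))
    own = rng.uniform(0, 2 * np.pi, (n_shots, n_lines))
    field = np.exp(1j * (corr * common + (1 - corr) * own)).sum(axis=1)
    inten = np.abs(field) ** 2
    return inten.mean(), inten.std()

print(intensity_stats(0.0))   # independent phases: near Gaussian-field (exponential) statistics
print(intensity_stats(0.7))   # correlated phases: statistics depart from the stochastic case
```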

Relevance: 30.00%

Abstract:

Random Walk with Restart (RWR) is an appealing measure of proximity between nodes based on graph structure. Since real graphs are often large and subject to minor changes, it is prohibitively expensive to recompute proximities from scratch. Previous methods use LU decomposition and degree-reordering heuristics, entailing O(|V|^3) time and O(|V|^2) memory to compute all |V|^2 pairs of node proximities in a static graph. In this paper, a dynamic scheme for assessing RWR proximities is proposed: (1) For a unit update, we characterize the changes to the all-pairs proximities as the outer product of two vectors. We notice that the multiplication of an RWR matrix and its transition matrix, unlike traditional matrix multiplications, is commutative. This can greatly reduce the computation of all-pairs proximities from O(|V|^3) to O(|delta|) time for each update without loss of accuracy, where |delta| (<< |V|^2) is the number of affected proximities. (2) To avoid O(|V|^2) memory for all pairs of outputs, we also devise efficient partitioning techniques for our dynamic model, which can compute all pairs of proximities segment-wise within O(l|V|) memory and O(|V|/l) I/O costs, where 1 <= l <= |V| is a user-controlled trade-off between memory and I/O costs. (3) For bulk updates, we also devise aggregation and hashing methods, which can further discard many unnecessary updates and handle chunks of unit updates simultaneously. Our experimental results on various datasets demonstrate that our methods can be 1-2 orders of magnitude faster than other competitors while securing scalability and exactness.
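
One way to see why a unit update only perturbs the proximity matrix by an outer-product (rank-one) term is the Sherman-Morrison identity: adding an edge changes a single column of the transition matrix. The dense toy below illustrates that principle only; it is not the paper's algorithm, and the partitioning and hashing machinery is not reproduced.

```python
import numpy as np

def rwr_matrix(A, c=0.85):
    """All-pairs RWR proximities P = (1 - c) * (I - c W)^(-1), W column-normalized."""
    W = A / np.maximum(A.sum(axis=0, keepdims=True), 1e-12)
    return (1 - c) * np.linalg.inv(np.eye(A.shape[0]) - c * W)

def add_edge_rank_one(P, A, u, v, c=0.85):
    """Add edge (u, v); only column v of W changes, so (I - c W)^(-1) receives a
    rank-one (outer-product) correction via the Sherman-Morrison identity."""
    old_col = A[:, v] / max(A[:, v].sum(), 1e-12)
    A_new = A.copy(); A_new[u, v] = 1.0
    new_col = A_new[:, v] / A_new[:, v].sum()
    d = c * (new_col - old_col)                    # change to column v of c W
    M_inv = P / (1 - c)                            # current (I - c W)^(-1)
    left, right = M_inv @ d, M_inv[v, :]           # the two vectors of the outer product
    M_inv_new = M_inv + np.outer(left, right) / (1.0 - right @ d)
    return (1 - c) * M_inv_new, A_new

rng = np.random.default_rng(7)
A = (rng.random((50, 50)) < 0.1).astype(float)
P0 = rwr_matrix(A)
P1, A1 = add_edge_rank_one(P0, A, 3, 8)
print(np.allclose(P1, rwr_matrix(A1)))             # rank-one update matches full recomputation
```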

Relevance: 30.00%

Abstract:

Introduction: Rheumatoid arthritis (RA) is an autoimmune-inflammatory disease that compromises the diarthrodial joints. It has important systemic repercussions, including depression, and therefore a severe impact on quality of life. It is possible that defence mechanisms such as resilience can buffer this impact. Methods: cross-sectional, multicentre study (initial analysis within the RA group, with a non-probabilistic sample of 66 patients, followed by simple random selection of 16 patients from the initial sample and selection of 16 matched healthy individuals). Resilience was then compared between subjects with RA and healthy subjects using the RS and CD-RISC25 scales. Additionally, the EEAE, EADZ, SF-36 and PANAS scales were applied. Data were evaluated using Spearman's correlation coefficient, the Mann-Whitney U and Kruskal-Wallis tests, Student's t-test and analysis of variance. Results: significant differences were found in non-spiritual coping strategies across the low-, medium- and high-resilience groups, as well as differences in median resilience across the EADZ depression groups among the patients. No significant results were found for the clinical variables of RA or in the comparison with healthy subjects. Conclusions: the use of non-spiritual coping strategies and the absence of depression were associated with higher levels of resilience in patients with RA; thus, emotional and cognitive components are associated with resilience.

Relevance: 30.00%

Abstract:

To determine the concurrent validity of the System for Observing Fitness Instruction Time (SOFIT) against accelerometry as a method for measuring the physical activity (PA) levels of students in grades 1 to 9 during physical education classes in three public schools in Bogotá, Colombia. Cross-sectional study between October 2014 and March 2015. Measurements were taken in three public schools in Bogotá. Forty-eight students participated (25 girls; 23 boys), aged 5 to 17, selected according to the SOFIT protocol. The outcome is categorized as the percentage of time spent in sedentary behaviour, moderate PA, vigorous PA, and moderate-to-vigorous PA. Validation used accelerometry, in the same categories, as the gold standard. Mean differences, linear regression and a fixed-effects model were computed. The correlation between SOFIT and accelerometry was good for moderate PA (rho=0.958; p=0.000), vigorous PA (rho=0.937; p=0.000) and moderate-to-vigorous PA (rho=0.962; p=0.000). Likewise, using a fixed-effects model, moderate PA (β1=0.92; p=0.00), vigorous PA (β1=0.94; p=0.00) and moderate-to-vigorous PA (β1=0.95; p=0.00) showed no significant differences between the two methods for measuring PA levels. Sedentary behaviour also correlated positively (Spearman rho=0.965; p=0.000). The SOFIT system proved valid for measuring PA levels in physical education classes, showing good correlation and agreement with accelerometry. SOFIT is an easily accessible, low-cost instrument for measuring PA during physical education classes in the school context, and its use is recommended in future studies.

Relevance: 30.00%

Abstract:

A specific modified constitutive equation for a third-grade fluid is proposed so that the model is suitable for applications where shear-thinning or shear-thickening may occur. To that end, we use the Cosserat theory approach, reducing the exact three-dimensional equations to a system depending only on time and on a single spatial variable. This one-dimensional system is obtained by integrating the linear momentum equation over the cross-section of the tube, taking a velocity field approximation provided by the Cosserat theory. From this reduced system, we obtain the unsteady equations for the wall shear stress and mean pressure gradient, depending on the volume flow rate, Womersley number, viscoelastic coefficient and flow index, over a finite section of the tube geometry with constant circular cross-section.

Relevance: 30.00%

Abstract:

Machine Learning techniques are very useful because they make it possible to maximize the use of information in real time. The Random Forests method can be counted among the most recent and best-performing Machine Learning techniques. Exploiting the characteristics and potential of this method, this doctoral thesis addresses two different case studies, from which two different forecasting models were developed. The first case study focuses on the main rivers of the Emilia-Romagna region, which are characterized by very short response times. The choice of these rivers was not accidental: in recent years, several flood events, largely of the "flash flood" type, have occurred in these basins. The second case study concerns the main sections of the Po river, where the travel time of the flood wave is longer than in the watercourses of the first case study. Starting from a large amount of data, the first step was to select and define the input data according to the objectives to be achieved, for both case studies. For the model for the Emilia-Romagna rivers, only observed data were considered, whereas for the Po river basin the observed data were complemented by forecast data from the Mike11 NAM/HD modelling chain. Exploiting one of the main features of the Random Forests method, a probability of occurrence was estimated: this aspect is fundamental both at the technical stage and at the decision-making stage for any civil protection intervention. Data processing and model development were carried out in the R environment. At the end of the validation phase, the encouraging results obtained made it possible to integrate the model developed in the first case study into the operational architecture of FEWS.
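
A hedged sketch of the probability-of-occurrence idea is given below, in Python with scikit-learn rather than the thesis's R environment; the features, threshold and data are entirely made up and only illustrate how a Random Forest can return an exceedance probability instead of a point forecast.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(8)
n = 2000
X = np.column_stack([
    rng.gamma(2.0, 5.0, n),    # e.g. cumulative rainfall over the last 6 h (mm)
    rng.normal(1.0, 0.5, n),   # e.g. upstream stage reading (m)
    rng.normal(0.0, 1.0, n),   # e.g. stage trend over the last hour (m/h)
])
y = (0.05 * X[:, 0] + 1.2 * X[:, 1] + rng.normal(0, 0.3, n) > 2.5).astype(int)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X[:1500], y[:1500])
p_exceed = model.predict_proba(X[1500:])[:, 1]   # probability of threshold exceedance
print(p_exceed[:10])
```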

Relevance: 30.00%

Abstract:

The study of random probability measures is a lively research topic that has attracted interest from different fields in recent years. In this thesis, we consider random probability measures in the context of Bayesian nonparametrics, where the law of a random probability measure is used as a prior distribution, and in the context of distributional data analysis, where the goal is to perform inference given a sample from the law of a random probability measure. The contributions contained in this thesis can be subdivided according to three different topics: (i) the use of almost surely discrete repulsive random measures (i.e., whose support points are well separated) for Bayesian model-based clustering, (ii) the proposal of new laws for collections of random probability measures for Bayesian density estimation of partially exchangeable data subdivided into different groups, and (iii) the study of principal component analysis and regression models for probability distributions seen as elements of the 2-Wasserstein space. Specifically, for point (i) above we propose an efficient Markov chain Monte Carlo algorithm for posterior inference, which sidesteps the need for split-merge reversible-jump moves typically associated with poor performance; we propose a model for clustering high-dimensional data by introducing a novel class of anisotropic determinantal point processes; and we study the distributional properties of the repulsive measures, shedding light on important theoretical results which enable more principled prior elicitation and more efficient posterior simulation algorithms. For point (ii) above, we consider several models suitable for clustering homogeneous populations, inducing spatial dependence across groups of data and extracting the characteristic traits common to all the data groups, and we propose a novel vector autoregressive model to study the growth curves of Singaporean kids. Finally, for point (iii), we propose a novel class of projected statistical methods for distributional data analysis for measures on the real line and on the unit circle.
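
As a small illustration of the object underlying point (iii): for measures on the real line, the 2-Wasserstein distance reduces to an L2 distance between quantile functions, which is the representation that projected and functional methods typically work with. The sketch below uses synthetic samples and is not the thesis's code.

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.normal(0.0, 1.0, 5000)                       # sample from the first measure
y = rng.normal(0.5, 1.5, 5000)                       # sample from the second measure

grid = np.linspace(0.001, 0.999, 999)                # probability grid on (0, 1)
qx, qy = np.quantile(x, grid), np.quantile(y, grid)  # empirical quantile functions
w2 = np.sqrt(np.mean((qx - qy) ** 2))                # approximate 2-Wasserstein distance
print(w2)
```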