849 results for RANDOM PERMUTATION MODEL
Abstract:
The application of custom classification techniques and posterior probability modeling (PPM) using Worldview-2 multispectral imagery to archaeological field survey is presented in this paper. Research is focused on the identification of Neolithic felsite stone tool workshops in the North Mavine region of the Shetland Islands in Northern Scotland. Sample data from known workshops surveyed using differential GPS are used alongside known non-sites to train a linear discriminant analysis (LDA) classifier based on a combination of datasets including Worldview-2 bands, band difference ratios (BDR) and topographical derivatives. Principal components analysis is further used to test for and reduce dimensionality caused by redundant datasets. Probability models were generated by LDA using principal components and tested against sites identified through geological field survey. Testing demonstrates the prospective ability of this technique, with significance levels between 0.05 and 0.01 and gain statistics between 0.90 and 0.94, higher than those obtained using maximum likelihood and random forest classifiers. Results suggest that this approach is best suited to relatively homogeneous site types and performs better with correlated data sources. Finally, by combining posterior probability models and least-cost analysis, a survey least-cost efficacy model is generated, showing the utility of such approaches to archaeological field survey.
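As a rough illustration of the classification step described above, the sketch below combines principal components analysis with an LDA classifier and extracts posterior probabilities (a minimal sketch using scikit-learn; the feature matrix is a synthetic placeholder standing in for stacked Worldview-2 bands, band difference ratios and terrain derivatives, not the authors' data or code):

```python
# Minimal PCA + LDA posterior-probability sketch with placeholder data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_features = 12                                       # e.g. bands + ratios + terrain derivatives
X_site = rng.normal(1.0, 1.0, (60, n_features))       # placeholder "workshop" samples
X_nonsite = rng.normal(0.0, 1.0, (200, n_features))   # placeholder "non-site" samples
X = np.vstack([X_site, X_nonsite])
y = np.array([1] * 60 + [0] * 200)

# PCA removes redundancy among correlated inputs before the LDA classifier.
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
model.fit(X, y)

# Posterior probability of the "site" class for each new cell to be mapped.
X_new = rng.normal(0.5, 1.0, (10, n_features))
print(model.predict_proba(X_new)[:, 1].round(3))
```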
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Even though a large amount of evidence suggests that the PP2A serine/threonine protein phosphatase acts as a tumour suppressor, the genomics data to support this claim are limited. We fit a sparse binary Markov random field, with each sample's total mutational frequency as an additional covariate, to model the dependencies between the mutations occurring in the PP2A-encoding genes. We utilize data from recent large-scale cancer genomics studies in which the whole genome from a human tumour biopsy has been analysed. Our results show a complex network of interactions between the occurrences of mutations in the twenty examined genes. According to our analysis, the mutations occurring in the genes PPP2R1A, PPP2R3A, and PPP2R2B are identified as the key mutations. These genes form the core of the network of conditional dependency between the mutations in the investigated twenty genes. Additionally, we note that mutations occurring in PPP2R4 seem to be more influential in samples with a higher total number of mutations. Mutations occurring in the set of genes suggested by our results have been shown to contribute to the transformation of human cells. We conclude that our evidence further supports the claim that PP2A acts as a tumour suppressor and that restoring PP2A activity is an appealing therapeutic strategy.
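One standard way to fit such a sparse binary Markov random field is node-wise L1-penalised logistic regression (neighbourhood selection). The sketch below illustrates the idea on placeholder data with a handful of the PP2A genes; it is not the authors' estimator or dataset:

```python
# Sketch: regress each gene's binary mutation indicator on all other genes plus
# the sample's total mutation count, with an L1 penalty to keep neighbourhoods
# sparse. Non-zero coefficients approximate the conditional dependency network.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
genes = ["PPP2R1A", "PPP2R3A", "PPP2R2B", "PPP2R4", "PPP2CA"]   # illustrative subset
n_samples = 500
M = (rng.random((n_samples, len(genes))) < 0.1).astype(int)     # binary mutation matrix
total = M.sum(axis=1, keepdims=True)                            # total mutational frequency

edges = {}
for j, gene in enumerate(genes):
    others = np.delete(np.arange(len(genes)), j)
    X = np.hstack([M[:, others], total])                        # other genes + covariate
    fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, M[:, j])
    edges[gene] = [genes[k] for k, w in zip(others, fit.coef_[0][:-1]) if abs(w) > 1e-6]

print(edges)
```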
Abstract:
This paper provides an agent-based software exploration of the well-known free-market efficiency/equality trade-off. Our study simulates the interaction of agents producing, trading and consuming goods in the presence of different market structures, and looks at how efficient the producers/consumers mapping turns out to be, as well as the resulting distribution of welfare among agents at the end of an arbitrarily large number of iterations. Two market mechanisms are compared: the competitive market (a double auction market in which agents outbid each other in order to buy and sell products) and the random one (in which products are allocated randomly). Our results confirm that the superior efficiency of the competitive market (an effective and never-stopping producers/consumers mapping and a superior aggregate welfare) comes at a very high price in terms of inequality (above all when severe budget constraints are in play).
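A drastically simplified stand-in for this comparison is sketched below: goods are allocated either to the highest-valuing agent who can still afford them (a proxy for the competitive outcome) or to a random affordable buyer, and total welfare is compared with the Gini coefficient of the resulting welfare distribution. All numbers and rules are illustrative, not the paper's model:

```python
# Toy efficiency-vs-equality comparison between a competitive and a random allocation.
import numpy as np

rng = np.random.default_rng(2)
n_agents, n_goods, price = 100, 300, 0.5
budgets = rng.uniform(1, 10, n_agents)             # unequal budgets
values = rng.uniform(0, 1, (n_agents, n_goods))    # private valuations

def gini(w):
    w = np.sort(w - w.min())                       # shift to non-negative values
    n = len(w)
    return 2 * np.sum(np.arange(1, n + 1) * w) / (n * w.sum()) - (n + 1) / n

def run(random_allocation):
    welfare, spent = np.zeros(n_agents), np.zeros(n_agents)
    for g in range(n_goods):
        can_pay = np.flatnonzero(budgets - spent >= price)
        if can_pay.size == 0:
            break
        if random_allocation:
            buyer = rng.choice(can_pay)
        else:                                      # competitive: highest valuation wins
            buyer = can_pay[np.argmax(values[can_pay, g])]
        welfare[buyer] += values[buyer, g] - price
        spent[buyer] += price
    return welfare.sum(), gini(welfare)

print("competitive (total welfare, Gini):", run(False))
print("random      (total welfare, Gini):", run(True))
```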
Abstract:
The recently reported Monte Carlo Random Path Sampling method (RPS) is here improved and its application is expanded to the study of the 2D and 3D Ising and discrete Heisenberg models. The methodology was implemented to allow use in both CPU-based high-performance computing infrastructures (C/MPI) and GPU-based (CUDA) parallel computation, with significant computational performance gains. Convergence is discussed, both in terms of free energy and magnetization dependence on field/temperature. From the calculated magnetization-energy joint density of states, fast calculations of field and temperature dependent thermodynamic properties are performed, including the effects of anisotropy on coercivity, and the magnetocaloric effect. The emergence of first-order magneto-volume transitions in the compressible Ising model is interpreted using the Landau theory of phase transitions. Using metallic Gadolinium as a real-world example, the possibility of using RPS as a tool for computational magnetic materials design is discussed. Experimental magnetic and structural properties of a Gadolinium single crystal are compared to RPS-based calculations using microscopic parameters obtained from Density Functional Theory.
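The central output of RPS is the joint density of states over energy and magnetization; once it is available, field- and temperature-dependent averages follow from simple reweighting. The sketch below illustrates only that reweighting step, using a toy log-density of states rather than an actual RPS calculation:

```python
# Field/temperature-dependent magnetization from a joint density of states g(E, M).
import numpy as np

E = np.linspace(-2.0, 2.0, 81)            # energy per spin grid
M = np.linspace(-1.0, 1.0, 41)            # magnetization per spin grid
EE, MM = np.meshgrid(E, M, indexing="ij")
log_g = -(EE**2 + 0.5 * MM**2) * 50       # toy log g(E, M), placeholder only

def magnetization(T, H):
    # Boltzmann reweighting of the joint DOS at temperature T and field H.
    log_w = log_g - (EE - H * MM) / T
    w = np.exp(log_w - log_w.max())        # stabilise before normalising
    return float((MM * w).sum() / w.sum())

for T in (0.5, 1.0, 2.0):
    print(T, magnetization(T, H=0.1))
```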
Abstract:
We study spatially localized states of a spiking neuronal network populated by a pulse-coupled phase oscillator known as the lighthouse model. We show that, for slow synaptic interactions, the continuum-limit dynamics reduce to those of the standard Amari model. For non-slow synaptic connections we are able to go beyond the standard firing rate analysis of localized solutions, allowing us to explicitly construct a family of co-existing one-bump solutions and then track bump width and firing pattern as a function of system parameters. We also present an analysis of the model on a discrete lattice. We show that bump states of multiple widths can co-exist, and we uncover a mechanism for bump wandering linked to the speed of synaptic processing. Moreover, beyond a wandering transition point we show that the bump undergoes an effective random walk with a diffusion coefficient that scales exponentially with the rate of synaptic processing and linearly with the lattice spacing.
Abstract:
Graphs are powerful tools to describe social, technological and biological networks, with nodes representing agents (people, websites, genes, etc.) and edges (or links) representing relations (or interactions) between agents. Examples of real-world networks include social networks, the World Wide Web, collaboration networks, protein networks, etc. Researchers often model these networks as random graphs. In this dissertation, we study a recently introduced social network model, named the Multiplicative Attribute Graph model (MAG), which takes into account the randomness of nodal attributes in the process of link formation (i.e., the probability of a link existing between two nodes depends on their attributes). Kim and Leskovec, who defined the model, have claimed that this model exhibits some of the properties a real-world social network is expected to have. Focusing on a homogeneous version of this model, we investigate the existence of zero-one laws for graph properties, e.g., the absence of isolated nodes, graph connectivity and the emergence of triangles. We obtain conditions on the parameters of the model so that these properties occur with high or vanishingly small probability as the number of nodes becomes unboundedly large. In that regime, we also investigate the property of triadic closure and the nodal degree distribution.
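For concreteness, a minimal sketch of a homogeneous MAG-style generator is given below: every node carries i.i.d. binary attributes and each potential edge appears independently with probability given by a product of affinity-matrix entries, one per attribute. The parameter values are illustrative only and are not taken from the dissertation:

```python
# Homogeneous Multiplicative Attribute Graph (MAG) sketch with illustrative parameters.
import numpy as np

rng = np.random.default_rng(3)
n, L, mu = 200, 6, 0.5                          # nodes, attributes, P(attribute = 1)
Theta = np.array([[0.8, 0.3],
                  [0.3, 0.6]])                  # same affinity matrix for every attribute

attrs = (rng.random((n, L)) < mu).astype(int)   # node attribute vectors

adj = np.zeros((n, n), dtype=int)
for u in range(n):
    for v in range(u + 1, n):
        p = np.prod(Theta[attrs[u], attrs[v]])  # product over the L attributes
        if rng.random() < p:
            adj[u, v] = adj[v, u] = 1

degrees = adj.sum(axis=1)
print("isolated nodes:", int((degrees == 0).sum()), "mean degree:", degrees.mean())
```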
Abstract:
US suburbs have often been characterized by their relatively low walk accessibility compared to more urban environments, and US urban environments have been characterized by low walk accessibility compared to cities in other countries. Lower overall density in the suburbs implies that activities, if spread out, would have a greater distance between them. But why should activities be spread out instead of developed contiguously? This brief research note builds a positive model for the emergence of contiguous development along “Main Street” to illustrate the trade-offs that result in the built environment we observe. It then suggests some policy interventions to place a “thumb on the scale” to choose which parcels will develop in which sequence to achieve socially preferred outcomes.
Abstract:
Database schemas, in many organizations, are considered one of the critical assets to be protected. From database schemas, it is possible to infer not only the information being collected but also the way organizations manage their businesses and/or activities. One of the ways to disclose database schemas is through Create, Read, Update and Delete (CRUD) expressions. In fact, their use can either follow strict security rules or be left unregulated and open to malicious users. In the first case, users are required to master database schemas. This can be critical when applications that access the database directly, which we call database interface applications (DIA), are developed by third-party organizations via outsourcing. In the second case, users can partially or totally disclose database schemas by following malicious algorithms based on CRUD expressions. To overcome this vulnerability, we propose a new technique in which CRUD expressions can no longer be directly manipulated by DIAs. Whenever a DIA starts up, the associated database server generates a random codified token for each CRUD expression and sends it to the DIA; the DIA then presents these tokens to the database server, which uses them to execute the corresponding CRUD expressions. In order to validate our proposal, we present a conceptual architectural model and a proof of concept.
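A conceptual sketch of this token indirection is shown below; the class, method names and the in-memory "execution" are illustrative placeholders, not the architecture from the paper:

```python
# Server-side mapping of random session tokens to registered CRUD expressions:
# the DIA only ever sends tokens, never SQL text or schema details.
import secrets

class CrudTokenServer:
    def __init__(self, crud_expressions):
        # token -> CRUD expression, regenerated for each DIA session at start-up
        self._by_token = {secrets.token_hex(16): sql for sql in crud_expressions}

    def tokens(self):
        # What the DIA receives at start-up: opaque tokens only.
        return list(self._by_token)

    def execute(self, token, params):
        sql = self._by_token.get(token)
        if sql is None:
            raise PermissionError("unknown or expired CRUD token")
        # A real server would bind the parameters and run the statement here.
        return f"executing: {sql} with {params}"

server = CrudTokenServer(["SELECT name FROM customers WHERE id = ?",
                          "UPDATE customers SET name = ? WHERE id = ?"])
token = server.tokens()[0]
print(server.execute(token, (42,)))
```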
Abstract:
This paper reports on continuing research into the modelling of an order picking process within a Crossdocking distribution centre using Simulation Optimisation. The aim of this project is to optimise a discrete event simulation model and to understand the factors that affect finding its optimal performance. Our initial investigation revealed that the precision of the selected simulation output performance measure, and the number of replications required for the evaluation of the optimisation objective function through simulation, influence the ability of the optimisation technique to find an optimal solution. We experimented with Common Random Numbers in order to improve the precision of our simulation output performance measure, and intended to use the number of replications utilised for this purpose as the initial number of replications for the optimisation of our Crossdocking distribution centre simulation model. Our results demonstrate that we can improve the precision of our selected simulation output performance measure using Common Random Numbers at various levels of replication. Furthermore, after optimising our Crossdocking distribution centre simulation model, we are able to achieve optimal performance using fewer simulation runs for the simulation model that uses Common Random Numbers compared with the simulation model that does not.
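The variance-reduction idea behind Common Random Numbers can be illustrated in a few lines: when two system configurations are evaluated with the same random number streams, the estimated difference in performance is far less noisy than with independent streams. The single-server "order picking time" model below is a toy stand-in for the crossdocking simulation, with illustrative parameters:

```python
# Common Random Numbers (CRN) demonstration: variance of the estimated
# difference between two configurations, with and without shared streams.
import numpy as np

def sim(service_rate, rng, n_orders=500):
    # Mean order sojourn time with exponential inter-arrivals and service times.
    arrivals = rng.exponential(1.0, n_orders).cumsum()
    service = rng.exponential(1.0 / service_rate, n_orders)
    finish, sojourn = 0.0, []
    for a, s in zip(arrivals, service):
        finish = max(finish, a) + s
        sojourn.append(finish - a)
    return np.mean(sojourn)

def std_of_difference(crn, reps=200):
    diffs = []
    for r in range(reps):
        rng_a = np.random.default_rng(r)
        rng_b = np.random.default_rng(r) if crn else np.random.default_rng(10_000 + r)
        diffs.append(sim(1.3, rng_a) - sim(1.2, rng_b))   # compare two service rates
    return np.std(diffs)

print("independent streams:", std_of_difference(crn=False))
print("common random numbers:", std_of_difference(crn=True))
```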
Abstract:
We consider a spectrally-negative Markov additive process as a model of a risk process in a random environment. Following recent interest in alternative ruin concepts, we assume that ruin occurs when an independent Poissonian observer sees the process as negative, where the observation rate may depend on the state of the environment. Using an approximation argument and spectral theory, we establish an explicit formula for the resulting survival probabilities in this general setting. We also discuss an efficient evaluation of the involved quantities and provide a numerical illustration.
Abstract:
Assessing the fit of a model is an important final step in any statistical analysis, but this is not straightforward when complex discrete response models are used. Cross validation and posterior predictions have been suggested as methods to aid model criticism. In this paper a comparison is made between four methods of model predictive assessment in the context of a three-level logistic regression model for clinical mastitis in dairy cattle: cross validation, a prediction using the full posterior predictive distribution, and two "mixed" predictive methods that incorporate higher-level random effects simulated from the underlying model distribution. Cross validation is considered a gold-standard method but is computationally intensive, and thus a comparison is made between posterior predictive assessments and cross validation. The analyses revealed that the mixed prediction methods produced results close to cross validation, whilst the full posterior predictive assessment gave predictions that were over-optimistic (closer to the observed disease rates) compared with cross validation. A mixed prediction method that simulated random effects from both higher levels was best at identifying the outlying level-two (farm-year) units of interest. It is concluded that this mixed prediction method, simulating random effects from both higher levels, is straightforward and may be of value in model criticism of multilevel logistic regression, a technique commonly used for animal health data with a hierarchical structure.
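The distinction between a full posterior prediction and a "mixed" prediction can be made concrete with a small simulation: for the mixed check, unit-level random effects are redrawn from the fitted between-unit distribution instead of re-using each unit's own estimated effect, which stops outlying units from predicting themselves well. The two-level sketch below uses illustrative parameter values, not estimates from the mastitis data:

```python
# Mixed vs full-posterior-style predictive checks for a toy two-level logistic model.
import numpy as np

rng = np.random.default_rng(4)
n_units, cows_per_unit = 50, 40
beta0, sigma_u = -2.0, 0.8                       # intercept and farm-year SD (illustrative)

u_true = rng.normal(0, sigma_u, n_units)         # stand-in for estimated unit effects
observed = rng.binomial(cows_per_unit, 1 / (1 + np.exp(-(beta0 + u_true))))

def replicate(mixed):
    # mixed=True: redraw unit effects from N(0, sigma_u^2); False: condition on them.
    u = rng.normal(0, sigma_u, n_units) if mixed else u_true
    return rng.binomial(cows_per_unit, 1 / (1 + np.exp(-(beta0 + u))))

for label, mixed in (("full posterior style", False), ("mixed", True)):
    rep = np.array([replicate(mixed) for _ in range(500)])
    p_val = (rep >= observed).mean(axis=0)        # predictive tail probability per unit
    flagged = int(((p_val < 0.05) | (p_val > 0.95)).sum())
    print(f"{label}: {flagged} units flagged as outlying")
```

In this toy the full-posterior-style check conditions on the same effects that generated the data, so it flags almost nothing, whereas the mixed check compares each unit against the population distribution of units.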
Abstract:
This paper is concerned with a stochastic SIR (susceptible-infective-removed) model for the spread of an epidemic amongst a population of individuals with a random network of social contacts, which is also partitioned into households. The behaviour of the model as the population size tends to infinity in an appropriate fashion is investigated. A threshold parameter which determines whether or not an epidemic with few initial infectives can become established and lead to a major outbreak is obtained, as are the probability that a major outbreak occurs and the expected proportion of the population that is ultimately infected by such an outbreak, together with methods for calculating these quantities. Monte Carlo simulations demonstrate that these asymptotic quantities accurately reflect the behaviour of finite populations, even for only moderately sized populations. The model is compared and contrasted with related models previously studied in the literature. The effects of the amount of clustering present in the overall population structure and of the infectious period distribution on the outcomes of the model are also explored.
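A bare-bones Monte Carlo version of such a households model is sketched below: each infective transmits with a higher probability to housemates than to randomly chosen members of the whole population, and repeated runs from a single initial infective estimate the probability and relative size of a major outbreak. The household size, transmission parameters and outbreak threshold are illustrative, not taken from the paper:

```python
# Generation-based stochastic SIR epidemic on a population partitioned into households.
import numpy as np

rng = np.random.default_rng(5)
n_households, household_size = 300, 4
n = n_households * household_size
household = np.repeat(np.arange(n_households), household_size)
p_house, lambda_global = 0.4, 1.2      # within-household infection prob.; mean global contacts

def final_size():
    status = np.zeros(n, dtype=int)    # 0 = susceptible, 1 = infective, 2 = removed
    status[rng.integers(n)] = 1
    while (status == 1).any():
        newly_infected = []
        for i in np.flatnonzero(status == 1):
            mates = np.flatnonzero((household == household[i]) & (status == 0))
            newly_infected.extend(mates[rng.random(mates.size) < p_house])
            for j in rng.integers(0, n, rng.poisson(lambda_global)):
                if status[j] == 0:
                    newly_infected.append(j)
            status[i] = 2
        if newly_infected:
            status[np.unique(newly_infected)] = 1
    return (status == 2).sum()

sizes = np.array([final_size() for _ in range(200)])
major = sizes > 0.1 * n
print("estimated P(major outbreak):", major.mean())
if major.any():
    print("mean proportion infected in major outbreaks:", sizes[major].mean() / n)
```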
Abstract:
This paper considers a stochastic SIR (susceptible-infective-removed) epidemic model in which individuals may make infectious contacts in two ways, both within 'households' (which for ease of exposition are assumed to have equal size) and along the edges of a random graph describing additional social contacts. Heuristically-motivated branching process approximations are described, which lead to a threshold parameter for the model and methods for calculating the probability of a major outbreak, given few initial infectives, and the expected proportion of the population who are ultimately infected by such a major outbreak. These approximate results are shown to be exact as the number of households tends to infinity by proving associated limit theorems. Moreover, simulation studies indicate that these asymptotic results provide good approximations for modestly-sized finite populations. The extension to unequal sized households is discussed briefly.
Abstract:
By Monte Carlo simulations, we study the character of the spin-glass (SG) phase in dense disordered packings of magnetic nanoparticles (NPs). We focus on NPs which have large uniaxial anisotropies and can be well represented as Ising dipoles. Dipoles are placed on simple cubic (SC) lattices and point along randomly oriented axes. From the behaviour of an SG correlation length we determine the transition temperature Tc between the paramagnetic and SG phases. For temperatures well below Tc we find distributions of the SG overlap parameter q that are strongly sample-dependent and exhibit several spikes. We find that the average width of the spikes, and the fraction of samples with spikes higher than a certain threshold, do not vary appreciably with the system sizes studied. We compare these results with those found previously for 3D site-diluted systems of parallel Ising dipoles and with the behaviour of the Sherrington-Kirkpatrick model.
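For readers unfamiliar with the overlap parameter, the sketch below shows how q is measured between two independently simulated replicas of the same disorder sample. A small nearest-neighbour Edwards-Anderson Ising model stands in for the dipolar nanoparticle system, which would require long-range dipole-dipole sums; all sizes and temperatures are illustrative:

```python
# Spin-glass overlap q between two replicas of a toy 3D Edwards-Anderson Ising model.
import numpy as np

rng = np.random.default_rng(6)
L, T, steps = 6, 0.8, 20000
# Random +-1 couplings on the three nearest-neighbour bond directions (periodic boundaries).
J = rng.choice([-1.0, 1.0], size=(3, L, L, L))

def local_field(s):
    h = np.zeros_like(s)
    for axis in range(3):
        h += J[axis] * np.roll(s, -1, axis) + np.roll(J[axis] * s, 1, axis)
    return h

def metropolis(s):
    for _ in range(steps):                        # single-spin Metropolis updates
        i, j, k = rng.integers(L, size=3)
        dE = 2 * s[i, j, k] * local_field(s)[i, j, k]
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j, k] *= -1
    return s

replica_a = metropolis(rng.choice([-1.0, 1.0], size=(L, L, L)))
replica_b = metropolis(rng.choice([-1.0, 1.0], size=(L, L, L)))
q = (replica_a * replica_b).mean()                # overlap between the two replicas
print("overlap q =", q)
```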