991 results for Adjacency Matrix
Abstract:
The main objective of this PhD was to further develop Bayesian spatio-temporal models, specifically the Conditional Autoregressive (CAR) class of models, for the analysis of sparse disease outcomes such as birth defects. The motivation for the thesis arose from problems encountered when analyzing a large birth defect registry in New South Wales. The specific components and related research objectives of the thesis were developed from gaps in the literature on current formulations of the CAR model, and from health service planning requirements. The analyses used data from a large probabilistically linked database covering 1990 to 2004, consisting of fields from two separate registries: the Birth Defect Registry (BDR) and the Midwives Data Collection (MDC).

The main objective was split into smaller goals. The first was to determine how the specification of the neighbourhood weight matrix affects the smoothing properties of the CAR model; this is the focus of Chapter 6. The second was to evaluate the usefulness of incorporating a zero-inflated Poisson (ZIP) component, as well as a shared-component model, for modeling a sparse outcome; this is carried out in Chapter 7. The third was to identify optimal sampling and sample size schemes for selecting individual-level data for a hybrid ecological spatial model; this is done in Chapter 8. Finally, I wanted to combine the earlier improvements to the CAR model with demographic projections to provide forecasts of birth defects at the Statistical Local Area (SLA) level; Chapter 9 describes how this is done.

For the first objective, I examined a series of neighbourhood weight matrices and showed how smoothing the relative risk estimates according to similarity on an important covariate (maternal age) improved the model's ability to recover the underlying risk, compared with the traditional adjacency (specifically the queen) method of applying weights.

Next, to address the sparseness and excess zeros commonly encountered in the analysis of rare outcomes such as birth defects, I compared several models, including an extension of the usual Poisson model to accommodate excess zeros in the data. This was achieved via a mixture model, which also encompassed the shared-component model to improve the estimation of sparse counts by borrowing strength across a shared component (e.g. latent risk factors) with a referent outcome (caesarean section in this example). Using the Deviance Information Criterion (DIC), I showed that the proposed model performed better than the usual models, but only when both outcomes shared a strong spatial correlation.

The next objective involved identifying the optimal sampling and sample size strategy for incorporating individual-level data with areal covariates in a hybrid study design. I performed extensive simulation studies evaluating thirteen different sampling schemes, along with variations in sample size, in the context of an ecological regression model that incorporated spatial correlation in the outcomes and accommodated both individual and areal measures of covariates. Using the Average Mean Squared Error (AMSE), I showed that a simple random sample of 20% of the SLAs, followed by selecting all cases in the chosen SLAs along with an equal number of controls, provided the lowest AMSE.

The final objective involved combining the improved spatio-temporal CAR model with population (i.e. women) forecasts to provide 30-year annual estimates of birth defects at the SLA level in New South Wales, Australia. The projections were illustrated using sixteen different SLAs, representing the various areal measures of socio-economic status and remoteness. A sensitivity analysis of the assumptions used in the projection was also undertaken.

By the end of the thesis, I show how challenges in the spatial analysis of rare diseases such as birth defects can be addressed: by formulating the neighbourhood weight matrix to smooth according to a key covariate (maternal age), by incorporating a ZIP component to model excess zeros, and by borrowing strength from a referent outcome (caesarean counts). An efficient strategy for sampling individual-level data, and sample size considerations for rare diseases, are also presented. Finally, projections of birth defect categories at the SLA level are made.
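As a rough illustration of the first objective, the sketch below builds a covariate-weighted neighbourhood matrix and the corresponding proper-CAR precision Q = τ(D − ρW); the inverse-difference weight form and the toy data are assumptions for illustration, not the thesis's exact specification.

```python
import numpy as np

def covariate_weights(adj, x):
    """Down-weight neighbours that differ on a covariate (illustrative form).

    adj : (n, n) binary queen-adjacency matrix
    x   : (n,) area-level covariate, e.g. mean maternal age
    """
    diff = np.abs(x[:, None] - x[None, :])
    return adj / (1.0 + diff)       # w_ij = 1 / (1 + |x_i - x_j|) for neighbours

def car_precision(W, rho=0.9, tau=1.0):
    """Precision matrix of a proper CAR prior: Q = tau * (D - rho * W)."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# toy example: 4 areas on a line
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
maternal_age = np.array([27.0, 28.0, 33.0, 34.0])
Q = car_precision(covariate_weights(adj, maternal_age))
```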
Abstract:
In this paper, new topological indices, EA Sigma and EAmax, are introduced. They are based on the extended adjacency matrices of molecules, in which the influence of heteroatoms and multiple bonds is considered. The results show that EA Sigma …
Abstract:
This work presents a graph-theoretical method for determining the degree of fault tolerance of the interconnections and nodes of a computer network. Experimental results obtained from simulations of this method over a distributed computing network environment are also presented.
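As a hedged illustration, vertex and edge connectivity are two standard graph-theoretic fault-tolerance degrees for interconnection networks; whether they coincide with this paper's exact measure is not clear from the abstract. A minimal sketch using networkx on an assumed toy topology:

```python
import networkx as nx

# a small interconnection topology (assumed example, not the paper's network)
G = nx.Graph([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)])

# node/edge connectivity: the minimum number of node (resp. link) failures
# that can disconnect the network
print(nx.node_connectivity(G))   # -> 1 (node 3 is a cut vertex)
print(nx.edge_connectivity(G))   # -> 1 (edge (3, 4) is a bridge)
```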
Abstract:
Background: Achieving health equity has been identified as a major challenge, both internationally and within Australia. Inequalities in cancer outcomes are well documented, and must be quantified before they can be addressed. One method of portraying geographical variation in data is mapping. Recently we produced thematic maps showing the geographical variation in cancer incidence and survival across Queensland, Australia. This article documents the decisions and rationale used in producing these maps, with the aim of assisting others in producing chronic disease atlases. Methods: Bayesian hierarchical models were used to produce the estimates. Justification is provided for the cancers chosen, the geographical areas used, the modelling method, the outcome measures mapped, the production of the adjacency matrix, the assessment of convergence, the sensitivity analyses performed, and the determination of significant geographical variation. Conclusions: Although careful consideration of many issues is required, chronic disease atlases are a useful tool for assessing and quantifying geographical inequalities. In addition, they help focus research efforts to investigate why the observed inequalities exist, which in turn informs advocacy, policy, support and education programs designed to reduce these inequalities.
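A minimal sketch of one way such an adjacency matrix can be produced from digital boundaries, using queen-style contiguity on toy square polygons (the atlas's actual boundary data and tooling are not specified in the abstract):

```python
import numpy as np
from shapely.geometry import box

# toy 2x2 grid of square "areas" (stand-ins for geographical units)
areas = [box(i, j, i + 1, j + 1) for i in range(2) for j in range(2)]

n = len(areas)
W = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        # queen-style contiguity: any shared boundary point makes a neighbour
        if areas[i].touches(areas[j]):
            W[i, j] = W[j, i] = 1
```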
Abstract:
Identifying product families has been considered an effective way to accommodate the increasing product variety across diverse market niches. In this paper, we propose a novel framework for identifying product families using a similarity measure for a common product design data structure, the BOM (Bill of Materials), based on data mining techniques such as frequent pattern mining and clustering. For calculating the similarity between BOMs, a novel Extended Augmented Adjacency Matrix (EAAM) representation is introduced, which consists of information not only on the content and topology but also on the frequent structural dependencies among the various parts of a product design. These EAAM representations of BOMs are compared to calculate the similarity between products, which is used as a clustering input to group the product families. When applied to real-life manufacturing data, the proposed framework outperforms a current baseline that uses orthogonal Procrustes for grouping product families.
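The full EAAM construction is not given in the abstract; as a simplified stand-in that captures only content and topology, the sketch below compares plain BOM adjacency matrices by cosine similarity and groups families with average-linkage clustering:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def bom_similarity(A, B):
    """Cosine similarity between flattened parent-child adjacency matrices.

    A, B : (n, n) 0/1 matrices over a shared, aligned part vocabulary.
    (The paper's EAAM also encodes frequent structural dependencies;
    this stand-in captures only content and topology.)
    """
    a, b = A.ravel(), B.ravel()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# three toy BOMs over 3 parts; BOMs 0 and 1 share most of their structure
boms = [np.array([[0, 1, 1], [0, 0, 0], [0, 0, 0]]),
        np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]]),
        np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0]])]

n = len(boms)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = 1.0 - bom_similarity(boms[i], boms[j])

families = fcluster(linkage(squareform(dist), method="average"),
                    t=0.5, criterion="distance")   # -> BOMs 0, 1 grouped
```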
Abstract:
The objective of this work is to develop a systematic methodology for describing hand postures and grasps that is independent of the kinematics and geometry of the hand model, and which in turn can be used for developing a universal referencing scheme. It is therefore necessary that the scheme be general enough to describe the continuum of hand poses. The Indian traditional classical dance form “Bharathanatyam” uses 28 single-handed gestures, called “mudras”. A mudra can be perceived as a hand posture with a specific pattern of finger configurations. Using modifiers, complex mudras can be constructed from relatively simple mudras. An adjacency matrix is constructed to describe the relationships among mudras. Various mudra transitions can be obtained from the graph associated with this matrix. Using this matrix, a hierarchy of the mudras is formed. A set of base mudras and modifiers is used to describe how one simple hand posture can be transformed into another, relatively complex one. A canonical set of predefined hand postures and modifiers can be used in digital human modeling to develop standard hand posture libraries.
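A minimal sketch of the transition-graph idea; the mudra names and single-modifier edges below are hypothetical examples, not the paper's actual matrix:

```python
import networkx as nx

# hypothetical fragment of a mudra adjacency matrix: an edge means the
# target mudra is obtained from the source by applying a single modifier
edges = [("Pataka", "Tripataka"), ("Pataka", "Ardhapataka"),
         ("Tripataka", "Ardhapataka"), ("Ardhapataka", "Kartarimukha")]
G = nx.DiGraph(edges)

# a transition sequence from a simple base mudra to a more complex one
print(nx.shortest_path(G, "Pataka", "Kartarimukha"))
# -> ['Pataka', 'Ardhapataka', 'Kartarimukha']
```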
Abstract:
Synfire waves are propagating spike packets in synfire chains, which are feedforward chains embedded in random networks. Although synfire waves have proved to be an effective quantification of network activity, with clear relations to network structure, their utility is largely limited to feedforward networks with low background activity. To overcome these shortcomings, we describe a novel generalisation of synfire waves, and define a 'synconset wave' as a cascade of first spikes within a synchronisation event. Synconset waves occur in 'synconset chains', which are feedforward chains embedded in possibly heavily recurrent networks with heavy background activity. We probed the utility of synconset waves using simulations of single-compartment neuron network models with biophysically realistic conductances, and demonstrated that the spread of synconset waves directly follows from the network connectivity matrix and is modulated by top-down inputs and the resultant oscillations. Such synconset profiles lend intuitive insights into network organisation in terms of connection probabilities between various network regions rather than an adjacency matrix. To test this intuition, we develop a Bayesian likelihood function that quantifies the probability that an observed synfire wave was caused by a given network. Further, we demonstrate its utility in the inverse problem of identifying the network that caused a given synfire wave. This method was effective even in highly subsampled networks where only a small subset of neurons was accessible, thus showing its utility in the experimental estimation of connectomes in real neuronal networks. Together, we propose synconset chains/waves as an effective framework for understanding the impact of network structure on function, and as a step towards developing physiology-driven network identification methods. Finally, as synconset chains extend the utilities of synfire chains to arbitrary networks, we suggest applications of our framework to several aspects of network physiology, including cell assemblies, population codes, and oscillatory synchrony.
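A minimal deterministic sketch of a synconset wave as a first-spike cascade over a connectivity matrix, with all biophysics (conductances, background activity, top-down modulation) abstracted away and a unit transmission delay assumed:

```python
import numpy as np
from collections import deque

def synconset_wave(A, seeds, delay=1.0):
    """First-spike cascade through a (possibly recurrent) network.

    A     : (n, n) 0/1 connectivity matrix, A[i, j] = 1 if i projects to j
    seeds : neurons firing at t = 0 in the synchronisation event
    Returns each neuron's first-spike time (inf if never recruited).
    Recurrent edges are harmless: only the *first* spike is recorded,
    which is what makes the wave well defined in recurrent networks.
    """
    n = A.shape[0]
    first = np.full(n, np.inf)
    first[list(seeds)] = 0.0
    q = deque(seeds)
    while q:
        i = q.popleft()
        for j in np.nonzero(A[i])[0]:
            if first[j] == np.inf:          # record first spikes only
                first[j] = first[i] + delay
                q.append(j)
    return first

A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [1, 0, 0, 0]])   # note the recurrent edge 3 -> 0
print(synconset_wave(A, seeds=[0]))   # -> [0. 1. 1. 2.]
```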
Abstract:
An axis-parallel b-dimensional box is a Cartesian product R_1 × R_2 × ... × R_b, where each R_i is a closed interval [a_i, b_i] on the real line. For a graph G, its boxicity box(G) is the minimum dimension b such that G is representable as the intersection graph of boxes in b-dimensional space. Although boxicity was introduced in 1969 and has been studied extensively, there are no significant results on lower bounds for boxicity. In this paper, we develop two general methods for deriving lower bounds. Applying these methods we give several results, some of which are listed below: 1. The boxicity of a graph on n vertices with no universal vertices and minimum degree δ is at least n/(2(n − δ − 1)). 2. Consider the G(n, p) model of random graphs. Let p ≤ 1 − 40 log n / n². Then with high probability, box(G) = Ω(np(1 − p)). On setting p = 1/2 we immediately infer that almost all graphs have boxicity Ω(n). Another consequence of this result is as follows: for any positive constant c < 1, almost all graphs on n vertices with m ≤ c(n choose 2) edges have boxicity Ω(m/n). 3. Let G be a connected k-regular graph on n vertices, and let λ be the second largest eigenvalue in absolute value of the adjacency matrix of G. Then the boxicity of G is at least ((k²/λ²)/log(1 + k²/λ²)) · ((n − k − 1)/(2n)). 4. For any positive constant c < 1, almost all balanced bipartite graphs on 2n vertices with m ≤ cn² edges have boxicity Ω(m/n).
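Result 3 can be evaluated numerically; a sketch for the Petersen graph (3-regular on 10 vertices, with second largest absolute eigenvalue λ = 2):

```python
import numpy as np
import networkx as nx

def spectral_boxicity_bound(G, k):
    """Evaluate the spectral lower bound for a k-regular graph G:
    box(G) >= (k^2/lambda^2) / log(1 + k^2/lambda^2) * (n - k - 1) / (2n),
    where lambda is the second largest adjacency eigenvalue in absolute value.
    """
    A = nx.to_numpy_array(G)
    eig = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
    lam = eig[1]                     # second largest |eigenvalue|
    r = k**2 / lam**2
    n = G.number_of_nodes()
    return r / np.log1p(r) * (n - k - 1) / (2 * n)

print(spectral_boxicity_bound(nx.petersen_graph(), k=3))   # ~0.57
```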
Abstract:
This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problem of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of making use of a network representation to describe the market of interest.
In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model, and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. We show that the economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact in a very particular way for the eigenvector centrality to emerge as the limiting case of our market equilibrium.
We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the “global” structure of the network, paying less attention to patterns that are more “local”. Mathematically, the eigenvector centrality captures the relevance of players in the bargaining process, using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Thus our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.
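A minimal power-iteration sketch of that definition (the iteration converges to the eigenvector of the largest eigenvalue for a connected, non-bipartite network; the toy network is an assumption for illustration):

```python
import numpy as np

def eigenvector_centrality(A, iters=200):
    """Power iteration: converges to the eigenvector associated with the
    largest eigenvalue of A (for a connected, non-bipartite network)."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x

# triangle of players 0-1-2 with a pendant player 3 attached to player 2
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(eigenvector_centrality(A))   # player 2 scores highest
```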
As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers and buyers' network positions.
Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms for the latency functions. We derive explicit conditions to guarantee existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.
Abstract:
A new molecular topological index, EATI, is proposed based on the extended adjacency matrix. It has a hitherto unheard-of power of discrimination and can be applied to discriminate various kinds of organic compounds, such as structures containing heteroatoms. Thus EATI might possibly be used as a supplementary reference to CAS Registry Numbers for structure documentation.
Abstract:
A highly discriminating molecular topological index, EAID, is proposed based on the extended adjacency matrix. A systematic search for degeneracy was performed over 3,807,434 alkane trees, 202,558 complex cyclic or polycyclic graphs, and 430,472 structures containing heteroatoms. No counterexamples (two or more nonisomorphic structures with the same EAID number) were found. This is a hitherto unheard-of power of discrimination. Thus EAID might possibly be used as a supplementary reference to CAS Registry Numbers for structure documentation.
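The abstract does not give EAID's formula; purely to illustrate eigenvalue-derived identifiers and how degeneracy would be searched for, a toy spectral ID is sketched below. Cospectral non-isomorphic graphs collide under this toy index, which is precisely the kind of counterexample such searches hunt:

```python
import numpy as np
import networkx as nx

def spectral_id(G, digits=10):
    """Toy eigenvalue-derived identifier (NOT the actual EAID construction):
    the sorted adjacency spectrum, rounded and hashed into a single number.
    Isomorphic graphs always agree; rare non-isomorphic cospectral pairs
    collide, i.e. they would count as degeneracy counterexamples.
    """
    spec = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))
    return hash(tuple(np.round(spec, digits)))

# two non-isomorphic 4-vertex trees with different spectra
print(spectral_id(nx.path_graph(4)) == spectral_id(nx.star_graph(3)))  # False
```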
Abstract:
An expert system for the elucidation of the structures of organic compounds, ESESOC-II, has been designed. It is composed of three parts: spectroscopic data analysis, a structure generator, and evaluation of the candidate structures. The heart of ESESOC is the structure generator, which accepts specific types of information (e.g. molecular formulae and substructure constraints) and produces an exhaustive and irredundant list of candidate structures. The scheme for structure generation is given, in which a depth-first search strategy is used to fill the bonding adjacency matrix (BAM) and a new method is introduced to remove duplicates.
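A toy sketch of depth-first filling of a bonding adjacency matrix under valence constraints (single bonds only); the actual ESESOC generator additionally handles bond orders, substructure constraints, and removes isomorphic duplicates with a proper canonical test:

```python
from itertools import combinations

def generate_skeletons(valences):
    """Depth-first fill of a bonding adjacency matrix, single bonds only:
    enumerate labelled, connected structures whose degrees match the
    target valences. A toy stand-in for the ESESOC generator."""
    n = len(valences)
    pairs = list(combinations(range(n), 2))   # candidate bonds, in DFS order
    out = []

    def connected(bonds):
        adj = {i: set() for i in range(n)}
        for i, j in bonds:
            adj[i].add(j)
            adj[j].add(i)
        stack, seen = [0], {0}
        while stack:
            for w in adj[stack.pop()] - seen:
                seen.add(w)
                stack.append(w)
        return len(seen) == n

    def dfs(k, bonds, deg):
        if k == len(pairs):
            if deg == valences and connected(bonds):
                out.append(bonds)
            return
        i, j = pairs[k]
        dfs(k + 1, bonds, deg)                      # leave bond i-j out
        if deg[i] < valences[i] and deg[j] < valences[j]:
            d = deg[:]
            d[i] += 1
            d[j] += 1
            dfs(k + 1, bonds + [(i, j)], d)         # place bond i-j

    dfs(0, [], [0] * n)
    return out

# valences 1-2-1 (an H-O-H-like chain): exactly one connected skeleton
print(generate_skeletons([1, 2, 1]))   # [[(0, 1), (1, 2)]]
```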
Abstract:
Recent empirical studies have shown that Internet topologies exhibit power laws of the form y = x^α for the following relationships: (P1) outdegree of node (domain or router) versus rank; (P2) number of nodes versus outdegree; (P3) number of node pairs within a neighborhood versus neighborhood size (in hops); and (P4) eigenvalues of the adjacency matrix versus rank. However, causes for the appearance of such power laws have not been convincingly given. In this paper, we examine four factors in the formation of Internet topologies. These factors are (F1) preferential connectivity of a new node to existing nodes; (F2) incremental growth of the network; (F3) distribution of nodes in space; and (F4) locality of edge connections. In synthetically generated network topologies, we study the relevance of each factor in causing the aforementioned power laws as well as other properties, namely diameter, average path length and clustering coefficient. Different kinds of network topologies are generated: (T1) topologies generated using our parametrized generator, which we call BRITE; (T2) random topologies generated using the well-known Waxman model; (T3) Transit-Stub topologies generated using the GT-ITM tool; and (T4) regular grid topologies. We observe that some generated topologies may not obey power laws P1 and P2. Thus, the existence of these power laws can be used to validate the accuracy of a given tool in generating representative Internet topologies. Power laws P3 and P4 were observed in nearly all considered topologies, but different topologies showed different values of the power exponent α. Thus, while the presence of power laws P3 and P4 does not give strong evidence for the representativeness of a generated topology, the value of α in P3 and P4 can be used as a litmus test for the representativeness of a generated topology. We also find that factors F1 and F2 are the key contributors in our study which provide the resemblance of our generated topologies to that of the Internet.
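Power law P4 can be checked on a synthetic topology. BRITE itself is not shown here, so a Barabási–Albert graph (preferential connectivity F1 plus incremental growth F2) stands in as an assumed example:

```python
import numpy as np
import networkx as nx

# synthetic preferential-attachment topology (stand-in for F1 + F2)
G = nx.barabasi_albert_graph(n=1000, m=2, seed=1)
eig = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))[::-1][:50]

# fit log(eigenvalue) = alpha * log(rank) + c over the top eigenvalues
rank = np.arange(1, len(eig) + 1)
alpha, c = np.polyfit(np.log(rank), np.log(eig), 1)
print(f"P4 exponent alpha ~ {alpha:.2f}")
```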
Abstract:
Critical nodes, or "middlemen", have an essential place in both social and economic networks when considering the flow of information and trade. This paper extends the concept of critical nodes to directed networks and, in doing so, identifies strong and weak middlemen.
Node contestability is introduced as a form of competition in networks, and a duality between uncontested intermediaries and middlemen is established. The brokerage power of middlemen is formally expressed, and a general algorithm is constructed to measure the brokerage power of each node from the network's adjacency matrix. Augmentations of the brokerage power measure are discussed to encapsulate relevant centrality measures. Furthermore, we extend these notions and provide measures of the competitiveness of a network.
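The paper's algorithm is not reproduced in the abstract; as a rough proxy under that caveat, the sketch below scores each node by the number of ordered pairs whose connection is severed when the node is removed:

```python
import networkx as nx

def brokerage(G):
    """For each node v, count ordered pairs (s, t), s != v != t, that are
    connected in G but disconnected in G - v: a simple proxy for the
    brokerage power of a middleman in a directed network."""
    def reach(H):
        return {(s, t) for s in H for t in nx.descendants(H, s)}

    base = reach(G)
    power = {}
    for v in G:
        rest = reach(G.subgraph([u for u in G if u != v]))
        power[v] = sum(1 for (s, t) in base
                       if s != v and t != v and (s, t) not in rest)
    return power

# a path 0 -> 1 -> 2: node 1 brokers the pair (0, 2)
print(brokerage(nx.DiGraph([(0, 1), (1, 2)])))   # {0: 0, 1: 1, 2: 0}
```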
We use these concepts to identify and measure middlemen in two empirical socio-economic networks: the elite marriage network of Renaissance Florence and Krackhardt's advice network.
Abstract:
The energy of a graph is equal to the sum of the absolute values of its eigenvalues. The energy of a matrix is equal to the sum of its singular values. We establish relations between the energy of the line graph of a graph G and the energies associated with the Laplacian and signless Laplacian matrices of G.
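Both definitions take only a few lines of numpy; the 4-cycle C4 is chosen here so the results are easy to check by hand:

```python
import numpy as np
import networkx as nx

G = nx.cycle_graph(4)
A = nx.to_numpy_array(G)
L = nx.laplacian_matrix(G).toarray()

graph_energy = np.abs(np.linalg.eigvalsh(A)).sum()        # sum of |eigenvalues|
matrix_energy = np.linalg.svd(L, compute_uv=False).sum()  # sum of singular values

print(graph_energy)    # C4: eigenvalues 2, 0, 0, -2 -> energy 4
print(matrix_energy)   # Laplacian of C4: singular values 4, 2, 2, 0 -> 8
```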