976 results for least common subgraph algorithm


Relevance:

30.00%

Publisher:

Abstract:

Latent class analysis was performed on migraine symptom data collected in a Dutch population sample (N = 12,210, 59% female) in order to obtain empirical groupings of individuals suffering from symptoms of migraine headache. Based on these heritable groupings (h^2 = 0.49, 95% CI: 0.41-0.57), individuals were classified as affected (migrainous headache) or unaffected. Genome-wide linkage analysis was performed using genotype data from 105 families with at least 2 affected siblings. In addition to this primary phenotype, linkage analyses were performed for the individual migraine symptoms. Significance levels, corrected for the analysis of multiple traits, were determined empirically via a novel simulation approach. Suggestive linkage for migrainous headache was found on chromosomes 1 (LOD = 1.63; pointwise P = 0.0031), 13 (LOD = 1.63; P = 0.0031), and 20 (LOD = 1.85; P = 0.0018). Interestingly, the chromosome 1 peak was located close to the ATP1A2 gene, associated with familial hemiplegic migraine type 2 (FHM2). Individual symptom analysis produced a LOD score of 1.97 (P = 0.0013) on chromosome 5 (photo/phonophobia), a LOD score of 2.13 (P = 0.0009) on chromosome 10 (moderate/severe pain intensity) and a near-significant LOD score of 3.31 (P = 0.00005) on chromosome 13 (pulsating headache). These peaks were all located near regions previously reported in migraine linkage studies. Our results provide important replication and support for the presence of migraine susceptibility genes within these regions, and further support the utility of an LCA-based phenotyping approach and analysis of individual symptoms in migraine genetic research. Additionally, our novel "2-step" analysis and simulation approach provides a powerful means to investigate linkage to individual trait components.
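The multiple-trait correction can be illustrated with a generic max-statistic Monte Carlo scheme: simulate the trait statistics under the null, take the per-replicate maximum across traits, and locate the observed statistic within that distribution. The sketch below is a minimal illustration of this idea only, not the authors' exact "2-step" procedure, and all numbers in it are placeholders.

```python
import numpy as np

def empirical_corrected_p(observed, null_sims):
    """Multiple-trait correction by simulation: the corrected p-value is the
    fraction of null replicates whose maximum statistic across all traits
    reaches the observed value. A generic max-statistic sketch, not the
    paper's exact '2-step' procedure."""
    max_null = null_sims.max(axis=1)      # max statistic across traits
    return (max_null >= observed).mean()

rng = np.random.default_rng(5)
n_reps, n_traits = 10_000, 5
# Stand-in null: correlated trait statistics sharing a common component,
# squared to look chi-square-like (as linkage statistics roughly are).
shared = rng.standard_normal((n_reps, 1))
null = (0.6 * shared + 0.8 * rng.standard_normal((n_reps, n_traits))) ** 2

print(empirical_corrected_p(observed=9.0, null_sims=null))
```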

Relevance:

30.00%

Publisher:

Abstract:

This study was initiated in response to a scarcity of data on the efficiency, selectivity and discard mortality of baited traps used to target Scylla serrata. Five replicates of four traps, including "hoop nets", rigid "wire pots", and collapsible "round" and "rectangular" pots, were deployed for 3, 6 and 24 h in two Australian estuaries. Trapped S. serrata were "discarded" into cages and monitored with controls over 3 d. All S. serrata were assessed for damage, while subsets of immediately caught and monitored individuals had haemolymph constituents quantified as stress indices. All traps retained similar-sized (8.1-19.1 cm carapace width) S. serrata, with catches positively correlated to deployment duration. Round pots were the most efficient for S. serrata and fish, mostly Acanthopagrus australis (3% mortality). Hoop nets were the least efficient and were often damaged. No S. serrata died, but 18 were wounded (biased towards hoop nets), typically involving a missing swimmeret. Physiological responses were mild and mostly affected by biological factors. The results validate discarding unwanted S. serrata for controlling exploitation, but larger mesh sizes or escape vents in pots and restrictions on hoop nets would minimise unnecessary catches, pollution and ghost fishing. © 2012 International Council for the Exploration of the Sea. Published by Oxford University Press. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

One of the main aims of evolutionary biology is to explain why organisms vary phenotypically as they do. Proximately, this variation arises from genetic differences and from environmental influences, the latter of which is referred to as phenotypic plasticity. Phenotypic plasticity is thus a central concept in evolutionary biology, and understanding its relative importance in causing phenotypic variation and differentiation is important, for instance in anticipating the consequences of human-induced environmental changes. The aim of this thesis was to study geographic variation and local adaptation, as well as sex ratios and environmental sex reversal, in the common frog (Rana temporaria). These themes cover three different aspects of phenotypic plasticity, which emerges as the central concept of the thesis. The first two chapters address geographic variation and local adaptation in two potentially thermally adaptive traits, namely the degree of melanism and the relative leg length. The results show that although there is an increasing latitudinal trend in the degree of melanism in wild populations across the Scandinavian Peninsula, this cline has no direct genetic basis and is thus environmentally induced. The second chapter demonstrates that although there is no linear, latitudinally ordered phenotypic trend in relative leg length, as would be expected under Allen's rule (an ecogeographical rule linking extremity length to climatic conditions), there seems to be such a trend at the genetic level, hidden under environmental effects. The first two chapters thus view phenotypic plasticity through its ecological role and evolution, and demonstrate that it can both give rise to phenotypic variation and hide evolutionary patterns in studies that focus solely on phenotypes. The last three chapters relate to phenotypic plasticity through its ecological and evolutionary role in sex determination, and the consequent effects on population sex ratio, genetic recombination and the evolution of sex chromosomes. The results show that while sex ratios are strongly female-biased and there is evidence of environmental sex reversals, these reversals are unlikely to have caused the sex ratio skew, at least directly. The results demonstrate that environmental sex reversal can have an effect on the evolution of sex chromosomes, as the recombination patterns between them seem to be controlled by phenotypic, rather than genetic, sex. This potentially allows Y chromosomes to recombine, lending support to the recent hypothesis that sex reversal may play an important role in the rejuvenation of Y chromosomes.

Relevance:

30.00%

Publisher:

Abstract:

Statistical learning algorithms provide a viable framework for geotechnical engineering modeling. This paper describes two statistical learning algorithms applied to site characterization modeling based on standard penetration test (SPT) data. More than 2700 field SPT values (N) have been collected from 766 boreholes spread over an area of 220 km^2 in Bangalore. To obtain corrected values (N_c), the N values have been corrected for different parameters such as overburden stress, size of borehole, type of sampler, length of connecting rod, etc. In the three-dimensional site characterization model, the function N_c = N_c(X, Y, Z), where X, Y and Z are the coordinates of a point corresponding to an N_c value, is approximated, from which the N_c value at any half-space point in Bangalore can be determined. The first algorithm uses the least-squares support vector machine (LSSVM), which is related to a ridge-regression type of support vector machine. The second algorithm uses the relevance vector machine (RVM), which combines the strengths of kernel-based methods and Bayesian theory to establish the relationships between a set of input vectors and a desired output. The paper also presents a comparative study between the developed LSSVM and RVM models for site characterization. Copyright (C) 2009 John Wiley & Sons, Ltd.
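The LSSVM fit reduces to a single linear solve in the dual: the kernel matrix, regularized by I/γ and bordered by a bias row, maps the training targets to dual weights. Below is a minimal sketch on synthetic (X, Y, Z) → N_c data; the kernel width, regularization value and generated coordinates are illustrative placeholders, not the paper's data or tuning.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    # LSSVM dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy stand-in for borehole data: (X, Y, Z) coordinates -> corrected SPT value N_c.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))
y = 20 + 15 * X[:, 2] + 5 * np.sin(6 * X[:, 0]) + rng.normal(0, 1, 200)

b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, b, alpha, X[:5]))  # N_c estimates at five known points
```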

Relevance:

30.00%

Publisher:

Abstract:

The Thesis presents a state-space model for a basketball league and a Kalman filter algorithm for the estimation of the state of the league. In the state-space model, each of the basketball teams is associated with a rating that represents its strength compared to the other teams. The ratings are assumed to evolve in time following a stochastic process with independent Gaussian increments. The estimation of the team ratings is based on the observed game scores, which are assumed to depend linearly on the true strengths of the teams plus independent Gaussian noise. The team ratings are estimated using a recursive Kalman filter algorithm that produces least squares optimal estimates for the team strengths and predictions for the scores of future games. Additionally, if the Gaussianity assumption holds, the predictions given by the Kalman filter maximize the likelihood of the observed scores. The team ratings allow probabilistic inference about the ranking of the teams and their relative strengths, as well as about the teams' winning probabilities in future games. The predictions about the winners of the games are correct 65-70% of the time. The team ratings explain 16% of the random variation observed in the game scores. Furthermore, the winning probabilities given by the model are consistent with the observed scores. The state-space model includes four independent parameters that involve the variances of the noise terms and the home court advantage observed in the scores. The Thesis presents the estimation of these parameters using the maximum likelihood method as well as other techniques. The Thesis also gives various example analyses related to the American professional basketball league, i.e., the National Basketball Association (NBA), and regular seasons played in the years 2005 through 2010. Additionally, the season 2009-2010 is discussed in full detail, including the playoffs.
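The per-game update is a standard Kalman recursion in which the state is the vector of team ratings and each game contributes one scalar observation of the score margin. The sketch below shows that recursion under the model's stated assumptions; the process noise, observation noise and home-advantage values are illustrative placeholders, not the thesis's maximum likelihood estimates.

```python
import numpy as np

class RatingKalman:
    """Random-walk team ratings; each game observes the score margin
    margin = rating[home] - rating[away] + home_adv + noise.
    The q, r and home_adv values are illustrative placeholders."""
    def __init__(self, n_teams, q=0.1, r=100.0, home_adv=3.0):
        self.x = np.zeros(n_teams)        # rating estimates
        self.P = 25.0 * np.eye(n_teams)   # estimate covariance
        self.q, self.r, self.h = q, r, home_adv

    def step(self, home, away, margin):
        n = len(self.x)
        self.P += self.q * np.eye(n)      # predict: independent Gaussian increments
        H = np.zeros(n)
        H[home], H[away] = 1.0, -1.0      # margin observes a rating difference
        innovation = margin - (H @ self.x + self.h)
        S = H @ self.P @ H + self.r       # innovation variance
        K = self.P @ H / S                # Kalman gain
        self.x += K * innovation
        self.P -= np.outer(K, H @ self.P) # covariance update

kf = RatingKalman(n_teams=4)
for home, away, margin in [(0, 1, 12), (2, 3, -5), (1, 2, 3), (3, 0, -8)]:
    kf.step(home, away, margin)
print(np.round(kf.x, 2))   # rating differences predict future margins
```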

Relevance:

30.00%

Publisher:

Abstract:

Relative geometric arrangements of the sample points, with reference to the structure of the imbedding space, produce clusters. Hence, if each sample point is imagined to acquire a volume of a small M-cube (called a pattern-cell), depending on the ranges of its (M) features and the number (N) of samples, then overlapping pattern-cells would indicate naturally closer sample points. A chain or blob of such overlapping cells would mean a cluster, and separate clusters would not share a common pattern-cell between them. The conditions and an analytic method to find such an overlap are developed. A simple, intuitive, nonparametric clustering procedure, based on such overlapping pattern-cells, is presented. It may be classified as an agglomerative, hierarchical, linkage-type clustering procedure. The algorithm is fast, requires low storage and can identify irregular clusters. Two extensions of the algorithm, to separate overlapping clusters and to estimate the nature of pattern distributions in the sample space, are also indicated.
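The overlap test is simple: two axis-aligned cells of side s centered on the points overlap iff the points lie closer than s along every feature axis, and merging overlapping pairs yields the chains the abstract describes. The sketch below uses union-find for the merging; the cell-size rule (each feature's range divided by N^(1/M)) is an assumed placeholder, since the paper derives its own cell size from the feature ranges and sample count.

```python
import numpy as np

def pattern_cell_clusters(X, scale=1.0):
    """Cluster by chaining overlapping pattern-cells (agglomerative, linkage-type).
    Cell side per feature: scale * feature range / N**(1/M); this sizing rule is
    an illustrative placeholder, not the paper's derivation."""
    N, M = X.shape
    side = scale * (X.max(axis=0) - X.min(axis=0)) / N ** (1.0 / M)

    parent = list(range(N))            # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(N):
        for j in range(i + 1, N):
            # Cells centered on two points overlap iff the points lie closer
            # than one cell side along every feature axis.
            if np.all(np.abs(X[i] - X[j]) < side):
                parent[find(i)] = find(j)

    roots = np.array([find(i) for i in range(N)])
    _, labels = np.unique(roots, return_inverse=True)
    return labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(0, 1, (30, 2)) + 10])
print(pattern_cell_clusters(X))        # two well-separated blobs -> two labels
```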

Relevance:

30.00%

Publisher:

Abstract:

Presented here, in a vector formulation, is an O(mn^2) direct concise algorithm that prunes/identifies the linearly dependent (ld) rows of an arbitrary m × n matrix A and computes its reflexive-type minimum norm inverse A_mr^-, which will be the true inverse A^-1 if A is nonsingular and the Moore-Penrose inverse A^+ if A is full row-rank. The algorithm, without any additional computation, produces the projection operator P = (I - A_mr^- A) that provides a means to compute any of the solutions of the consistent linear equation Ax = b, since the general solution may be expressed as x = A_mr^- b + Pz, where z is an arbitrary vector. The rank r of A will also be produced in the process. Some of the salient features of this algorithm are that (i) the algorithm is concise, (ii) the minimum norm least squares solution for consistent/inconsistent equations is readily computable when A is full row-rank (else, a minimum norm solution for consistent equations is obtainable), (iii) the algorithm identifies ld rows, if any, and reduces concerned computation and improves accuracy of the result, (iv) error-bounds for the inverse as well as the solution x for Ax = b are readily computable, (v) error-free computation of the inverse, solution vector, rank, and projection operator and its inherent parallel implementation are straightforward, (vi) it is suitable for vector (pipeline) machines, and (vii) the inverse produced by the algorithm can be used to solve under-/overdetermined linear systems.
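The role of the projection operator is easy to demonstrate numerically: with any minimum norm generalized inverse, P = I - A^- A projects onto the null space of A, so adding Pz to the minimum norm solution sweeps out the full solution set of a consistent system. The sketch below uses numpy's pinv as a stand-in for the paper's O(mn^2) routine:

```python
import numpy as np

# Illustrative stand-in: numpy's pinv in place of the paper's algorithm.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
A = np.vstack([A, A[0] + A[1]])      # add a linearly dependent row (rank stays 3)
b = A @ rng.standard_normal(5)       # consistent right-hand side by construction

A_inv = np.linalg.pinv(A)            # minimum-norm generalized inverse
P = np.eye(5) - A_inv @ A            # projector onto the null space of A

x_min = A_inv @ b                    # minimum-norm solution
z = rng.standard_normal(5)
x_other = x_min + P @ z              # another member of the general solution family

print(np.allclose(A @ x_min, b), np.allclose(A @ x_other, b))   # True True
print(np.linalg.matrix_rank(A))                                 # 3
```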

Relevance:

30.00%

Publisher:

Abstract:

The source localization algorithms in earlier works mostly used non-planar arrays. In scenarios like human-computer or human-television communication, however, the microphones need to be placed on the computer monitor or television front panel, i.e., planar arrays need to be used. The algorithm proposed in [1] is a Linear Closed Form source localization algorithm (LCF algorithm) based on Time Differences of Arrival (TDOAs) obtained from the data collected using the microphones. It assumes non-planar arrays. The LCF algorithm is applied to planar arrays in the current work. The relationship between the error in the source location estimate and the perturbation in the TDOAs is derived using first-order perturbation analysis and validated using simulations. If the TDOAs are erroneous, both the coefficient matrix and the data matrix used for obtaining the source location will be perturbed. So, a total least squares (TLS) solution for source localization is proposed in the current work. The sensitivity analysis of the source localization algorithm for planar and non-planar arrays is done by introducing perturbation in the TDOAs and the microphone locations. It is shown that the error in the source location estimate is less when we use the planar array instead of the particular non-planar array considered, for the same perturbation in the TDOAs or microphone locations. The location of the reference microphone is proved to be important for getting an accurate source location estimate when using the LCF algorithm.
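Total least squares accounts for noise in both the coefficient matrix and the data vector, which matches the situation described here, since erroneous TDOAs perturb both. The classical SVD construction below is a generic sketch of that solver on a toy overdetermined system, not the paper's localization equations:

```python
import numpy as np

def tls_solve(A, b):
    """Total least squares for A x ~= b when both A and b are perturbed.
    Classical SVD solution: take the right singular vector of [A | b]
    associated with the smallest singular value."""
    C = np.hstack([A, b[:, None]])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                      # direction of smallest singular value
    return -v[:-1] / v[-1]          # scale so the b-coefficient is -1

# Toy stand-in for the localization step: both the TDOA-derived coefficient
# matrix and the data vector carry noise, which is the case TLS targets.
rng = np.random.default_rng(2)
x_true = np.array([1.0, -2.0, 0.5])
A = rng.standard_normal((40, 3))
b = A @ x_true
A_noisy = A + 0.01 * rng.standard_normal(A.shape)
b_noisy = b + 0.01 * rng.standard_normal(b.shape)

print(tls_solve(A_noisy, b_noisy))                        # close to x_true
print(np.linalg.lstsq(A_noisy, b_noisy, rcond=None)[0])   # ordinary LS, for comparison
```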

Relevance:

30.00%

Publisher:

Abstract:

Use of engineered landfills for the disposal of industrial wastes is currently a common practice. Bentonite is attracting greater attention not only as a capping and lining material in landfills but also as a buffer and backfill material for repositories of high-level nuclear waste around the world. In the design of buffer and backfill materials, it is important to know the swelling pressures of compacted bentonite with different electrolyte solutions. The theoretical studies on swell pressure behaviour are all based on Diffuse Double Layer (DDL) theory. To establish a relation between the swell pressure and the void ratio of the soil, it is necessary to calculate the mid-plane potential in the diffuse part of the interacting ionic double layers. The difficulty in these calculations is the elliptic integral involved in the relation between the half-space distance and the mid-plane potential. Several investigators circumvented this problem using indirect methods or cumbersome numerical techniques. In this work, a novel approach is proposed for theoretical estimation of the swell pressures of fine-grained soils from DDL theory. The proposed approach circumvents the complex computations in establishing the relationship between the mid-plane potential and the distance between the diffuse plates; in other words, between swell pressure and void ratio.
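For reference, the two relations the abstract alludes to take the following standard form in Gouy-Chapman theory (after Bolt's classical treatment); the sign and normalization conventions below, written in terms of magnitudes of the nondimensional potentials, are one common choice and are an assumption here, not the paper's own notation:

```latex
% p: swell pressure, n: bulk ion concentration, k: Boltzmann constant,
% T: absolute temperature, u: nondimensional mid-plane potential,
% z: nondimensional surface potential, kappa: inverse Debye length,
% d: half-space distance between the interacting clay plates.
\[
  p = 2 n k T \,(\cosh u - 1),
  \qquad
  \kappa d = \int_{u}^{z} \frac{\mathrm{d}y}{\sqrt{2\cosh y - 2\cosh u}}
\]
% The second integral is elliptic; this is the computational hurdle the
% proposed approach is designed to avoid.
```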

Relevance:

30.00%

Publisher:

Abstract:

Frequent episode discovery is a popular framework for mining data available as a long sequence of events. An episode is essentially a short ordered sequence of event types, and the frequency of an episode is some suitable measure of how often the episode occurs in the data sequence. Recently, we proposed a new frequency measure for episodes based on the notion of non-overlapped occurrences of episodes in the event sequence, and showed that such a definition, in addition to yielding computationally efficient algorithms, has some important theoretical properties in connecting frequent episode discovery with HMM learning. This paper presents some new algorithms for frequent episode discovery under this non-overlapped occurrences-based frequency definition. The algorithms presented here are better (by a factor of N, where N denotes the size of episodes being discovered) in terms of both time and space complexities when compared to existing methods for frequent episode discovery. We show through some simulation experiments that our algorithms are very efficient. The new algorithms presented here have arguably the least possible orders of space and time complexities for the task of frequent episode discovery.
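For a serial episode, the non-overlapped frequency can be counted in a single left-to-right pass with one automaton: advance on the next expected event type and reset on completion, so counted occurrences never share events. The sketch below illustrates only this counting notion, not the paper's discovery algorithms:

```python
def count_nonoverlapped(event_seq, episode):
    """Count non-overlapped occurrences of a serial episode in one pass:
    advance the automaton on the next expected event type, then count and
    reset when the episode completes, so occurrences cannot overlap."""
    state, count = 0, 0
    for e in event_seq:
        if e == episode[state]:
            state += 1
            if state == len(episode):   # one full occurrence finished
                count += 1
                state = 0               # reset => no shared events
    return count

print(count_nonoverlapped("ABACBDCABC", "ABC"))  # 2 non-overlapped occurrences
```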

Relevance:

30.00%

Publisher:

Abstract:

A low-complexity, essentially-ML decoding technique for the Golden code and the three-antenna Perfect code was introduced by Sirianunpiboon, Howard and Calderbank. Though no theoretical analysis of the decoder was given, simulations showed that this decoding technique has almost maximum-likelihood (ML) performance. Inspired by this technique, in this paper we introduce two new low-complexity decoders for Space-Time Block Codes (STBCs): the Adaptive Conditional Zero-Forcing (ACZF) decoder and the ACZF decoder with successive interference cancellation (ACZF-SIC), which include as a special case the decoding technique of Sirianunpiboon et al. We show that both ACZF and ACZF-SIC decoders are capable of achieving full diversity, and we give a set of sufficient conditions for an STBC to give full diversity with these decoders. We then show that the Golden code, the three- and four-antenna Perfect codes, the three-antenna Threaded Algebraic Space-Time code and the four-antenna rate-2 code of Srinath and Rajan are all full-diversity ACZF/ACZF-SIC decodable with complexity strictly less than that of their ML decoders. Simulations show that the proposed decoding method performs identically to ML decoding for all these five codes. These STBCs, along with the proposed decoding algorithm, have the least decoding complexity and best error performance among all known codes for transmit antennas. We further provide a lower bound on the complexity of full-diversity ACZF/ACZF-SIC decoding. All the five codes listed above achieve this lower bound and hence are optimal in terms of minimizing the ACZF/ACZF-SIC decoding complexity. Both ACZF and ACZF-SIC decoders are amenable to sphere decoding implementation.
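The conditioning idea behind such decoders can be shown on a generic linear model y = Hs + n: enumerate a small subset of symbols, cancel their contribution, zero-force the rest, and keep the candidate minimizing the ML metric. The sketch below is that generic recipe on a toy 4 × 4 channel, not the paper's code-specific ACZF construction:

```python
import numpy as np
from itertools import product

def conditional_zf_decode(H, y, alphabet, n_cond=2):
    """Conditional zero-forcing sketch: enumerate the last n_cond symbols,
    cancel their contribution, zero-force the remaining symbols, and keep
    the candidate vector with the smallest ML metric."""
    n = H.shape[1]
    H1, H2 = H[:, : n - n_cond], H[:, n - n_cond :]
    H1_pinv = np.linalg.pinv(H1)
    best, best_s = np.inf, None
    for tail in product(alphabet, repeat=n_cond):
        tail = np.array(tail, dtype=complex)
        r = y - H2 @ tail               # cancel the conditioned symbols
        head = H1_pinv @ r              # zero-force the remaining symbols
        # Quantize each ZF output to the nearest alphabet point.
        head = np.array([alphabet[np.argmin(np.abs(alphabet - v))] for v in head])
        s = np.concatenate([head, tail])
        metric = np.linalg.norm(y - H @ s) ** 2
        if metric < best:
            best, best_s = metric, s
    return best_s

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
rng = np.random.default_rng(3)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
s = qpsk[rng.integers(0, 4, 4)]
y = H @ s + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(np.allclose(conditional_zf_decode(H, y, qpsk), s))   # True at high SNR
```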

Relevance:

30.00%

Publisher:

Abstract:

Low-complexity near-optimal detection of large-MIMO signals has attracted recent research interest. Recently, we proposed a local neighborhood search algorithm, namely the reactive tabu search (RTS) algorithm, as well as a factor-graph-based belief propagation (BP) algorithm for low-complexity large-MIMO detection. The motivation for the present work arises from the following two observations on the above two algorithms: i) although RTS achieved close to optimal performance for 4-QAM in large dimensions, significant performance improvement was still possible for higher-order QAM (e.g., 16-, 64-QAM); ii) BP also achieved near-optimal performance for large dimensions, but only for the {±1} alphabet. In this paper, we improve the large-MIMO detection performance of higher-order QAM signals by using a hybrid algorithm that employs RTS and BP. In particular, motivated by the observation that when a detection error occurs at the RTS output, the least significant bits (LSBs) of the symbols are mostly in error, we propose to first reconstruct and cancel the interference due to bits other than the LSBs at the RTS output, and feed the interference-cancelled received signal to the BP algorithm to improve the reliability of the LSBs. The output of the BP is then fed back to RTS for the next iteration. Simulation results show that the proposed algorithm performs better than the RTS algorithm, and than the semi-definite relaxation (SDR) and Gaussian tree approximation (GTA) algorithms.
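The cancellation step exploits the fact that, per real dimension, the 16-QAM levels {±1, ±3} decompose as 2p + q with p, q ∈ {±1}: p carries the coarse (non-LSB) part and q the LSB part, so reconstructing and subtracting the channel's response to 2p leaves a {±1}-alphabet problem suited to BP. The sketch below shows only this decomposition-and-cancellation step, with a given estimate standing in for the RTS output; the RTS and BP detectors themselves are not reproduced here:

```python
import numpy as np

def split_16qam(s_hat):
    """Per-dimension decomposition of 16-QAM levels {±1, ±3} as 2p + q
    with p, q in {±1}: p is the coarse (non-LSB) part, q the LSB part."""
    p = np.sign(s_hat)                     # ±1 from the level's sign
    q = s_hat - 2 * p                      # remaining ±1 offset
    return p, q

def cancel_non_lsb_interference(y, H, s_rts):
    """Reconstruct and cancel the non-LSB contribution at the RTS output,
    so the residual channel sees only a {±1} alphabet for BP."""
    p_hat, _ = split_16qam(s_rts)
    return y - H @ (2 * p_hat)             # residual: H q + noise, q in {±1}

rng = np.random.default_rng(4)
H = rng.standard_normal((8, 8)) / np.sqrt(8)
levels = np.array([-3.0, -1.0, 1.0, 3.0])  # one real dimension of 16-QAM
s = levels[rng.integers(0, 4, 8)]
y = H @ s + 0.01 * rng.standard_normal(8)

y_res = cancel_non_lsb_interference(y, H, s)   # assume RTS recovered s
_, q = split_16qam(s)
print(np.allclose(y_res, H @ q + (y - H @ s)))  # True: only the ±1 part remains
```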

Relevance:

30.00%

Publisher:

Abstract:

With the ever-increasing demand for electric energy, additional generation and associated transmission facilities have to be planned and executed. In order to augment existing transmission facilities, proper planning and selective decisions must be made, keeping in mind the interests of several parties who are directly or indirectly involved. The common trend is to plan optimal generation expansion over the planning period in order to meet the projected demand with minimum-cost capacity addition along with a pre-specified reliability margin. Generation expansion at certain locations needs new transmission networks, which involves serious problems such as securing right of way, environmental clearance, etc. In this study, an approach to the siting of additional generation facilities in a given system with minimum or no expansion of the transmission facilities is attempted, using the network connectivity and the concept of electrical distance for the projected load demand. The proposed approach is suitable for large interconnected systems with multiple utilities. A sample illustration on a real-life system is presented in order to show how this approach improves the overall performance of the operation of the system with specified performance parameters.
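One common way to quantify electrical distance, used here as an assumed stand-in for the paper's measure, is the Thevenin impedance between buses read off the bus impedance matrix: d_ij = Z_ii + Z_jj - 2 Z_ij. The sketch below builds Z for a made-up 4-bus network and ranks candidate generation buses by electrical closeness to a load bus:

```python
import numpy as np

# Electrical distance via the bus impedance matrix; whether this matches
# the paper's exact measure is an assumption, and the 4-bus network is
# made up. Branch list: (from, to, reactance in p.u.); a shunt at bus 0
# grounds the network so that Ybus is invertible.
branches = [(0, 1, 0.1), (1, 2, 0.2), (1, 3, 0.25), (2, 3, 0.15)]
n = 4
Y = np.zeros((n, n))
for i, j, x in branches:
    y = 1.0 / x
    Y[i, i] += y; Y[j, j] += y
    Y[i, j] -= y; Y[j, i] -= y
Y[0, 0] += 1.0 / 0.05          # shunt at the slack bus

Z = np.linalg.inv(Y)           # bus impedance matrix

def edist(i, j):
    # Thevenin impedance seen between buses i and j.
    return Z[i, i] + Z[j, j] - 2 * Z[i, j]

load_bus = 3
candidates = [0, 1, 2]
# Prefer siting new generation electrically close to the projected load,
# which limits the need for new transmission.
print(sorted(candidates, key=lambda b: edist(b, load_bus)))
```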

Relevance:

30.00%

Publisher:

Abstract:

We address the parameterized complexity of Max q-Colorable Induced Subgraph on perfect graphs. The problem asks for a maximum-sized q-colorable induced subgraph of an input graph G. Yannakakis and Gavril [IPL 1987] showed that this problem is NP-complete even on split graphs if q is part of the input, but gave an n^O(q) algorithm on chordal graphs. We first observe that the problem is W[2]-hard parameterized by q, even on split graphs. However, when parameterized by l, the number of vertices in the solution, we give two fixed-parameter tractable algorithms. The first algorithm runs in time 5.44^l (n + #alpha(G))^O(1), where #alpha(G) is the number of maximal independent sets of the input graph. The second algorithm runs in time q^(l+o(l)) n^O(1) T_alpha, where T_alpha is the time required to find a maximum independent set in any induced subgraph of G. The first algorithm is efficient when the input graph contains only polynomially many maximal independent sets; for example, split graphs and co-chordal graphs. The running time of the second algorithm is FPT in l alone (whenever T_alpha is a polynomial in n), since q <= l for all non-trivial situations. Finally, we show that (under standard complexity-theoretic assumptions) the problem does not admit a polynomial kernel on split and perfect graphs in the following sense: (a) on split graphs, we do not expect a polynomial kernel if q is a part of the input; (b) on perfect graphs, we do not expect a polynomial kernel even for fixed values of q >= 2.
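To make the problem definition concrete, the brute-force baseline below searches all vertex subsets, largest first, and tests q-colorability of the induced subgraph; it is exponential in n and serves only to illustrate the problem, not either of the paper's FPT algorithms:

```python
from itertools import combinations, product

def is_q_colorable(adj, verts, q):
    """Brute-force proper q-coloring of the subgraph induced by verts."""
    vs = list(verts)
    for colors in product(range(q), repeat=len(vs)):
        c = dict(zip(vs, colors))
        if all(c[u] != c[v] for u, v in combinations(vs, 2) if v in adj[u]):
            return True
    return False

def max_q_colorable_induced_subgraph(adj, q):
    """Largest vertex set inducing a q-colorable subgraph, by brute force.
    Exponential-time baseline for illustration; the paper's algorithms
    parameterized by the solution size l are far more refined."""
    V = list(adj)
    for size in range(len(V), 0, -1):
        for S in combinations(V, size):
            if is_q_colorable(adj, S, q):
                return set(S)
    return set()

# Triangle 0-1-2 plus pendant vertex 3: not 2-colorable as a whole, but
# dropping one triangle vertex leaves a 2-colorable induced subgraph.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
print(max_q_colorable_induced_subgraph(adj, q=2))   # e.g. {0, 1, 3}
```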

Relevance:

30.00%

Publisher:

Abstract:

We present a physics-based closed-form small-signal non-quasi-static (NQS) model for a long-channel Common Double Gate MOSFET (CDG), taking into account the asymmetry that may prevail between the gate oxide thicknesses. We use the unique quasi-linear relationship between the surface potentials along the channel to solve the governing continuity equation (CE) in order to develop analytical expressions for the Y-parameters. The Bessel-function-based solution of the CE is simplified in the form of polynomials so that it can be easily implemented in any circuit simulator. The model shows good agreement with TCAD simulation at least up to 4 times the cut-off frequency for different device geometries and bias conditions.
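As background on what NQS behaviour does to the Y-parameters: a quasi-static gate looks like a pure capacitance (Y ≈ jωC), while the distributed channel resistance makes the admittance roll off beyond a corner frequency, an effect that even a lumped series-RC one-port reproduces as Y = jωC/(1 + jωRC). The sketch below evaluates that generic one-port purely as an illustration of the effect; it is not the paper's CDG MOSFET model, and the R, C values are arbitrary placeholders:

```python
import numpy as np

# Generic illustration: quasi-static vs. simple NQS-like gate admittance.
# A lumped channel resistance R in series with gate capacitance C gives
# Y(w) = jwC / (1 + jwRC), which departs from the quasi-static jwC above
# the corner frequency 1/(2*pi*R*C). Values are arbitrary placeholders.
R, C = 200.0, 50e-15            # 200 ohm, 50 fF
f = np.array([1e8, 1e9, 1e10, 1e11])
w = 2 * np.pi * f
Y_nqs = 1j * w * C / (1 + 1j * w * R * C)
Y_qs = 1j * w * C
for fi, yn, yq in zip(f, Y_nqs, Y_qs):
    print(f"f = {fi:.0e} Hz:  |Y_nqs| = {abs(yn):.3e}, |Y_qs| = {abs(yq):.3e}")
```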