22 results for model complexity
Abstract:
Background: Temporal analysis of gene expression data has been limited to identifying genes whose expression varies with time and/or correlation between genes that have similar temporal profiles. Often, the methods do not consider the underlying network constraints that connect the genes. It is becoming increasingly evident that interactions change substantially with time. Thus far, there is no systematic method to relate the temporal changes in gene expression to the dynamics of interactions between them. Information on interaction dynamics would open up possibilities for discovering new mechanisms of regulation, providing valuable insight for identifying time-sensitive interactions and permitting studies on the effect of a genetic perturbation. Results: We present NETGEM, a tractable model rooted in Markov dynamics, for analyzing the dynamics of the interactions between proteins based on the dynamics of the expression changes of the genes that encode them. The model treats the interaction strengths as random variables which are modulated by suitable priors. This approach is necessitated by the extremely small sample size of the datasets, relative to the number of interactions. The model is amenable to a linear time algorithm for efficient inference. Using temporal gene expression data, NETGEM was successful in identifying (i) temporal interactions and determining their strength, (ii) functional categories of the actively interacting partners and (iii) dynamics of interactions in perturbed networks. Conclusions: NETGEM represents an optimal trade-off between model complexity and data requirement. It was able to deduce actively interacting genes and functional categories from temporal gene expression data. It permits inference by incorporating the information available in perturbed networks. Given that the inputs to NETGEM are only the network and the temporal variation of the nodes, this algorithm promises to have widespread applications, beyond biological systems. The source code for NETGEM is available from https://github.com/vjethava/NETGEM
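The core ideas of this abstract (discrete edge weights evolving under Markov dynamics, with linear-time inference per edge) can be illustrated with a minimal Viterbi-style sketch. This is not the authors' code: the three-state weight space, the Gaussian emission model on the co-expression signal, and all parameter values below are illustrative assumptions.

```python
# Minimal sketch: each edge weight is a hidden Markov chain over a small
# discrete state space, and the observed co-expression of the two endpoint
# genes is emitted with a probability that depends on the current weight.
import numpy as np

STATES = np.array([-1.0, 0.0, 1.0])        # assumed discrete edge weights

def viterbi_edge_weights(xi, xj, trans, sigma=1.0):
    """Most likely weight trajectory for one edge, in O(T) time.

    xi, xj : expression time series of the two endpoint genes
    trans  : KxK Markov transition matrix over the weight states
    """
    T, K = len(xi), len(STATES)
    corr = xi * xj                          # simple co-expression signal
    # assumed Gaussian emission: corr[t] ~ N(w[t], sigma^2)
    logem = -0.5 * ((corr[:, None] - STATES[None, :]) / sigma) ** 2
    logtr = np.log(trans)
    delta = np.full((T, K), -np.inf)
    back = np.zeros((T, K), dtype=int)
    delta[0] = logem[0] - np.log(K)         # uniform prior over states
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logtr
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logem[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1][path[t + 1]]
    return STATES[path]

# toy usage: an interaction that switches on halfway through the series
rng = np.random.default_rng(0)
w_true = np.r_[np.zeros(10), np.ones(10)]
xi = rng.normal(size=20)
xj = w_true * xi + 0.3 * rng.normal(size=20)
trans = np.full((3, 3), 0.05) + 0.85 * np.eye(3)   # sticky prior on weights
print(viterbi_edge_weights(xi, xj, trans))
```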
Abstract:
Aim: Biodiversity outcomes under global change will be influenced by a range of ecological processes, and these processes are increasingly being considered in models of biodiversity change. However, the level of model complexity required to adequately account for important ecological processes often remains unclear. Here we assess how considering realistically complex frugivore-mediated seed dispersal influences the projected climate change outcomes for plant diversity in the Australian Wet Tropics (all 4313 species). Location: The Australian Wet Tropics, Queensland, Australia. Methods: We applied a metacommunity model (M-SET) to project biodiversity outcomes using seed dispersal models that varied in complexity, combined with alternative climate change scenarios and habitat restoration scenarios. Results: We found that the complexity of the dispersal model had a larger effect on projected biodiversity outcomes than did dramatically different climate change scenarios. Applying a simple dispersal model that ignored spatial, temporal and taxonomic variation due to frugivore-mediated seed dispersal underestimated the reduction in the area of occurrence of plant species under climate change and overestimated the loss of diversity in fragmented tropical forest remnants. The complexity of the dispersal model also changed the habitat restoration approach identified as the best for promoting persistence of biodiversity under climate change. Main conclusions: The consideration of complex processes such as frugivore-mediated seed dispersal can make an important difference in how we understand and respond to the influence of climate change on biodiversity.
Abstract:
Dominance and subordinate behaviors are important ingredients in the social organizations of group-living animals. Behavioral observations on the two eusocial species Ropalidia marginata and Ropalidia cyathiformis suggest varying complexities in their social systems. The queen of R. cyathiformis is an aggressive individual who usually holds the top position in the dominance hierarchy, although she does not necessarily show the maximum number of acts of dominance, while the R. marginata queen rarely shows aggression and usually does not hold the top position in the dominance hierarchy of her colony. In R. marginata, more workers are involved in dominance-subordinate interactions as compared to R. cyathiformis. These differences are reflected in the distribution of dominance-subordinate interactions among the hierarchically ranked individuals in both species. The percentage of dominance interactions decreases gradually with hierarchical rank in R. marginata, while in R. cyathiformis it first increases and then decreases. We use an agent-based model to investigate the underlying mechanism that could give rise to the observed patterns for both species. The model assumes that, apart from some non-interacting individuals, the interaction probabilities of the agents depend on their pre-differentiated winning abilities. Our simulations show that if the queen takes up a strategy of being involved in a moderate number of dominance interactions, one gets a pattern similar to that of R. cyathiformis, while a strategy of very few interactions by the queen leads to the pattern of R. marginata. We infer that both species follow a common interaction pattern, and that the differences in their social organization are due to slight changes in queen as well as worker strategies. These changes in strategies are expected to accompany the evolution of more complex societies from simpler ones.
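The mechanism described here lends itself to a short agent-based sketch. The parameter values, the fraction of non-interacting individuals, and the deterministic win rule below are illustrative assumptions rather than the paper's calibrated model; only the queen's interaction probability is varied to mimic the two strategies.

```python
# Minimal agent-based sketch: agents have pre-differentiated winning
# abilities, a fraction do not interact at all, and the queen's strategy
# is her probability of entering an interaction.
import numpy as np

def simulate(n=20, steps=5000, queen_p=0.05, worker_p=0.5,
             frac_inactive=0.3, seed=1):
    rng = np.random.default_rng(seed)
    ability = np.sort(rng.random(n))[::-1]   # rank 0 = highest ability
    ability[0] = 1.0                         # queen gets top winning ability
    p_int = np.full(n, worker_p)
    p_int[0] = queen_p                       # queen's interaction strategy
    idle = rng.choice(np.arange(1, n), int(frac_inactive * n), replace=False)
    p_int[idle] = 0.0                        # non-interacting individuals
    wins = np.zeros(n)
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        if rng.random() < p_int[i] and rng.random() < p_int[j]:
            winner = i if ability[i] > ability[j] else j
            wins[winner] += 1
    return 100 * wins / wins.sum()           # % of dominance acts per rank

# a very low queen_p mimics the R. marginata pattern (queen rarely seen
# dominating); a moderate queen_p shifts toward the R. cyathiformis pattern
print(simulate(queen_p=0.05)[:5])
print(simulate(queen_p=0.5)[:5])
```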
Abstract:
Communication complexity refers to the minimum rate of public communication required for generating a maximal-rate secret key (SK) in the multiterminal source model of Csiszar and Narayan. Tyagi recently characterized this communication complexity for a two-terminal system. We extend the ideas in Tyagi's work to derive a lower bound on communication complexity in the general multiterminal setting. In the important special case of the complete graph pairwise independent network (PIN) model, our bound allows us to determine the exact linear communication complexity, i.e., the communication complexity when the communication and SK are restricted to be linear functions of the randomness available at the terminals.
Abstract:
A new structured model-following adaptive approach is presented in this paper to achieve large attitude maneuvers of rigid bodies. First, a nominal controller is designed using the dynamic inversion philosophy. Next, a neuro-adaptive design is proposed to augment the nominal design in order to assure robust performance in the presence of parameter inaccuracies as well as unknown constant external disturbances. The structured approach proposed in this paper (where kinematic and dynamic equations are handled separately) reduces the complexity of the controller structure. From simulation studies, this adaptive controller is found to be very effective in assuring robust performance.
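A minimal sketch of the nominal (dynamic inversion) loop for the rotational dynamics may help make the structure concrete. The inertia matrix, gain, and imposed first-order error dynamics below are illustrative assumptions, and the paper's neuro-adaptive augmentation for parameter inaccuracies and disturbances is not reproduced.

```python
# Dynamic inversion sketch for the rotational dynamics J*wdot = tau - w x (J*w).
import numpy as np

J = np.diag([10.0, 12.0, 8.0])             # assumed inertia matrix (kg m^2)

def dyn_inv_torque(w, w_des, k=2.0):
    """Cancel the gyroscopic term and impose first-order error dynamics.

    Choosing wdot_cmd = -k*(w - w_des) makes (w - w_des) decay when the
    model is exact; an adaptive element would absorb model error.
    """
    wdot_cmd = -k * (w - w_des)
    return J @ wdot_cmd + np.cross(w, J @ w)   # tau = J*wdot_cmd + w x (J*w)

# toy closed-loop check: drive the body rate to zero
w = np.array([0.3, -0.2, 0.1]); w_des = np.zeros(3); dt = 0.01
for _ in range(500):
    tau = dyn_inv_torque(w, w_des)
    wdot = np.linalg.solve(J, tau - np.cross(w, J @ w))
    w = w + dt * wdot
print(w)   # approaches zero under the inverted dynamics
```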
Abstract:
Using an analysis-by-synthesis (AbS) approach, we develop a soft decision based switched vector quantization (VQ) method for high quality and low complexity coding of wideband speech line spectral frequency (LSF) parameters. For each switching region, a low complexity transform domain split VQ (TrSVQ) is designed. The overall rate-distortion (R/D) performance optimality of the new switched quantizer is addressed in the Gaussian mixture model (GMM) based parametric framework. In the AbS approach, the reduction of quantization complexity is achieved through the use of nearest neighbor (NN) TrSVQs and splitting the transform domain vector into a higher number of subvectors. Compared to current LSF quantization methods, the new method is shown to provide a competitive or better trade-off between R/D performance and complexity.
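The switching step can be sketched compactly: pick a region by nearest neighbor and search only that region's codebook. The toy random codebooks and component means below stand in for the paper's trained GMM and transform domain split VQs.

```python
# Switched VQ sketch: nearest-neighbor region selection, then a search
# restricted to that region's own codebook.
import numpy as np

rng = np.random.default_rng(0)
K, D = 4, 10                                    # assumed: 4 regions, 10-dim LSF
means = rng.normal(size=(K, D))                 # stand-in switching-region means
codebooks = rng.normal(size=(K, 64, D)) + means[:, None, :]

def switched_vq(x):
    k = np.argmin(np.sum((means - x) ** 2, axis=1))   # NN switching region
    cb = codebooks[k]                                 # region-specific codebook
    idx = np.argmin(np.sum((cb - x) ** 2, axis=1))    # search only this codebook
    return k, idx, cb[idx]

x = rng.normal(size=D)
k, idx, xq = switched_vq(x)
print(k, idx, np.linalg.norm(x - xq))
# complexity: one K-way switch plus one 64-entry search, instead of a
# single flat search over all K*64 codevectors
```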
Abstract:
In this two-part series of papers, a generalized non-orthogonal amplify and forward (GNAF) protocol which generalizes several known cooperative diversity protocols is proposed. Transmission in the GNAF protocol comprises two phases - the broadcast phase and the cooperation phase. In the broadcast phase, the source broadcasts its information to the relays as well as the destination. In the cooperation phase, the source and the relays together transmit a space-time code in a distributed fashion. The GNAF protocol relaxes the constraints imposed by the protocol of Jing and Hassibi on the code structure. In Part-I of this paper, code design criteria are obtained and it is shown that the GNAF protocol is both delay efficient and coding gain efficient. Moreover, the GNAF protocol enables the use of sphere decoders at the destination with a non-exponential maximum-likelihood (ML) decoding complexity. In Part-II, several low decoding complexity code constructions are studied and a lower bound on the Diversity-Multiplexing Gain tradeoff of the GNAF protocol is obtained.
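A minimal sketch of the two GNAF phases, with scalar channels, noise omitted, and a cyclic shift standing in for the actual relay transformations, may help fix the protocol structure; everything below is an illustrative stand-in for the distributed space-time code design.

```python
# Two-phase GNAF sketch: one source, two relays, QPSK symbols.
import numpy as np

rng = np.random.default_rng(0)
R = 2
s = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=R+1) / np.sqrt(2)  # source block

# broadcast phase: relays and destination both hear the source
h_sr = rng.normal(size=R) + 1j * rng.normal(size=R)   # source->relay gains
h_sd = rng.normal() + 1j * rng.normal()               # source->destination gain
r_relay = [h_sr[i] * s for i in range(R)]
y_broadcast = h_sd * s

# cooperation phase: the relays forward transformed versions of what they
# received while the source keeps transmitting (the non-orthogonal
# ingredient that distinguishes GNAF from relay-only protocols)
h_rd = rng.normal(size=R) + 1j * rng.normal(size=R)
x_relay = [np.roll(r_relay[i], i + 1) for i in range(R)]   # toy relay transforms
y_coop = h_sd * s + sum(h_rd[i] * x_relay[i] for i in range(R))

# the destination decodes jointly from (y_broadcast, y_coop); a suitably
# designed code structure keeps ML sphere decoding tractable
print(y_broadcast.shape, y_coop.shape)
```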
Abstract:
We address the problem of distributed space-time coding with reduced decoding complexity for wireless relay networks. The transmission protocol follows a two-hop model wherein the source transmits a vector in the first hop and, in the second hop, the relays transmit a vector which is a transformation of the received vector by a relay-specific unitary transformation. Design criteria are derived for this system model and codes are proposed that achieve full diversity. For a fixed number of relay nodes, the general system model considered in this paper admits code constructions with lower decoding complexity compared to codes based on some earlier system models.
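The relay operation itself is simple to sketch: each relay applies its own unitary matrix to the vector it received in the first hop. The channel model and the QR-generated unitaries below are illustrative assumptions, not the paper's code construction.

```python
# Two-hop relaying sketch with relay-specific unitary transformations.
import numpy as np

rng = np.random.default_rng(0)
T, R = 4, 3                                  # block length and relay count (toy)
# relay-specific unitary transforms, generated here by QR factorization
U = [np.linalg.qr(rng.normal(size=(T, T))
                  + 1j * rng.normal(size=(T, T)))[0] for _ in range(R)]

s = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=T) / np.sqrt(2)  # source vector
f = rng.normal(size=R) + 1j * rng.normal(size=R)   # source->relay gains
g = rng.normal(size=R) + 1j * rng.normal(size=R)   # relay->destination gains

r = [f[i] * s for i in range(R)]                   # hop 1 (noise omitted)
y = sum(g[i] * (U[i] @ r[i]) for i in range(R))    # hop 2: unitary relaying
# stacking the U_i yields the equivalent distributed space-time code whose
# structure determines the diversity and decoding complexity
print(np.round(y, 2))
```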
Abstract:
Non-orthogonal space-time block codes (STBC) with large dimensions are attractive because they can simultaneously achieve both high spectral efficiencies (the same spectral efficiency as in V-BLAST for a given number of transmit antennas) as well as full transmit diversity. Decoding of non-orthogonal STBCs with large dimensions has been a challenge. In this paper, we present a reactive tabu search (RTS) based algorithm for decoding non-orthogonal STBCs from cyclic division algebras (CDA) having large dimensions. Under i.i.d. fading and perfect channel state information at the receiver (CSIR), our simulation results show that RTS based decoding of 12 x 12 STBC from CDA and 4-QAM with 288 real dimensions achieves i) 10^-3 uncoded BER at an SNR of just 0.5 dB away from SISO AWGN performance, and ii) a coded BER performance close to within about 5 dB of the theoretical MIMO capacity, using a rate-3/4 turbo code at a spectral efficiency of 18 bps/Hz. RTS is shown to achieve near SISO AWGN performance with fewer dimensions than the LAS algorithm (which we reported recently), at some extra complexity compared to LAS. We also report good BER performance of RTS when the i.i.d. fading and perfect CSIR assumptions are relaxed, by considering a spatially correlated MIMO channel model and by using a training based iterative RTS decoding/channel estimation scheme.
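A toy reactive tabu search for the underlying detection problem min_x ||y - Hx||^2 is sketched below; it uses tiny real-valued dimensions, a fixed tabu tenure, and a plain zero-forcing start, whereas the paper's reactive tenure adaptation, escape mechanism, and STBC structure are omitted.

```python
# Tabu search sketch for +/-1 symbol detection: move to the best admissible
# one-flip neighbor each iteration, even if it is non-improving.
import numpy as np

def rts_detect(H, y, iters=200, tenure=5):
    n = H.shape[1]
    x = np.sign(np.linalg.lstsq(H, y, rcond=None)[0])   # zero-forcing start
    x[x == 0] = 1.0
    cost = lambda v: np.sum((y - H @ v) ** 2)
    best, best_cost = x.copy(), cost(x)
    tabu = np.zeros(n, dtype=int)        # iteration until which a flip is tabu
    for it in range(iters):
        cand = np.array([cost(np.where(np.arange(n) == i, -x, x))
                         for i in range(n)])            # all 1-flip neighbors
        for i in np.argsort(cand):       # best admissible move: non-tabu, or
            if tabu[i] <= it or cand[i] < best_cost:    # tabu but beats best
                x = x.copy(); x[i] = -x[i]
                tabu[i] = it + tenure
                break
        if cand[i] < best_cost:
            best, best_cost = x.copy(), cand[i]
    return best

# toy 8x8 real-valued system
rng = np.random.default_rng(1)
H = rng.normal(size=(8, 8))
x_true = rng.choice([-1.0, 1.0], 8)
y = H @ x_true + 0.1 * rng.normal(size=8)
print(np.array_equal(rts_detect(H, y), x_true))
```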
Abstract:
Convolutional network-error correcting codes (CNECCs) are known to provide error correcting capability in acyclic instantaneous networks within the network coding paradigm under small field size conditions. In this work, we investigate the performance of CNECCs under an error model of the network where the edges are assumed to be statistically independent binary symmetric channels, each with the same probability of error p_e (0 <= p_e < 0.5). We obtain bounds on the performance of such CNECCs based on a modified generating function (the transfer function) of the CNECCs. For a given network, we derive a mathematical condition on how small p_e should be so that only single edge network-errors need to be accounted for, thus reducing the complexity of evaluating the probability of error of any CNECC. Simulations indicate that convolutional codes are required to possess different properties to achieve good performance in the low p_e and high p_e regimes. For the low p_e regime, convolutional codes with good distance properties show good performance. For the high p_e regime, convolutional codes that have a good slope (the minimum normalized cycle weight) are seen to be good. We derive a lower bound on the slope of any rate b/c convolutional code with a certain degree.
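The intuition behind accounting only for single edge network-errors can be illustrated numerically: with independent BSC edges, the probability of two or more simultaneous edge errors falls off as p_e^2. A small computation (not from the paper, with an arbitrary edge count):

```python
# For |E| independent BSC edges with crossover probability pe:
# P(exactly one edge errs) vs. P(two or more edges err).
from math import comb

def edge_error_probs(num_edges, pe):
    p1 = comb(num_edges, 1) * pe * (1 - pe) ** (num_edges - 1)
    p_multi = 1 - (1 - pe) ** num_edges - p1
    return p1, p_multi

for pe in (0.1, 0.01, 0.001):
    p1, pm = edge_error_probs(10, pe)
    print(f"p_e={pe}: P(single)={p1:.2e}  P(multiple)={pm:.2e}")
```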
Abstract:
Recently, we reported a low-complexity likelihood ascent search (LAS) detection algorithm for large MIMO systems with several tens of antennas that can achieve high spectral efficiencies of the order of tens to hundreds of bps/Hz. Through simulations, we showed that this algorithm achieves increasingly near SISO AWGN performance for an increasing number of antennas in i.i.d. Rayleigh fading. However, no bit error performance analysis of the algorithm was reported. In this paper, we extend our work on this low-complexity large MIMO detector in two directions: i) We report an asymptotic bit error probability analysis of the LAS algorithm in the large system limit, where N_t, N_r -> infinity with N_t = N_r, where N_t and N_r are the number of transmit and receive antennas, respectively. Specifically, we prove that the error performance of the LAS detector for V-BLAST with 4-QAM in i.i.d. Rayleigh fading converges to that of the maximum-likelihood (ML) detector as N_t, N_r -> infinity with N_t = N_r. ii) We present simulated BER and nearness-to-capacity results for V-BLAST as well as high-rate non-orthogonal STBC from Division Algebras (DA), in a more realistic spatially correlated MIMO channel model. Our simulation results show that a) at an uncoded BER of 10^-3, the performance of the LAS detector in decoding 16 x 16 STBC from DA with N_t = 16 and 16-QAM degrades in spatially correlated fading by about 7 dB compared to that in i.i.d. fading, and b) with a rate-3/4 outer turbo code and 48 bps/Hz spectral efficiency, the performance degrades by about 6 dB at a coded BER of 10^-4. Our results further show that by providing asymmetry in the number of antennas such that N_r > N_t, keeping the total receiver array length the same as that for N_r = N_t, the detector is able to pick up the extra receive diversity, thereby significantly improving the BER performance.
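The LAS iteration itself is easy to sketch for a real-valued toy system: start from the MMSE output and repeatedly flip the single symbol that most decreases ||y - Hx||^2, stopping at a local minimum. The cost-change formula below follows from expanding ||y - H(x - 2*x_i*e_i)||^2; the dimensions and noise level are illustrative.

```python
# Likelihood ascent search sketch for +/-1 symbols (greedy, monotone).
import numpy as np

def las_detect(H, y, sigma2=0.01):
    n = H.shape[1]
    G = H.T @ H
    x = np.sign(np.linalg.solve(G + sigma2 * np.eye(n), H.T @ y))  # MMSE start
    x[x == 0] = 1.0
    while True:
        # change in ||y - Hx||^2 from flipping x_i -> -x_i:
        # delta_i = 4*x_i*(H^T(y - Hx))_i + 4*(H^T H)_ii
        delta = 4 * x * (H.T @ (y - H @ x)) + 4 * np.diag(G)
        i = np.argmin(delta)
        if delta[i] >= 0:       # local minimum of the likelihood: stop
            return x
        x[i] = -x[i]            # strict descent, so termination is guaranteed

# toy 32x32 system
rng = np.random.default_rng(2)
H = rng.normal(size=(32, 32)) / np.sqrt(32)
x_true = rng.choice([-1.0, 1.0], 32)
y = H @ x_true + 0.05 * rng.normal(size=32)
print(np.mean(las_detect(H, y) == x_true))
```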
Abstract:
This paper looks at the complexity of four different incremental problems. The following are the problems considered: (1) interval partitioning of a flow graph, (2) breadth first search (BFS) of a directed graph, (3) lexicographic depth first search (DFS) of a directed graph, and (4) constructing the postorder listing of the nodes of a binary tree. The last problem arises out of the need for incrementally computing the Sethi-Ullman (SU) ordering [1] of the subtrees of a tree after it has undergone changes of a given type. These problems are among those that claimed our attention in the process of designing algorithmic techniques for incremental code generation. BFS and DFS certainly have numerous other applications, but as far as our work is concerned, incremental code generation is the common thread linking these problems. The study of the complexity of these problems is done from two different perspectives. The theory of incremental relative lower bounds (IRLBs) is given in [2]. We use this theory to derive the IRLBs of the first three problems. Then we use the notion of a bounded incremental algorithm [4] to prove the unboundedness of the fourth problem with respect to the locally persistent model of computation. The lower bound result for lexicographic DFS is possibly the most interesting. In [5] the author considers lexicographic DFS to be a problem for which the incremental version may require recomputation of the entire solution from scratch. In that sense, our IRLB result provides further evidence for this possibility, with the proviso that the incremental DFS algorithms considered be ones that do not require too much preprocessing.
Reconstructing Solid Model from 2D Scanned Images of Biological Organs for Finite Element Simulation
Abstract:
This work presents a methodology to reconstruct 3D biological organs from image sequences or other scan data using readily available free software, with the final goal of using the organs (3D solids) for finite element analysis. The methodology deals with issues such as segmentation, conversion to polygonal surface meshes, and finally conversion of these meshes to 3D solids. The user is able to control the detail or the level of complexity of the solid constructed. The methodology is illustrated using the 3D reconstruction of a porcine liver as an example. Finally, the reconstructed liver is imported into the commercial software ANSYS and, together with a cyst inside the liver, a nonlinear analysis is performed. The results confirm that the methodology can be used for obtaining the 3D geometry of biological organs. The results also demonstrate that the geometry obtained by following this methodology can be used for the nonlinear finite element analysis of organs. The methodology (or the procedure) would be of use in surgery planning and surgery simulation, since both of these extensively use finite elements for numerical simulations, and it is better if these simulations are carried out on patient-specific organ geometries. The alternative, buying commercial software that can reconstruct 3D biological organs from scanned image sequences, would cost a lot.
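The segmentation-to-mesh portion of such a pipeline can be sketched with freely available Python tools (scikit-image); the iso-level threshold and the toy volume below are illustrative placeholders for actual scan data and for the tool-specific mesh-to-solid steps described above.

```python
# Sketch: segment a 3D image stack by iso-level and extract a surface mesh.
import numpy as np
from skimage import measure

def stack_to_mesh(volume, level):
    """Marching cubes surface extraction from a segmented 3D volume."""
    verts, faces, normals, _ = measure.marching_cubes(volume, level=level)
    return verts, faces

# toy "organ": a smooth blob standing in for a segmented liver volume
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = np.exp(-(x**2 + y**2 + z**2) / (2 * 15.0**2))
verts, faces = stack_to_mesh(volume, level=0.5)
print(len(verts), "vertices,", len(faces), "triangles")
# the polygonal mesh would next be cleaned and converted to a 3D solid
# before import into a finite element package such as ANSYS
```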
Abstract:
We develop a Gaussian mixture model (GMM) based vector quantization (VQ) method for coding wideband speech line spectrum frequency (LSF) parameters at low complexity. The PDF of the LSF source vector is modeled using a Gaussian mixture (GM) density with a higher number of uncorrelated Gaussian mixtures, and an optimum scalar quantizer (SQ) is designed for each Gaussian mixture. The reduction of quantization complexity is achieved using the relevant subset of available optimum SQs. For an input vector, the subset of quantizers is chosen using a nearest neighbor criterion. The developed method is compared with recent VQ methods and shown to provide high quality rate-distortion (R/D) performance at lower complexity. In addition, the developed method also provides the advantages of bitrate scalability and rate-independent complexity.
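The complexity-reduction step can be sketched as follows, with toy uniform scalar quantizers and random stand-in GMM parameters in place of the paper's optimized bit-allocated SQs: quantize the input only under the few nearest mixture components.

```python
# GMM-based SQ sketch: nearest-neighbor component subset, then per-component
# mean-removed uniform scalar quantization.
import numpy as np

rng = np.random.default_rng(0)
K, D = 8, 10
mu = rng.normal(size=(K, D))                 # stand-in component means
sd = 0.5 + rng.random((K, D))                # stand-in component std devs
step = sd / 4                                # toy per-dimension SQ step sizes

def quantize_with_component(x, k):
    """Mean-removed uniform scalar quantization under component k."""
    return mu[k] + step[k] * np.round((x - mu[k]) / step[k])

def gmm_sq_encode(x, n_candidates=2):
    near = np.argsort(np.sum((mu - x) ** 2, axis=1))[:n_candidates]  # NN subset
    cands = [(k, quantize_with_component(x, k)) for k in near]
    return min(cands, key=lambda kq: np.sum((x - kq[1]) ** 2))  # best of subset

x = rng.normal(size=D)
k, xq = gmm_sq_encode(x)
print(k, np.linalg.norm(x - xq))
# searching only n_candidates of the K component quantizers keeps the
# complexity rate-independent, as the abstract notes
```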
Abstract:
Designing and optimizing high performance microprocessors is an increasingly difficult task due to the size and complexity of the processor design space, the high cost of detailed simulation and several constraints that a processor design must satisfy. In this paper, we propose the use of empirical non-linear modeling techniques to assist processor architects in making design decisions and resolving complex trade-offs. We propose a procedure for building accurate non-linear models that consists of the following steps: (i) selection of a small set of representative design points spread across the processor design space using Latin hypercube sampling, (ii) obtaining performance measures at the selected design points using detailed simulation, (iii) building non-linear models for performance using the function approximation capabilities of radial basis function networks, and (iv) validating the models using an independently and randomly generated set of design points. We evaluate our model building procedure by constructing non-linear performance models for programs from the SPEC CPU2000 benchmark suite with a microarchitectural design space that consists of 9 key parameters. Our results show that the models, built using a relatively small number of simulations, achieve high prediction accuracy (only 2.8% error in CPI estimates on average) across a large processor design space. Our models can potentially replace detailed simulation for common tasks such as the analysis of key microarchitectural trends or searches for optimal processor design points.
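The four-step procedure maps naturally onto standard scientific Python tools. In the sketch below, a cheap analytic function stands in for detailed simulation, and scipy's RBF interpolator stands in for the radial basis function network; the sample counts and smoothing value are illustrative.

```python
# LHS sampling -> "simulation" -> RBF model -> independent validation.
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def fake_simulator(designs):
    """Stand-in for detailed simulation: returns a 'CPI' per design point."""
    return 1.0 + np.sin(3 * designs[:, 0]) * designs[:, 1] + designs.sum(axis=1) / 9

d = 9                                            # 9 key microarchitectural params
train = qmc.LatinHypercube(d=d, seed=0).random(64)    # (i) LHS design points
cpi = fake_simulator(train)                           # (ii) "simulate" them
model = RBFInterpolator(train, cpi, smoothing=1e-6)   # (iii) RBF function fit

test = np.random.default_rng(1).random((256, d))      # (iv) independent points
err = np.abs(model(test) - fake_simulator(test)) / fake_simulator(test)
print(f"mean relative CPI error: {100 * err.mean():.2f}%")
```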