940 results for Computational complexity


Relevance: 20.00%

Publisher:

Abstract:

Computations have been carried out to simulate supersonic flow through a set of converging-diverging nozzles, whose expanding jets form a laser cavity, and the flow patterns through the diffusers downstream of the cavity. A thorough numerical investigation with a 3-D RANS code is carried out to capture the flow distribution, which comprises shock patterns and multiple supersonic jet interactions. The pressure recovery characteristics of the flow through the diffusers are an important output of the simulation and are critical for the performance of the laser device. The computed results show close agreement with the experimentally measured parameters as well as with other established results, indicating that the flow analysis is satisfactory.

Relevance: 20.00%

Publisher:

Abstract:

Precoding for multiple-input multiple-output (MIMO) antenna systems is considered with perfect channel knowledge available at both the transmitter and the receiver. For two transmit antennas and QAM constellations, a real-valued precoder is proposed that is approximately optimal (with respect to the minimum Euclidean distance between points in the received signal space) among real-valued precoders based on the singular value decomposition (SVD) of the channel. The proposed precoder is easily obtained for arbitrary QAM constellations, unlike the known complex-valued optimal precoder by Collin et al. for two transmit antennas, which exists only for 4-QAM and is extremely hard to obtain for larger QAM constellations. The proposed precoding scheme is extended to a larger number of transmit antennas along the lines of the E-d_min precoder for 4-QAM by Vrigneau et al., which is an extension of the complex-valued optimal precoder for 4-QAM. The proposed precoder's ML-decoding complexity as a function of the constellation size M is only O(√M), while that of the E-d_min precoder is O(M√M) (M = 4). Compared to the recently proposed X- and Y-precoders, the error performance of the proposed precoder is significantly better, while being only marginally worse than that of the E-d_min precoder for 4-QAM. It is argued that the proposed precoder provides full diversity for QAM constellations, and this is supported by simulation plots of the word error probability for 2 x 2, 4 x 4 and 8 x 8 systems.
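
A minimal sketch of the SVD-based idea, not the paper's closed-form construction: after the SVD of the channel, a real-valued precoder acting on the two nonzero subchannels can be chosen by brute force to maximize the minimum received Euclidean distance over an M-QAM alphabet. The power-split/rotation parameterization and the search grid below are assumptions for illustration only.

import numpy as np
from itertools import product

def qam(M):
    # Square M-QAM with unit average energy.
    m = int(np.sqrt(M))
    pts = np.arange(-(m - 1), m, 2)
    const = np.array([x + 1j * y for x in pts for y in pts])
    return const / np.sqrt((np.abs(const) ** 2).mean())

def min_distance(sigma, F, const):
    # Smallest distance between distinct precoded symbol pairs seen through diag(sigma).
    D = np.diag(sigma)
    syms = list(product(const, repeat=2))
    best = np.inf
    for i, s1 in enumerate(syms):
        for s2 in syms[i + 1:]:
            best = min(best, np.linalg.norm(D @ F @ (np.array(s1) - np.array(s2))))
    return best

def search_real_precoder(H, M=4, grid=32):
    # Brute-force search over a power split p and a rotation angle theta (assumed family).
    sigma = np.linalg.svd(H, compute_uv=False)
    const = qam(M)
    best_F, best_d = None, -np.inf
    for p in np.linspace(0.05, 0.95, grid):
        for theta in np.linspace(0.0, np.pi / 2, grid):
            R = np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
            F = np.diag([np.sqrt(p), np.sqrt(1 - p)]) @ R   # real-valued, unit total power
            d = min_distance(sigma[:2], F, const)
            if d > best_d:
                best_d, best_F = d, F
    return best_F, best_d

H = (np.random.randn(2, 2) + 1j * np.random.randn(2, 2)) / np.sqrt(2)
F, d = search_real_precoder(H, M=4)
print("minimum received distance:", round(d, 4))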

Relevance: 20.00%

Publisher:

Abstract:

The Generalized Distributive Law (GDL) is a message passing algorithm which can efficiently solve a certain class of computational problems and includes as special cases the Viterbi algorithm, the BCJR algorithm, the fast Fourier transform, and turbo and LDPC decoding algorithms. In this paper, GDL-based maximum-likelihood (ML) decoding of space-time block codes (STBCs) is introduced, and a sufficient condition for an STBC to admit low GDL decoding complexity is given. Fast decoding and multigroup decoding are the two algorithms used in the literature to ML decode STBCs with low complexity. An algorithm which exploits the advantages of both is called conditional ML (CML) decoding. It is shown in this paper that the GDL decoding complexity of any STBC is upper bounded by its CML decoding complexity, and that there exist codes for which the GDL complexity is strictly less than the CML complexity. Explicit examples of two such families of STBCs are given in this paper. Thus the CML is in general suboptimal in reducing the ML decoding complexity of a code, and one should design codes with low GDL complexity rather than low CML complexity.
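
A toy illustration of the GDL idea (not the STBC decoder of the paper): when a decoding metric splits into local terms, min-sum message passing finds the global minimum with far fewer operations than exhaustive search. The cost tables and alphabet size below are arbitrary assumptions for the demo.

import numpy as np

rng = np.random.default_rng(0)
Q = 4                      # alphabet size of each variable (assumed)
f12 = rng.random((Q, Q))   # local cost tables (assumed random for the demo)
f23 = rng.random((Q, Q))

# Brute force minimization of f(x1,x2,x3) = f12(x1,x2) + f23(x2,x3): O(Q^3) evaluations.
brute = min(f12[a, b] + f23[b, c]
            for a in range(Q) for b in range(Q) for c in range(Q))

# GDL / min-sum: marginalize out x1 and x3 first, then x2 -- O(Q^2) work.
m1_to_2 = f12.min(axis=0)          # message: best over x1 for each value of x2
m3_to_2 = f23.min(axis=1)          # message: best over x3 for each value of x2
gdl = (m1_to_2 + m3_to_2).min()    # combine at x2

print(brute, gdl)   # the two minima coincide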

Relevance: 20.00%

Publisher:

Abstract:

In this paper, we deal with low-complexity near-optimal detection/equalization in large-dimension multiple-input multiple-output inter-symbol interference (MIMO-ISI) channels using message passing on graphical models. A key contribution in the paper is the demonstration that near-optimal performance in MIMO-ISI channels with large dimensions can be achieved at low complexities through simple yet effective simplifications/approximations, although the graphical models that represent MIMO-ISI channels are fully/densely connected (loopy graphs). These include 1) use of Markov random field (MRF)-based graphical model with pairwise interaction, in conjunction with message damping, and 2) use of factor graph (FG)-based graphical model with Gaussian approximation of interference (GAI). The per-symbol complexities are O(K(2)n(t)(2)) and O(Kn(t)) for the MRF and the FG with GAI approaches, respectively, where K and n(t) denote the number of channel uses per frame, and number of transmit antennas, respectively. These low-complexities are quite attractive for large dimensions, i.e., for large Kn(t). From a performance perspective, these algorithms are even more interesting in large-dimensions since they achieve increasingly closer to optimum detection performance for increasing Kn(t). Also, we show that these message passing algorithms can be used in an iterative manner with local neighborhood search algorithms to improve the reliability/performance of M-QAM symbol detection.
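
A minimal sketch of the Gaussian-approximation-of-interference idea combined with message damping, simplified here to BPSK over a random real-valued channel; the channel model, damping factor and iteration count are assumptions for illustration and this is not the paper's exact FG/MRF algorithm.

import numpy as np

rng = np.random.default_rng(1)
n, snr_db, iters, damp = 32, 10, 20, 0.5      # sizes and damping factor (assumed)
H = rng.standard_normal((n, n)) / np.sqrt(n)  # random channel
x = rng.choice([-1.0, 1.0], size=n)           # transmitted BPSK vector
sigma2 = 10 ** (-snr_db / 10)
y = H @ x + np.sqrt(sigma2) * rng.standard_normal(n)

mean = np.zeros(n)   # current soft estimates E[x_j]
var = np.ones(n)     # their variances
for _ in range(iters):
    for j in range(n):
        hj = H[:, j]
        # Interference from all other symbols, modelled as Gaussian per receive dimension.
        interf_mean = H @ mean - hj * mean[j]
        interf_var = (H ** 2) @ var - (hj ** 2) * var[j] + sigma2
        # LLR of x_j for BPSK under the Gaussian interference model.
        llr = 2.0 * hj @ ((y - interf_mean) / interf_var)
        new_mean = np.tanh(llr / 2.0)
        mean[j] = damp * new_mean + (1 - damp) * mean[j]   # message damping
        var[j] = 1.0 - mean[j] ** 2

print("bit errors:", int(np.sum(np.sign(mean) != x)))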

Relevance: 20.00%

Publisher:

Abstract:

In this paper, we give a new framework for constructing space-time block codes (STBCs) with low ML decoding complexity using codes over the Klein group K. Almost all known low ML decoding complexity STBCs can be obtained via this approach. New full-diversity STBCs with low ML decoding complexity and the cubic shaping property are constructed, via codes over K, for N = 2^m transmit antennas, m >= 1, and rates R > 1 complex symbols per channel use. When R = N, the new STBCs are information-lossless as well. The new class of STBCs has the least known ML decoding complexity among all codes available in the literature for a large set of (N, R) pairs.
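
As a familiar point of reference for what "low ML decoding complexity" means, and not the Klein-group construction of the paper, the Alamouti code illustrates the extreme case: its orthogonality decouples ML detection into independent single-symbol decisions. The constellation, channel and noise level below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)   # 4-QAM
s = rng.choice(const, size=2)                 # two information symbols
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
sigma = 0.1

# Alamouti transmission over two channel uses (one receive antenna).
r1 = h[0] * s[0] + h[1] * s[1] + sigma * (rng.standard_normal() + 1j * rng.standard_normal())
r2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0]) + sigma * (rng.standard_normal() + 1j * rng.standard_normal())

# Orthogonality of the code decouples ML detection into two scalar decisions.
z1 = np.conj(h[0]) * r1 + h[1] * np.conj(r2)
z2 = np.conj(h[1]) * r1 - h[0] * np.conj(r2)
gain = np.sum(np.abs(h) ** 2)
s_hat = [const[np.argmin(np.abs(z - gain * const))] for z in (z1, z2)]
print(np.allclose(s_hat, s))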

Relevance: 20.00%

Publisher:

Abstract:

The work reported here is concerned with a detailed thermochemical evaluation of the flaming mode behaviour of a gasifier-based stove. Determination of the gas composition over the fuel bed, and of the surface and gas temperatures in the gasification process, constitutes the principal experimental work. A simple atomic balance for the gasification reaction, combined with the gas composition from the experiments, is used to determine the CH4 equivalent of higher hydrocarbons and the gasification efficiency (η_g). The components of utilization efficiency, namely gasification-combustion and heat transfer, are explored. Reactive flow computational studies using the measured gas composition over the fuel bed are used to simulate the thermochemical flow field and the heat transfer to the vessel; hitherto-ignored vessel size effects in the extraction of heat from the stove are established clearly. The overall flaming mode efficiency of the stove is 50-54%; the convective and radiative components of heat transfer are established to be 45-47% and 5-7%, respectively. The efficiency estimates from reacting computational fluid dynamics (RCFD) compare well with experiments.
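
A minimal sketch, under assumed numbers, of the gas-side bookkeeping behind a gasification (cold-gas style) efficiency estimate: the measured dry-gas composition and gas yield give the chemical energy leaving as producer gas per unit of fuel energy. The heating values are typical handbook figures, and the composition, gas yield and fuel heating value are made-up placeholders, not the paper's measurements.

# Lower heating values of the combustible gas species, MJ/Nm^3 (typical handbook values).
LHV = {"CO": 12.6, "H2": 10.8, "CH4": 35.8}
# Assumed dry-gas mole fractions over the fuel bed (illustrative only).
y = {"CO": 0.19, "H2": 0.17, "CH4": 0.02}

gas_yield = 2.4          # Nm^3 of producer gas per kg of fuel (assumed)
lhv_fuel = 16.0          # MJ/kg of biomass fuel (assumed)

lhv_gas = sum(y[k] * LHV[k] for k in y)    # MJ/Nm^3 of producer gas
eta_g = gas_yield * lhv_gas / lhv_fuel     # gasification (cold-gas) efficiency
print(f"gas LHV = {lhv_gas:.2f} MJ/Nm^3, eta_g = {eta_g:.2f}")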

Relevance: 20.00%

Publisher:

Abstract:

A new structured discretization of 2D space, named X-discretization, is proposed to solve bivariate population balance equations for breakup and aggregation of particles within the framework of minimal internal consistency of discretization of Chakraborty and Kumar [2007, A new framework for solution of multidimensional population balance equations. Chem. Eng. Sci. 62, 4112-4125]. The 2D space of particle constituents (internal attributes) is discretized into bins by using arbitrarily spaced constant-composition radial lines and constant-mass lines of slope -1. The quadrilaterals are triangulated by using straight lines pointing towards the mean composition line. The monotonicity of the new discretization makes it quite easy to implement, like a rectangular grid, but with significantly reduced numerical dispersion. We use the new discretization of space to automate the expansion and contraction of the computational domain for the aggregation process, corresponding to the formation of larger particles and the disappearance of smaller particles, by adding and removing the constant-mass lines at the boundaries. The results show that the predictions of particle size distribution on a fixed X-grid are in better agreement with the analytical solution than those obtained with the earlier techniques. The simulations carried out with expansion and/or contraction of the computational domain as the population evolves show that the proposed strategy of evolving the computational domain with the aggregation process brings down the computational effort quite substantially; the larger the extent of evolution, the greater the reduction in computational effort.
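
A small sketch of the X-grid skeleton described above: bin nodes lie at the intersections of constant-composition radial lines with constant-mass lines of slope -1 in the plane of the two internal coordinates. The particular spacings below are assumptions for illustration, not those of the paper.

import numpy as np

masses = np.geomspace(1.0, 64.0, 7)          # constant total-mass lines (assumed geometric spacing)
compositions = np.linspace(0.0, 1.0, 6)      # mass fraction of component 1 on each radial line

# Node (i, j): particle with total mass m_i and component-1 fraction c_j,
# i.e. internal coordinates (x1, x2) = (m_i * c_j, m_i * (1 - c_j)).
nodes = np.array([[(m * c, m * (1.0 - c)) for c in compositions] for m in masses])
print(nodes.shape)   # (n_mass_lines, n_composition_lines, 2)

# Expanding the domain as aggregation forms larger particles simply appends a new
# constant-mass line at the large-mass boundary; contracting removes one at the small-mass boundary.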

Relevance: 20.00%

Publisher:

Abstract:

Current scientific research is characterized by increasing specialization, accumulating knowledge at high speed due to parallel advances in a multitude of sub-disciplines. Recent estimates suggest that human knowledge doubles every two to three years – and with the advances in information and communication technologies, this wide body of scientific knowledge is available to anyone, anywhere, anytime. This may also be referred to as ambient intelligence – an environment characterized by plentiful and available knowledge. The bottleneck in utilizing this knowledge for specific applications is not accessing but assimilating the information and transforming it to suit the needs of a specific application. The increasingly specialized areas of scientific research often have the common goal of converting data into insight, allowing the identification of solutions to scientific problems. Due to this common goal, there are strong parallels between different areas of application that can be exploited and used to cross-fertilize different disciplines. For example, the same fundamental statistical methods are used extensively in speech and language processing, in materials science applications, in visual processing and in biomedicine. Each sub-discipline has found its own specialized methodologies making these statistical methods successful in the given application. The unification of specialized areas is possible because many different problems share strong analogies, making the theories developed for one problem applicable to other areas of research. It is the goal of this paper to demonstrate the utility of merging two disparate areas of application to advance scientific research. The merging process requires cross-disciplinary collaboration to allow maximal exploitation of advances in one sub-discipline for those of another. We will demonstrate this general concept with the specific example of merging language technologies and computational biology.

Relevance: 20.00%

Publisher:

Abstract:

In this paper we have developed methods to compute maps from differential equations. We take two examples: the harmonic oscillator and Duffing's equation. First we convert these equations to a canonical form, which is slightly nontrivial for Duffing's equation. Then we show a method to extend these differential equations; in the second case, symbolic algebra needs to be used. Once the extensions are accomplished, various maps are generated. Poincaré sections are seen as a special case of such generated maps. Other applications are also discussed.
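
A minimal sketch, with assumed parameter values, of the map-from-ODE idea for the forced Duffing equation: integrating over one forcing period and recording the state gives the stroboscopic (Poincaré) map, and its iterates form the Poincaré section. This illustrates the concept only; it is not the canonical-form or symbolic extension procedure of the paper.

import numpy as np
from scipy.integrate import solve_ivp

# Forced Duffing equation: x'' + delta*x' + alpha*x + beta*x^3 = gamma*cos(omega*t)
delta, alpha, beta, gamma, omega = 0.3, -1.0, 1.0, 0.5, 1.2   # assumed values
T = 2 * np.pi / omega

def duffing(t, u):
    x, v = u
    return [v, -delta * v - alpha * x - beta * x ** 3 + gamma * np.cos(omega * t)]

u = [0.1, 0.0]
points = []
for k in range(400):                      # iterate the period-T (stroboscopic) map
    sol = solve_ivp(duffing, (k * T, (k + 1) * T), u, rtol=1e-9, atol=1e-12)
    u = sol.y[:, -1]
    points.append(u.copy())

points = np.array(points[100:])           # discard transient iterates
print(points[:5])                         # successive points of the Poincaré section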

Relevance: 20.00%

Publisher:

Abstract:

In this paper, the diversity-multiplexing gain tradeoff (DMT) of single-source, single-sink (ss-ss), multihop relay networks having slow-fading links is studied. In particular, the two end-points of the DMT of ss-ss full-duplex networks are determined by showing that the maximum achievable diversity gain is equal to the min-cut and that the maximum multiplexing gain is equal to the min-cut rank, the latter by using an operational connection to a deterministic network. Also included in the paper are several results that aid in the computation of the DMT of networks operating under amplify-and-forward (AF) protocols. In particular, it is shown that the colored noise encountered in amplify-and-forward protocols can be treated as white for the purpose of DMT computation, lower bounds on the DMT of lower-triangular channel matrices are derived, and the DMT of parallel MIMO channels is computed. All protocols appearing in the paper are explicit and rely only upon AF relaying. Half-duplex networks and explicit coding schemes are studied in a companion paper.
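
For reference, the multiplexing and diversity gains in the statements above are the standard quantities in the sense of Zheng and Tse, e.g.

r \;=\; \lim_{\rho \to \infty} \frac{R(\rho)}{\log \rho},
\qquad
d(r) \;=\; -\lim_{\rho \to \infty} \frac{\log P_{\mathrm{out}}\bigl(r \log \rho\bigr)}{\log \rho},

where \rho is the SNR, R(\rho) the rate and P_{\mathrm{out}} the outage probability. The end-point results above then read: d(0) equals the min-cut of the network, and the largest r with d(r) > 0 equals the min-cut rank.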

Relevance: 20.00%

Publisher:

Abstract:

We have postulated a novel pathway that could assist in the nucleation of soot particles through covalent dimerization and oligomerization of a variety of PAHs. DFT calculations were performed with the objective of obtaining the relative thermal stabilities and formation probabilities of oligomeric species that exploit the facile dimerization known to occur in linear oligoacenes. We propose that the presence of small stretches of linear oligoacene (tetracene or longer) in extended PAHs, either embedded or tethered, would be adequate for enabling the formation of such dimeric and oligomeric adducts; these could then serve as nuclei for the growth of soot particles. Our studies also reveal the importance of π-stacking interactions between extended aromatic frameworks in governing the relative stabilities of the oligomeric species that are formed.

Relevance: 20.00%

Publisher:

Abstract:

Since the days of Digital Subscriber Lines (DSL), time domain equalizers (TEQs) have been used to combat time-dispersive channels in multicarrier systems. In this paper, we propose computationally inexpensive techniques to recompute TEQ weights in the presence of changes in the channel, especially over fast fading channels. The techniques use no extra information except the perturbation to the channel itself, and provide excellent approximations to the new TEQ weights. Adaptation methods for two existing channel shortening algorithms are proposed, and their performance over randomly varying, randomly perturbed channels is studied. The proposed adaptation techniques are shown to perform admirably well for small changes in channels for OFDM systems.
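
For context on what the TEQ weights are, here is a sketch of a standard maximum shortening-SNR (MSSNR) channel-shortening design, solved as a generalized eigenvalue problem; this is background only, not the adaptation technique proposed in the paper, and the channel taps, filter length, window length and delay below are assumed.

import numpy as np
from scipy.linalg import eigh, toeplitz

def mssnr_teq(h, num_taps, nu, delay):
    # Convolution matrix: rows of H @ w give the effective channel conv(h, w).
    n_eff = len(h) + num_taps - 1
    H = toeplitz(np.r_[h, np.zeros(num_taps - 1)], np.r_[h[0], np.zeros(num_taps - 1)])
    inside = np.zeros(n_eff, dtype=bool)
    inside[delay:delay + nu + 1] = True
    A = H[inside].T @ H[inside]          # energy of the effective channel inside the target window
    B = H[~inside].T @ H[~inside]        # energy outside the window (the "wall")
    # Maximize w^T A w / w^T B w  ->  largest generalized eigenvector of (A, B).
    vals, vecs = eigh(A, B + 1e-9 * np.eye(num_taps))
    return vecs[:, -1]

h = np.array([0.1, 1.0, 0.7, 0.4, 0.2, 0.1, 0.05])   # assumed long channel impulse response
w = mssnr_teq(h, num_taps=8, nu=2, delay=1)
h_eff = np.convolve(h, w)                            # shortened effective channel
print(np.round(h_eff, 3))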