976 results for Inside-Outside Algorithm
Abstract:
We present a low-complexity algorithm based on reactive tabu search (RTS) for near maximum likelihood (ML) detection in large-MIMO systems. The conventional RTS algorithm achieves near-ML performance for 4-QAM in large-MIMO systems, but its performance for higher-order QAM is far from ML. Here, we propose a random-restart RTS (R3TS) algorithm which achieves significantly better bit error rate (BER) performance than the conventional RTS algorithm for higher-order QAM. The key idea is to run multiple tabu searches, each starting from a random initial vector, and to choose the best among the resulting solution vectors. A criterion to limit the number of searches is also proposed. Computer simulations show that the R3TS algorithm achieves almost ML performance in a 16 x 16 V-BLAST MIMO system with 16-QAM and 64-QAM at significantly lower complexity than the sphere decoder. Also, in a 32 x 32 V-BLAST MIMO system, R3TS performs within 1.6 dB of the ML lower bound for 16-QAM (128 bps/Hz) and within 2.4 dB for 64-QAM (192 bps/Hz) at 10^-3 BER.
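The following is a minimal Python sketch of the random-restart idea, not the authors' exact R3TS implementation: several tabu searches over the symbol alphabet are started from random initial vectors and the best solution under the ML metric ||y - Hx||^2 is kept. The system size, tabu length, restart count and the real 4-PAM alphabet are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def tabu_search(H, y, alphabet, x0, n_iter=50, tabu_len=10):
    """Greedy neighborhood search over single-symbol changes with a tabu list."""
    x = x0.copy()
    best_x, best_cost = x.copy(), float(np.linalg.norm(y - H @ x) ** 2)
    tabu = []                                  # recently used (index, symbol) moves
    for _ in range(n_iter):
        candidates = []
        for i in range(len(x)):                # the 1-symbol neighborhood of x
            for s in alphabet:
                if s == x[i] or (i, s) in tabu:
                    continue
                x_new = x.copy()
                x_new[i] = s
                cost = float(np.linalg.norm(y - H @ x_new) ** 2)
                candidates.append((cost, i, s, x_new))
        if not candidates:
            break
        cost, i, s, x_new = min(candidates, key=lambda c: c[0])
        x = x_new                              # move even if the cost went up
        tabu.append((i, s))
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if cost < best_cost:
            best_x, best_cost = x.copy(), cost
    return best_x, best_cost

def random_restart_ts(H, y, alphabet, n_restarts=5):
    """Run several tabu searches from random initial vectors and keep the best."""
    best_x, best_cost = None, np.inf
    for _ in range(n_restarts):
        x0 = rng.choice(alphabet, size=H.shape[1])
        x, cost = tabu_search(H, y, alphabet, x0)
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x

# Toy example: a 4 x 4 real-valued system with a 4-PAM alphabet standing in for QAM.
alphabet = np.array([-3.0, -1.0, 1.0, 3.0])
H = rng.standard_normal((4, 4))
x_true = rng.choice(alphabet, size=4)
y = H @ x_true + 0.1 * rng.standard_normal(4)
print("detected:", random_restart_ts(H, y, alphabet), "transmitted:", x_true)
```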
Abstract:
If the solar dynamo operates in a thin layer of 10,000-km thickness at the interface between the convection zone and the radiative core, then the requirements that the dynamo have a period of 22 years and a half-wavelength of 40 deg in the theta-direction make it possible to impose restrictions on the values which various dynamo parameters are allowed to have. It is pointed out that the dynamo should be of alpha^2-omega nature, and kinematical calculations are presented for free dynamo waves and for dynamos in thin rectangular slabs with appropriate boundary conditions. An alpha^2-omega dynamo is expected to produce a significant poloidal field which does not leak to the solar surface. It is found that the turbulent diffusivity eta and the alpha-coefficient are restricted to values within about a factor of 10, the median values being eta of about 10^10 cm^2/s and alpha of about 10 cm/s. On the basis of mixing length theory, it is pointed out that such values imply a reasonable turbulent velocity of the order of 30 m/s, but rather small turbulent length scales of about 300 km.
Abstract:
A new fast and efficient marching algorithm is introduced to solve, by the method of characteristics, the basic quasilinear hyperbolic partial differential equations describing unsteady flow in conduits. The details of the marching method are presented with an illustration of the waterhammer problem in a simple piping system for both frictional and frictionless cases. It is shown that, for the same accuracy, the new marching method requires fewer computational steps and less computer memory and time.
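For context, and independent of the particular marching scheme proposed here, the interior-node update of the method of characteristics for waterhammer can be sketched with the textbook C+/C- compatibility equations; the pipe data below are arbitrary example values.

```python
import math

# Example pipe data (arbitrary illustrative values).
a = 1200.0          # wave speed, m/s
g = 9.81            # gravity, m/s^2
D = 0.5             # pipe diameter, m
f = 0.018           # Darcy-Weisbach friction factor
dx = 100.0          # reach length, m
A = math.pi * D**2 / 4.0           # flow area, m^2
B = a / (g * A)                    # characteristic impedance term
R = f * dx / (2.0 * g * D * A**2)  # friction resistance term

def interior_node(H_A, Q_A, H_B, Q_B):
    """Update head/discharge at an interior grid node from its left (A) and
    right (B) neighbors at the previous time level, using the C+ and C-
    compatibility equations of the method of characteristics."""
    CP = H_A + B * Q_A - R * Q_A * abs(Q_A)   # along the C+ characteristic
    CM = H_B - B * Q_B + R * Q_B * abs(Q_B)   # along the C- characteristic
    Q_P = (CP - CM) / (2.0 * B)
    H_P = (CP + CM) / 2.0
    return H_P, Q_P

# Example update for a node whose neighbors are at H = 100 m, Q = 0.1 m^3/s.
print(interior_node(100.0, 0.1, 100.0, 0.1))
```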
Abstract:
A simple and efficient algorithm for the bandwidth reduction of sparse symmetric matrices is proposed. It involves column-row permutations and is well suited to map onto the linear array topology of SIMD architectures. The efficiency of the algorithm is compared with that of other existing algorithms. The interconnectivity and the memory requirement of the linear array are discussed, and the complexity of its layout area is derived. The parallel version of the algorithm mapped onto the linear array is then introduced and explained with the help of an example. The optimality of the parallel algorithm is proved by deriving the time complexities of the algorithm on a single processor and on the linear array.
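To make the notion of bandwidth reduction by simultaneous row-column permutation concrete, here is a small Python sketch that measures the bandwidth of a sparse symmetric matrix before and after reordering; SciPy's reverse Cuthill-McKee ordering is used only as a stand-in for the paper's algorithm.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    """Half-bandwidth: the largest |i - j| over the nonzero entries of A."""
    rows, cols = np.nonzero(A)
    return int(np.max(np.abs(rows - cols))) if rows.size else 0

# A small sparse symmetric example matrix.
A = np.array([
    [4, 0, 0, 1, 0, 0],
    [0, 4, 1, 0, 0, 1],
    [0, 1, 4, 0, 1, 0],
    [1, 0, 0, 4, 0, 0],
    [0, 0, 1, 0, 4, 0],
    [0, 1, 0, 0, 0, 4],
], dtype=float)

# Reverse Cuthill-McKee as an example reordering (not the paper's method).
perm = reverse_cuthill_mckee(csr_matrix(A), symmetric_mode=True)
A_perm = A[np.ix_(perm, perm)]   # simultaneous row and column permutation

print("bandwidth before:", bandwidth(A))
print("bandwidth after :", bandwidth(A_perm))
```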
Abstract:
For the specific case of binary stars, this paper presents signal-to-noise ratio (SNR) calculations for the detection of the parity (the side of the brighter component) of the binary using the double correlation method. The double correlation method is a focal-plane version of the well-known Knox-Thompson method used in speckle interferometry. It is shown that the SNR for parity detection using double correlation depends linearly on binary separation. This new result was entirely missed by previous analytical calculations dealing with a point source. It is concluded that, for magnitudes relevant to present-day speckle interferometry and for binary separations close to the diffraction limit, speckle masking has better SNR for parity detection.
Abstract:
The K-means clustering algorithm is highly dependent on the initial seed values. We use a genetic algorithm to find a near-optimal partitioning of a given data set by selecting proper initial seed values for the K-means algorithm. The results obtained are very encouraging: in most cases, on data sets with well-separated clusters, the proposed scheme reached the global minimum.
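A minimal sketch of the idea, not the authors' exact encoding or genetic operators: a small genetic algorithm searches over candidate sets of initial seed points, with fitness given by the K-means objective reached from those seeds. All names and parameter values below are illustrative.

```python
import numpy as np

def kmeans(X, seeds, n_iter=20):
    """Plain K-means started from the given seed points; returns final inertia and centers."""
    centers = seeds.copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(len(centers)):
            pts = X[labels == k]
            if len(pts):
                centers[k] = pts.mean(0)
    inertia = ((X - centers[labels]) ** 2).sum()
    return inertia, centers

def ga_seed_selection(X, k, pop_size=20, n_gen=30, rng=np.random.default_rng(0)):
    """Genetic algorithm over index sets of candidate seeds (fitness = -inertia)."""
    n = len(X)
    pop = [rng.choice(n, size=k, replace=False) for _ in range(pop_size)]
    def fitness(ind):
        return -kmeans(X, X[ind].copy())[0]
    for _ in range(n_gen):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]                      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.choice(len(parents), 2, replace=False)
            pool = np.unique(np.concatenate([parents[p1], parents[p2]]))
            child = rng.choice(pool, size=k, replace=False)    # crossover by mixing indices
            if rng.random() < 0.2:                             # mutation: replace one seed
                child[rng.integers(k)] = rng.integers(n)
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return X[best].copy()

# Toy data with three well-separated clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ([0, 0], [5, 5], [0, 5])])
seeds = ga_seed_selection(X, k=3)
print("final inertia:", kmeans(X, seeds)[0])
```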
Abstract:
In this paper we develop a multithreaded VLSI processor linear array architecture to render complex environments based on the radiosity approach. The processing elements are identical and multithreaded, and they work in Single Program Multiple Data (SPMD) mode. A new algorithm for the radiosity computations, based on the progressive refinement approach [2], is proposed. Simulation results indicate that the architecture is latency tolerant and scalable. It is shown that a linear array of 128 uni-threaded processing elements sustains a throughput close to 0.4 million patches/sec.
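For context, the standard progressive refinement (shooting) iteration on which such radiosity computations are based can be sketched in a few lines of Python; this is the textbook scheme rather than the paper's new hardware-oriented algorithm, and the form factors, reflectances and emissions are made-up example values.

```python
import numpy as np

def progressive_refinement(F, rho, E, areas, n_shots=100):
    """Progressive refinement radiosity: repeatedly shoot the unshot radiosity of the
    patch with the largest unshot energy (radiosity * area) to all other patches.
    F[i, j] is the form factor from patch i to patch j."""
    B = E.copy()        # current radiosity estimate
    dB = E.copy()       # unshot radiosity
    for _ in range(n_shots):
        i = int(np.argmax(dB * areas))          # patch with the most unshot energy
        if dB[i] * areas[i] < 1e-9:
            break
        shoot = dB[i]
        dB[i] = 0.0
        for j in range(len(B)):
            if j == i:
                continue
            # Energy received at j from i (reciprocity: F_ji = F_ij * A_i / A_j).
            delta = rho[j] * shoot * F[i, j] * areas[i] / areas[j]
            B[j] += delta
            dB[j] += delta
    return B

# Tiny 3-patch example with made-up form factors, reflectances and emissions.
F = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.4],
              [0.2, 0.4, 0.0]])
rho = np.array([0.5, 0.7, 0.3])     # reflectances
E = np.array([10.0, 0.0, 0.0])      # only patch 0 emits
areas = np.array([1.0, 1.0, 1.0])
print(progressive_refinement(F, rho, E, areas))
```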
Abstract:
The object of this research is to study the mineralogy of the diabase dykes in Suomussalmi and the relevance of the mineralogy to tectonic events, specifically large block movements in the Archaean crust. Sharp tectonic lines separate two anomalies in the dyke swarms, shown on a geomagnetic map as positive anomalies. In one of these areas, the Toravaara anomaly, the diabases seem to contain pyroxenes as a main component. Outside the Toravaara anomaly, hornblende is the main ferromagnesian mineral in the diabases. The aim of this paper is to investigate the differences between the diabases inside and outside the anomalies and to interpret the processes that formed the anomalies. The data for this study consist of field observations, 120 thin sections, 334 electron microprobe analyses, 19 whole-rock chemical analyses, a U-Pb age analysis and geomagnetic low-altitude aerial survey maps. The methods are interpretation of field observations, chemical analyses, microprobe analyses of single minerals and radiometric age determination, microscopic studies of the thin sections, and geothermometers and geobarometers. On the basis of field observations and petrographic studies, the diabases in the area are divided into pyroxene diabases, hornblende diabases and the Lohisärkkä porphyritic dyke swarm. Hornblende diabases are found in the entire study area, while the pyroxene diabases are concentrated in the area of the Toravaara geomagnetic anomaly. The Lohisärkkä swarm transects the whole area as a thin line from east to west. The diabases are fairly homogeneous both chemically and in mineral composition. The few exceptions are part of rarer older swarms or are significantly altered. The Lohisärkkä dyke swarm was dated at 2.21 Ga, significantly older than the most common 1.98 Ga swarm in the area. The geothermometers applied showed that the diabases of the Toravaara anomaly were stabilized at a much higher temperature than the dykes outside the anomaly. The geobarometers showed the pyroxenes to have crystallized at varying depths. The research showed that the Toravaara anomaly formed by a vertical block movement, and that the fault on its west side has a total lateral transfer of only a few kilometers. The formation of the second anomaly was also interpreted to be tectonic in nature. In addition, the results of the geothermobarometry revealed necessary conditions for studies of diabase emplacement depth: the minerals for such a study must be chosen by minimum crystallization depth, and a geobarometer capable of determining the magmatic temperature must be used. Furthermore, it would be more suitable to conduct this kind of study in an area where the dykes are better exposed.
Abstract:
Old, hollow trees are an important habitat for many species dependent on decaying wood. A large number of threatened and rare insect species are also specialized on hollow trees, living on the walls of the tree cavity or in the organic material that accumulates at the bottom of the cavity, the so-called wood mould. The aim of this study was to determine which of three trap types (window trap, pan trap and pitfall trap) is best suited for catching saproxylic beetles in hollow trees. A further aim was to assess the time required for the initial laboratory processing of the insect samples. The study included old lindens, oaks and maples with hollow trunks from park and manor areas in the Helsinki metropolitan region. Window, pan and pitfall traps, two of each type, were placed inside the tree cavities and were emptied at three-week intervals from May to July 2006. In total there were thus 90 traps per trapping period. When the desired insect orders (including beetles) were separated from the samples, the time used for processing them was recorded. In total, 3,825 beetle individuals and 212 species were identified from the material, of which 3,398 individuals and 121 species were dependent on decaying wood. The window traps yielded 1,639 individuals and 140 species, the pan traps 1,506 individuals and 134 species, and the pitfall traps 680 individuals and 111 species. The mean sample processing times were 48.3 minutes for a window trap, 65.5 minutes for a pan trap and 34.1 minutes for a pitfall trap. Beta diversity, which accounts for species composition, differed considerably between the trap types: it was 36.5% between the window and pan traps, 13.1% between the window and pitfall traps, and 14.2% between the pan and pitfall traps. No statistically significant difference was found between the window and pan traps in the mean numbers of saproxylic species (p < 0.05), saproxylic individuals (p < 0.05) or processing times (p < 0.05). On average, the window and pan traps caught clearly more saproxylic species and individuals than the pitfall trap. Relative to the total number of individuals, the pitfall trap caught proportionally fewer saproxylic beetles (59%) than the window (69%) and pan traps (71%). Window traps were the most efficient trap type when the number of saproxylic individuals caught was compared to the time required to process the material. The efficiency (individuals per minute) was 0.74 for the window trap, 0.43 for the pan trap and 0.21 for the pitfall trap. Window traps have not previously been used inside tree cavities for catching insects; in earlier studies they have hung outside the cavity. However, window traps also worked excellently inside the cavities. The window and pan traps thus performed clearly better at catching saproxylic beetles than the pitfall trap, although leaving out the pitfall trap would have produced a considerably more species-poor data set. To collect as diverse a saproxylic beetle fauna of hollow trees as possible, window or pan traps should be used together with pitfall traps.
Abstract:
An adaptive regularization algorithm that combines elementwise photon absorption and data misfit is proposed to stabilize the non-linear, ill-posed inverse problem. The diffuse photon distribution is low near the target compared to the normal region. A Hessian based on light-tissue interaction is proposed and is estimated using the adjoint method by distributing the sources inside the discretized domain. As the iterations progress, the photon absorption near the inhomogeneity becomes high and contributes more weight to the regularization matrix. This adaptive regularization method, based on the domain's interior photon absorption and the data misfit, improves the quality of the reconstructed diffuse optical tomographic images.
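The update below is a generic sketch rather than the authors' exact formulation: a Gauss-Newton step for the absorption image in which the diagonal regularization weight at each node is built adaptively from the current absorption estimate and the data misfit, so that regions of high absorption receive more weight as iterations progress. All symbols (J, mu_a, lambda0, beta, gamma) are illustrative assumptions.

```python
import numpy as np

def adaptive_gn_step(J, residual, mu_a, lambda0=1e-2, beta=1.0, gamma=1.0):
    """One Gauss-Newton update for the nodal absorption vector mu_a with an
    elementwise adaptive Tikhonov weight (a generic sketch, not the paper's
    exact regularizer): the weight grows with the local absorption and with
    the overall data misfit."""
    misfit = float(residual @ residual)
    weights = lambda0 * (1.0 + beta * mu_a / (mu_a.max() + 1e-12) + gamma * misfit)
    Hreg = J.T @ J + np.diag(weights)          # regularized Gauss-Newton Hessian
    delta = np.linalg.solve(Hreg, J.T @ residual)
    return mu_a + delta

# Toy linearized problem: 20 measurements, 10 unknown absorption values.
rng = np.random.default_rng(0)
J = rng.standard_normal((20, 10))
mu_true = 0.01 + 0.02 * (np.arange(10) == 4)   # small inclusion at node 4
mu_a = np.full(10, 0.01)
for _ in range(5):
    residual = J @ (mu_true - mu_a)            # synthetic data misfit
    mu_a = adaptive_gn_step(J, residual, mu_a)
print(np.round(mu_a, 4))
```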
Abstract:
Presented here, in a vector formulation, is an O(mn^2) direct, concise algorithm that prunes/identifies the linearly dependent (ld) rows of an arbitrary m x n matrix A and computes its reflexive-type minimum norm inverse A_mr^-, which is the true inverse A^-1 if A is nonsingular and the Moore-Penrose inverse A^+ if A is of full row rank. The algorithm, without any additional computation, produces the projection operator P = I - A_mr^- A, which provides a means to compute any solution of the consistent linear equation Ax = b, since the general solution may be expressed as x = A_mr^- b + Pz, where z is an arbitrary vector. The rank r of A is also produced in the process. Some of the salient features of this algorithm are that (i) the algorithm is concise, (ii) the minimum norm least squares solution for consistent/inconsistent equations is readily computable when A is of full row rank (otherwise, a minimum norm solution for consistent equations is obtainable), (iii) the algorithm identifies ld rows, if any, which reduces the associated computation and improves the accuracy of the result, (iv) error bounds for the inverse as well as for the solution x of Ax = b are readily computable, (v) error-free computation of the inverse, solution vector, rank, and projection operator, and its inherent parallel implementation, are straightforward, (vi) it is suitable for vector (pipeline) machines, and (vii) the inverse produced by the algorithm can be used to solve under-/overdetermined linear systems.
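The role of the projection operator can be checked directly with a generic pseudoinverse; NumPy's pinv here merely stands in for the paper's A_mr^-. Any vector of the form x = A^+ b + P z solves a consistent system Ax = b, and the minimum norm solution is never longer than the general one.

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-deficient 3 x 5 matrix (the last row is a combination of the first two).
A = rng.standard_normal((2, 5))
A = np.vstack([A, A[0] + 2 * A[1]])
b = A @ rng.standard_normal(5)       # consistent right-hand side by construction

A_pinv = np.linalg.pinv(A)           # stand-in for the minimum norm inverse A_mr^-
P = np.eye(A.shape[1]) - A_pinv @ A  # projection onto the null space of A

x_min = A_pinv @ b                   # minimum norm solution
z = rng.standard_normal(5)
x_general = x_min + P @ z            # another solution of A x = b

print("rank:", np.linalg.matrix_rank(A))
print("residual (min norm):", np.linalg.norm(A @ x_min - b))
print("residual (general) :", np.linalg.norm(A @ x_general - b))
print("||x_min|| <= ||x_general|| :", np.linalg.norm(x_min) <= np.linalg.norm(x_general))
```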
Abstract:
We develop in this article the first actor-critic reinforcement learning algorithm with function approximation for a problem of control under multiple inequality constraints. We consider the infinite-horizon discounted cost framework, in which both the objective and the constraint functions are suitable expected policy-dependent discounted sums of certain sample path functions. We apply the Lagrange multiplier method to handle the inequality constraints. Our algorithm makes use of multi-timescale stochastic approximation and incorporates a temporal difference (TD) critic and an actor that performs a gradient search in the space of policy parameters using efficient simultaneous perturbation stochastic approximation (SPSA) gradient estimates. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal policy.
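A heavily simplified sketch of the Lagrangian and SPSA ingredients follows; it is not the authors' actor-critic with a TD critic or its convergence machinery. A scalar policy parameter is updated with an SPSA gradient estimate of the Lagrangian, while the Lagrange multiplier of a single inequality constraint is updated on a slower timescale. The toy one-state problem and all constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
GAMMA, T = 0.9, 100
CONSTRAINT_BOUND = 5.0          # bound on the expected discounted constraint cost

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def rollout(theta, n_episodes=20):
    """Monte Carlo estimate of the discounted objective and constraint costs for a
    toy one-state problem: action 1 (taken with probability sigmoid(theta)) has a
    low objective cost but consumes the constrained resource."""
    p = sigmoid(theta)
    actions = rng.random((n_episodes, T)) < p        # True means action 1
    disc = GAMMA ** np.arange(T)
    step_cost = np.where(actions, 0.2, 1.0)
    step_constraint = actions.astype(float)
    return float((step_cost @ disc).mean()), float((step_constraint @ disc).mean())

def lagrangian(theta, lam):
    cost, constraint = rollout(theta)
    return cost + lam * (constraint - CONSTRAINT_BOUND)

theta, lam = 0.0, 0.0
a_step, b_step, c_spsa = 0.05, 0.005, 0.1            # policy step fast, multiplier slow
for _ in range(1500):
    delta = rng.choice([-1.0, 1.0])                  # SPSA perturbation direction
    grad = (lagrangian(theta + c_spsa * delta, lam)
            - lagrangian(theta - c_spsa * delta, lam)) / (2 * c_spsa * delta)
    theta -= a_step * grad                           # descend the Lagrangian in theta
    _, constraint = rollout(theta)
    lam = max(0.0, lam + b_step * (constraint - CONSTRAINT_BOUND))  # slower ascent in lambda
print("P(action 1):", round(sigmoid(theta), 2), "lambda:", round(lam, 2))
```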
Abstract:
The source localization algorithms in earlier works mostly used non-planar arrays. In scenarios such as human-computer or human-television communication, however, the microphones need to be placed on the computer monitor or the television front panel, i.e., planar arrays must be used. The algorithm proposed in [1] is a linear closed-form source localization algorithm (LCF algorithm) based on the time differences of arrival (TDOAs) obtained from the data collected by the microphones; it assumes non-planar arrays. In the current work, the LCF algorithm is applied to planar arrays. The relationship between the error in the source location estimate and the perturbation in the TDOAs is derived using first-order perturbation analysis and validated using simulations. If the TDOAs are erroneous, both the coefficient matrix and the data matrix used for obtaining the source location are perturbed, so a total least squares solution for source localization is proposed in the current work. The sensitivity analysis of the source localization algorithm for planar and non-planar arrays is carried out by introducing perturbations in the TDOAs and in the microphone locations. It is shown that, for the same perturbation in the TDOAs or the microphone locations, the error in the source location estimate is smaller when the planar array is used instead of the particular non-planar array considered. The location of the reference microphone is shown to be important for obtaining an accurate source location estimate when the LCF algorithm is used.
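To illustrate the kind of linear TDOA formulation and the total least squares step discussed here, the sketch below uses a generic spherical-interpolation-style linearization (an assumption, not necessarily the LCF algorithm of [1]): the linear system in the unknowns (source x, y, reference range r0) is built for a planar microphone array and solved by TLS via the SVD, and the height is recovered from r0, with the up/down ambiguity of a planar array resolved by taking the source in front of the array.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 343.0                       # speed of sound, m/s

# Planar microphone array (all mics in the z = 0 plane); mic 0 is the reference.
mics = np.array([[0.0, 0.0, 0.0],
                 [0.3, 0.0, 0.0],
                 [0.0, 0.3, 0.0],
                 [0.3, 0.3, 0.0],
                 [0.15, 0.6, 0.0]])
source = np.array([1.0, 0.8, 1.5])

ranges = np.linalg.norm(mics - source, axis=1)
tdoas = (ranges[1:] - ranges[0]) / c + 1e-6 * rng.standard_normal(len(mics) - 1)
d = c * tdoas                                   # range differences w.r.t. mic 0

# Linearized equations in the unknowns (x, y, r0):
#   2 (m_i - m_0) . s + 2 d_i r0 = ||m_i||^2 - ||m_0||^2 - d_i^2
A = np.hstack([2 * (mics[1:, :2] - mics[0, :2]), 2 * d[:, None]])
b = (mics[1:] ** 2).sum(1) - (mics[0] ** 2).sum() - d ** 2

# Total least squares via the SVD of the augmented matrix [A | b].
_, _, Vt = np.linalg.svd(np.hstack([A, b[:, None]]))
v = Vt[-1]
x, y, r0 = -v[:3] / v[3]

# Recover z from the reference range; the planar array leaves an up/down
# ambiguity, resolved here by taking z > 0 (source in front of the array).
z = np.sqrt(max(r0 ** 2 - (x - mics[0, 0]) ** 2 - (y - mics[0, 1]) ** 2, 0.0))
print("estimate:", np.round([x, y, z], 3), "true:", source)
```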
Abstract:
The aim of this paper is to develop a computationally efficient decentralized rendezvous algorithm for a group of autonomous agents. The algorithm generalizes the notions of the sensor domain and the decision domain of the agents to enable implementation of simple computational algorithms. Specifically, the algorithm proposed in this paper uses a rectilinear decision domain (RDD), as against the circular decision domain assumed in earlier work. Because of this, the computational complexity of the algorithm is reduced considerably and, compared to the standard Ando's algorithm available in the literature, the RDD algorithm shows a very significant improvement in convergence-time performance. Analytical results proving convergence and supporting simulation results are presented in the paper.
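As an illustration of the decision-domain idea only (a simplified sketch that ignores the connectivity-maintenance constraints analysed in the paper), each agent below averages the positions of the neighbors lying inside an axis-aligned rectangle centred on it, i.e. a rectilinear decision domain, and moves a bounded step toward that point.

```python
import numpy as np

def rdd_step(positions, half_width=1.5, half_height=1.5, max_step=0.3):
    """One synchronous update: each agent looks at the neighbors inside its
    rectilinear (axis-aligned rectangular) decision domain and takes a bounded
    step toward their centroid. A simplified sketch of the decision-domain idea."""
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        dx = np.abs(positions[:, 0] - p[0]) <= half_width
        dy = np.abs(positions[:, 1] - p[1]) <= half_height
        neighbors = positions[dx & dy]              # includes the agent itself
        target = neighbors.mean(axis=0)
        step = target - p
        dist = np.linalg.norm(step)
        if dist > max_step:
            step *= max_step / dist
        new_positions[i] = p + step
    return new_positions

rng = np.random.default_rng(0)
pos = rng.uniform(0, 4, size=(10, 2))
for _ in range(50):
    pos = rdd_step(pos)
print("spread after 50 steps:", np.round(pos.max(0) - pos.min(0), 3))
```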
Abstract:
In this paper, we are concerned with low-complexity detection in large multiple-input multiple-output (MIMO) systems with tens of transmit/receive antennas. Our new contributions in this paper are two-fold. First, we propose a low-complexity algorithm for large-MIMO detection based on a layered low-complexity local neighborhood search. Second, we obtain a lower bound on the maximum-likelihood (ML) bit error performance using the local neighborhood search. The advantages of the proposed ML lower bound are that i) it is easily obtained for MIMO systems with a large number of antennas because of the inherent low complexity of the search algorithm, ii) it is tight at moderate-to-high SNRs, and iii) it can be tightened at low SNRs by increasing the number of symbols in the neighborhood definition. Interestingly, the proposed detection algorithm based on the layered local search achieves bit error performance quite close to this lower bound for large numbers of antennas and higher-order QAM. For example, in a 32 x 32 V-BLAST MIMO system, the proposed detection algorithm performs within 1.7 dB of the proposed ML lower bound at 10^-3 BER for 16-QAM (128 bps/Hz), and within 4.5 dB of the bound for 64-QAM (192 bps/Hz).
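One plausible way a search-based ML lower bound can be formed is sketched below; this construction is an assumption on my part, and the paper's exact layered search and bound may differ. A simple 1-symbol-neighborhood descent is started from the transmitted vector, and an error event is counted only when the search finds a candidate whose metric is strictly better than that of the transmitted vector, in which case the exact ML detector would also not return the transmitted vector, so the counted rate lower-bounds the ML vector error rate. The real 4-PAM alphabet, system size and SNR are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
alphabet = np.array([-3.0, -1.0, 1.0, 3.0])   # real 4-PAM stand-in for QAM
nt, snr_db, n_trials = 8, 14, 2000
sigma = np.sqrt(np.mean(alphabet ** 2) * nt / 10 ** (snr_db / 10))

def local_search(H, y, x0):
    """1-symbol-neighborhood descent: repeatedly take the best single-symbol change."""
    x = x0.copy()
    while True:
        best = (float(np.linalg.norm(y - H @ x) ** 2), None, None)
        for i in range(len(x)):
            for s in alphabet:
                if s == x[i]:
                    continue
                x_try = x.copy()
                x_try[i] = s
                cost = float(np.linalg.norm(y - H @ x_try) ** 2)
                if cost < best[0]:
                    best = (cost, i, s)
        if best[1] is None:
            return x
        x[best[1]] = best[2]

ml_lb_events = 0
for _ in range(n_trials):
    H = rng.standard_normal((nt, nt))
    x_true = rng.choice(alphabet, size=nt)
    y = H @ x_true + sigma * rng.standard_normal(nt)
    x_hat = local_search(H, y, x_true.copy())   # search started from the true vector
    # Count an event only if the search found a vector strictly better than the
    # transmitted one: in that case the exact ML detector would also not return
    # x_true, so this event rate lower-bounds the ML vector error rate.
    if (np.linalg.norm(y - H @ x_hat) ** 2
            < np.linalg.norm(y - H @ x_true) ** 2 - 1e-12):
        ml_lb_events += 1
print("ML vector-error-rate lower bound ~", ml_lb_events / n_trials)
```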