999 results for geometric arrays


Relevance: 20.00%

Publisher:

Abstract:

The optimal power-delay tradeoff is studied for a time-slotted point-to-point link with independently and identically distributed (i.i.d.) fading, with perfect channel state information at both transmitter and receiver, and with random packet arrivals to the transmitter queue. It is assumed that the transmitter can control the number of packets served in a slot by controlling the transmit power. The optimal tradeoff between average power and average delay is analyzed for stationary and monotone transmitter policies. For such policies, an asymptotic lower bound on the minimum average delay of the packets is obtained as the average transmitter power approaches the minimum average power required for transmitter-queue stability. The lower bound is derived from geometric upper bounds on the stationary distribution of the queue length; this approach also yields an intuitive explanation of the asymptotic behavior of average delay. The asymptotic lower bounds, together with previously known asymptotic upper bounds, are used to identify three new cases in which the order of the asymptotic behavior differs from that obtained with a previously considered approximate model, in which the transmit power is a strictly convex function of a real-valued service batch size for every fade state.
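The power-delay tradeoff above can be illustrated with a toy discrete-time simulation. The fade values, the quadratic power cost, and both policies below are illustrative assumptions, not the paper's model; the sketch only shows that a lazier monotone policy buys lower average power at the cost of a longer average queue (hence delay, by Little's law):

```python
import random

def simulate_queue(p_arrival, policy, slots=100_000, seed=0):
    """Discrete-time single-server queue: Bernoulli(p_arrival) arrivals,
    an i.i.d. fade state h each slot, and a policy mapping (queue, fade)
    to a service batch size.  The power cost batch**2 / h is a toy convex
    model, not the paper's."""
    rng = random.Random(seed)
    q, total_q, total_p = 0, 0.0, 0.0
    for _ in range(slots):
        q += rng.random() < p_arrival          # packet arrival this slot
        h = rng.choice([0.5, 1.0, 2.0])        # i.i.d. fade state
        batch = min(q, policy(q, h))           # packets served this slot
        total_p += batch ** 2 / h              # transmit power spent
        q -= batch
        total_q += q
    return total_q / slots, total_p / slots

# A greedy policy serves whenever possible; a lazier monotone policy waits
# for good fades, trading queueing delay for average power.
greedy_q, greedy_p = simulate_queue(0.3, lambda q, h: 1 if q > 0 else 0)
lazy_q, lazy_p = simulate_queue(0.3, lambda q, h: 1 if h >= 1.0 or q > 10 else 0)
```

Since both runs share the same arrival and fade streams (same seed), the comparison is paired: the lazy policy ends up with a longer queue but lower average power.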

Relevance: 20.00%

Publisher:

Abstract:

The Exact Cover problem takes a universe U of n elements, a family F of m subsets of U, and a positive integer k, and decides whether there exists a subfamily (set cover) F' of size at most k such that each element is covered by exactly one set. The Unique Cover problem takes the same input and decides whether there is a subfamily F' ⊆ F such that at least k of the elements F' covers are covered uniquely (by exactly one set). Both problems are known to be NP-complete. In the parameterized setting, when parameterized by k, Exact Cover is W[1]-hard. While Unique Cover is FPT under the same parameter, it is known not to admit a polynomial kernel under standard complexity-theoretic assumptions. In this paper, we investigate these two problems under the assumption that every set satisfies a given geometric property Π. Specifically, we take the universe to be a set of n points in real space R^d, for a positive integer d. When d = 2, we consider the problem when Π requires all sets to be unit squares or lines. When d > 2, we consider the problem where Π requires all sets to be hyperplanes in R^d. These special versions of the problems are also known to be NP-complete. When parameterized by k, the Unique Cover problem has a polynomial-size kernel for all of the above geometric versions. The Exact Cover problem turns out to be W[1]-hard for squares, but FPT for lines and hyperplanes. Further, we also consider the Unique Set Cover problem, which takes the same input and decides whether there is a set cover that covers at least k elements uniquely. To the best of our knowledge, this is a new problem, and we show that it is NP-complete (even for the case of lines). In fact, the problem turns out to be W[1]-hard in the abstract setting, when parameterized by k. However, when we restrict ourselves to the lines and hyperplanes versions, we obtain FPT algorithms.
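As a concrete reference point for the decision problem, Exact Cover can be checked by brute force over all subfamilies of size at most k. This is exponential in m and purely illustrative (the FPT results above are what make the geometric versions tractable); the example instance is hypothetical:

```python
from itertools import combinations

def exact_cover(universe, family, k):
    """Decide whether some subfamily of at most k sets covers every
    element of the universe exactly once (brute force, illustrative)."""
    for size in range(1, k + 1):
        for subfamily in combinations(family, size):
            counts = {}                       # element -> times covered
            for s in subfamily:
                for e in s:
                    counts[e] = counts.get(e, 0) + 1
            if all(counts.get(e, 0) == 1 for e in universe):
                return True
    return False

# {1,2} and {3,4} partition the universe; {2,3} would double-cover 2 and 3
print(exact_cover({1, 2, 3, 4}, [{1, 2}, {3, 4}, {2, 3}], 2))  # True
```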

Relevance: 20.00%

Publisher:

Abstract:

We report on a quantum-dot-sensitized solar cell (QDSSC) based on ZnO-nanorod-coated vertically aligned carbon nanotubes (VACNTs). Electrochemical impedance spectroscopy shows that the electron lifetime for the device based on VACNT/ZnO/CdSe is longer than that for a device based on ZnO/CdSe, indicating that charge recombination at the interface is reduced by the presence of the VACNTs. Owing to the increased surface area and longer electron lifetime, a power conversion efficiency of 1.46% is achieved for the VACNT/ZnO/CdSe devices under one-sun illumination (AM 1.5G, 100 mW/cm²). © 2010 Elsevier B.V.

Relevance: 20.00%

Publisher:

Abstract:

Until quite recently, the basic mechanical process responsible for earthquakes and faulting was not well understood. It can be argued that this was partly a consequence of the complex nature of fracture in the crust, and partly because evidence of brittle phenomena in the natural laboratory of the earth is often obliterated or obscured by other geological processes. While it is well understood that the spatial and temporal complexity of earthquakes and fault structures emerges from geometrical and material built-in heterogeneities, one important open question is how shearing becomes localized into a band of intense fractures. Here the authors address these questions through a numerical model of a tectonic plate that accounts for rock-mass heterogeneity at both the microscopic and mesoscopic scales. Numerical simulations of the progressive failure leading to collapse under long-range, slow driving forces in the far field show earthquake-like rupture behavior, and en echelon crack arrays are reproduced in the simulations. It is demonstrated that the underlying fracturing-induced acoustic emissions (or seismic events) display self-organized criticality: a transition from disorder to order. The seismic cycles and the geometric structures of the fracture faces, which are found to depend strongly on the material heterogeneity (especially at the macroscopic scale), agree with those observed experimentally in real brittle materials. It is concluded that, in order to predict a main shock, one must have extremely detailed knowledge of very minor features of the earth's crust far from the place where the earthquake originates. If correct, the model proposed here seemingly explains why earthquakes have so far not been predicted successfully: the reason is not that earthquake mechanisms are poorly understood, but that we still know little about the earth's crust.
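The self-organized criticality reported above arises in slowly driven systems that relax through avalanches of all sizes. The rock-failure model in the paper is far more elaborate; as a minimal, generic illustration of the same drive-slowly/relax-fast pattern (the Bak-Tang-Wiesenfeld sandpile, not the authors' model), consider:

```python
import numpy as np

def topple(grid):
    """Relax a Bak-Tang-Wiesenfeld sandpile until every cell holds fewer
    than 4 grains; returns the number of toppling events (avalanche size)."""
    events = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return events
        for (i, j) in unstable:
            grid[i, j] -= 4
            events += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    grid[ni, nj] += 1    # grains leaving the lattice are lost

def avalanche_sizes(n=16, drops=2000, seed=0):
    """Drop grains one at a time at random sites (slow driving) and record
    the avalanche triggered by each drop (fast relaxation)."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((n, n), dtype=int)
    return [(grid.__setitem__(tuple(rng.integers(0, n, size=2)),
             grid[tuple(ij)] + 1) or topple(grid))
            if False else None
            for ij in ()] or _run(grid, rng, drops, n)

def _run(grid, rng, drops, n):
    sizes = []
    for _ in range(drops):
        i, j = rng.integers(0, n, size=2)
        grid[i, j] += 1                  # slow external driving
        sizes.append(topple(grid))       # the resulting "earthquake"
    return sizes
```

In the critical state the avalanche-size distribution develops a heavy tail: most drops cause little or nothing, while occasional drops trigger system-spanning events.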

Relevance: 20.00%

Publisher:

Abstract:

In this paper, a beamforming correction for identifying dipole sources by means of phased-microphone-array measurements is presented and implemented numerically and experimentally. Conventional beamforming techniques, which are developed for monopole sources, can lead to significant errors when applied to reconstruct dipole sources. A previous correction technique applied to microphone signals is extended to account for both source location and source power for two-dimensional microphone arrays. The new dipole-beamforming algorithm is developed by modifying the basic source definition used for beamforming. This technique improves on the previous signal-correction method and yields a beamformer applicable to sources suspected to be dipole in nature. Numerical simulations validate the capability of this beamformer to recover ideal dipole sources. The beamforming correction is then applied to the identification of realistic aeolian-tone dipoles and shows an improvement in array performance in estimating dipole source powers. © 2008 Acoustical Society of America.
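For background, conventional frequency-domain beamforming projects the array's cross-spectral matrix onto a monopole steering vector at each candidate grid point; it is exactly this monopole source definition that the paper's dipole correction replaces. A minimal synthetic sketch, where the array geometry, frequency, and source position are arbitrary choices (not the paper's setup) and sign conventions are simplified:

```python
import numpy as np

def monopole_steering(mic_pos, src_pt, k):
    """Free-field monopole steering vector at wavenumber k
    (phase/amplitude conventions simplified for illustration)."""
    r = np.linalg.norm(mic_pos - src_pt, axis=1)   # mic-to-point distances
    return np.exp(-1j * k * r) / r

def beamform_map(mic_pos, grid, csm, k):
    """Conventional beamforming: quadratic form of the cross-spectral
    matrix with a normalized monopole steering vector per grid point."""
    out = np.empty(len(grid))
    for i, g in enumerate(grid):
        w = monopole_steering(mic_pos, g, k)
        w = w / np.linalg.norm(w)
        out[i] = np.real(w.conj() @ csm @ w)
    return out

# Synthetic check: one monopole at x = 0.2 m, 1 m from a 16-mic line array
mics = np.stack([np.linspace(-0.5, 0.5, 16), np.zeros(16)], axis=1)
src = np.array([0.2, 1.0])
k = 2 * np.pi * 2000 / 343.0                 # 2 kHz in air
p = monopole_steering(mics, src, k)          # simulated microphone pressures
csm = np.outer(p, p.conj())                  # rank-1 cross-spectral matrix
grid = np.stack([np.linspace(-0.5, 0.5, 41), np.ones(41)], axis=1)
peak = grid[np.argmax(beamform_map(mics, grid, csm, k))]   # localized source
```

By the Cauchy-Schwarz inequality the map peaks where the steering vector matches the measured field, so the peak lands on the true source position; a dipole field breaks this match, which is the error the paper corrects.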

Relevance: 20.00%

Publisher:

Abstract:

A turbulent boundary-layer flow over a rough wall generates a dipole sound field as the near-field hydrodynamic disturbances in the turbulent boundary layer scatter into radiated sound at small surface irregularities. In this paper, phased microphone arrays are applied to the measurement and simulation of surface roughness noise. The radiated sound from two rough plates and one smooth plate in an open jet is measured at three streamwise locations, and the beamforming source maps demonstrate the dipole directivity. Higher source strengths are observed on the rough plates, which also enhance the trailing-edge noise. A prediction scheme from previous theoretical work is used to describe the strength of a distribution of incoherent dipoles and to simulate the sound detected by the microphone array. Source maps from measurement and simulation exhibit satisfactory similarity in both source pattern and source strength, which confirms the dipole nature and the predicted magnitude of roughness noise. However, the simulations underestimate the streamwise gradient of the source strengths and overestimate the source strengths at the highest frequency. © 2008 Elsevier Ltd. All rights reserved.

Relevance: 20.00%

Publisher:

Abstract:

We present a method of rapidly producing computer-generated holograms that exhibit geometric occlusion in the reconstructed image. Conceptually, a bundle of rays is shot from every hologram sample into the object volume. We use z-buffering to find the nearest intersecting object point for every ray and add its complex field contribution to the corresponding hologram sample. Each hologram sample belongs to an independent operation, allowing us to exploit the parallel computing capability of modern programmable graphics processing units (GPUs). Unlike algorithms that use points or planar segments as the basis for constructing the hologram, our algorithm's complexity depends on fixed system parameters, such as the number of ray-casting operations, and can therefore handle complicated models more efficiently. The finite number of hologram pixels is, in effect, a windowing function, and by analyzing the Wigner distribution function of the windowed free-space transfer function we find an upper limit on the cone angle of the ray bundle. Experimentally, we found that an angular sampling distance of 0.01° for a 2.66° cone angle produces acceptable reconstruction quality. © 2009 Optical Society of America.
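The per-sample ray-casting idea can be sketched in a toy 2-D form. Everything below is a hypothetical simplification (object points in a plane, a dictionary standing in for the z-buffer, no GPU parallelism, arbitrary wavelength and cone angle), not the paper's implementation:

```python
import numpy as np

def raycast_sample(x, points, wavelength, cone_angle, n_rays):
    """Complex field at one hologram sample located at (x, 0).
    Each of n_rays directions in the cone keeps only its nearest object
    point (z-buffering), so nearer points occlude farther ones."""
    k = 2.0 * np.pi / wavelength
    step = cone_angle / n_rays               # angular sampling distance
    zbuf = {}                                # ray index -> (depth, point)
    for (px, pz) in points:                  # pz > 0: depth into the volume
        theta = np.arctan2(px - x, pz)       # direction from sample to point
        if abs(theta) > cone_angle / 2:
            continue                         # outside the ray cone
        idx = int(round(theta / step))       # which ray "sees" this point
        if idx not in zbuf or pz < zbuf[idx][0]:
            zbuf[idx] = (pz, (px, pz))       # nearer point wins the ray
    field = 0j
    for _, (px, pz) in zbuf.values():
        r = np.hypot(px - x, pz)
        field += np.exp(1j * k * r) / r      # spherical-wave contribution
    return field

# Occlusion demo: two points on the same ray; only the nearer contributes
near = raycast_sample(0.0, [(0.0, 1.0)], 633e-9, 0.05, 101)
both = raycast_sample(0.0, [(0.0, 1.0), (0.0, 2.0)], 633e-9, 0.05, 101)
```

Because every sample is computed independently, the outer loop over hologram samples is what maps naturally onto GPU threads in the paper's approach.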