949 results for Traffic sampling


Relevance:

20.00%

Publisher:

Abstract:

A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model that is underdetermined (more unknowns than equations). In recent times, however, an explosion of theoretical and computational methods has been developed primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that, despite the huge volume of data arising in sensor networks, genomics, imaging, particle physics, web search, and so on, its information content is often much smaller than the number of raw measurements. This has opened up the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
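To make the setting concrete, the toy sketch below (illustrative only, not from the thesis; the random matrix, the dimensions, and the use of orthogonal matching pursuit are all assumptions for demonstration) recovers a 5-sparse vector of 200 unknowns from just 40 measurements:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with y ~ A @ x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected columns, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 200, 40, 5                        # 200 unknowns, only 40 measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x                                   # underdetermined: m << n
print("recovery error:", np.linalg.norm(omp(A, y, k) - x))
```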

In this thesis, we provide new directions for estimation in underdetermined systems, both for a class of parameter estimation problems and for the problem of sparse recovery in compressive sensing. The thesis makes two main contributions: the design of new sampling and statistical estimation algorithms for array processing, and the development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.

We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) and propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions and to exploit different kinds of priors in the model, such as correlation and higher-order moments.

Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation-aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework, provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
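As a rough illustration of why such geometries help (a minimal sketch, not the thesis's code; the parameters are arbitrary), the two-level nested array below places 6 physical sensors so that their pairwise position differences fill 23 consecutive lags. It is on this larger virtual "difference coarray" that correlation-aware methods operate, which is how more sources than sensors become identifiable:

```python
import numpy as np

def nested_array(n1, n2):
    """Two-level nested array: sensor positions in units of the base spacing d."""
    inner = np.arange(1, n1 + 1)               # dense ULA: 1, 2, ..., n1
    outer = (n1 + 1) * np.arange(1, n2 + 1)    # sparse ULA: (n1+1), 2(n1+1), ...
    return np.concatenate([inner, outer])

def difference_coarray(pos):
    """All pairwise differences p_i - p_j; correlations 'live' on this grid."""
    diffs = pos[:, None] - pos[None, :]
    return np.unique(diffs)

pos = nested_array(3, 3)                 # 6 physical sensors: [1 2 3 4 8 12]
co = difference_coarray(pos)
print("physical sensors:", len(pos))     # 6
print("coarray lags:", len(co))          # 23 consecutive lags: -11 ... 11
```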

This new paradigm of underdetermined estimation, which explicitly establishes the fundamental interplay between sampling, statistical priors, and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also raises new questions that can lead to stand-alone theoretical results in their own right.

Relevance:

20.00%

Publisher:

Abstract:

Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field, and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions.
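The restriction property is easy to see numerically. The toy script below (hypothetical parameters, plain Python over a small prime field; not the thesis's construction) picks a random degree-d curve in F_p^2 and a random bivariate polynomial f of total degree D, and checks that f restricted to the curve is a univariate polynomial of degree at most dD:

```python
import random

p = 101                      # small prime field F_p, chosen so that p > d*D
d, D = 2, 3                  # curve degree d, polynomial total degree D

def poly_eval(coeffs, t):
    """Evaluate a univariate polynomial (low-order coefficients first) at t, mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * t + c) % p
    return acc

# A random degree-d curve C : F_p -> F_p^2, given by two univariate polynomials.
cx = [random.randrange(p) for _ in range(d + 1)]
cy = [random.randrange(p) for _ in range(d + 1)]

# A random bivariate polynomial f of total degree D.
f = {(i, j): random.randrange(p) for i in range(D + 1) for j in range(D + 1 - i)}

def f_eval(x, y):
    return sum(c * pow(x, i, p) * pow(y, j, p) for (i, j), c in f.items()) % p

# Restrict f to the curve: g(t) = f(C(t)). Claim: deg g <= d*D.
g = [f_eval(poly_eval(cx, t), poly_eval(cy, t)) for t in range(p)]

def lagrange_at(xs, ys, t):
    """Evaluate the interpolating polynomial through (xs, ys) at t, mod p."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num = den = 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (t - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

# Interpolating through d*D+1 points predicts every other value of g.
k = d * D + 1
xs, ys = list(range(k)), g[:k]
assert all(lagrange_at(xs, ys, t) == g[t] for t in range(p))
print(f"restriction of a degree-{D} polynomial to a degree-{d} curve has degree <= {d*D}")
```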

The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using O(log N + log(1/δ)) random bits exist, where N is the domain size and δ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TU06], where curve samplers with near-optimal randomness complexity were obtained.

In this thesis, we present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor) that sample curves of degree (m log_q(1/δ))^O(1) in F_q^m. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.

Relevance:

20.00%

Publisher:

Abstract:

How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?

We make progress toward understanding these questions by studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer that cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer but that likely cannot be approximately sampled in randomized polynomial time, even with an oracle for the Polynomial Time Hierarchy.
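For intuition, the distribution in question can be tabulated classically by brute force for tiny n (a sketch under assumed conventions, not the thesis's construction: f maps {0,1}^n to {±1} and the Fourier transform is taken over Z_2^n, i.e. the Walsh-Hadamard transform, so the quantum sampler outputs y with probability f̂(y)²; the thesis's point is that this exponential-time tabulation is, plausibly, unavoidable classically):

```python
import numpy as np

n = 10                                   # brute force is 2^n: fine here, hopeless at scale
N = 1 << n
rng = np.random.default_rng(1)

# An "efficiently computable" Boolean function, here just a random +/-1 truth table.
f = rng.choice([-1.0, 1.0], size=N)

# Fourier transform over Z_2^n (fast Walsh-Hadamard butterfly).
fhat = f.copy()
h = 1
while h < N:
    for i in range(0, N, 2 * h):
        a = fhat[i:i + h].copy()
        b = fhat[i + h:i + 2 * h].copy()
        fhat[i:i + h], fhat[i + h:i + 2 * h] = a + b, a - b
    h *= 2
fhat /= N

# The quantum sampler outputs y with probability fhat(y)^2 (Parseval makes these
# sum to 1); classically we only get there via this exponential tabulation.
probs = fhat ** 2
probs /= probs.sum()
samples = rng.choice(N, size=5, p=probs)
print("sampled y values:", samples)
```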

Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013] where a different distribution with the same computational properties was given. Our result is more general than theirs, but requires a more powerful quantum sampler.

Relevance:

20.00%

Publisher:

Abstract:

The author explains some aspects of sampling phytoplankton blooms and the evaluation of results obtained from different methods. Qualitative and quantitative sampling are covered, as well as filtration, freeze-drying, and toxin separation.

Relevance:

20.00%

Publisher:

Abstract:

Northern Ireland has approximately 1670 lakes, which cover 4.4% of the land surface. However, most of the water area is accounted for by the large lakes such as Lough Neagh (385 km²) and Lower Lough Erne (109.5 km²). The majority of lakes are less than 100 hectares in area. They tend to be distributed towards the south and west of the Province, where extensive drumlin swarms are rich in small waterbodies. In 1988-1991, 610 of the 708 lakes between one and 100 hectares were sampled by the Northern Ireland Lake Survey. The objective was to assess their conservation status based on their aquatic macrophyte flora, but in addition to extensive plant surveys, the water of each lake was analysed for a range of chemical variables. This article reports on a full-scale survey carried out in early March 2002 with the help of two helicopters. The authors summarise the results of the chemical analysis of the survey.

Relevance:

20.00%

Publisher:

Abstract:

A new type of wave-front analysis method for the collimation testing of laser beams is proposed. A concept of wave-front height is defined and, on this basis, the wave-front analysis method of circular-aperture sampling is introduced. The wave-front height of the tested noncollimated wave can be estimated from the distance between two identical fiducial diffraction planes of the sampled wave, and the divergence is then determined. The design is detailed, and the principle and experimental results of the method are presented. Owing to its simplicity and low cost, the method is a promising one for checking the collimation of a laser beam with a large divergence. © 2005 Optical Society of America.
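The abstract does not reproduce the estimation formulas. Purely as a hypothetical illustration of how a wave-front height translates into a divergence figure, the sketch below assumes a simple spherical-wave model (the exact sagitta relation), not the paper's fiducial-diffraction-plane method:

```python
import math

def divergence_from_wavefront_height(h, a):
    """
    Estimate beam divergence from a measured wave-front height (sagitta) h
    over a circular aperture of radius a, assuming a spherical wave front.
    Illustrative model only; the paper's estimate uses the distance between
    two fiducial diffraction planes, which is not reproduced here.
    """
    R = (a * a + h * h) / (2 * h)        # exact sagitta relation: radius of curvature
    theta = math.atan(a / R)             # half-angle subtended at the virtual source
    return R, 2 * theta                  # radius of curvature, full divergence (rad)

R, div = divergence_from_wavefront_height(h=0.5e-6, a=5e-3)   # 0.5 um sag over 5 mm
print(f"radius of curvature ~ {R:.1f} m, full divergence ~ {div*1e6:.1f} urad")
```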

Relevance:

20.00%

Publisher:

Abstract:

This research program consisted of three major component areas: (I) development of experimental design, (II) calibration of the trawl design, and (III) development of the foundation for stock assessment analysis. The products which have resulted from the program are indicated below. I. EXPERIMENTAL DESIGN: The study was successful in identifying spatial and temporal distribution characteristics of the several key species, and the relationships between given species catches and the environmental and physical factors which are thought to influence species abundance by area within the mainstem of the Chesapeake Bay and its tributaries.

Relevance:

20.00%

Publisher:

Abstract:

The first bilateral study of methods of biological sampling and biological methods of water quality assessment took place during June 1977 on selected sampling sites in the catchment of the River Trent (UK). The study was arranged in accordance with the protocol established by the joint working group responsible for the Anglo-Soviet Environmental Agreement. The main purpose of the bilateral study in Nottingham was to demonstrate some of the methods of sampling and biological assessment used by UK biologists to their Soviet counterparts, and to give the Soviet biologists the opportunity to test these methods at first hand in order to judge their potential for use within the Soviet Union. This paper is concerned with the nine river stations in the Trent catchment.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a method to generate new melodies based on conserving the semiotic structure of a template piece. A pattern discovery algorithm is applied to the template piece to extract significant segments: those that are repeated and those that are transposed in the piece. Two strategies are combined to describe the semiotic coherence structure of the template piece: inter-segment coherence and intra-segment coherence. Once the structure is described, it is used as a template for new musical content that is generated using a statistical model created from a corpus of bertso melodies and iteratively improved using a stochastic optimization method. Results show that the method effectively describes the coherence structure of a piece by discovering repetition and transposition relations between segments, and by representing the relations among notes within the segments. For bertso generation, the method correctly conserves all intra- and inter-segment coherence of the template, and the optimization produces coherent melodies.
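A deliberately small sketch of the generate-and-improve loop is given below (hypothetical throughout: the toy corpus, the bigram model, and hill-climbing stand in for the paper's bertso corpus, statistical model, and stochastic optimizer). The structural constraint ties the second half of the melody to the first, mimicking one inter-segment repetition relation from a template:

```python
import math
import random

random.seed(0)

# Toy corpus of melodies (MIDI pitches); stands in for the bertso corpus.
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [67, 65, 64, 62, 60, 62, 64, 65, 67],
    [60, 64, 67, 64, 60, 62, 64, 62, 60],
]
pitches = sorted({p for mel in corpus for p in mel})

# Bigram statistical model with add-one smoothing.
counts = {}
for mel in corpus:
    for a, b in zip(mel, mel[1:]):
        counts[(a, b)] = counts.get((a, b), 0) + 1

def log_prob(mel):
    total = 0.0
    for a, b in zip(mel, mel[1:]):
        num = counts.get((a, b), 0) + 1
        den = sum(counts.get((a, c), 0) for c in pitches) + len(pitches)
        total += math.log(num / den)
    return total

# Toy "semiotic" template: an 8-note melody whose second half repeats the first.
def enforce_structure(mel):
    return mel[:4] + mel[:4]

# Stochastic optimization: random single-note mutations, keep improvements.
mel = enforce_structure([random.choice(pitches) for _ in range(8)])
best = log_prob(mel)
for _ in range(5000):
    cand = mel[:]
    cand[random.randrange(4)] = random.choice(pitches)   # mutate the free half only
    cand = enforce_structure(cand)                       # re-impose the template
    score = log_prob(cand)
    if score > best:
        mel, best = cand, score
print("generated melody:", mel, "log-likelihood:", round(best, 2))
```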