990 results for Zero-crossing sampling theory
Abstract:
This paper considers the problem of identifying the footprints of communication of multiple transmitters in a given geographical area. To do this, a number of sensors are deployed at arbitrary but known locations in the area, and their individual decisions regarding the presence or absence of the transmitters' signal are combined at a fusion center to reconstruct the spatial spectral usage map. One straightforward scheme to construct this map is to query each of the sensors in turn and cluster the sensors that detect the primary's signal. However, exploiting the fact that a typical transmitter footprint map is a sparse image, two novel compressive sensing based schemes are proposed, which require significantly fewer transmissions than the querying scheme. A key feature of the proposed schemes is that the measurement matrix is constructed from a pseudo-random binary phase shift applied to the decision of each sensor prior to transmission. The measurement matrix is thus a binary ensemble which satisfies the restricted isometry property. The number of measurements needed for accurate footprint reconstruction is determined using compressive sampling theory. The three schemes are compared through simulations in terms of a performance measure that quantifies the accuracy of the reconstructed spatial spectral usage map. It is found that the proposed sparse reconstruction technique-based schemes significantly outperform the round-robin (querying) scheme.
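A minimal sketch of the measurement model described above, under simplifying assumptions (a toy footprint map, a ±1 Bernoulli matrix standing in for the pseudo-random binary phase shifts, and a generic orthogonal matching pursuit solver rather than the paper's reconstruction schemes):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 256          # sensors (pixels of the footprint map)
k = 8            # sparsity: sensors that detect a transmitter
m = 64           # fused measurements (m << n)

# Sparse 0/1 footprint: the decisions of the n sensors
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 1.0

# Pseudo-random +/-1 "phase" applied to each decision -> binary measurement matrix
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

y = Phi @ x      # measurements collected at the fusion center

def omp(Phi, y, k):
    """Plain orthogonal matching pursuit as a stand-in sparse solver."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("support recovered:", set(np.flatnonzero(x)) == set(np.flatnonzero(x_hat > 0.5)))
```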
Abstract:
As research has progressed, correlated (ghost) diffraction imaging has now been realized with classical thermal light sources, making the technique a promising candidate for wide application in X-ray and neutron diffraction imaging. Building on experiments in which an object's lensless Fourier-transform spectrum is obtained with incoherent light, the error-reduction and input-output recovery algorithms were applied, combined with oversampling theory, to recover the transmittance function of the object used in the experiment. The amplitude distribution of a pure-amplitude object and the phase distribution of a pure-phase object were obtained, respectively. In addition, the influence of factors such as noise in the experimentally measured Fourier spectrum on the image recovery results is discussed.
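A minimal sketch of the iterative phase-retrieval loop alluded to above (error-reduction only, on a synthetic pure-amplitude object with a known support; the oversampling here is just zero-padding, and none of the paper's experimental details are modelled):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pure-amplitude object on a zero-padded (oversampled) grid
obj = np.zeros((64, 64))
obj[24:40, 24:40] = rng.random((16, 16))
support = obj > 0

measured_mag = np.abs(np.fft.fft2(obj))   # "measured" Fourier magnitude

# Error-reduction iteration: enforce the Fourier magnitude, then the
# object-domain constraints (support and positivity)
g = rng.random(obj.shape) * support
for _ in range(500):
    G = np.fft.fft2(g)
    G = measured_mag * np.exp(1j * np.angle(G))   # keep measured magnitude
    g = np.real(np.fft.ifft2(G))
    g = np.where(support & (g > 0), g, 0.0)       # object-domain constraints

err = np.linalg.norm(np.abs(np.fft.fft2(g)) - measured_mag) / np.linalg.norm(measured_mag)
print(f"relative Fourier-magnitude error after error reduction: {err:.3e}")
```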
Abstract:
A location- and scale-invariant predictor is constructed which exhibits good probability matching for extreme predictions outside the span of data drawn from a variety of (stationary) general distributions. It is constructed via the three-parameter {\mu, \sigma, \xi} Generalized Pareto Distribution (GPD). The predictor is designed to provide exact probability matching for the GPD in both the extreme heavy-tailed limit and the extreme bounded-tail limit, whilst giving a good approximation to probability matching at all intermediate values of the tail parameter \xi. The predictor is valid even for small sample sizes N, down to N = 3. The main purpose of this paper is to present the somewhat lengthy derivations, which draw heavily on the theory of hypergeometric functions, particularly the Lauricella functions. Whilst the construction is inspired by the Bayesian approach to the prediction problem, it considers the case of vague prior information about both parameters and model, and all derivations are undertaken using sampling theory.
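For reference, the distribution the predictor is built around is the three-parameter GPD; in the usual {\mu, \sigma, \xi} parameterization (standard notation, which may differ in detail from the paper's) its distribution function is

```latex
F(x \mid \mu, \sigma, \xi) =
\begin{cases}
1 - \left(1 + \xi \dfrac{x - \mu}{\sigma}\right)^{-1/\xi}, & \xi \neq 0,\\[6pt]
1 - \exp\!\left(-\dfrac{x - \mu}{\sigma}\right), & \xi = 0,
\end{cases}
\qquad x \ge \mu,\quad 1 + \xi\,\frac{x-\mu}{\sigma} > 0,
```

so that \xi > 0 gives the heavy-tailed limit and \xi < 0 the bounded-tail limit referred to above.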
Abstract:
The new characteristics of lunar surface sampling and the difficulties it presents are first introduced. Based on the lunar surface environment, the characteristics of lunar surface samples, and the sampling requirements of the third phase of the Chang'e program, a suitable sampling method was selected and a six-degree-of-freedom robotic lunar surface sampler was developed; its sampling principle, mechanical system, and kinematics are analyzed in detail. Finally, sampling experiments on lime powder verified the basic functionality and feasibility of the multifunctional robotic lunar surface sampler.
Abstract:
This paper presents a lower-bound result on the computational power of a genetic algorithm in the context of combinatorial optimization. We describe a new genetic algorithm, the merged genetic algorithm, and prove that for the class of monotonic functions, the algorithm finds the optimal solution, and does so with an exponential convergence rate. The analysis pertains to the ideal behavior of the algorithm where the main task reduces to showing convergence of probability distributions over the search space of combinatorial structures to the optimal one. We take exponential convergence to be indicative of efficient solvability for the sample-bounded algorithm, although a sampling theory is needed to better relate the limit behavior to actual behavior. The paper concludes with a discussion of some immediate problems that lie ahead.
Abstract:
Discusses the influence of Nyquist sampling theory on carrier communication simulations.
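As a reminder of the constraint at issue, a sampled carrier is represented without aliasing only if the simulation sample rate exceeds twice the highest frequency present; the toy check below (not taken from the paper) shows how an undersampled carrier folds down to a spurious low frequency.

```python
import numpy as np

fc = 10e3         # carrier frequency [Hz]
f_ok = 4 * fc     # sample rate comfortably above the Nyquist rate 2*fc
f_bad = 1.2 * fc  # sample rate below the Nyquist rate -> aliasing

def apparent_freq(fc, fs):
    """Frequency a sampled tone appears at after folding into [0, fs/2]."""
    f = fc % fs
    return min(f, fs - f)

print(apparent_freq(fc, f_ok))   # 10 kHz: carrier represented correctly
print(apparent_freq(fc, f_bad))  # 2 kHz alias: simulation step too coarse
```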
Abstract:
This article seeks to describe the forms of identity in the border region of Táchira (Venezuela) and Norte de Santander (Colombia). The research design follows a qualitative methodology and adopts Grounded Theory as its method; theoretical sampling combined with theoretical saturation was applied to analyze interviews with a set of key informants located on both sides of the international boundary. A review of the specialized literature on identity and forms of identity was also carried out, drawing on contributions from several scientific disciplines. The data are analyzed following theoretical coding (open, axial, and selective) and analytic induction in order to reveal emerging concepts and uncover their interconnections. The results show common and divergent elements in the forms of identity: "personal identity" (elements of self-perception, kinship, dual identity, and identity differentiation), "border identity" (the region perceives itself as different from the rest of the country), and "national identity" (history, awareness of the boundary, and symbols), which coexist among the border inhabitants. The "attitudinal aspects" show both acceptance and rejection of the performance of the main institutions involved in border dynamics. The forms of identity found in the region confirm the literature on identity and borders, although it is worth noting that the experience of the international boundary does not blur the stereotyped national perceptions of the other.
Abstract:
Sampling strategies for monitoring the status and trends of wildlife populations are often determined before the first survey is undertaken. However, there may be little information about the distribution of the population, and so the sample design may be inefficient. Through time, as data are collected, more information about the distribution of animals in the survey region is obtained, but it can be difficult to incorporate this information into the survey design. This paper introduces a framework for monitoring motile wildlife populations within which the design of future surveys can be adapted using data from past surveys whilst ensuring consistency in design-based estimates of status and trends through time. In each survey, part of the sample is selected from the previous survey sample using simple random sampling. The rest is selected with inclusion probability proportional to predicted abundance. Abundance is predicted using a model constructed from previous survey data and covariates for the whole survey region. Unbiased design-based estimators of status and trends and their variances are derived from two-phase sampling theory. Simulations over the short and long term indicate that, in general, more precise estimates of status and trends are obtained using this mixed strategy than with a strategy in which all of the sample is retained or all of it is selected with probability proportional to predicted abundance. Furthermore, the mixed strategy is robust to poor predictions of abundance. Estimates of status are more precise than those obtained from a rotating panel design.
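A minimal sketch of the mixed selection step described above, under simplifying assumptions (fixed sample sizes, toy predicted abundances, and none of the paper's two-phase estimators or variance machinery):

```python
import numpy as np

rng = np.random.default_rng(2)

n_units = 200                                          # sampling units in the region
prev_sample = rng.choice(n_units, 40, replace=False)   # last survey's sample
pred_abund = rng.gamma(2.0, 5.0, n_units)              # model-predicted abundance

def next_sample(prev_sample, pred_abund, n_retain, n_new, rng):
    """Mixed strategy: retain part of the old sample by simple random sampling,
    draw the rest with inclusion probability proportional to predicted abundance."""
    retained = rng.choice(prev_sample, n_retain, replace=False)
    remaining = np.setdiff1d(np.arange(len(pred_abund)), retained)
    p = pred_abund[remaining] / pred_abund[remaining].sum()
    fresh = rng.choice(remaining, n_new, replace=False, p=p)
    return np.concatenate([retained, fresh])

sample = next_sample(prev_sample, pred_abund, n_retain=20, n_new=20, rng=rng)
print(sorted(sample))
```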
Abstract:
Computer-aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the electron device physics. However, they are numerically efficient and quite accurate. These characteristics make them very suitable for MMIC design in the framework of commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from the nonlinear effects (intrinsic effects). Thus, an empirical active device model is generally described by an extrinsic linear part, which accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task circuit designers face is evaluating the ultimate potential of a device for specific applications: once the technology has been selected, the designer must choose the best device for the particular application and for the different blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of devices of different sizes, good scalability properties of the model are required. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent-circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, the measurements needed to characterize the device are performed in order to identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the characterization phase) and their reconstruction (in the identification or simulation phase) are two of the most important aspects of empirical modelling. This thesis presents an original contribution to nonlinear electron device empirical modelling, addressing the issues of model scalability and the reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should preserve the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks, together with simple linear scaling rules, cannot provide accurate scalable models, the literature offers either complicated technology-dependent scaling rules or computationally inefficient distributed models. This thesis shows how these problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, which is identified by means of the EM simulation of the device layout, allowing for better frequency extrapolation and scalability than conventional empirical models.
Concerning the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for use in the framework of empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics from a finite grid of measurements. Following this analogy, nonlinear empirical device modelling can be carried out by applying, in the sampled voltage domain, typical methods of time-domain sampling theory.
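A minimal illustration of that analogy, under the simplifying assumptions of a uniform voltage grid and an idealized sinc reconstruction kernel (the thesis's actual approximation algorithm may differ):

```python
import numpy as np

def sinc_interp(v_grid, i_samples, v_query):
    """Reconstruct a characteristic sampled on a uniform voltage grid by
    treating voltage like time in classical sinc (Shannon) interpolation."""
    dv = v_grid[1] - v_grid[0]                       # uniform grid spacing
    kernel = np.sinc((v_query[:, None] - v_grid[None, :]) / dv)
    return kernel @ i_samples

# Toy nonlinear I-V characteristic sampled on a coarse uniform voltage grid
v_grid = np.linspace(-1.0, 1.0, 21)
i_samples = np.tanh(3.0 * v_grid)                    # smooth "device" nonlinearity

v_query = np.linspace(-0.9, 0.9, 181)
i_approx = sinc_interp(v_grid, i_samples, v_query)
print(np.max(np.abs(i_approx - np.tanh(3.0 * v_query))))  # approximation error
```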
Abstract:
The classical Kramer sampling theorem provides a method for obtaining orthogonal sampling formulas. In particular, when the involved kernel is analytic in the sampling parameter it can be stated in an abstract setting of reproducing kernel Hilbert spaces of entire functions which includes as a particular case the classical Shannon sampling theory. This abstract setting allows us to obtain a sort of converse result and to characterize when the sampling formula associated with an analytic Kramer kernel can be expressed as a Lagrange-type interpolation series. On the other hand, the de Branges spaces of entire functions satisfy orthogonal sampling formulas which can be written as Lagrange-type interpolation series. In this work some links between all these ideas are established.
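As a concrete instance of the abstract setting, the classical Shannon series for a function bandlimited to [-\pi, \pi] already has the Lagrange-type form referred to above (standard notation, not specific to this paper):

```latex
f(t) = \sum_{n \in \mathbb{Z}} f(n)\,\frac{\sin \pi (t - n)}{\pi (t - n)}
     = \sum_{n \in \mathbb{Z}} f(n)\,\frac{G(t)}{G'(n)\,(t - n)},
\qquad G(t) = \frac{\sin \pi t}{\pi},
```

where G is an entire function whose simple zeros are exactly the sample points, which is the Lagrange-type interpolation structure characterized for analytic Kramer kernels.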
Abstract:
Recently, methods for computing D-optimal designs for population pharmacokinetic studies have become available. However, there are few publications that have prospectively evaluated the benefits of D-optimality in population or single-subject settings. This study compared a population optimal design with an empirical design for estimating the base pharmacokinetic model for enoxaparin in a stratified randomized setting. The population pharmacokinetic D-optimal design for enoxaparin was estimated using the PFIM function (MATLAB version 6.0.0.88). The optimal design was based on a one-compartment model with lognormal between-subject variability and proportional residual variability, and consisted of a single design with three sampling windows (0-30 min, 1.5-5 hr and 11-12 hr post-dose) for all patients. The empirical design consisted of three sample time windows per patient from a total of nine windows that collectively represented the entire dose interval. Each patient was assigned to have one blood sample taken from each of three different windows. Windows for blood sampling times were also provided for the optimal design. Ninety-six patients currently receiving enoxaparin therapy were recruited into the study. Patients were randomly assigned to either the optimal or the empirical sampling design, stratified by body mass index. The exact times of blood samples and doses were recorded. Analysis was undertaken using NONMEM (version 5). The empirical design supported a one-compartment linear model with additive residual error, while the optimal design supported a two-compartment linear model with additive residual error, as did the model derived from the full data set. A posterior predictive check was performed in which the models arising from the empirical and optimal designs were used to predict into the full data set. This revealed that the model derived from the optimal design was superior to the empirical-design model in terms of precision, and was similar to the model developed from the full dataset. This study suggests that optimal design techniques may be useful, even when the optimized design was based on a model that was misspecified in terms of the structural and statistical models, and even when the implementation of the optimally designed study deviated from the nominal design.
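For orientation, a one-compartment disposition model with lognormal between-subject variability and proportional residual error, in generic population-pharmacokinetic notation (our notation, ignoring absorption for simplicity; not necessarily the parameterization used in the study), can be written as

```latex
C_{ij} = \frac{D_i}{V_i}\, e^{-(CL_i/V_i)\, t_{ij}} \left(1 + \varepsilon_{ij}\right),
\qquad
CL_i = CL_{\mathrm{pop}}\, e^{\eta^{CL}_{i}},
\quad
V_i = V_{\mathrm{pop}}\, e^{\eta^{V}_{i}},
```

where C_{ij} is the j-th concentration of subject i, the \eta's are normally distributed between-subject random effects and \varepsilon_{ij} is the proportional residual error term.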
Abstract:
We present a brief historical note on the evolution of line transect sampling and its theoretical developments. We describe line transect sampling theory as proposed by Buckland (1992) and present the most relevant issues concerning the modelling of the detection function. We describe the CDM principle (Rissanen, 1978) and its application to histogram density estimation (Kontkanen and Myllymäki, 2006), with a practical example using a mixture of densities. We then apply it to estimate the probability of detection and the animal population density in the context of line transect sampling. Two classical examples from the distance sampling literature are analyzed and the results compared. In order to evaluate the proposed methodology, we carry out a simulation study based on the wooden stakes example, using the half-normal, hazard-rate, exponential and uniform-with-cosine detection functions. The results were obtained using the program DISTANCE (Thomas et al., in press) and an algorithm written in the C language, kindly provided by Professor Petri Kontkanen (Department of Computer Science, University of Helsinki). We also developed programs to compute confidence intervals using the bootstrap technique (Efron, 1978). Finally, the results are presented and discussed, with suggestions for future developments.
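For context, the standard line-transect density estimator into which a fitted detection function enters is, in conventional distance-sampling notation (the generic estimator, not the CDM-based variant developed in the text):

```latex
\hat{D} = \frac{n}{2 L \hat{\mu}},
\qquad
\hat{\mu} = \int_{0}^{w} \hat{g}(x)\,\mathrm{d}x,
\qquad
\hat{g}_{\mathrm{HN}}(x) = \exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right),
```

where n is the number of detections, L the total transect length, w the truncation distance, and \hat{g} the fitted detection function (half-normal shown as an example).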
Abstract:
We address the problem of computing the level-crossings of an analog signal from samples measured on a uniform grid. Such a problem is important, for example, in multilevel analog-to-digital (A/D) converters. The first operation in such sampling modalities is a comparator, which gives rise to a bilevel waveform. Since bilevel signals are not bandlimited, measuring the level-crossing times exactly becomes impractical within the conventional framework of Shannon sampling. In this paper, we propose a novel sub-Nyquist sampling technique for making measurements on a uniform grid and thereby exactly computing the level-crossing times from those samples. The computational complexity of the technique is low and comprises simple arithmetic operations. We also present a finite-rate-of-innovation sampling perspective on the proposed approach and show how exponential splines fit naturally into the proposed sampling framework. Finally, we discuss some concrete practical applications of the sampling technique.
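A minimal stand-in for the task described above, assuming plain linear interpolation between uniform samples (the paper's exact sub-Nyquist / finite-rate-of-innovation machinery is not reproduced here):

```python
import numpy as np

def level_crossings(t, x, level):
    """Estimate level-crossing times of a sampled signal by linear
    interpolation between consecutive uniform samples.  This is only a
    first-order approximation, not an exact recovery scheme."""
    s = x - level
    idx = np.flatnonzero(s[:-1] * s[1:] < 0)         # sign change between samples
    frac = s[idx] / (s[idx] - s[idx + 1])            # interpolation fraction in [0, 1)
    return t[idx] + frac * (t[idx + 1] - t[idx])

fs = 100.0                                           # sampling rate [Hz]
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 3 * t)                        # 3 Hz test signal

print(level_crossings(t, x, level=0.5))              # estimated 0.5-crossing times
```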
Abstract:
The density and distribution of spatial samples heavily affect the precision and reliability of estimated population attributes. An optimization method based on Mean of Surface with Nonhomogeneity (MSN) theory has been developed into a computer package with the purpose of improving the accuracy of global estimates of spatial properties from a spatial sample distributed over a heterogeneous surface; conversely, for a given estimation variance, the program can report both the optimal number of sample units needed and their appropriate distribution within a specified research area.