935 results for Shannon Sampling Theorem
Abstract:
In this paper we introduce a type of Hypercomplex Fourier Series based on Quaternions, and discuss a Hypercomplex version of the Square of the Error Theorem. Since their discovery by Hamilton (Sinegre [1]), quaternions have provided beautiful insights both into the structure of different areas of Mathematics and into the connections of Mathematics with other fields. For instance: I) the Pauli spin matrices used in Physics can be easily explained through quaternion analysis (Lan [2]); II) the Fundamental Theorem of Algebra (Eilenberg [3]), whose quaternionic version asserts that a polynomial of degree n maps into itself the four-dimensional sphere of all real quaternions, with the point at infinity added, and that the degree of this map is n. Motivated by earlier work by two of us on Power Series (Pendeza et al. [4]), and by a recent paper on Liouville's Theorem (Borges and Marão [5]), we obtain a Hypercomplex version of the Fourier Series, which hopefully can be used for the treatment of hypergeometric partial differential equations such as the damped harmonic oscillation.
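Because quaternion multiplication is non-commutative, a quaternionic Fourier series must fix on which side the coefficients multiply the basis functions. A minimal sketch of the Hamilton product, with an assumed tuple convention (a, b, c, d) for a + bi + cj + dk (the function name is ours, not the paper's):

```python
def qmul(p, q):
    """Hamilton product of two quaternions, each given as a tuple
    (a, b, c, d) representing a + b*i + c*j + d*k."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real part
            a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i component
            a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j component
            a1*d2 + b1*c2 - c1*b2 + d1*a2)   # k component
```

For example, qmul((0, 1, 0, 0), (0, 0, 1, 0)) gives k while reversing the arguments gives -k, which is exactly the ij = k = -ji rule that makes the placement of series coefficients matter.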
Abstract:
The focus of this paper is to address some classical results for a class of hypercomplex numbers. More specifically we present an extension of the Square of the Error Theorem and a Bessel inequality for octonions.
Abstract:
The steady-state average run length is used to measure the performance of the recently proposed synthetic double sampling X̄ chart (synthetic DS X̄ chart). The overall performance of the DS X̄ chart in signaling process mean shifts of different magnitudes does not improve when it is integrated with the conforming run length chart, except when the integrated charts are designed to offer very high protection against false alarms and the use of large samples is prohibitive. The synthetic chart signals when a second point falls beyond the control limits, regardless of whether one point falls above the centerline and the other below it; with the side-sensitive feature, the synthetic chart does not signal when the two points fall on opposite sides of the centerline. We also investigated the steady-state average run length of the side-sensitive synthetic DS X̄ chart. With the side-sensitive feature, the overall performance of the synthetic DS X̄ chart improves, but not enough to outperform the non-synthetic DS X̄ chart. Copyright (C) 2014 John Wiley & Sons, Ltd.
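The signaling rule described in the abstract above can be sketched in code. This is our illustrative reading, not the authors' implementation: the function name, parameter names, and the convention that the very first nonconforming point alone never signals are assumptions.

```python
def synthetic_signal(points, lcl, ucl, L, side_sensitive=False):
    """Return the index at which a synthetic-type chart signals, or None.

    points: sequence of plotted statistics (e.g. sample means).
    lcl, ucl: control limits; a point outside them is 'nonconforming'.
    L: the chart signals only if a second nonconforming point occurs
       within L samples of the previous nonconforming one.
    side_sensitive: if True, two nonconforming points on opposite
       sides of the centerline do not trigger a signal.
    """
    last_nc = None  # (index, side) of the last nonconforming point
    for i, x in enumerate(points):
        if x < lcl or x > ucl:
            side = 'low' if x < lcl else 'high'
            if last_nc is not None:
                j, prev_side = last_nc
                within = (i - j) <= L
                same_side = (side == prev_side)
                if within and (same_side or not side_sensitive):
                    return i
            last_nc = (i, side)
    return None
```

With side_sensitive=True the rule matches the side-sensitive variant discussed above: two out-of-limits points on opposite sides of the centerline do not produce a signal.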
Abstract:
The new result presented here is a theorem involving series in the three-parameter Mittag-Leffler function. As a by-product, we recover some known results and discuss corollaries. As an application, we obtain the solution of a fractional differential equation associated with an RLC electrical circuit in closed form, in terms of the two-parameter Mittag-Leffler function.
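The three-parameter (Prabhakar) Mittag-Leffler function that these series results involve can be evaluated numerically by truncating its defining power series, E_{α,β}^{γ}(z) = Σ_{k≥0} (γ)_k z^k / (k! Γ(αk+β)), where (γ)_k is the Pochhammer rising factorial. This sketch uses an assumed function name and a fixed truncation length, and is not the paper's method:

```python
from math import gamma

def ml3(z, alpha, beta, gam, terms=60):
    """Three-parameter (Prabhakar) Mittag-Leffler function
    E_{alpha,beta}^{gam}(z), via its truncated power series:
        sum_{k>=0} (gam)_k * z**k / (k! * Gamma(alpha*k + beta)).
    """
    total = 0.0
    poch = 1.0  # Pochhammer (gam)_0 = 1
    fact = 1.0  # 0! = 1
    zk = 1.0    # z**0
    for k in range(terms):
        total += poch * zk / (fact * gamma(alpha * k + beta))
        poch *= gam + k   # (gam)_{k+1} = (gam)_k * (gam + k)
        fact *= k + 1     # (k+1)!
        zk *= z           # z**(k+1)
    return total
```

Setting gam = 1 recovers the two-parameter function mentioned in the abstract; for instance E_{1,1}(z) reduces to exp(z) and E_{2,1}(-z) to cos(sqrt(z)).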
Abstract:
Killer whale (Orcinus orca Linnaeus, 1758) abundance in the North Pacific is known only for a few populations for which extensive longitudinal data are available, with little quantitative data from more remote regions. Line-transect ship surveys were conducted in July and August of 2001–2003 in coastal waters of the western Gulf of Alaska and the Aleutian Islands. Conventional and Multiple Covariate Distance Sampling methods were used to estimate the abundance of different killer whale ecotypes, which were distinguished based upon morphological and genetic data. Abundance was calculated separately for two data sets that differed in the method by which killer whale group size data were obtained. Initial group size (IGS) data corresponded to estimates of group size at the time of first sighting, and post-encounter group size (PEGS) corresponded to estimates made after closely approaching sighted groups.
Abstract:
Classical sampling methods can be used to estimate the mean of a finite or infinite population. Block kriging also estimates the mean, but of an infinite population in a continuous spatial domain. In this paper, I consider a finite population version of block kriging (FPBK) for plot-based sampling. The data are assumed to come from a spatial stochastic process. Minimizing mean-squared-prediction errors yields best linear unbiased predictions that are a finite population version of block kriging. FPBK has versions comparable to simple random sampling and stratified sampling, and includes the general linear model. This method has been tested for several years for moose surveys in Alaska, and an example is given where results are compared to stratified random sampling. In general, assuming a spatial model gives three main advantages over classical sampling: (1) FPBK is usually more precise than simple or stratified random sampling, (2) FPBK allows small area estimation, and (3) FPBK allows nonrandom sampling designs.
Abstract:
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modeling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modeling analysis engine for spatial and habitat-modeling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. 
In step with theoretical developments, state-of-the-art software that implements these methods is described, making them accessible to practicing ecologists.
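As a toy illustration of the first analysis step described above (modeling detection probability as a function of distance from the transect), the following sketch computes a line-transect density estimate under an untruncated half-normal detection function g(x) = exp(-x²/(2σ²)). The function name is ours, and a real analysis in Distance would add truncation, model selection, and variance estimation:

```python
from math import sqrt, pi

def halfnormal_density(distances, effort):
    """Conventional line-transect density estimate with an untruncated
    half-normal detection function g(x) = exp(-x^2 / (2*sigma^2)).

    distances: perpendicular detection distances (same length unit
               as `effort`).
    effort:    total transect length surveyed.
    Returns (sigma_hat, density_hat).
    """
    n = len(distances)
    sigma2 = sum(x * x for x in distances) / n  # MLE of sigma^2
    sigma = sqrt(sigma2)
    mu = sigma * sqrt(pi / 2)  # effective strip half-width
    # Objects are counted on both sides of the line, hence 2*mu*effort
    # is the effectively surveyed area.
    return sigma, n / (2.0 * mu * effort)
```

Here the effective strip half-width μ = σ√(π/2) plays the role of the detection-probability correction: the density is the count divided by the effectively surveyed area 2μL rather than the full strip area.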