870 results for one step estimation


Relevance:

30.00%

Publisher:

Abstract:

Adapting the power of secondary users (SUs) while adhering to constraints on the interference caused to primary receivers (PRxs) is a critical issue in underlay cognitive radio (CR). This adaptation is driven by the interference and transmit power constraints imposed on the secondary transmitter (STx). Its performance also depends on the quality of channel state information (CSI) available at the STx of the links from the STx to the secondary receiver and to the PRxs. For a system in which an STx is subject to an average interference constraint or an interference outage probability constraint at each of the PRxs, we derive novel symbol error probability (SEP)-optimal, practically motivated binary transmit power control policies. As a reference, we also present the corresponding SEP-optimal continuous transmit power control policies for one PRx. We then analyze the robustness of the optimal policies when the STx knows noisy channel estimates of the links between the SU and the PRxs. Altogether, our work develops a holistic understanding of the critical role played by different transmit and interference constraints in driving power control in underlay CR and the impact of CSI on its performance.
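
To make the role of the constraints concrete, here is a minimal sketch in Python of one plausible binary power control rule. The threshold structure, function names and numbers are illustrative assumptions, not the SEP-optimal policies derived in the paper.

    import numpy as np

    # Hypothetical binary rule: use the high power level only when the
    # estimated STx-PRx channel gain keeps the interference at the PRx
    # within an assumed average budget; otherwise back off to a low level.
    def binary_power(g_sp, p_low, p_high, i_budget):
        return p_high if g_sp * p_high <= i_budget else p_low

    rng = np.random.default_rng(0)
    g = rng.exponential(1.0, 100_000)  # Rayleigh-fading power gains STx -> PRx
    p = np.array([binary_power(gi, 0.1, 1.0, i_budget=0.5) for gi in g])
    print("empirical average interference at the PRx:", np.mean(g * p))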

Relevance:

30.00%

Publisher:

Abstract:

The electromagnetic articulography (EMA) technique is used to record the kinematics of different articulators while one speaks. EMA data often contain missing segments due to sensor failure. In this work, we propose a maximum a-posteriori (MAP) estimation with a continuity constraint to recover the missing samples in articulatory trajectories recorded using EMA. In this approach, we combine the benefits of statistical MAP estimation with the temporal continuity of the articulatory trajectories. Experiments on an articulatory corpus using different missing segment durations show that the proposed continuity constraint yields a 30% reduction in average root mean squared error relative to statistical estimation of missing segments without any continuity constraint.
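
A rough illustration of combining a statistical estimate with a continuity constraint: the sketch below imputes a gap in a 1-D trajectory by penalized least squares. The Gaussian prior and the second-difference penalty are assumptions standing in for the paper's MAP formulation.

    import numpy as np

    def impute_map(y, missing, mu, var, lam=10.0):
        """MAP-style imputation: a Gaussian prior on missing samples plus a
        second-difference smoothness (continuity) penalty; observed samples
        are pinned with a large weight."""
        n = len(y)
        D = np.diff(np.eye(n), n=2, axis=0)      # second-difference operator
        A = np.zeros((n, n))
        b = np.zeros(n)
        for i in range(n):
            if missing[i]:
                A[i, i] = 1.0 / var              # statistical (prior) term
                b[i] = mu / var
            else:
                A[i, i] = 1e6                    # pin observed samples
                b[i] = 1e6 * y[i]
        A += lam * (D.T @ D)                     # continuity term
        return np.linalg.solve(A, b)

    t = np.linspace(0.0, 1.0, 100)
    x = np.sin(2 * np.pi * 2 * t)                # synthetic articulatory trace
    miss = (t > 0.4) & (t < 0.6)                 # simulated sensor dropout
    x_hat = impute_map(np.where(miss, 0.0, x), miss,
                       mu=x[~miss].mean(), var=x[~miss].var())
    print("RMSE over the gap:", np.sqrt(np.mean((x_hat[miss] - x[miss]) ** 2)))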

Relevance:

30.00%

Publisher:

Abstract:

Materials with widely varying molecular topologies that exhibit liquid crystalline properties have attracted considerable attention in recent years. C-13 NMR spectroscopy is a convenient method for studying such novel systems. In this approach, assignment of the spectrum is the first step, and it is a non-trivial problem. Towards this end, we propose a method that enables the carbon skeleton of the different sub-units of the molecule to be traced unambiguously. The proposed method uses a heteronuclear correlation experiment to detect pairs of nearby carbons with attached protons in the liquid crystalline core, through correlation of the carbon chemical shifts to the double-quantum coherences of protons generated through the dipolar coupling between them. Supplemented by experiments that identify non-protonated carbons, the method leads to a complete assignment of the spectrum. We first apply the method to assign the C-13 spectrum of the liquid crystal 4-n-pentyl-4'-cyanobiphenyl oriented in the magnetic field. We then use it to assign the aromatic carbon signals of a thiophene-based liquid crystal, thereby enabling the local order parameters of the molecule to be estimated and the mutual orientation of the different sub-units to be obtained.

Relevance:

30.00%

Publisher:

Abstract:

The two-step particle synthesis mechanism, also known as the Finke-Watzky (1997) mechanism, has emerged as a significant development in the field of nanoparticle synthesis. It explains a characteristic feature of the synthesis of transition metal nanoparticles: an induction period in precursor concentration followed by its rapid sigmoidal decrease. The classical LaMer theory (1950) of particle formation fails to capture this behavior. The two-step mechanism considers slow continuous nucleation and autocatalytic growth of particles directly from precursor as its two kinetic steps. In the present work, we test the two-step mechanism rigorously using population balance models. We find that it explains precursor consumption very well but fails to explain particle synthesis: the effect of continued nucleation is not suppressed sufficiently by the rapid autocatalytic growth of particles, and nucleation continues to broaden the size distributions to unexpectedly large values compared with those observed experimentally. A number of variations of the original mechanism with additional reaction steps are investigated next. The simulations show that continued nucleation from the beginning of the synthesis leads to the formation of highly polydisperse particles in all of the tested cases. A short nucleation window, realized in one of the variations through a delayed onset of nucleation and its suppression soon after, appears to be one way to explain all of the known experimental observations. The present investigations clearly establish the need to revisit the two-step particle synthesis mechanism.
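
The two-step kinetics are easy to reproduce numerically. The sketch below integrates the Finke-Watzky rate equations for precursor A and particle surface B with illustrative rate constants, showing the induction period followed by the sigmoidal drop in precursor concentration.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Finke-Watzky two-step mechanism:
    #   A -> B        slow continuous nucleation, rate constant k1
    #   A + B -> 2B   autocatalytic growth, rate constant k2
    def fw(t, y, k1, k2):
        A, B = y
        r = k1 * A + k2 * A * B
        return [-r, r]

    sol = solve_ivp(fw, (0.0, 15.0), [1.0, 0.0], args=(1e-3, 1.0),
                    t_eval=np.linspace(0.0, 15.0, 7))
    for t, A in zip(sol.t, sol.y[0]):
        print(f"t = {t:5.1f}   [A] = {A:.3f}")  # induction, then sigmoidal decay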

Relevance:

30.00%

Publisher:

Abstract:

In a complete bipartite graph with vertex sets of cardinalities n and n', assign random weights, drawn independently for each edge from an exponential distribution with mean 1. We show that, as n -> infinity, with n' = ⌈n/alpha⌉ for any fixed alpha > 1, the minimum weight of many-to-one matchings converges to a constant (depending on alpha). Many-to-one matching arises as an optimization step in an algorithm for genome sequencing and as a measure of distance between finite sets. We prove that a belief propagation (BP) algorithm converges asymptotically to the optimal solution. We use the objective method of Aldous to prove our results, building on previous work on the minimum weight matching and minimum weight edge cover problems to extend the objective method and further the applicability of belief propagation to random combinatorial optimization problems.
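
For intuition about the random model (not the paper's BP algorithm), the sketch below builds the weighted bipartite graph and computes a many-to-one matching by the common device of replicating each right vertex alpha times and solving the resulting assignment problem. Restricting alpha to an integer and forcing balanced capacities are simplifications of the problem studied in the paper.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(1)
    alpha, n = 2, 200                       # integer alpha for simplicity
    n2 = -(-n // alpha)                     # n' = ceil(n / alpha)
    w = rng.exponential(1.0, size=(n, n2))  # i.i.d. Exp(1) edge weights

    # Replicate each right vertex alpha times so total capacity equals n,
    # then solve the square assignment problem (Hungarian method).
    w_rep = np.repeat(w, alpha, axis=1)
    rows, cols = linear_sum_assignment(w_rep)
    print("minimum matching weight per left vertex:", w_rep[rows, cols].sum() / n)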

Relevance:

30.00%

Publisher:

Abstract:

Probable maximum precipitation (PMP) is a theoretical concept that is widely used by hydrologists to arrive at estimates of probable maximum flood (PMF), which find use in planning, design and risk assessment of high-hazard hydrological structures such as flood control dams upstream of populated areas. The PMP represents the greatest depth of precipitation for a given duration that is meteorologically possible for a watershed or an area at a particular time of year, with no allowance made for long-term climatic trends. Various methods are in use for estimating PMP over a target location for different durations. The moisture maximization method and the Hershfield method are two widely used approaches. The former maximizes observed storms by assuming that atmospheric moisture can rise to a very high value estimated from the maximum daily dew point temperature; the latter is a statistical method based on a general frequency equation given by Chow. The present study provides one-day PMP estimates and PMP maps for the Mahanadi river basin based on these methods. There is a need for such estimates and maps, as the river basin is prone to frequent floods. The utility of the constructed PMP maps in computing PMP for various catchments in the river basin is demonstrated. The PMP estimates can eventually be used to arrive at PMF estimates for those catchments.
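
For reference, the Hershfield estimate in its simplest form follows Chow's frequency equation X_T = mean + K * standard deviation. The sketch below applies it to a synthetic annual-maximum series, omitting Hershfield's adjustment factors for record length and outliers.

    import numpy as np

    rng = np.random.default_rng(7)
    # synthetic one-day annual maximum rainfall series (mm), 50 years
    annual_max_mm = rng.gumbel(loc=100.0, scale=30.0, size=50)

    K_m = 15.0  # Hershfield's enveloping frequency factor
    pmp = annual_max_mm.mean() + K_m * annual_max_mm.std(ddof=1)
    print(f"one-day PMP estimate: {pmp:.0f} mm")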

Relevance:

30.00%

Publisher:

Abstract:

A solid supporting unsteady heat flow, in contact with ambient humidity, invokes a phase transformation of water-vapour molecules and synthesizes a 'moving optical-mark' at the sample-ambient interface. Under tailored conditions, the optical-mark exhibits a characteristic macro-scale translatory motion governed by the thermal diffusivity of the solid. For various step-temperature inputs applied via cooling, position-dependent velocities of the moving optical-mark are measured at a fixed distance. A new approach is proposed: the 'product of the velocity of the optical-mark and the distance' is plotted against a 'non-dimensional velocity', and the slope reveals the thermal diffusivity of the solid at ambient temperature. Preliminary results obtained for quartz glass closely match literature values.
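
A plausible reconstruction of the relation behind the 'product of velocity and distance', assuming the standard semi-infinite erfc solution for step-temperature excitation (the similarity coordinate η and the notation are assumptions, not the paper's):

    T(x,t) = T_s + (T_0 - T_s)\,\mathrm{erfc}\!\left(\frac{x}{2\sqrt{\alpha t}}\right)

    \frac{x}{2\sqrt{\alpha t}} = \eta \ (\text{isotherm}) \;\Rightarrow\;
    x(t) = 2\eta\sqrt{\alpha t}, \qquad
    v(t) = \frac{dx}{dt} = \eta\sqrt{\frac{\alpha}{t}}, \qquad
    v\,x = 2\eta^{2}\alpha

Under these assumptions the product v·x is constant along an isotherm and directly proportional to the thermal diffusivity α, consistent with reading α off the slope of a plot built from v·x.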

Relevance:

30.00%

Publisher:

Abstract:

A new approach is proposed to estimate the thermal diffusivity of optically transparent solids at ambient temperature based on the velocity of an effective temperature point (ETP); the concept is corroborated using a two-beam interferometer. 1D unsteady heat flow via step-temperature excitation is interpreted as a 'micro-scale rectilinear translatory motion' of an ETP. The velocity-dependent function is extracted by revisiting the Fourier heat diffusion equation, and the relationship between the velocity of the ETP and the thermal diffusivity is modeled using a standard solution. Under optimized thermal excitation, the product of the velocity of the ETP and the distance yields a new constitutive equation for the thermal diffusivity of the solid. The experimental approach involves establishing a 1D unsteady heat flow inside the sample through step-temperature excitation. Within the moving isothermal surfaces, the ETP is identified using a two-beam interferometer. The arrival time of the ETP at a fixed distance from the heat source is measured, and its velocity is calculated. The velocity of the ETP and a given distance are sufficient to estimate the thermal diffusivity of a solid. The proposed method is experimentally verified for BK7 glass samples, and the measured results closely match the reported value.
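
A numeric sketch of the same constitutive relation, v·x = 2η²α, where η is the assumed similarity coordinate of the ETP; all values below are illustrative, not the paper's measurements.

    import numpy as np

    alpha_true = 5.5e-7   # m^2/s, roughly the thermal diffusivity of BK7 glass
    eta = 0.5             # assumed similarity coordinate of the ETP
    x = 5e-3              # m, fixed distance from the heated face

    t_arrival = x**2 / (4 * eta**2 * alpha_true)  # from x = 2*eta*sqrt(alpha*t)
    v = eta * np.sqrt(alpha_true / t_arrival)     # ETP velocity at arrival
    alpha_est = v * x / (2 * eta**2)              # constitutive relation
    print(f"recovered thermal diffusivity: {alpha_est:.2e} m^2/s")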

Relevance:

30.00%

Publisher:

Abstract:

Acoustic-feature-based speech (syllable) rate estimation and syllable nuclei detection are important problems in automatic speech recognition (ASR), computer assisted language learning (CALL) and fluency analysis. A typical solution to both problems consists of two stages. The first stage computes a short-time feature contour such that most of the peaks of the contour correspond to syllabic nuclei. In the second stage, the peaks corresponding to the syllable nuclei are detected. In this work, instead of peak detection, we perform mode-shape classification, formulated as a supervised binary classification problem: mode-shapes representing syllabic nuclei form one class and the remaining mode-shapes the other. We use the temporal correlation and selected sub-band correlation (TCSSBC) feature contour, and the mode-shapes in the TCSSBC feature contour are converted into a set of feature vectors using an interpolation technique. A support vector machine classifier is used for the classification. Experiments are performed separately on the Switchboard, TIMIT and CTIMIT corpora in a five-fold cross validation setup. The average correlation coefficients for syllable rate estimation turn out to be 0.6761, 0.6928 and 0.3604 for the three corpora, respectively, outperforming the best of the existing peak detection techniques. Similarly, the average F-scores (syllable level) for syllable nuclei detection are 0.8917, 0.8200 and 0.7637 for the three corpora, respectively.
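
The mode-shape classification step can be sketched as follows: variable-length contour segments are resampled to a fixed length by interpolation and fed to an SVM. The synthetic "mode-shapes" below merely stand in for segments of the TCSSBC contour.

    import numpy as np
    from scipy.interpolate import interp1d
    from sklearn.svm import SVC

    def fixed_length(segment, k=20):
        """Resample a variable-length mode-shape to k points."""
        t = np.linspace(0.0, 1.0, len(segment))
        return interp1d(t, segment)(np.linspace(0.0, 1.0, k))

    rng = np.random.default_rng(0)
    X, y = [], []
    for _ in range(400):
        n = rng.integers(10, 40)
        nucleus = rng.random() < 0.5
        seg = np.sin(np.linspace(0.0, np.pi, n)) if nucleus else 0.2 * rng.random(n)
        X.append(fixed_length(seg + 0.05 * rng.standard_normal(n)))
        y.append(int(nucleus))            # 1 = syllabic nucleus, 0 = other

    clf = SVC(kernel="rbf").fit(X[:300], y[:300])
    print("held-out accuracy:", clf.score(X[300:], y[300:]))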

Relevance:

30.00%

Publisher:

Abstract:

Sequential Monte Carlo (SMC) methods are popular computational tools for Bayesian inference in non-linear non-Gaussian state-space models. For this class of models, we propose SMC algorithms to compute the score vector and observed information matrix recursively in time. We propose two different SMC implementations, one with computational complexity O(N) and the other with complexity O(N²), where N is the number of importance sampling draws. Although cheaper, the performance of the O(N) method degrades quickly in time, as it inherently relies on the SMC approximation of a sequence of probability distributions whose dimension increases linearly with time. In particular, even under strong mixing assumptions, the variance of the estimates computed with the O(N) method increases at least quadratically in time. The O(N²) method is a non-standard SMC implementation that does not suffer from this rapid degradation. We then show how both methods can be used to perform batch and recursive parameter estimation.
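
A toy illustration of the O(N) path-space approach for a scalar Gaussian model (an assumed setup, not the paper's algorithms): each particle carries its accumulated score contribution via Fisher's identity, and those contributions are copied at every resampling step, which is exactly the source of the degeneracy described above.

    import numpy as np

    rng = np.random.default_rng(3)
    phi, T, N = 0.8, 200, 1000     # x_t = phi x_{t-1} + v_t,  y_t = x_t + w_t

    x = np.zeros(T)
    for t in range(1, T):          # simulate data from the model
        x[t] = phi * x[t - 1] + rng.standard_normal()
    y = x + rng.standard_normal(T)

    xp = rng.standard_normal(N)    # bootstrap particles
    grad = np.zeros(N)             # per-particle score contributions
    for t in range(1, T):
        xprev = xp
        xp = phi * xprev + rng.standard_normal(N)
        grad = grad + (xp - phi * xprev) * xprev  # d/dphi log f(x_t | x_{t-1})
        logw = -0.5 * (y[t] - xp) ** 2            # Gaussian observation density
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)          # multinomial resampling
        xp, grad = xp[idx], grad[idx]             # gradients inherit ancestry

    print("O(N) score estimate at phi = 0.8:", grad.mean())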

Relevance:

30.00%

Publisher:

Abstract:

Providing online travel time information to commuters has become an important issue for Advanced Traveler Information Systems and Route Guidance Systems in recent years, due to increasing traffic volume and congestion in road networks. Travel time is one of the most useful traffic variables because it is more intuitive than variables such as flow, occupancy or density, and it supports travelers' decision making. The aim of this paper is to present a global view of the literature on the modeling of travel time, introducing crucial concepts and giving a thorough classification of the existing techniques. Most of the attention focuses on travel time estimation and travel time prediction, which are generally not presented together. The main goals of these models, the study areas and the methodologies used to carry out these tasks are further explored and categorized.

Relevance:

30.00%

Publisher:

Abstract:

Random field theory has been used to model spatially averaged soil properties, while geostatistics, which rests on the same basis (the covariance function), has been used successfully to model and estimate natural resources since the 1960s. Geostatistics should therefore, in principle, be an efficient way to model soil spatial variability. On this basis, the paper presents an alternative approach that uses geostatistics to estimate the scale of fluctuation, or correlation distance, of a soil stratum. The procedure comprises four steps: calculating the experimental variogram from measured data; selecting a suitable theoretical variogram model; fitting the theoretical variogram to the experimental one; and substituting the parameters obtained from the optimization into a simple, closed-form relationship between the correlation distance δ and the range a. The paper also gives eight typical expressions relating a and δ. Finally, a practical example is presented to demonstrate the methodology.
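
A minimal sketch of the four-step procedure on synthetic data, assuming an exponential variogram model and taking δ = 2a (Vanmarcke's relation for that model) as one of the 'typical expressions':

    import numpy as np
    from scipy.optimize import curve_fit

    def exp_variogram(h, sill, a):
        return sill * (1.0 - np.exp(-h / a))

    # Step 1: experimental variogram from "measured" data (synthetic AR(1) profile).
    rng = np.random.default_rng(5)
    z = np.zeros(500)
    for i in range(1, 500):
        z[i] = 0.9 * z[i - 1] + rng.standard_normal()
    lags = np.arange(1, 40)
    gamma = np.array([0.5 * np.mean((z[l:] - z[:-l]) ** 2) for l in lags])

    # Steps 2-3: select a theoretical model and fit it to the experimental variogram.
    (sill, a), _ = curve_fit(exp_variogram, lags, gamma, p0=[gamma.max(), 5.0])

    # Step 4: convert the fitted range a into the scale of fluctuation delta.
    print(f"range a = {a:.1f} lags, scale of fluctuation delta = {2 * a:.1f} lags")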

Relevance:

30.00%

Publisher:

Abstract:

Last year, Jisc began work with EDUCAUSE - the US organisation for IT professionals in higher education - to find out what skillset the CIO of the future will need. One of the findings of our project was that many aspiring technology leaders find it difficult to make the step up. Louisa Dale, director of Jisc group sector intelligence, talks through the learnings and opens a call for IT professionals to get involved in the next phase of the work.

Relevance:

30.00%

Publisher:

Abstract:

Age composition of catch, and growth rate, of yellowfin tuna have been estimated by Hennemuth (1961a) and Davidoff (1963). The relative abundance and instantaneous total mortality rate of yellowfin tuna during 1954-1959 have been estimated by Hennemuth (1961b). It is now possible to extend this work because more data are available; these include data for 1951-1954, which were previously not available, and data for 1960-1962, which were collected subsequent to Hennemuth's (1961b) publication. In that publication, Hennemuth estimated the total instantaneous mortality rate (Z) during the entire period a year class is present in the fishery following full recruitment. However, this method may lead to biased estimates of abundance, and hence of mortality rates, because of both seasonal migrations into or out of specific fishing areas and possible seasonal differences in the availability or vulnerability of the fish to the fishing gear. Schaefer, Chatwin and Broadhead (1961) and Joseph et al. (1964) have indicated that seasonal migrations of yellowfin occur. A method of estimating mortality rates that is not biased by seasonal movements would be of value in computations of population dynamics. The method of analysis outlined and used in the present paper may obviate this bias by comparing the abundance of an individual yellowfin year class, following its period of maximum abundance, in an individual area during a specific quarter of the year with its abundance in the same area one year later. The method was suggested by Gulland (1955) and used by Chapman, Holt and Allen (1963) in assessing Antarctic whale stocks. This method, and the results of its use with data for yellowfin caught in the eastern tropical Pacific from 1951-1962, are described in this paper.
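
The year-apart comparison reduces to a one-line calculation. The sketch below uses hypothetical CPUE values for a single year class; comparing the same area and quarter one year later lets seasonal availability cancel out.

    import numpy as np

    # Gulland-style estimate: Z = ln(U_t / U_{t+1 yr}) for a year class past
    # its peak abundance, same area and quarter one year apart.
    cpue_1957_q3 = 42.0   # hypothetical year-class CPUE, one area, 3rd quarter
    cpue_1958_q3 = 17.5   # same year class, area and quarter, one year later

    Z = np.log(cpue_1957_q3 / cpue_1958_q3)
    print(f"total instantaneous mortality Z = {Z:.2f} per year, "
          f"annual survival = {np.exp(-Z):.2f}")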