870 results for "one step estimation"


Relevance:

30.00%

Publisher:

Abstract:

A learning-based framework is proposed for estimating human body pose from a single image. Given a differentiable function that maps from pose space to image feature space, the goal is to invert the process: estimate the pose given only image features. The inversion is an ill-posed problem as the inverse mapping is a one-to-many process. Hence multiple solutions exist, and it is desirable to restrict the solution space to a smaller subset of feasible solutions. For example, not all human body poses are feasible due to anthropometric constraints. Since the space of feasible solutions may not admit a closed form description, the proposed framework seeks to exploit machine learning techniques to learn an approximation that is smoothly parameterized over such a space. One such technique is Gaussian Process Latent Variable Modelling. Scaled conjugate gradient is then used to find the best matching pose in the space of feasible solutions when given an input image. The formulation allows easy incorporation of various constraints, e.g. temporal consistency and anthropometric constraints. The performance of the proposed approach is evaluated in the task of upper-body pose estimation from silhouettes and compared with the Specialized Mapping Architecture. The estimation accuracy of the Specialized Mapping Architecture is at least one standard deviation worse than that of the proposed approach in the experiments with synthetic data. In experiments with real video of humans performing gestures, the proposed approach produces qualitatively better estimation results.
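
The final optimisation step can be illustrated with a minimal sketch (hypothetical names throughout; scipy's conjugate-gradient routine stands in for scaled conjugate gradient): given a learned smooth mapping from a low-dimensional latent pose space to image-feature space, the pose is recovered by searching the latent space for the point whose predicted features best match the observed features, with a prior term keeping the search within the feasible region.

```python
# Minimal sketch of inverting a learned latent-to-feature mapping.
# `feature_map` and `log_prior` are hypothetical stand-ins for a trained
# GPLVM's mean prediction and its latent-space prior; scipy's 'CG' solver
# stands in for scaled conjugate gradient.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 3))               # toy "learned" parameters

def feature_map(z):
    # Placeholder smooth mapping: latent pose z -> image feature vector.
    return np.tanh(W @ z)

def log_prior(z):
    # Penalty keeping z inside the learned manifold of feasible poses.
    return 0.5 * np.sum(z ** 2)

def objective(z, y_obs):
    resid = feature_map(z) - y_obs
    return 0.5 * resid @ resid + log_prior(z)  # data term + feasibility prior

y_obs = feature_map(rng.standard_normal(3))    # observed silhouette features
result = minimize(objective, np.zeros(3), args=(y_obs,), method="CG")
z_hat = result.x                               # best-matching feasible latent pose
```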

Relevance:

30.00%

Publisher:

Abstract:

The development of ultra-high-speed (~20 Gsamples/s) analogue to digital converters (ADCs), and the delayed deployment of 40 Gbit/s transmission due to the economic downturn, has stimulated the investigation of digital signal processing (DSP) techniques for compensation of optical transmission impairments. In the future, DSP will offer an entire suite of tools to compensate for optical impairments and facilitate the use of advanced modulation formats. Chromatic dispersion is a very significant impairment for high speed optical transmission. This thesis investigates a novel electronic method of dispersion compensation which allows cost-effective, accurate detection of the amplitude and phase of the optical field in the radio frequency domain. The first electronic dispersion compensation (EDC) schemes accessed only the amplitude information using square-law detection and achieved an increase in transmission distances. This thesis presents a method that uses a frequency sensitive filter to estimate the phase of the received optical field; in conjunction with the amplitude information, the entire field can then be digitised using ADCs. This allows DSP technologies to take the next step in optical communications without requiring complex coherent detection. This is of particular interest in metropolitan area networks. The full-field receiver investigated requires only an additional asymmetrical Mach-Zehnder interferometer and balanced photodiode to achieve a 50% increase in EDC reach compared to amplitude-only detection.
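
As a rough illustration of the digital equalisation such a receiver enables (a sketch, not the thesis's receiver design; the sign convention and parameter values below are assumptions), chromatic dispersion can be undone by applying the inverse of the fibre's quadratic phase response to the digitised complex field in the frequency domain.

```python
# Sketch of frequency-domain chromatic dispersion compensation applied to a
# digitised complex optical field (amplitude + phase from a full-field
# receiver). Parameter values are illustrative only.
import numpy as np

def compensate_dispersion(field, fs, beta2, length):
    """Undo the fibre's all-pass quadratic phase in the frequency domain.

    field  : complex baseband samples of the optical field
    fs     : sampling rate [samples/s]
    beta2  : group-velocity dispersion [s^2/m] (sign per chosen convention)
    length : fibre length [m]
    """
    n = field.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)      # angular frequency grid
    equaliser = np.exp(1j * (beta2 / 2.0) * w ** 2 * length)
    return np.fft.ifft(np.fft.fft(field) * equaliser)

# Example: 200 km of standard fibre (illustrative numbers).
fs = 50e9                      # 50 Gsamples/s
beta2 = -21e-27                # ~ -21 ps^2/km expressed in s^2/m
rx_field = np.exp(2j * np.pi * 1e9 * np.arange(4096) / fs)  # toy field
eq_field = compensate_dispersion(rx_field, fs, beta2, 200e3)
```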

Relevance:

30.00%

Publisher:

Abstract:

With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation one often employs heuristics which are specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate into this flow academic tools and methodologies. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular we address the implications of a systematic power optimisation methodology and the potential degradation of other (often conflicting) parameters such as the area or the delay of the implementation. Finally, the third question which this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values. This means that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto the AND-Inverter Graph under zero-delay and non-zero-delay models. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We used SA to decide probabilistically between moving from one optimised solution to another, such that the dynamic power is optimised under given delay constraints and the delay is optimised under given power constraints. A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is a search algorithm for traversing a weighted tree or graph. We used UCS to search within the AIG network for a specific AIG node order in which to apply the reordering rules. After the reordering rules are applied, the AIG network is mapped to a netlist using specific library cells. Our approach combines network re-structuring, AIG node reordering, dynamic power and longest path delay estimation and optimisation, and finally technology mapping to a netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. A reduction of 23% in power and 15% in delay with minimal overhead is achieved, compared to the best known ABC results. Our approach has also been applied to a number of processors with combinational and sequential components, and significant savings are achieved.
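
The annealing loop described above can be pictured with a compact sketch; the AIG reordering moves and the ABC power/delay estimators are replaced here by toy stand-ins on a node ordering, purely to show the accept/reject logic under a delay constraint.

```python
# Generic simulated-annealing loop for delay-constrained power optimisation.
# The AIG reordering moves and the power/delay estimators of the real flow are
# replaced by toy stand-ins on a node ordering, for illustration only.
import math
import random

random.seed(1)
ACTIVITY = [random.random() for _ in range(20)]       # toy switching activities

def power_of(order):
    # Toy power model: high-activity nodes are cheaper early in the order.
    return sum(ACTIVITY[n] * pos for pos, n in enumerate(order))

def delay_of(order):
    # Toy delay model: position of node 0 stands in for the longest path.
    return order.index(0)

def random_move(order):
    i, j = random.sample(range(len(order)), 2)        # swap two nodes
    new = list(order)
    new[i], new[j] = new[j], new[i]
    return new

def anneal(order, delay_limit, t=1.0, cooling=0.95, steps_per_t=50, t_min=1e-3):
    best = current = order
    while t > t_min:
        for _ in range(steps_per_t):
            cand = random_move(current)
            if delay_of(cand) > delay_limit:          # enforce the delay constraint
                continue
            delta = power_of(cand) - power_of(current)
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = cand                        # downhill, or uphill w.p. exp(-delta/T)
                if power_of(current) < power_of(best):
                    best = current
        t *= cooling                                  # geometric cooling schedule
    return best

optimised = anneal(list(range(20)), delay_limit=5)
```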

Relevance:

30.00%

Publisher:

Abstract:

Copper dimethylamino-2-propoxide [Cu(dmap)2] is used as a precursor for low-temperature atomic layer deposition (ALD) of copper thin films. Chemisorption of the precursor is the necessary first step of ALD, but it is not known in this case whether there is selectivity for adsorption sites, defects, or islands on the substrate. Therefore, we study the adsorption of the Cu(dmap)2 molecule at different sites on flat and rough Cu surfaces using the PBE, PBE-D3, optB88-vdW, and vdW-DF2 methods. We find that the relative order of adsorption energies for Cu(dmap)2 on Cu surfaces is Eads(PBE-D3) > Eads(optB88-vdW) > Eads(vdW-DF2) > Eads(PBE). The PBE and vdW-DF2 methods predict one chemisorption structure, while optB88-vdW predicts three chemisorption structures for Cu(dmap)2 adsorption among the four possible adsorption configurations, whereas PBE-D3 predicts a chemisorbed structure at all the adsorption sites on Cu(111). All the methods, with and without van der Waals corrections, yield a chemisorbed molecule at the Cu(332) step and the Cu(643) kink because of the reduced steric hindrance on the vicinal surfaces. Strong distortion of the molecule and significant elongation of the Cu–N bonds are predicted in the chemisorbed structures, indicating that the ligand–Cu bonds break during ALD of Cu from Cu(dmap)2. The molecule loses its initial square-planar structure and gains linear O–Cu–O bonding as these atoms attach to the surface. As a result, the ligands become unstable and the precursor becomes more reactive towards the co-reagent. Charge redistribution mainly occurs between the adsorbate O–Cu–O bond and the surface. Bader charge analysis shows that electrons are donated from the surface to the molecule in the chemisorbed structures, so that the Cu center in the molecule is partially reduced.
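
For reference, the adsorption energies being compared are of the usual slab-calculation form shown below; the numbers in the example are made up, not values from the study.

```python
# Adsorption energy as commonly defined in slab calculations:
# E_ads = E(slab + molecule) - E(slab) - E(molecule)
# (with this sign convention, more negative means stronger binding; some
# studies instead quote the positive binding energy).
def adsorption_energy(e_slab_molecule, e_slab, e_molecule):
    return e_slab_molecule - e_slab - e_molecule

# Illustrative (made-up) total energies in eV for one adsorption site:
print(adsorption_energy(-512.34, -510.10, -1.20))   # -> -1.04 eV
```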

Relevance:

30.00%

Publisher:

Abstract:

An analytical model was developed to describe the in-canopy vertical distribution of ammonia (NH3) sources and sinks and the vertical fluxes in a fertilized agricultural setting, using measured in-canopy mean NH3 concentration and wind speed profiles. This model was applied to quantify in-canopy air-surface exchange rates and above-canopy NH3 fluxes in a fertilized corn (Zea mays) field. Modeled air-canopy NH3 fluxes agreed well with independent above-canopy flux estimates. Based on the model results, the urea-fertilized soil surface was a consistent source of NH3 one month following the fertilizer application, whereas the vegetation canopy was typically a net NH3 sink, with the lower portion of the canopy being a constant sink. The model results suggested that the canopy was a sink for some 70% of the estimated soil NH3 emissions. A logical conclusion is that parametrization of within-canopy processes in air quality models is necessary to explore the impact of agricultural field-level management practices on regional air quality. Moreover, there are agronomic and environmental benefits to timing liquid fertilizer applications as close to canopy closure as possible. Finally, given the large within-canopy mean NH3 concentration gradients in such agricultural settings, a discussion of the suitability of the proposed model is also presented.
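
A highly simplified flux-gradient sketch of the inverse idea (this is generic K-theory reasoning, not the paper's analytical model, and the diffusivity parameterisation is an assumption): the measured wind profile supplies an in-canopy eddy diffusivity, vertical fluxes follow from the concentration gradient, and the layer source/sink strengths follow from the flux divergence.

```python
# Toy flux-gradient (K-theory) inversion: measured NH3 concentration and wind
# profiles -> vertical flux profile -> layer source/sink strengths.
# The diffusivity parameterisation below is an illustrative assumption only.
import numpy as np

z = np.array([0.2, 0.5, 1.0, 1.5, 2.0])          # heights in canopy [m]
c = np.array([12.0, 8.0, 5.0, 3.5, 3.0])         # NH3 concentration [ug m^-3]
u = np.array([0.3, 0.6, 1.0, 1.4, 1.8])          # mean wind speed [m s^-1]

k = 0.1 * u * z                                   # toy eddy diffusivity K(z) [m^2 s^-1]
flux = -k * np.gradient(c, z)                     # F(z) = -K dC/dz  [ug m^-2 s^-1]
source = np.gradient(flux, z)                     # dF/dz > 0 -> layer acts as a source

print(flux)    # positive = upward NH3 transport
print(source)  # negative values mark layers behaving as sinks (e.g. foliage uptake)
```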

Relevance:

30.00%

Publisher:

Abstract:

Estimation of the skeleton of a directed acyclic graph (DAG) is of great importance for understanding the underlying DAG, and causal effects can be assessed from the skeleton when the DAG is not identifiable. We propose a novel method named PenPC to estimate the skeleton of a high-dimensional DAG by a two-step approach. We first estimate the nonzero entries of a concentration matrix using penalized regression, and then fix the difference between the concentration matrix and the skeleton by evaluating a set of conditional independence hypotheses. For high-dimensional problems where the number of vertices p is polynomial or exponential in the sample size n, we study the asymptotic properties of PenPC on two types of graphs: traditional random graphs where all vertices have the same expected number of neighbors, and scale-free graphs where a few vertices may have a large number of neighbors. As illustrated by extensive simulations and applications to gene expression data of cancer patients, PenPC has higher sensitivity and specificity than the state-of-the-art method, the PC-stable algorithm.
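
A compact two-step sketch of this kind of approach (illustrative only; the function names are not from the PenPC software): node-wise lasso regressions give candidate edges corresponding to nonzero concentration-matrix entries, and partial-correlation tests then prune edges that appear conditionally independent given small neighbour subsets.

```python
# Two-step skeleton estimation sketch: (1) penalized regression to find
# candidate edges (nonzero concentration-matrix entries), (2) conditional
# independence tests to prune them. Illustrative, not the PenPC package.
import numpy as np
from itertools import combinations
from scipy.stats import norm
from sklearn.linear_model import LassoCV

def candidate_edges(X):
    """Step 1: node-wise lasso; keep an edge if either coefficient is nonzero."""
    n, p = X.shape
    edges = set()
    for j in range(p):
        others = [k for k in range(p) if k != j]
        coef = LassoCV(cv=5).fit(X[:, others], X[:, j]).coef_
        edges.update(frozenset((j, k)) for k, b in zip(others, coef) if abs(b) > 1e-8)
    return edges

def ci_pvalue(X, i, j, cond):
    """Fisher z-test of the partial correlation of X_i and X_j given X_cond."""
    idx = [i, j] + list(cond)
    prec = np.linalg.pinv(np.corrcoef(X[:, idx], rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(X.shape[0] - len(cond) - 3)
    return 2 * norm.sf(abs(z))

def prune(X, edges, alpha=0.01, max_cond=2):
    """Step 2: drop candidate edges that look conditionally independent."""
    skeleton = set(edges)
    for e in edges:
        i, j = tuple(e)
        nbrs = {k for f in edges if i in f or j in f for k in f} - {i, j}
        for size in range(max_cond + 1):
            if any(ci_pvalue(X, i, j, s) > alpha
                   for s in combinations(sorted(nbrs), size)):
                skeleton.discard(e)               # fail to reject independence -> drop
                break
    return skeleton

X = np.random.default_rng(0).standard_normal((200, 6))
print(prune(X, candidate_edges(X)))
```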

Relevance:

30.00%

Publisher:

Abstract:

Orthogonal frequency division multiplexing (OFDM) systems are more sensitive to carrier frequency offset (CFO) than conventional single-carrier systems. CFO destroys the orthogonality among subcarriers, resulting in inter-carrier interference (ICI) and degrading system performance. To mitigate its effect, the CFO has to be estimated and compensated before demodulation. The CFO can be divided into an integer part and a fractional part. In this paper, we investigate a maximum-likelihood estimator (MLE) for estimating the integer part of the CFO in OFDM systems, which requires only one OFDM block of pilot symbols. To reduce the computational complexity of the MLE and improve the bandwidth efficiency, a suboptimum estimator (Sub MLE) is studied. Based on the hypothesis testing method, a threshold Sub MLE (T-Sub MLE) is proposed to further reduce the computational complexity. A performance analysis of the proposed T-Sub MLE is derived, and the analytical results match the simulation results well. Numerical results show that the proposed estimators are effective and reliable in both additive white Gaussian noise (AWGN) and frequency-selective fading channels.
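
The integer-CFO search can be sketched as follows (a simplified illustration, not the paper's exact metric or its reduced-complexity variants): with one known pilot OFDM block, an integer offset of d subcarrier spacings circularly shifts the received pilot spectrum, so the maximum-likelihood estimate is the shift whose correlation with the transmitted pilots is largest.

```python
# Sketch of integer carrier-frequency-offset estimation from one pilot OFDM
# block: try every candidate subcarrier shift and keep the one whose
# correlation with the known pilots is largest. Illustrative parameters.
import numpy as np

rng = np.random.default_rng(0)
n = 64                                            # subcarriers
pilots = rng.choice([1, -1], size=n) + 0j         # known BPSK pilot block

true_offset = 3                                   # integer CFO in subcarrier spacings
t = np.arange(n)
rx_time = np.fft.ifft(pilots) * np.exp(2j * np.pi * true_offset * t / n)
rx_time += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
rx_freq = np.fft.fft(rx_time)                     # integer CFO ~ circular shift of bins

def estimate_integer_cfo(rx_freq, pilots, search=8):
    candidates = range(-search, search + 1)
    metric = [abs(np.vdot(np.roll(pilots, d), rx_freq)) for d in candidates]
    return list(candidates)[int(np.argmax(metric))]

print(estimate_integer_cfo(rx_freq, pilots))      # recovers the shift of 3
```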

Relevance:

30.00%

Publisher:

Abstract:

In the biological sciences, stereological techniques are frequently used to infer changes in structural parameters (volume fraction, for example) between samples from different populations or subject to differing treatment regimes. Non-homogeneity of these parameters is virtually guaranteed, both between experimental animals and within the organ under consideration. A two-stage strategy is then desirable, the first stage involving unbiased estimation of the required parameter, separately for each experimental unit, the latter being defined as a subset of the organ for which homogeneity can reasonably be assumed. In the second stage, these point estimates are used as data inputs to a hierarchical analysis of variance, to distinguish treatment effects from variability between animals, for example. Techniques are therefore required for unbiased estimation of parameters from potentially small numbers of sample profiles. This paper derives unbiased estimates of linear properties in one special case: the sampling of spherical particles by transmission microscopy, when the section thickness is not negligible and the resulting circular profiles are subject to lower truncation. The derivation uses the general integral equation formulation of Nicholson (1970); the resulting formulae are simplified algebraically and their efficient computation is discussed. Bias arising from variability in slice thickness is shown to be negligible in typical cases. The strategy is illustrated for data examining the effects, on the secondary lysosomes in the digestive cells, of exposure of the common mussel to hydrocarbons. Prolonged exposure, at 30 μg l−1 total oil-derived hydrocarbons, is seen to increase the average volume of a lysosome, and the volume fraction that lysosomes occupy, but to reduce their number.

Relevance:

30.00%

Publisher:

Abstract:

The problem of measuring high frequency variations in temperature is described, and the need for some form of reconstruction introduced. One method of reconstructing temperature measurements is to use the signals from two thermocouples of differing diameter. Two existing methods for processing such measurements and reconstructing the higher frequency components are described. These are compared to a novel reconstruction algorithm based on a nonlinear extended Kalman filter. The performance of this filter is found to compare favorably, in a number of ways, with the existing techniques, and it is suggested that such a technique would be viable for the online reconstruction of temperatures in real time.
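
A minimal sketch of the two-thermocouple reconstruction idea is given below, using a linear Kalman filter and assuming the two first-order sensor time constants are known; the paper's extended Kalman filter additionally estimates the unknown time constants, which is what makes the problem nonlinear. All parameter values are illustrative.

```python
# Simplified two-thermocouple temperature reconstruction with a linear Kalman
# filter, assuming known first-order time constants for both sensors.
import numpy as np

dt = 1e-3                       # sample period [s]
tau1, tau2 = 0.05, 0.20         # time constants of thin / thick thermocouple [s]

# State x = [Tg, T1, T2]: gas temperature (random walk) plus the two sensor states.
F = np.array([[1.0,        0.0,            0.0],
              [dt / tau1,  1 - dt / tau1,  0.0],
              [dt / tau2,  0.0,            1 - dt / tau2]])
H = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])                 # only the two thermocouples are measured
Q = np.diag([2e4, 1e-6, 1e-6]) * dt             # gas temperature may move quickly
R = np.diag([0.5, 0.5])                         # thermocouple measurement noise

def reconstruct(z):
    """z: (N, 2) array of simultaneous readings from the two thermocouples."""
    x = np.array([z[0, 0], z[0, 0], z[0, 1]])
    P = np.eye(3) * 10.0
    out = []
    for zk in z:
        x = F @ x                               # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                     # update with both sensors
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (zk - H @ x)
        P = (np.eye(3) - K @ H) @ P
        out.append(x[0])                        # reconstructed gas temperature
    return np.array(out)

# Toy test: a 20 Hz gas-temperature fluctuation seen through both sensor lags.
t = np.arange(0, 1, dt)
tg = 500 + 30 * np.sin(2 * np.pi * 20 * t)
t1 = np.full_like(t, tg[0]); t2 = np.full_like(t, tg[0])
for k in range(1, t.size):
    t1[k] = t1[k-1] + dt / tau1 * (tg[k-1] - t1[k-1])
    t2[k] = t2[k-1] + dt / tau2 * (tg[k-1] - t2[k-1])
rng = np.random.default_rng(0)
z = np.column_stack([t1, t2]) + 0.5 * rng.standard_normal((t.size, 2))
tg_hat = reconstruct(z)                          # estimate of the fast gas temperature
```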

Relevance:

30.00%

Publisher:

Abstract:

With the advent of new video standards such as MPEG-4 part-10 and H.264/H.26L, demands for advanced video coding, particularly in the area of variable block size video motion estimation (VBSME), are increasing. In this paper, we propose a new one-dimensional (1-D) very large-scale integration architecture for full-search VBSME (FSVBSME). The VBS sum of absolute differences (SAD) computation is performed by re-using the results of smaller sub-block computations. These are distributed and combined by incorporating a shuffling mechanism within each processing element. Whereas a conventional 1-D architecture can process only one motion vector (MV), this new architecture can process up to 41 MV sub-blocks (within a macroblock) in the same number of clock cycles.
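
The SAD-reuse idea can be shown behaviourally in a few lines (a software sketch, not the systolic hardware): the sixteen 4x4 SADs of a macroblock at one candidate motion vector are computed once and then merged pairwise to produce all 41 variable-block-size SADs (16 of 4x4, 8 of 4x8, 8 of 8x4, 4 of 8x8, 2 of 8x16, 2 of 16x8 and 1 of 16x16).

```python
# Behavioural sketch of variable-block-size SAD reuse for one candidate motion
# vector: compute the 16 4x4 SADs once, then merge them into all 41 VBS SADs.
import numpy as np

def vbs_sads(cur_mb, ref_mb):
    """cur_mb, ref_mb: 16x16 blocks. Returns SADs for all 41 sub-blocks."""
    # 4x4 base SADs, indexed by (row, col) of the 4x4 grid.
    s44 = np.array([[np.abs(cur_mb[4*r:4*r+4, 4*c:4*c+4].astype(int) -
                            ref_mb[4*r:4*r+4, 4*c:4*c+4].astype(int)).sum()
                     for c in range(4)] for r in range(4)])
    sads = {}
    sads['4x4']   = s44                                          # 16 values
    sads['4x8']   = s44[0::2, :] + s44[1::2, :]                  #  8 (vertical merge)
    sads['8x4']   = s44[:, 0::2] + s44[:, 1::2]                  #  8 (horizontal merge)
    sads['8x8']   = sads['4x8'][:, 0::2] + sads['4x8'][:, 1::2]  #  4
    sads['8x16']  = sads['8x8'][0, :] + sads['8x8'][1, :]        #  2
    sads['16x8']  = sads['8x8'][:, 0] + sads['8x8'][:, 1]        #  2
    sads['16x16'] = sads['8x8'].sum()                            #  1
    return sads

rng = np.random.default_rng(0)
cur = rng.integers(0, 256, (16, 16), dtype=np.uint8)
ref = rng.integers(0, 256, (16, 16), dtype=np.uint8)
print(vbs_sads(cur, ref)['16x16'])
```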

Relevance:

30.00%

Publisher:

Abstract:

A one-electron oxidation of a methionine residue is thought to be a key step in the neurotoxicity of the beta amyloid peptide of Alzheimer's disease. The chemistry of the radical cation of N-formylmethioninamide (11+) and two model systems, dimethyl sulfide (1+) and ethyl methyl sulfide (6+), in the presence of oxygen has been studied by B3LYP/6-31G(d) and CBS-RAD calculations. The stable form of 11+ has a three-electron bond between the sulfur radical cation and the carbonyl oxygen atom of the i - 1 residue. The radical cation may lose a proton from the methyl or methylene groups flanking the oxidized sulfur. Both 11+ and the resultant C-centered radicals may add oxygen to form peroxy radicals. The calculations indicate that, unlike C-centered radicals, the sulfur radical cation does not form a covalent bond to oxygen but rather forms a loose ion-induced dipole complex with an S-O separation of about 2.7 Å, and is bound by about 13 kJ mol-1 (on the basis of 1+ + O2). Direct intramolecular abstraction of an H atom from the αC site is unlikely: it is endothermic by more than 20 kJ mol-1 and involves a high barrier (ΔG‡ = 79 kJ mol-1). The α-to-S C-centered radicals will add oxygen to form peroxy radicals. The O-H BDEs of the parent hydroperoxides are in the range of 352-355 kJ mol-1, similar to S-H BDEs (360 kJ mol-1) and C-H BDEs (345-350 kJ mol-1). Thus, the peroxy radicals are oxidizing species comparable in strength to thiyl radicals and peptide backbone C-centered radicals. Each peroxy radical can abstract a hydrogen atom from the backbone αC site of the Met residue to yield the corresponding C-centered radical/hydroperoxide in a weakly exothermic process with modest barriers in the range of 64-92 kJ mol-1.

Relevance:

30.00%

Publisher:

Abstract:

Background. Kidney Disease Outcomes Quality Initiative (KDOQI) chronic kidney disease (CKD) guidelines have focused on the utility of the modified four-variable MDRD equation (now traceable by isotope dilution mass spectrometry, IDMS) in calculating estimated glomerular filtration rates (eGFRs). This study assesses the practical implications of eGFR correction equations for the range of creatinine assays currently used in the UK, and further investigates the effect of these equations on the calculated prevalence of CKD in one UK region. Methods. Using simulation, a range of creatinine data (30–300 µmol/l) was generated for male and female patients aged 20–100 years. The maximum differences between the IDMS and MDRD equations for all 14 UK laboratory techniques for serum creatinine measurement were explored, with an average of individual eGFRs calculated according to MDRD and IDMS 30 ml/min/1.73 m2. Observed data for 93,870 patients yielded a first MDRD eGFR 3 months later of which 47,093 (71%) continued to have an eGFR
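
For reference, the IDMS-traceable four-variable MDRD equation has the standard published form sketched below (creatinine converted from µmol/l to mg/dl with the usual factor of 88.4); this is purely illustrative of the correction being discussed, not code from the study.

```python
# IDMS-traceable four-variable MDRD estimate of GFR (ml/min/1.73 m^2).
# Standard published coefficients; creatinine supplied in umol/l as in UK labs.
def egfr_mdrd_idms(creat_umol_l, age_years, female, black):
    scr_mg_dl = creat_umol_l / 88.4                       # unit conversion
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: a 60-year-old white female with serum creatinine of 120 umol/l.
print(round(egfr_mdrd_idms(120, 60, female=True, black=False), 1))
```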

Relevance:

30.00%

Publisher:

Abstract:

The electrochemical oxidation of N,N,N',N'-tetramethyl-p-phenylenediamine (TMPD) has been studied by cyclic voltammetry and potential step chronoamperometry at 303 K in five ionic liquids, namely [C(2)mim][NTf2], [C(4)mim][NTf2], [C(4)mpyrr][NTf2], [C(4)mim][BF4], and [C(4)mim][PF6] (where [C(n)mim](+) = 1-alkyl-3-methylimidazolium, [C(4)mpyrr](+) = N-butyl-N-methylpyrrolidinium, [NTf2](-) = bis(trifluoromethylsulfonyl)imide, [BF4](-) = tetrafluoroborate, and [PF6](-) = hexafluorophosphate). Diffusion coefficients, D, of 4.87, 3.32, 2.05, 1.74, and 1.34 x 10(-11) m(2) s(-1) and heterogeneous electron-transfer rate constants, k(0), of 0.0109, 0.0103, 0.0079, 0.0066, and 0.0059 cm s(-1) were calculated for TMPD in [C(2)mim][NTf2], [C(4)mim][NTf2], [C(4)mpyrr][NTf2], [C(4)mim][BF4], and [C(4)mim][PF6], respectively, at 303 K. The oxidation of TMPD in [C(4)mim][PF6] was also carried out at temperatures increasing from 303 to 343 K, giving an activation energy for diffusion of 32.3 kJ mol(-1). k(0) was found to increase systematically with increasing temperature, and an activation energy of 31.4 kJ mol(-1) was calculated. The study was extended to six other p-phenylenediamines with alkyl/phenyl group substitutions. D and k(0) values were calculated for these compounds in [C(2)mim][NTf2], and it was found that k(0) showed no obvious relationship with the hydrodynamic radius, r.
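
The temperature-dependence step amounts to an Arrhenius fit, sketched below with made-up diffusion coefficients of the right order of magnitude rather than the measured values: the activation energy follows from the slope of ln D against 1/T.

```python
# Arrhenius analysis: Ea from the slope of ln(D) versus 1/T.
# Diffusion coefficients below are illustrative, not the measured values.
import numpy as np

R = 8.314                                                    # J mol^-1 K^-1
T = np.array([303.0, 313.0, 323.0, 333.0, 343.0])            # K
D = np.array([1.34, 2.00, 2.94, 4.21, 5.89]) * 1e-11         # m^2 s^-1 (illustrative)

slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea = -slope * R                              # activation energy for diffusion [J mol^-1]
print(f"Ea = {Ea / 1000:.1f} kJ mol^-1")
```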

Relevance:

30.00%

Publisher:

Abstract:

Quality of Service (QoS) support in IEEE 802.11-based ad hoc networks relies on the network's ability to estimate the available bandwidth on a given link. However, no mechanism has been standardized to accurately evaluate this resource, and it remains one of the main open research issues in this field. This paper proposes an available bandwidth estimation approach which achieves more accurate estimation than existing approaches. The proposed approach differentiates channel busy time caused by transmitting or receiving from that caused by carrier sensing, and thus improves the accuracy of estimating the probability that the idle periods of two adjacent nodes overlap. Simulation results confirm the improvement of this approach over well-known bandwidth estimation methods in the literature.
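
The core estimate can be sketched as follows (a simplified version of the common idle-time-overlap reasoning, not the paper's refined mechanism): each node measures the fraction of time it senses the medium idle, and the available bandwidth of a link is approximated from the probability that sender and receiver are idle simultaneously.

```python
# Simplified idle-overlap estimate of available bandwidth on an 802.11 link.
# Assuming sender and receiver idle periods are independent (the approximation
# the paper refines), the link can only use the channel when both are idle.
def available_bandwidth(capacity_mbps, idle_frac_sender, idle_frac_receiver,
                        overhead_factor=0.8):
    p_both_idle = idle_frac_sender * idle_frac_receiver   # independence assumption
    return capacity_mbps * p_both_idle * overhead_factor  # discount MAC overheads

# Example: 11 Mbit/s channel, sender idle 60% of the time, receiver idle 50%.
print(available_bandwidth(11.0, 0.6, 0.5))                 # ~2.6 Mbit/s
```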

Relevance:

30.00%

Publisher:

Abstract:

Query processing over the Internet involving autonomous data sources is a major task in data integration. It requires estimates of the costs of possible queries in order to select the one with the minimum cost. In this context, the cost of a query is affected by three factors: network congestion, server contention state, and the complexity of the query. In this paper, we study the effects of both network congestion and server contention state on the cost of a query. We refer to these two factors together as system contention states. We present a new approach to determining the system contention states by clustering the costs of a sample query. For each system contention state, we construct two cost formulas, for unary and join queries respectively, using a multiple regression process. When a new query is submitted, its system contention state is estimated first, using either the time slides method or the statistical method. The cost of the query is then calculated using the corresponding cost formulas. The estimated cost of the query is further adjusted to improve its accuracy. Our experiments show that our methods can produce quite accurate cost estimates for queries submitted to remote data sources over the Internet.
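
The approach can be pictured with a small sketch (feature names and data are illustrative; scikit-learn is used for convenience): costs of a probing query observed over time are clustered into system contention states, a separate regression cost model is fitted within each state, and a new query is costed with the model of its estimated current state.

```python
# Sketch of cost estimation with system contention states: cluster observed
# sample-query costs into states, fit one regression cost model per state,
# then cost a new query with the model of its (estimated) current state.
# Feature names and data are illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Observed elapsed times of the same probing query under varying load.
probe_costs = np.concatenate([rng.normal(1.0, 0.1, 50),    # light contention
                              rng.normal(3.0, 0.3, 50)])   # heavy contention
states = KMeans(n_clusters=2, n_init=10).fit(probe_costs.reshape(-1, 1))

# Per-state training data: (operand cardinality, result cardinality) -> cost.
X = rng.uniform(1e3, 1e6, size=(100, 2))
y = 1e-5 * X[:, 0] + 2e-5 * X[:, 1] + rng.normal(0, 0.2, 100)
state_of_obs = states.predict(probe_costs.reshape(-1, 1))  # state of each observation
models = {s: LinearRegression().fit(X[state_of_obs == s], y[state_of_obs == s])
          for s in np.unique(state_of_obs)}

# Costing a new query: estimate the current state from a fresh probe, then predict.
current_state = int(states.predict([[2.8]])[0])
new_query = np.array([[5e5, 1e4]])
print(models[current_state].predict(new_query))
```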