140 results for least weighted squares


Relevance: 20.00%

Abstract:

The local structural information in the near-neighbor region of the superionic conducting glass (AgBr)0.4(Ag2O)0.3(GeO2)0.3 has been estimated from anomalous X-ray scattering (AXS) measurements at the Ge and Br K absorption edges. The possible atomic arrangements in the near-neighbor region of this glass were obtained by coupling the results with a least-squares variational method so as to reproduce the two differential intensity profiles for Ge and Br as well as the ordinary scattering profile. The coordination number of oxygen around Ge is found to be 3.6 at a distance of 0.176 nm, suggesting the GeO4 tetrahedral unit as the probable structural entity in this glass. Moreover, the coordination number of Ag around Br is estimated as 6.3 at a distance of 0.284 nm, suggesting an arrangement similar to that in crystalline AgBr.

Relevance: 20.00%

Abstract:

Syntheses of manganese(I)-based molecular squares have been accomplished under facile one-pot reaction conditions at room temperature. Self-assembly of eight components has resulted in the formation of M4L4-type metallacyclophanes [Mn(CO)3Br(μ-L)]4 (1-3), using pentacarbonylbromomanganese as the metal precursor and rigid azine ligands such as pyrazine, 4,4'-bipyridine, and trans-1,2-bis(4-pyridyl)ethylene, respectively, as bridging ligands. The metallacyclophanes have been characterized on the basis of IR, NMR, and UV-vis spectroscopic techniques and single-crystal X-ray diffraction methods.

Relevance: 20.00%

Abstract:

Background & objectives: There is a need to develop an affordable and reliable tool for hearing screening of neonates in resource-constrained, medically underserved areas of developing nations. This study evaluates a strategy of health-worker-based screening of neonates using a low-cost mechanical calibrated noisemaker, followed up with parental monitoring of age-appropriate auditory milestones, for detecting severe-profound hearing impairment in infants by 6 months of age. Methods: A trained health worker under the supervision of a qualified audiologist screened 425 neonates, of whom 20 had confirmed severe-profound hearing impairment. Mechanical calibrated noisemakers of 50, 60, 70 and 80 dB (A) were used to elicit the behavioural responses. The parents of screened neonates were instructed to monitor the normal language and auditory milestones up to 6 months of age. This strategy was validated against a reference standard consisting of a battery of tests, namely auditory brain stem response (ABR), otoacoustic emissions (OAE) and behavioural assessment at 2 years of age. Bayesian prevalence-weighted measures of screening were calculated. Results: Sensitivity and specificity were high, with the fewest false positive referrals for the 70 and 80 dB (A) noisemakers. All the noisemakers had 100 per cent negative predictive value. The 70 and 80 dB (A) noisemakers had high positive likelihood ratios of 19 and 34, respectively. The differences between pre- and post-test positive probabilities were 43 and 58 for the 70 and 80 dB (A) noisemakers, respectively. Interpretation & conclusions: In a controlled setting, health workers with primary education can be trained to use a mechanical calibrated noisemaker made of locally available material to reliably screen for severe-profound hearing loss in neonates. The monitoring of auditory responses could be done by informed parents. Multi-centre field trials of this strategy need to be carried out to examine the feasibility of community health care workers using it in resource-constrained settings of developing nations to implement an effective national neonatal hearing screening programme.
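
The screening-accuracy figures quoted above follow from standard 2x2 contingency-table formulas; below is a minimal sketch of how sensitivity, specificity, predictive values, the positive likelihood ratio and pre-/post-test probabilities are computed. The overall totals match the abstract (425 neonates, 20 impaired), but the false-positive/true-negative split is a hypothetical placeholder, not the study's data.

```python
# Illustrative screening-accuracy computation from a 2x2 table.
# Totals follow the abstract (425 screened, 20 impaired); the fp/tn split is hypothetical.
tp, fn = 20, 0      # screen-positive / screen-negative among truly impaired
fp, tn = 20, 385    # screen-positive / screen-negative among normal-hearing

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
npv = tn / (tn + fn)                        # negative predictive value
lr_pos = sensitivity / (1 - specificity)    # positive likelihood ratio

# Prevalence-weighted (Bayesian) pre- and post-test probabilities
prevalence = (tp + fn) / (tp + fn + fp + tn)
pre_odds = prevalence / (1 - prevalence)
post_odds = pre_odds * lr_pos
post_test_prob = post_odds / (1 + post_odds)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} NPV={npv:.2f}")
print(f"LR+={lr_pos:.1f} pre-test={prevalence:.1%} post-test(+)={post_test_prob:.1%}")
```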

Relevance: 20.00%

Abstract:

In this article, an extension to the total variation diminishing finite volume formulation of the lattice Boltzmann equation method on unstructured meshes is presented. A quadratic least-squares procedure is used to estimate the first-order and second-order spatial gradients of the particle distribution functions, and the distribution functions are extrapolated quadratically to the virtual upwind node. Time integration is performed using the fourth-order Runge-Kutta procedure. A grid convergence study is carried out to demonstrate the order of accuracy of the present scheme. The formulation is validated on the benchmark case of two-dimensional, laminar, unsteady flow past a single circular cylinder, and its behaviour is then examined for low-Mach-number simulations. Further validation is performed for flow past two circular cylinders arranged in tandem and side by side. Results of these simulations are compared extensively with previous numerical data. Copyright (C) 2011 John Wiley & Sons, Ltd.
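
The quadratic least-squares gradient estimation step can be sketched generically. The snippet below is a rough 2-D illustration with a made-up stencil and test function, not the authors' lattice Boltzmann implementation: it fits a quadratic Taylor expansion to neighbouring cell values and reads off the first- and second-order gradients.

```python
import numpy as np

def quadratic_ls_gradients(xc, fc, xn, fn):
    """Least-squares estimate of first and second derivatives at a 2-D cell
    centre xc with value fc, from neighbour centres xn and values fn.
    Fits f(x) ~ fc + g.dx + 0.5 * dx^T H dx (quadratic Taylor expansion)."""
    dx = xn - xc                                  # (m, 2) offsets to neighbours
    # Unknowns: [gx, gy, hxx, hyy, hxy]
    A = np.column_stack([
        dx[:, 0], dx[:, 1],
        0.5 * dx[:, 0] ** 2, 0.5 * dx[:, 1] ** 2,
        dx[:, 0] * dx[:, 1],
    ])
    b = fn - fc
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    grad = coef[:2]
    hess = np.array([[coef[2], coef[4]], [coef[4], coef[3]]])
    return grad, hess

# Hypothetical usage: gradients of f(x, y) = x^2 + 3y from a 6-neighbour stencil
rng = np.random.default_rng(0)
xc, fc = np.array([0.0, 0.0]), 0.0
xn = rng.uniform(-0.1, 0.1, size=(6, 2))
fn = xn[:, 0] ** 2 + 3 * xn[:, 1]
grad, hess = quadratic_ls_gradients(xc, fc, xn, fn)
print(grad)   # ~ [0, 3]
print(hess)   # ~ [[2, 0], [0, 0]]
```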

Relevance: 20.00%

Abstract:

In this paper, we address a scheduling problem for minimizing total weighted flowtime, observed in automobile gear manufacturing. Specifically, the bottleneck operation of the pre-heat-treatment stage of the gear manufacturing process has been dealt with in scheduling. Many real-life features, such as unequal release times, sequence-dependent setup times, and machine eligibility restrictions, have been considered. A mathematical model taking dynamic starting conditions into account has been proposed, and the problem is shown to be NP-hard. To approach the problem, a few heuristic algorithms have been proposed. Based on planned computational experiments, the performance of the proposed heuristic algorithms is evaluated: (a) in comparison with the optimal solution for small-size problem instances, and (b) in comparison with the estimated optimal solution for large-size problem instances. Extensive computational analyses reveal that the proposed heuristic algorithms consistently yield solutions close to the statistically estimated optima in reasonable computational time.
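
The abstract does not spell out the proposed heuristics. As a generic illustration of greedy dispatching for weighted flowtime with release times, sequence-dependent setups and machine eligibility (the job data, eligibility sets and setup function below are made up, and the rule is a weighted-shortest-processing-time-style stand-in, not the paper's algorithms), consider:

```python
# Generic greedy dispatching sketch for total weighted flowtime with
# release times, sequence-dependent setups and machine eligibility.
jobs = {  # job: (release_time, processing_time, weight) -- hypothetical data
    "J1": (0, 5, 3), "J2": (2, 3, 1), "J3": (1, 4, 5), "J4": (6, 2, 2),
}
eligible = {"M1": {"J1", "J2", "J4"}, "M2": {"J2", "J3", "J4"}}

def setup(prev, nxt):
    """Sequence-dependent setup time; a constant stand-in here."""
    return 0 if prev is None else 1

machine_time = {m: 0 for m in eligible}
last_job = {m: None for m in eligible}
total_weighted_flowtime = 0.0
unscheduled = set(jobs)

while unscheduled:
    best = None
    for m in eligible:
        for j in unscheduled & eligible[m]:
            r, p, w = jobs[j]
            start = max(machine_time[m], r) + setup(last_job[m], j)
            finish = start + p
            score = (finish - r) / w          # WSPT-style priority
            if best is None or score < best[0]:
                best = (score, m, j, finish, r)
    _, m, j, finish, r = best
    machine_time[m], last_job[m] = finish, j
    total_weighted_flowtime += jobs[j][2] * (finish - r)
    unscheduled.remove(j)

print(total_weighted_flowtime)
```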

Relevance: 20.00%

Abstract:

Wireless Sensor Networks (WSNs) have many application scenarios in which external clock synchronisation may be required, because a WSN may consist of components that are not connected to each other. In this paper, we first propose a novel weighted-average-based internal clock synchronisation (WICS) protocol, which periodically synchronises all the clocks of a WSN with the clock of a reference node. Based on this protocol, we then propose our weighted-average-based external clock synchronisation (WECS) protocol. We have analysed the proposed protocols for maximum synchronisation error and shown that it is always upper bounded. Extensive simulation studies of the proposed protocols have been carried out using the Castalia simulator. Simulation results validate this theoretical claim and also show that the proposed protocols perform better than other protocols in terms of synchronisation accuracy. A prototype implementation of the WICS protocol using a few TelosB motes also validates the above conclusions.

Relevance: 20.00%

Abstract:

A low-complexity, essentially-ML decoding technique for the Golden code and the three-antenna Perfect code was introduced by Sirianunpiboon, Howard and Calderbank. Though no theoretical analysis of the decoder was given, simulations showed that this decoding technique has almost maximum-likelihood (ML) performance. Inspired by this technique, in this paper we introduce two new low-complexity decoders for Space-Time Block Codes (STBCs): the Adaptive Conditional Zero-Forcing (ACZF) decoder and the ACZF decoder with successive interference cancellation (ACZF-SIC), which include the decoding technique of Sirianunpiboon et al. as a special case. We show that both the ACZF and ACZF-SIC decoders are capable of achieving full diversity, and we give a set of sufficient conditions for an STBC to give full diversity with these decoders. We then show that the Golden code, the three- and four-antenna Perfect codes, the three-antenna Threaded Algebraic Space-Time code and the four-antenna rate-2 code of Srinath and Rajan are all full-diversity ACZF/ACZF-SIC decodable with complexity strictly less than that of their ML decoders. Simulations show that the proposed decoding method performs identically to ML decoding for all these five codes. These STBCs, together with the proposed decoding algorithm, have the least decoding complexity and best error performance among known codes for these numbers of transmit antennas. We further provide a lower bound on the complexity of full-diversity ACZF/ACZF-SIC decoding. All five codes listed above achieve this lower bound and hence are optimal in terms of minimizing the ACZF/ACZF-SIC decoding complexity. Both the ACZF and ACZF-SIC decoders are amenable to sphere decoding implementation.
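
The ACZF/ACZF-SIC decoders depend on the algebraic structure of each STBC and are not detailed in the abstract. The sketch below shows only ordinary zero-forcing with successive interference cancellation (ZF-SIC) on a generic linear model y = Hx + n, to illustrate the interference-cancellation idea rather than the adaptive conditional variant proposed in the paper; the channel, constellation and noise level are arbitrary stand-ins.

```python
import numpy as np

def zf_sic_decode(H, y, constellation):
    """Plain ZF-SIC for y = H x + n: detect the stream with the least noise
    amplification, slice it to the nearest constellation point, cancel it,
    and repeat on the reduced system.  Illustrative only."""
    H = H.astype(complex).copy()
    y = y.astype(complex).copy()
    remaining = list(range(H.shape[1]))
    x_hat = np.zeros(H.shape[1], dtype=complex)
    while remaining:
        G = np.linalg.pinv(H[:, remaining])            # ZF filter for remaining streams
        k_local = int(np.argmin(np.sum(np.abs(G) ** 2, axis=1)))
        k = remaining[k_local]
        z = G[k_local] @ y                             # ZF estimate of stream k
        x_hat[k] = constellation[np.argmin(np.abs(constellation - z))]
        y -= H[:, k] * x_hat[k]                        # successive cancellation
        remaining.remove(k)
    return x_hat

# Hypothetical usage with a 2x2 Rayleigh channel and QPSK symbols
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
rng = np.random.default_rng(1)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = qpsk[rng.integers(0, 4, size=2)]
y = H @ x + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
print(zf_sic_decode(H, y, qpsk), x)
```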

Relevance: 20.00%

Abstract:

Savitzky-Golay (S-G) filters are finite impulse response lowpass filters obtained when smoothing data using a local least-squares (LS) polynomial approximation. Savitzky and Golay proved in their landmark paper that local LS fitting of polynomials and their evaluation at the mid-point of the approximation interval is equivalent to filtering with a fixed impulse response. The problem we address here is how to choose a pointwise minimum mean squared error (MMSE) S-G filter length or order for smoothing, while preserving the temporal structure of a time-varying signal. We solve the bias-variance tradeoff involved in the MMSE optimization using Stein's unbiased risk estimator (SURE). We observe that the 3 dB cutoff frequency of the SURE-optimal S-G filter is higher where the signal varies fast locally, and vice versa, essentially enabling us to suitably trade off bias and variance and thereby obtain near-MMSE performance. At low signal-to-noise ratios (SNRs), the performance of the adaptive filter-length algorithm improves when a regularization term is incorporated in the SURE objective function. We consider the algorithm performance on real-world electrocardiogram (ECG) signals; the results exhibit considerable SNR improvement. Noise performance analysis shows that the proposed algorithms are comparable to, and in some cases better than, some standard denoising techniques available in the literature.
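
A simplified sketch of SURE-based selection of the S-G window length follows: it picks a single global window rather than the pointwise choice described above, assumes the noise variance is known, and uses scipy's S-G routines on a synthetic signal.

```python
import numpy as np
from scipy.signal import savgol_filter, savgol_coeffs

def sure_sg(y, window, polyorder, sigma):
    """Stein's unbiased risk estimate of the MSE of S-G smoothing.
    For a time-invariant linear smoother, the divergence term is N*h0,
    where h0 is the centre tap of the S-G impulse response (edge effects ignored)."""
    y_hat = savgol_filter(y, window, polyorder)
    h0 = savgol_coeffs(window, polyorder)[window // 2]
    n = len(y)
    return np.sum((y - y_hat) ** 2) / n + 2 * sigma ** 2 * h0 - sigma ** 2

# Synthetic example: choose the window length that minimises SURE
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 11 * t)
sigma = 0.2
noisy = clean + sigma * rng.standard_normal(t.size)

windows = range(5, 101, 2)                       # odd window lengths
risks = [sure_sg(noisy, w, polyorder=3, sigma=sigma) for w in windows]
best = list(windows)[int(np.argmin(risks))]
print("SURE-selected window length:", best)
```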

Relevance: 20.00%

Abstract:

Clock synchronization is an extremely important requirement of wireless sensor networks (WSNs). There are many application scenarios, such as weather monitoring and forecasting, where external clock synchronization may be required because the WSN itself may consist of components which are not connected to each other. A usual approach for external clock synchronization in WSNs is to synchronize the clock of a reference node with an external source such as UTC, and to have the remaining nodes synchronize with the reference node using an internal clock synchronization protocol. In order to provide highly accurate time, both the offset and the drift rate of each clock with respect to the reference node are estimated from time to time, and these are used to obtain the correct time from the local clock reading. A problem with this approach is that it is difficult to estimate the offset of a clock with respect to the reference node when the drift rates of clocks vary over a period of time. In this paper, we first propose a novel internal clock synchronization protocol based on a weighted averaging technique, which synchronizes all the clocks of a WSN to a reference node periodically. We call this protocol the weighted-average-based internal clock synchronization (WICS) protocol. Based on this protocol, we then propose our weighted-average-based external clock synchronization (WECS) protocol. We have analyzed the proposed protocols for maximum synchronization error and shown that it is always upper bounded. Extensive simulation studies of the proposed protocols have been carried out using the Castalia simulator. Simulation results validate our theoretical claim that the maximum synchronization error is always upper bounded and also show that the proposed protocols perform better than other protocols in terms of synchronization accuracy. A prototype implementation of the proposed internal clock synchronization protocol using a few TelosB motes also validates our claim.
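
The messaging details of WICS/WECS are not given in the abstract; the following sketch shows only the generic offset and drift-rate estimation step that such schemes rely on, fitting a straight line by least squares to hypothetical (local time, reference time) pairs and then translating local readings into reference time.

```python
import numpy as np

# Sketch of the offset/drift-rate estimation step underlying such protocols
# (generic least-squares fit on timestamp pairs, not the WICS/WECS messaging).
# Model: reference_time ~= drift_rate * local_time + offset
local_ts = np.array([10.00, 20.01, 30.02, 40.05, 50.03])   # hypothetical samples
ref_ts   = np.array([10.12, 20.14, 30.17, 40.21, 50.22])

A = np.column_stack([local_ts, np.ones_like(local_ts)])
(drift_rate, offset), *_ = np.linalg.lstsq(A, ref_ts, rcond=None)

def to_reference_time(local_reading):
    """Translate a local clock reading into estimated reference time."""
    return drift_rate * local_reading + offset

print(drift_rate, offset, to_reference_time(60.0))
```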

Relevance: 20.00%

Abstract:

We address the classical problem of delta feature computation and interpret the operation involved in terms of Savitzky-Golay (SG) filtering. Features such as the mel-frequency cepstral coefficients (MFCCs), obtained from short-time spectra of the speech signal, are commonly used in speech recognition tasks. In order to incorporate the dynamics of speech, auxiliary delta and delta-delta features, which are computed as temporal derivatives of the original features, are used. Typically, the delta features are computed in a smooth fashion using local least-squares (LS) polynomial fitting on each feature-vector component trajectory. In the light of the original work of Savitzky and Golay, and a recent article by Schafer in IEEE Signal Processing Magazine, we interpret the dynamic feature-vector computation for arbitrary derivative orders as SG filtering with a fixed impulse response. This filtering equivalence brings in significantly lower latency with no loss in accuracy, as validated by results on a TIMIT phoneme recognition task. The SG filters involved in dynamic parameter computation can also be viewed as the modulation filters proposed by Hermansky.
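
A minimal sketch of this equivalence for dynamic features, using scipy's S-G differentiation on a hypothetical MFCC matrix (the window length and polynomial order are illustrative choices, not those of the paper):

```python
import numpy as np
from scipy.signal import savgol_filter

def dynamic_features(feats, window=9, polyorder=2, order=1):
    """order-th temporal derivative of feature trajectories via S-G filtering.
    feats is (n_frames, n_coeffs); filtering runs along the time axis."""
    return savgol_filter(feats, window, polyorder, deriv=order, axis=0)

# Hypothetical MFCC-like matrix: 100 frames x 13 coefficients
rng = np.random.default_rng(0)
mfcc = np.cumsum(rng.standard_normal((100, 13)), axis=0)
delta = dynamic_features(mfcc, order=1)          # "delta" features
delta_delta = dynamic_features(mfcc, order=2)    # "delta-delta" features
print(delta.shape, delta_delta.shape)
```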

Relevance: 20.00%

Abstract:

Purpose: To develop a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. Methods: The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. Here it is deployed within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. Results: The results indicate that the reconstructed image quality of the proposed LSQR-type and MRM-based methods is similar, and superior to that of the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. Conclusions: The LSQR-type method overcomes the inherent limitation of the computationally expensive MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment. (C) 2013 American Association of Physicists in Medicine. [http://dx.doi.org/10.1118/1.4792459]
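
The criterion minimised by the simplex search is not stated in the abstract. The sketch below only shows the general pattern of wrapping a damped LSQR solve inside a Nelder-Mead (simplex) search over the regularization parameter; the test problem is a random stand-in for the DOT Jacobian, and the discrepancy-style objective is a placeholder for the paper's actual criterion.

```python
import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

# Hypothetical ill-conditioned linear problem standing in for the DOT Jacobian
rng = np.random.default_rng(0)
J = rng.standard_normal((200, 80)) @ np.diag(0.9 ** np.arange(80))
x_true = rng.standard_normal(80)
noise_level = 0.05
b = J @ x_true + noise_level * rng.standard_normal(200)

def objective(log_lambda):
    """Placeholder criterion: distance of the residual norm from the expected
    noise norm (a discrepancy-principle-style objective, used here only to
    illustrate the outer simplex search)."""
    lam = np.exp(log_lambda[0])
    x = lsqr(J, b, damp=lam)[0]       # damped LSQR = Tikhonov-regularised LS
    return abs(np.linalg.norm(J @ x - b) - noise_level * np.sqrt(len(b)))

res = minimize(objective, x0=[np.log(1e-2)], method="Nelder-Mead")
lam_opt = np.exp(res.x[0])
x_rec = lsqr(J, b, damp=lam_opt)[0]
print("selected regularization parameter:", lam_opt)
```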

Relevance: 20.00%

Abstract:

With the ever increasing demand for electric energy, additional generation and associated transmission facilities have to be planned and executed. In order to augment existing transmission facilities, proper planning and selective decisions must be made, keeping in mind the interests of the several parties who are directly or indirectly involved. The common trend is to plan optimal generation expansion over the planning period in order to meet the projected demand with minimum-cost capacity addition along with a pre-specified reliability margin. Generation expansion at certain locations needs new transmission networks, which involves serious problems such as obtaining right of way, environmental clearance, etc. In this study, an approach to the siting of additional generation facilities in a given system with minimum or no expansion of the transmission facility is attempted, using network connectivity and the concept of electrical distance for the projected load demand. The proposed approach is suitable for large interconnected systems with multiple utilities. A sample illustration on a real-life system is presented in order to show how this approach improves the overall operating performance of the system with specified performance parameters.

Relevance: 20.00%

Abstract:

Real-time object tracking is a critical task in many computer vision applications. Achieving rapid and robust tracking while handling changes in object pose and size, varying illumination, and partial occlusion is a challenging task given the limited amount of computational resources. In this paper we propose a real-time object tracker in an l1 framework addressing these issues. In the proposed approach, dictionaries containing templates of overlapping object fragments are created. The candidate fragments are sparsely represented in the dictionary fragment space by solving the l1-regularized least-squares problem. The nonzero coefficients indicate the relative motion between the target and candidate fragments along with a fidelity measure. The final object motion is obtained by fusing the reliable motion information. The dictionary is updated based on the object likelihood map. The proposed tracking algorithm is tested on various challenging videos and found to outperform an earlier approach.
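
The core sparse-coding step can be illustrated with a generic l1-regularized least-squares solve. In the sketch below the dictionary atoms and the candidate fragment are random stand-ins for the tracker's fragment templates, and scikit-learn's Lasso plays the role of the l1 solver.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Dictionary of (vectorised) template fragments: one column per template.
rng = np.random.default_rng(0)
n_pixels, n_templates = 256, 40
D = rng.standard_normal((n_pixels, n_templates))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms

# A candidate fragment that is (approximately) a sparse mix of two templates
y = 0.8 * D[:, 3] + 0.5 * D[:, 17] + 0.01 * rng.standard_normal(n_pixels)

# l1-regularised least squares: min_w 1/(2n) ||y - D w||^2 + alpha ||w||_1
lasso = Lasso(alpha=1e-3, max_iter=10000)
lasso.fit(D, y)
w = lasso.coef_
support = np.flatnonzero(np.abs(w) > 1e-3)   # non-zero coefficients
residual = np.linalg.norm(y - D @ w)         # fidelity measure
print(support, residual)                     # support should pick atoms 3 and 17
```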

Relevance: 20.00%

Abstract:

The design of modulation schemes for the physical-layer network-coded two-way relaying scenario is considered with a protocol which employs two phases: the multiple access (MA) phase and the broadcast (BC) phase. It was observed by Koike-Akino et al. that adaptively changing the network coding map used at the relay according to the channel conditions greatly reduces the impact of MA interference which occurs at the relay during the MA phase, and that all these network coding maps should satisfy a requirement called the exclusive law. We show that every network coding map that satisfies the exclusive law is representable by a Latin square and, conversely, that this relationship can be used to obtain the network coding maps satisfying the exclusive law. The channel fade states for which the minimum distance of the effective constellation at the relay becomes zero are referred to as the singular fade states. For M-PSK modulation (M any power of 2), it is shown that there are (M^2/4 - M/2 + 1)M singular fade states. Also, it is shown that the constraints which the network coding maps should satisfy so that the harmful effects of the singular fade states are removed can be viewed equivalently as partially filled Latin squares (PFLS). The problem of finding all the required maps is reduced to finding a small set of maps for M-PSK constellations, obtained by the completion of PFLS. Even though the completability of an M x M PFLS using M symbols is an open problem, specific cases where such a completion is always possible are identified and explicit construction procedures are provided. Having obtained the network coding maps, the set of all possible channel realizations (the complex plane) is quantized into a finite number of regions, with a specific network coding map chosen in a particular region. It is shown that the complex plane can be partitioned into two regions: a region in which any network coding map which satisfies the exclusive law gives the same best performance, and a region in which the choice of the network coding map affects the performance. When specialized to the 4-PSK signal set, the quantization thus obtained analytically is the same as the one obtained by Koike-Akino et al. using computer search. Simulation results show that the proposed scheme performs better than conventional exclusive-OR (XOR) network coding and in some cases outperforms the scheme proposed by Koike-Akino et al.
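
The Latin square equivalence is easy to check computationally: a map f satisfies the exclusive law, i.e. f(x, y) != f(x', y) for x != x' and f(x, y) != f(x, y') for y != y', exactly when its M x M table has no repeated symbol in any row or column, which is the defining property of a Latin square. A small sketch with the conventional XOR map for M = 4:

```python
import numpy as np

def is_latin_square(T):
    """True iff every row and every column of the square array T contains no
    repeated symbol - equivalently, the map satisfies the exclusive law."""
    M = T.shape[0]
    rows_ok = all(len(set(T[i, :])) == M for i in range(M))
    cols_ok = all(len(set(T[:, j])) == M for j in range(M))
    return rows_ok and cols_ok

M = 4
# Conventional XOR network coding map: f(x, y) = x XOR y on integer labels
xor_map = np.array([[x ^ y for y in range(M)] for x in range(M)])
print(is_latin_square(xor_map))    # True: XOR satisfies the exclusive law

broken = xor_map.copy()
broken[0, 0] = broken[0, 1]        # introduce a clash in row 0
print(is_latin_square(broken))     # False: exclusive law violated
```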

Relevance: 20.00%

Abstract:

We address the problem of temporal envelope modeling for transient audio signals. We propose the Gamma distribution function (GDF) as a suitable candidate for modeling the envelope, keeping in view some of its interesting properties such as asymmetry, causality, near-optimal time-bandwidth product, and controllability of rise and decay. The problem of finding the parameters of the GDF is a nonlinear regression problem. We overcome this hurdle by using a logarithmic envelope fit, which reduces the problem to one of linear regression; the logarithmic transformation also has the feature of dynamic range compression. Since temporal envelopes of audio signals are not uniformly distributed, we investigate the importance of various loss functions for the amplitude regression. Based on experiments on synthesized data, where a ground truth is available, and on real-world signals, we observe that the least-squares technique gives reasonably accurate amplitude estimates compared with other loss functions.
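
A minimal sketch of the log-domain fit, assuming a gamma-shaped envelope of the form e(t) = K * t^(alpha-1) * exp(-beta*t) (the paper's exact parameterisation may differ): taking logarithms makes the model linear in log K, alpha-1 and beta, so ordinary least squares suffices.

```python
import numpy as np

# Synthetic gamma-shaped envelope e(t) = K * t**(alpha-1) * exp(-beta*t)
# (assumed parameterisation, for illustration only)
rng = np.random.default_rng(0)
t = np.linspace(0.01, 1.0, 400)
K_true, alpha_true, beta_true = 2.0, 3.0, 8.0
env = K_true * t ** (alpha_true - 1) * np.exp(-beta_true * t)
env_noisy = env * np.exp(0.05 * rng.standard_normal(t.size))  # multiplicative noise

# log e(t) = log K + (alpha - 1)*log t - beta*t  ->  linear regression
A = np.column_stack([np.ones_like(t), np.log(t), -t])
coef, *_ = np.linalg.lstsq(A, np.log(env_noisy), rcond=None)
K_hat, alpha_hat, beta_hat = np.exp(coef[0]), coef[1] + 1, coef[2]
print(K_hat, alpha_hat, beta_hat)   # should be close to (2.0, 3.0, 8.0)
```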