988 results for Hartree Fock scheme correlation errors


Relevance:

30.00%

Publisher:

Abstract:

Reliable performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are interdependent, so it is difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed; it enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived from the base classifier performances. As this assumption is not always valid, the expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is evaluated empirically on text-dependent speaker verification using Hidden Markov Model based, digit-dependent speaker models at each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled by two parameters, the number of decision stages (instances) and the number of attempts at each stage (samples), fine-tuned on an evaluation/tuning set. The derived error estimates are statistically validated on test data. The performance of the sequential method is further shown to depend on the order in which the digits (instances) are combined and on the nature of the repeated attempts (samples). The false rejection and false acceptance rates of the proposed fusion are estimated from the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are estimated more accurately by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error based threshold estimation). The proposed architecture is attractive for most speaker verification applications, such as remote authentication and telephone or internet shopping. Tuning the number of instances and samples serves both the security and user-convenience requirements of speaker-specific verification. The architecture investigated here is also applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
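A minimal sketch of how cascade-level error rates can be composed from base classifier performances under the independence assumption described above. The per-stage FAR/FRR values, the all-stages-must-accept cascade rule, and the accept-on-any-of-m-attempts rule at each stage are illustrative assumptions, not the dissertation's exact formulation.

```python
import numpy as np

def sequential_fusion_errors(stage_far, stage_frr, attempts):
    """Compose cascade-level FAR/FRR from per-stage (per-instance) base rates,
    assuming statistically independent decisions, acceptance at a stage if any
    of its attempts is accepted, and overall acceptance only if every stage accepts."""
    far = np.asarray(stage_far, dtype=float)
    frr = np.asarray(stage_frr, dtype=float)
    m = np.asarray(attempts, dtype=int)

    # Impostor: falsely accepted at a stage if any of the m attempts passes,
    # and falsely accepted overall only by passing every stage.
    stage_fa = 1.0 - (1.0 - far) ** m
    total_far = float(np.prod(stage_fa))

    # Client: falsely rejected at a stage only if all m attempts are rejected,
    # and falsely rejected overall if any stage rejects.
    stage_fr = frr ** m
    total_frr = 1.0 - float(np.prod(1.0 - stage_fr))
    return total_far, total_frr

# Example: three digit instances (stages) with two attempts each (hypothetical rates).
print(sequential_fusion_errors([0.02, 0.03, 0.02], [0.05, 0.04, 0.06], [2, 2, 2]))
```

Increasing the number of stages drives the overall false acceptance rate down while pushing false rejections up, which is the trade-off the two tuning parameters control.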

Relevance:

30.00%

Publisher:

Abstract:

The correlation dimension D2 and correlation entropy K2 are both important quantifiers in nonlinear time series analysis. However, D2 has been used more commonly than K2 as a discriminating measure, partly because D2 is a static measure that can be evaluated easily from a time series. In many cases, however, especially those involving coloured noise, K2 is regarded as the more useful measure. Here we present an efficient algorithmic scheme to compute K2 directly from time series data and show that K2 can be a more effective measure than D2 for analysing practical time series involving coloured noise.
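The abstract does not spell out its algorithmic scheme, so as a point of reference here is a sketch of the textbook correlation-sum route to K2: estimate the correlation sum C_m(r) at consecutive embedding dimensions and take K2 ≈ (1/τ) ln(C_m(r)/C_{m+1}(r)) in the scaling region. The embedding delay, Theiler window, norm, and test signal below are illustrative choices, not the paper's method.

```python
import numpy as np

def embed(x, m, tau=1):
    """Delay-embed a scalar series into m-dimensional vectors."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def correlation_sum(x, m, r, tau=1, theiler=10):
    """Fraction of (Theiler-excluded) vector pairs closer than r in the max-norm."""
    v = embed(x, m, tau)
    n = len(v)
    close, pairs = 0, 0
    for i in range(n - theiler):
        d = np.max(np.abs(v[i + theiler:] - v[i]), axis=1)
        close += int(np.sum(d < r))
        pairs += n - i - theiler
    return close / pairs if pairs else 0.0

def k2_estimate(x, m, r, tau=1):
    """K2 from the ratio of correlation sums at consecutive embedding dimensions."""
    c_m = correlation_sum(x, m, r, tau)
    c_m1 = correlation_sum(x, m + 1, r, tau)
    return np.log(c_m / c_m1) / tau

x = np.sin(0.3 * np.arange(2000)) + 0.05 * np.random.randn(2000)
print(k2_estimate(x, m=4, r=0.2))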

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a linear quantile regression analysis method for longitudinal data that combines between-subject and within-subject estimating functions, thereby incorporating the correlations between repeated measurements. The proposed method therefore yields more efficient parameter estimates than estimating functions based on an independence working model. To reduce the computational burden, the induced smoothing method is used to obtain the parameter estimates and their variances. Under some regularity conditions, the estimators derived via induced smoothing are consistent and asymptotically normal. A number of simulation studies are carried out to evaluate the performance of the proposed method. The results indicate that the efficiency gain is substantial, especially when strong within-subject correlations exist. Finally, a dataset from audiology growth research is used to illustrate the proposed methodology.
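A sketch of the induced-smoothing idea for a single quantile under an independence working model (i.e., without the between/within-subject combination the paper proposes): the non-smooth indicator in the quantile estimating function is replaced by a normal CDF so that standard root-finding applies. The function name, bandwidth choice, and simulated data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import root
from scipy.stats import norm

def induced_smoothing_qr(y, X, tau=0.5, h=None):
    """Solve the smoothed quantile-regression estimating equation
    sum_i x_i * (tau - Phi((x_i'beta - y_i) / h)) = 0."""
    n, p = X.shape
    h = h if h is not None else n ** -0.5        # common O(n^{-1/2}) bandwidth
    def score(beta):
        return X.T @ (tau - norm.cdf((X @ beta - y) / h))
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares starting value
    return root(score, beta0, method="hybr").x

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=200)
print(induced_smoothing_qr(y, X, tau=0.75))
```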

Relevance:

30.00%

Publisher:

Abstract:

The method of generalized estimating equations (GEEs) has been criticized recently for failing to protect against misspecification of working correlation models, which in some cases leads to loss of efficiency or infeasibility of solutions. However, the feasibility and efficiency of GEE methods can be enhanced considerably by using flexible families of working correlation models. We propose two ways of constructing unbiased estimating equations from general correlation models for irregularly timed repeated measures to supplement and enhance GEE. The supplementary estimating equations are obtained by differentiation of the Cholesky decomposition of the working correlation, or as score equations for decoupled Gaussian pseudolikelihood. The estimating equations are solved with computational effort equivalent to that required for a first-order GEE. Full details and analytic expressions are developed for a generalized Markovian model that was evaluated through simulation. Large-sample "sandwich" standard errors for the working correlation parameter estimates are derived and shown to perform well. The proposed estimating functions are further illustrated in an analysis of repeated measures of pulmonary function in children.
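For orientation, a sketch of a plain first-order GEE fit with an exchangeable working correlation and the large-sample "sandwich" covariance mentioned above, for an identity link. The supplementary Cholesky-derivative and Gaussian-pseudolikelihood estimating equations proposed in the paper are not shown; all names and the fixed working-correlation parameter are illustrative assumptions.

```python
import numpy as np

def gee_identity(y_groups, X_groups, alpha=0.3, n_iter=25):
    """First-order GEE for an identity link with an exchangeable working
    correlation (parameter alpha held fixed); returns the estimates and the
    sandwich covariance A^{-1} B A^{-1}."""
    p = X_groups[0].shape[1]
    beta = np.zeros(p)

    def weighted_sums(beta):
        A = np.zeros((p, p)); b = np.zeros(p); B = np.zeros((p, p))
        for y, X in zip(y_groups, X_groups):
            ni = len(y)
            R = (1 - alpha) * np.eye(ni) + alpha * np.ones((ni, ni))
            Rinv = np.linalg.inv(R)
            u = X.T @ Rinv @ (y - X @ beta)   # subject-level estimating function
            A += X.T @ Rinv @ X
            b += u
            B += np.outer(u, u)
        return A, b, B

    for _ in range(n_iter):
        A, b, _ = weighted_sums(beta)
        beta = beta + np.linalg.solve(A, b)
    A, _, B = weighted_sums(beta)
    Ainv = np.linalg.inv(A)
    return beta, Ainv @ B @ Ainv

rng = np.random.default_rng(1)
Xg = [np.column_stack([np.ones(4), rng.normal(size=4)]) for _ in range(50)]
yg = [X @ np.array([0.5, 1.5]) + rng.normal(size=4) for X in Xg]
print(gee_identity(yg, Xg))
```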

Relevance:

30.00%

Publisher:

Abstract:

In the past decade, the Finnish agricultural sector has undergone rapid structural change. The number of farms has decreased and the average farm size has increased, while the number of farms transferred to new entrants has decreased. Part of the structural change in agriculture is manifested in early retirement programmes. In studies of farmers' exit behaviour in different countries, institutional differences, incentive programmes and constraints are found to matter. In Finland, farmers' early retirement programmes were first introduced in 1974 and, during the last ten years, they have been carried out within the European Union framework for such programmes. The early retirement benefits are farmer specific and depend on the level of pension insurance the farmer has paid over his active farming years. In order to predict the future development of the agricultural sector, farmers have frequently been asked about their future plans and their plans for succession. However, the succession plans farmers state have been found to be time inconsistent. This study estimates the value of farmers' stated succession plans in predicting revealed succession decisions. A stated succession plan exists when a farmer answers in a survey questionnaire that the farm is going to be transferred to a new entrant within a five-year period. The succession is revealed when the farm is transferred to a successor. Stated and revealed behaviour were estimated with a recursive binomial probit model, which accounts for the censoring of the decision variables and controls for a potential correlation between the two equations. The results suggest that the succession plans stated by elderly farmers in the questionnaires do not provide significant, valuable information for predicting true, completed successions. Therefore, farmer exit should be analysed based on observed behaviour rather than on stated plans and intentions. As farm retirement plays a crucial role in determining the characteristics of structural change in agriculture, it is important to establish the factors that determine an exit from farming among elderly farmers and how off-farm income and income losses affect their exit choices. In this study, the observed choice of pension scheme by elderly farmers was analysed with a bivariate probit model. Despite some variation in the significance and effects of each factor, the ages of the farmer and spouse, the age and number of potential successors, farm size, the income loss incurred when retiring, and the location of the farm together with the production line were found to be the most important determinants of early retirement and of the transfer or closure of farms. Recently, the labour status of the spouse has been found to contribute significantly to individual retirement decisions. In this study, the effect of spousal retirement and of economic incentives related to the timing of a farming couple's early retirement decision was analysed with a duration model. The results suggest that an expected pension in particular advances farm transfers. On farms operated by a couple, both early retirement and farm succession took place more often than on farms operated by a single person. However, the existence of a spouse delayed the timing of early retirement. Farming couples were found to co-ordinate their early retirement decisions when both exit through agricultural retirement programmes, but no such co-ordination existed when one of the spouses retired under another pension scheme.
Besides changes in the agricultural structure, the share and amount of off-farm income in a farm family's total income have also increased. The study therefore analysed the effect of off-farm income, in addition to other financial factors, on farmers' retirement decisions. The unknown parameters were first estimated with a switching-type multivariate probit model and then by the simulated maximum likelihood (SML) method, controlling for farmer-specific fixed effects and serial correlation of the errors. The results suggest that elderly farmers' off-farm income is a significant determinant of a farmer's choice to exit and close down the farm. However, off-farm income has only a short-term effect on structural change in agriculture, since it does not significantly affect the timing of farm successions.
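A sketch of the log-likelihood behind a basic bivariate probit, the building block for the pension-scheme choice model described above, without the recursive structure, censoring, fixed effects, or simulation-based estimation used in the study; the parameterisation and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def bivariate_probit_loglik(params, y1, y2, X1, X2):
    """Log-likelihood of two correlated binary outcomes y1, y2 whose latent
    errors are bivariate standard normal with correlation rho."""
    k1, k2 = X1.shape[1], X2.shape[1]
    b1, b2 = params[:k1], params[k1:k1 + k2]
    rho = np.tanh(params[-1])          # keeps the correlation inside (-1, 1)
    q1, q2 = 2 * y1 - 1, 2 * y2 - 1    # map {0,1} outcomes to {-1,+1} signs
    ll = 0.0
    for i in range(len(y1)):
        upper = [q1[i] * (X1[i] @ b1), q2[i] * (X2[i] @ b2)]
        r = q1[i] * q2[i] * rho
        p = multivariate_normal.cdf(upper, mean=[0.0, 0.0], cov=[[1.0, r], [r, 1.0]])
        ll += np.log(max(p, 1e-300))
    return ll
```

The parameter vector stacks the two coefficient blocks and an unconstrained correlation parameter; estimation amounts to maximising this function (for example with scipy.optimize.minimize on its negative).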

Relevance:

30.00%

Publisher:

Abstract:

N-14 NMR spectroscopy has seen only limited use in studying nitrogen-containing systems because of the large quadrupolar interaction experienced by the N-14 nucleus and the absence of a central transition. Overtone spectroscopy has been suggested as a way to overcome this problem. Although this approach has limited applicability to powder samples due to second-order quadrupole broadening, it is useful for studying oriented samples and single crystals. Here we demonstrate the use of the recently proposed dipolar assisted polarization transfer (DAPT) pulse scheme for exciting the overtone transitions. The pulse sequence may also be utilized as a two-dimensional experiment to obtain H-1-N-14 dipolar couplings and H-1 chemical shifts.

Relevance:

30.00%

Publisher:

Abstract:

Data prefetchers identify and exploit any regularity present in the history/training stream to predict future references and prefetch them into the cache. The training information used is typically the primary misses seen at a particular cache level, which is a filtered version of the accesses seen by that cache. In this work we demonstrate that extending the training information to include secondary misses and hits, along with primary misses, helps improve prefetcher performance. In addition to empirical evaluation, we use the information-theoretic metric of entropy to quantify the regularity present in extended histories. Entropy measurements indicate that extended histories are more regular than the default primary-miss-only training stream, and they also corroborate our empirical findings. With extended histories, further benefits can be achieved by also triggering prefetches on secondary misses. In this paper we explore the design space of extended prefetch histories and alternative prefetch trigger points for delta correlation prefetchers. We observe that different prefetch schemes benefit to different extents from extended histories and alternative trigger points, and that the best-performing design point varies on a per-benchmark basis. To meet these requirements, we propose a simple adaptive scheme that identifies the best-performing design point for a benchmark-prefetcher combination at runtime. On SPEC2000 benchmarks, using all L2 accesses as the prefetcher's history improves performance, in terms of both IPC and misses reduced, over techniques that use only primary misses as history. The adaptive scheme improves the performance of the CZone prefetcher over the baseline by 4.6% on average. These performance gains are accompanied by a moderate reduction in memory traffic requirements.
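A toy sketch of a CZone/delta-correlation prefetcher of the kind being trained here; whichever stream is chosen as history (primary misses only, or the extended primary-miss, secondary-miss and hit stream) is simply the sequence of addresses fed to access(). The zone size, history length, and prefetch degree are illustrative assumptions, not the paper's configuration.

```python
from collections import defaultdict, deque

class DeltaCorrelationPrefetcher:
    """CZone + delta-correlation sketch: per concentration zone, remember recent
    address deltas and replay the deltas that historically followed the current
    delta pair."""
    def __init__(self, czone_shift=16, history_len=16, degree=4):
        self.czone_shift = czone_shift
        self.degree = degree
        self.last_addr = {}
        self.deltas = defaultdict(lambda: deque(maxlen=history_len))

    def access(self, addr):
        zone = addr >> self.czone_shift
        prefetches = []
        if zone in self.last_addr:
            hist = self.deltas[zone]
            hist.append(addr - self.last_addr[zone])
            h = list(hist)
            if len(h) >= 4:
                key = (h[-2], h[-1])
                # Most recent earlier occurrence of the current delta pair.
                for i in range(len(h) - 3, 0, -1):
                    if (h[i - 1], h[i]) == key:
                        pred = addr
                        for d in h[i + 1:i + 1 + self.degree]:
                            pred += d
                            prefetches.append(pred)
                        break
        self.last_addr[zone] = addr
        return prefetches

pf = DeltaCorrelationPrefetcher()
for a in [0, 64, 128, 192, 256, 320, 384]:      # simple strided access stream
    print(hex(a), [hex(p) for p in pf.access(a)])
```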

Relevance:

30.00%

Publisher:

Abstract:

In this paper, an optical code-division multiple-access (O-CDMA) packet network, which offers inherent security in access networks, is considered. Two types of random access protocols are proposed for packet transmission. In protocol 1, all distinct codes are used; in protocol 2, distinct codes as well as shifted versions of these codes are used. O-CDMA network performance is analyzed using one-dimensional (1-D) optical orthogonal codes (OOCs) and two-dimensional (2-D) wavelength/time single-pulse-per-row (W/T SPR) codes. The main advantage of using 2-D codes instead of 1-D codes is a reduction in the errors caused by multiple access interference among different users. Both a correlation receiver and a chip-level receiver are considered in the analysis. Using an analytical model, we compute and compare the packet-success probability and throughput for OOC and SPR codes in an O-CDMA network; the analysis shows improved performance with SPR codes compared to OOC codes.
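A hedged numerical sketch of the kind of interference analysis described: with a correlation receiver and threshold detection, treat each of the other simultaneous users as independently causing a pulse "hit" with some probability (about w²/(2F) for 1-D OOCs of weight w and length F under the usual chip-synchronous assumption), and a packet succeeds only if every bit decision survives. This is a generic textbook-style model, not the paper's exact expressions; all parameter values are illustrative.

```python
from scipy.stats import binom

def bit_error_prob(n_interferers, p_hit, threshold):
    """Probability that multiple-access interference alone reaches the decision
    threshold; only transmitted '0' bits (sent half of the time) can be flipped
    in this idealised on-off keyed model."""
    p_reach = 1.0 - binom.cdf(threshold - 1, n_interferers, p_hit)
    return 0.5 * p_reach

def packet_success_prob(n_users, p_hit, threshold, packet_bits):
    """A packet of packet_bits bits succeeds only if every bit is decoded correctly."""
    pe = bit_error_prob(n_users - 1, p_hit, threshold)
    return (1.0 - pe) ** packet_bits

# Example: weight-4 1-D OOC of length 101, threshold set equal to the code weight.
w, F = 4, 101
print(packet_success_prob(n_users=10, p_hit=w * w / (2 * F), threshold=w, packet_bits=1000))
```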

Relevance:

30.00%

Publisher:

Abstract:

In this paper, an optical code-division multiple-access (O-CDMA) packet network is considered. Two types of random access protocols are proposed for packet transmission. In protocol 1, all distinct codes are used; in protocol 2, distinct codes as well as shifted versions of these codes are used. O-CDMA network performance is analyzed using one-dimensional (1-D) optical orthogonal codes (OOCs) and two-dimensional (2-D) wavelength/time single-pulse-per-row (W/T SPR) codes. The main advantage of using 2-D codes instead of 1-D codes is a reduction in the errors caused by multiple access interference among different users. A correlation receiver is considered in the analysis. Using an analytical model, we compute and compare the packet-success probability for 1-D and 2-D codes in an O-CDMA network; the analysis shows improved performance with 2-D codes compared to 1-D codes.

Relevance:

30.00%

Publisher:

Abstract:

With continuing advances in CMOS technology, feature sizes of modern silicon chip-sets have shrunk drastically over the past decade. In addition to desktop and laptop processors, a vast majority of these chips are also deployed in mobile communication devices like smartphones and tablets, where multiple radio-frequency integrated circuits (RFICs) must be integrated into one device to cater to a wide variety of applications such as Wi-Fi, Bluetooth, NFC and wireless charging. While a small feature size enables higher integration levels, allowing billions of transistors to co-exist on a single chip, it also makes these silicon ICs more susceptible to variations. Part of these variations can be attributed to the manufacturing process itself, particularly the stringent dimensional tolerances associated with the lithographic steps in modern processes. Additionally, RF and millimeter-wave communication chip-sets are subject to another type of variation caused by dynamic changes in the operating environment. Another bottleneck in the development of high-performance RF/mm-wave silicon ICs is the lack of accurate analog/high-frequency models in nanometer CMOS processes. This can be attributed primarily to the fact that most cutting-edge processes are geared towards digital system implementation, and as such there is little model-to-hardware correlation at RF frequencies.

All these issues have significantly degraded the yield of high-performance mm-wave and RF CMOS systems, which often require multiple trial-and-error silicon validations, thereby incurring additional production costs. This dissertation proposes a low-overhead technique that attempts to counter the detrimental effects of these variations, thereby improving both the performance and the yield of chips after fabrication in a systematic way. The key idea behind this approach is to dynamically sense the performance of the system, identify when a problem has occurred, and then actuate the system back to its desired performance level through an intelligent on-chip optimization algorithm. We term this technique self-healing, drawing inspiration from nature's own way of healing the body against adverse environmental effects. To demonstrate the efficacy of self-healing in CMOS systems, several representative examples are designed, fabricated, and measured against a variety of operating conditions.

We demonstrate a high-power mm-wave segmented power-mixer-array based transmitter architecture that is capable of generating high-speed, non-constant-envelope modulations at higher efficiencies than existing conventional designs. We then incorporate several sensors and actuators into the design and demonstrate closed-loop healing against a wide variety of non-ideal operating conditions. We also demonstrate fully integrated self-healing in the context of another mm-wave power amplifier, where measurements performed across several chips show significant improvements in performance as well as reduced variability in the presence of process variations, load impedance mismatch, and even catastrophic transistor failure. Finally, on the receiver side, a closed-loop self-healing phase synthesis scheme is demonstrated in conjunction with a wide-band voltage-controlled oscillator to generate phase-shifted local oscillator (LO) signals for a phased-array receiver. The system is shown to heal against non-idealities in the LO signal generation and distribution, significantly reducing phase errors across a wide range of frequencies.
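A schematic illustration of the sense-identify-actuate loop described above, reduced to a greedy software search over discrete actuator settings that maximises a sensed figure of merit. The dissertation does not specify its on-chip optimization algorithm, so every name, knob, and the toy response below are hypothetical.

```python
def self_heal(initial_settings, sense, steps=10, lo=0, hi=15):
    """Greedy knob-by-knob search: nudge each actuator code up or down and keep
    the change whenever the sensed figure of merit (e.g. output power or
    efficiency) improves."""
    best = dict(initial_settings)
    best_fom = sense(best)
    for _ in range(steps):
        for knob, value in list(best.items()):
            for cand in (value - 1, value + 1):
                if lo <= cand <= hi:
                    trial = {**best, knob: cand}
                    fom = sense(trial)
                    if fom > best_fom:
                        best, best_fom = trial, fom
    return best, best_fom

# Hypothetical sensed response with an optimum at bias=9, phase=5.
sense = lambda s: -(s["bias"] - 9) ** 2 - (s["phase"] - 5) ** 2
print(self_heal({"bias": 3, "phase": 12}, sense))
```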

Relevance:

30.00%

Publisher:

Abstract:

The simulation of complex chemical systems often requires a multi-level description, in which a region of special interest is treated using a computationally expensive quantum mechanical (QM) model while its environment is described by a faster, simpler molecular mechanical (MM) model. Furthermore, studying dynamic effects in solvated systems or biomolecules requires a variable definition of the two regions, so that atoms or molecules can be dynamically reassigned between the QM and MM descriptions during the course of the simulation. Such reassignments pose a problem for traditional QM/MM schemes by exacerbating the errors that stem from switching the model at the boundary. Here we show that stable, long adaptive simulations can be carried out using density functional theory with the BLYP exchange-correlation functional for the QM model and a flexible TIP3P force field for the MM model, without requiring adjustments to either. Using a primary benchmark system of pure water, we investigate the convergence of the liquid structure with the size of the QM region and demonstrate that, with a sufficiently large QM region (radius 6 Å), it is possible to obtain radial and angular distributions that, in the QM region, match the results of fully quantum mechanical calculations with periodic boundary conditions and, after a smooth transition, also agree with fully MM calculations in the MM region. The key ingredient is the accurate evaluation of forces in the QM subsystem, which we achieve by including an extended buffer region in the QM calculations. We also show that our buffered-force QM/MM scheme is transferable by simulating the solvated Cl(-) ion.
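A highly simplified sketch of the buffered-force idea: run the QM calculation on the QM region plus a surrounding buffer shell, keep only the forces on the inner QM atoms, and take MM forces everywhere else. The callables qm_forces_fn and mm_forces_fn are hypothetical placeholders for real DFT (BLYP) and TIP3P engines, and the smooth adaptive reassignment of molecules between regions is not shown.

```python
import numpy as np

def buffered_force_qmmm(positions, qm_center, r_qm, r_buffer,
                        qm_forces_fn, mm_forces_fn):
    """Return forces for all atoms: MM everywhere, overwritten by buffered-QM
    forces for atoms inside the QM radius."""
    d = np.linalg.norm(positions - qm_center, axis=1)
    qm_idx = np.where(d < r_qm)[0]                  # atoms treated as QM
    buf_idx = np.where(d < r_qm + r_buffer)[0]      # QM atoms plus buffer shell
    forces = mm_forces_fn(positions)                # MM forces for every atom
    qm_forces = qm_forces_fn(positions[buf_idx])    # QM run on the buffered cluster
    inner = np.isin(buf_idx, qm_idx)                # buffered-cluster rows that are inner QM atoms
    forces[qm_idx] = qm_forces[inner]               # keep only the well-converged inner forces
    return forces

# Dummy force engines, for illustration only.
mm = lambda pos: np.zeros_like(pos)
qm = lambda pos: np.ones_like(pos)
pos = np.random.rand(50, 3) * 20.0
print(buffered_force_qmmm(pos, np.array([10.0, 10.0, 10.0]), 6.0, 4.0, qm, mm).sum())
```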

Relevance:

30.00%

Publisher:

Abstract:

In the first part of this paper we show that a new technique exploiting 1D correlation of 2D or even 1D patches between successive frames may be sufficient to compute a satisfactory estimate of the optical flow field. The algorithm is well suited to VLSI implementations. The sparse measurements provided by the technique can be used to compute qualitative properties of the flow for a number of different visual tasks. In particular, the second part of the paper shows how to combine our 1D correlation technique with a scheme for detecting expansion or rotation ([5]) in a simple algorithm that also suggests interesting biological implications. The algorithm provides a rough estimate of time-to-crash. It was tested on real image sequences; we show its performance and compare the results to previous approaches.
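A minimal sketch of the 1D-correlation idea in the first part of the paper: a small patch from one frame is matched against horizontally shifted windows of the next frame along the same row, and the best-matching shift is taken as the local horizontal flow. The window size, search range, and SSD matching score are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def flow_1d(frame0, frame1, row, x0, patch=9, max_shift=8):
    """Horizontal displacement of a 1-D patch, estimated by correlating it
    against shifted windows on the same row of the next frame."""
    ref = frame0[row, x0:x0 + patch].astype(float)
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        x = x0 + s
        if x < 0 or x + patch > frame1.shape[1]:
            continue
        cand = frame1[row, x:x + patch].astype(float)
        score = -np.sum((ref - cand) ** 2)   # SSD as a matching score
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

# Synthetic example: the second frame is the first shifted right by 3 pixels.
f0 = np.random.rand(32, 64)
f1 = np.roll(f0, 3, axis=1)
print(flow_1d(f0, f1, row=10, x0=20))   # expected: 3
```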

Relevance:

30.00%

Publisher:

Abstract:

Practical realisation of quantum information science is a challenge being addressed by researchers using various technologies. One of them is based on quantum dots (QDs), usually referred to as artificial atoms. Capable of emitting single and polarization-entangled photons, they are attractive as sources of quantum bits (qubits) that can be integrated relatively easily into photonic circuits using conventional semiconductor technologies. However, the dominant self-assembled QD systems suffer from asymmetry-related problems that modify the energetic structure. The main issue is the lifting of the degeneracy (the fine-structure splitting, FSS) of an optically allowed neutral exciton state which participates in the polarization-entanglement realisation scheme. The FSS complicates polarization-entanglement detection unless a particular FSS manipulation technique is used to reduce it to vanishing values, or a careful selection of intrinsically good candidates from the vast number of QDs is carried out, which precludes the construction of large arrays of emitters on the same sample. In this work, site-controlled InGaAs QDs grown on (111)B-oriented GaAs substrates prepatterned with tetrahedrons at a 7.5 μm pitch were studied in order to overcome QD asymmetry-related problems. By exploiting their intrinsically high rotational symmetry, pyramidal QDs were shown to act as polarization-entangled photon sources, emitting photons with a fidelity to the expected maximally entangled state as high as 0.721. This is the first site-controlled QD system of entangled photon emitters. Moreover, the density of such emitters was found to be as high as 15% in some areas, a density much higher than in any other QD system. The associated physical phenomena (e.g., carrier dynamics and the QD energetic structure) were also studied by different techniques: photon correlation spectroscopy, polarization-resolved microphotoluminescence and magneto-photoluminescence.
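For context, fidelity figures like the one quoted above are typically estimated in QD entanglement experiments from pair-correlation visibilities measured in three polarisation bases; the expression below is a commonly used estimate and is an assumption here, since the abstract does not state how 0.721 was obtained. The numbers in the example are illustrative, not measured values.

```python
def fidelity_to_bell_state(c_rect, c_diag, c_circ):
    """Estimate of the fidelity to the Bell state (|HH> + |VV>)/sqrt(2) from
    correlation visibilities in the rectilinear, diagonal and circular bases:
    f = (1 + C_rect + C_diag - C_circ) / 4."""
    return (1.0 + c_rect + c_diag - c_circ) / 4.0

# Illustrative visibilities giving a fidelity near the quoted value.
print(fidelity_to_bell_state(0.90, 0.75, -0.23))   # 0.72
```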

Relevance:

30.00%

Publisher:

Abstract:

We propose a novel data-delivery method for delay-sensitive traffic that significantly reduces energy consumption in wireless sensor networks without reducing the number of packets that meet their end-to-end real-time deadlines. The proposed method, referred to as SensiQoS, leverages the spatial and temporal correlation between the data generated by events in a sensor network and realizes energy savings through application-specific in-network aggregation of the data. SensiQoS maximizes energy savings by adaptively waiting for packets from upstream nodes to perform in-network processing without missing the real-time deadline for the data packets. SensiQoS is a distributed packet-scheduling scheme in which nodes make localized decisions on when to schedule a packet for transmission to meet its end-to-end real-time deadline and on the neighbor to which they should forward the packet to save energy. We also present a localized algorithm by which nodes adapt to network traffic to maximize energy savings in the network. Simulation results show that SensiQoS improves energy savings in sensor networks where events are sensed by multiple nodes and spatial and/or temporal correlation exists among the data packets. The energy savings due to SensiQoS increase with the density of the sensor nodes and the size of the sensed events.
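A toy illustration of the deadline-aware holding decision described above: a node may delay a packet for aggregation only as long as the remaining end-to-end slack allows. This is not the actual SensiQoS scheduling rule or its traffic adaptation; the function name and all parameter values are hypothetical.

```python
def hold_time(deadline, elapsed, hops_remaining, per_hop_latency, margin=0.005):
    """Maximum time (seconds) a node can hold a packet for in-network
    aggregation without violating its end-to-end deadline, after budgeting
    the expected forwarding latency of the remaining hops plus a safety margin."""
    slack = deadline - elapsed - hops_remaining * per_hop_latency - margin
    return max(0.0, slack)

# Packet with a 200 ms deadline, 60 ms already spent, 4 hops to go at ~20 ms each.
print(hold_time(0.200, 0.060, 4, 0.020))   # 0.055 s available for aggregation
```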