974 results for analogy calculation


Relevance:

20.00%

Publisher:

Abstract:

The paper describes two new transport layer (TCP) options and an expanded transport layer queuing strategy that facilitate three functions fundamental to dispatching-based clustered services. One transport layer option has been developed to facilitate the use of client wait time data within the service request processing of the cluster. A second transport layer option has been developed to facilitate the redirection of service requests by the cluster dispatcher to the cluster processing member. An expanded transport layer service request queuing strategy facilitates trust-based filtering of incoming service requests, so that a graceful degradation of service delivery may be achieved during periods of overload, most dramatically evidenced by distributed denial of service attacks against the clustered service. We describe how these new options and queues have been implemented and successfully tested within the transport layer of the Linux kernel.
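Of the three functions, the trust-based queuing strategy is the easiest to illustrate outside the kernel. The following is a minimal user-space sketch, assuming each incoming request carries a numeric trust score; the class, scoring and capacity handling are illustrative and not the paper's Linux implementation:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class PendingRequest:
    trust: float                      # lower trust = first to be shed under load
    seq: int                          # arrival order, used as a tie-breaker
    payload: object = field(compare=False)

class TrustAwareQueue:
    """Illustrative sketch (not the paper's kernel code) of a trust-based
    admission queue: when overloaded, the least-trusted pending request is
    shed so that service degrades gracefully rather than collapsing."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap: list = []         # min-heap keyed on (trust, arrival order)
        self._seq = 0

    def offer(self, payload, trust: float) -> None:
        heapq.heappush(self._heap, PendingRequest(trust, self._seq, payload))
        self._seq += 1
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)   # shed the least-trusted pending request

    def take(self):
        """Serve surviving requests in arrival (FIFO) order."""
        if not self._heap:
            return None
        oldest = min(self._heap, key=lambda r: r.seq)
        self._heap.remove(oldest)
        heapq.heapify(self._heap)
        return oldest.payload
```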

Relevance:

20.00%

Publisher:

Abstract:

A novel approach, based on statistical mechanics, to analyzing the typical performance of optimum code-division multiple-access (CDMA) multiuser detectors is reviewed. A 'black-box' view of the basic CDMA channel is introduced, based on which the CDMA multiuser detection problem is regarded as a 'learning-from-examples' problem of the 'binary linear perceptron' in the neural network literature. Adopting a Bayesian framework, analysis of the performance of the optimum CDMA multiuser detectors reduces to evaluating the average of the cumulant generating function of a relevant posterior distribution. This average is evaluated, by formal analogy with a similar calculation in the spin glass theory of statistical mechanics, using the replica method developed in that theory.
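For readers unfamiliar with the technique, the replica method rests on a standard identity that recovers the quenched average of the cumulant generating function (the logarithm of the partition function Z) from its integer moments; this identity is general spin-glass machinery rather than something specific to this paper:

```latex
% Replica identity: the average of ln Z over the channel randomness is
% obtained from the moments E[Z^n] analytically continued to n -> 0.
\[
  \mathbb{E}\left[\ln Z\right]
  \;=\; \lim_{n \to 0} \frac{\ln \mathbb{E}\left[Z^{\,n}\right]}{n}
  \;=\; \lim_{n \to 0} \frac{\mathbb{E}\left[Z^{\,n}\right] - 1}{n}.
\]
```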

Relevance:

20.00%

Publisher:

Abstract:

Purpose. To evaluate the repeatability and reproducibility of subfoveal choroidal thickness (CT) calculations performed manually using optical coherence tomography (OCT). Methods. The CT was imaged in vivo at each of two visits in 11 healthy volunteers (mean age, 35.72 ± 13.19 years) using spectral domain OCT. CT was manually measured after applying ImageJ processing filters on 15 radial subfoveal scans. Radial scans were spaced 12° apart and each contained 2500 A-scans. The coefficient of variability, coefficient of repeatability (CoR), coefficient of reproducibility, and intraclass correlation coefficient were used to determine the reproducibility and repeatability of the calculation. Axial length (AL) and mean spherical equivalent refractive error were measured with the IOLMaster and an open view autorefractor to study their potential relationship with CT. Results. The within-visit and between-visit coefficient of variability, CoR, coefficient of reproducibility, and intraclass correlation coefficient were 0.80, 2.97%, 2.44%, and 99%, respectively. The subfoveal CT correlated significantly with AL (R = -0.60, p = 0.05). Conclusions. The subfoveal CT could be measured manually in vivo using OCT, and the readings obtained from the healthy subjects evaluated were repeatable and reproducible. It is proposed that OCT could be a useful instrument to perform in vivo assessment and monitoring of CT changes in retinal disease. The preliminary results suggest a negative correlation between subfoveal CT and AL, such that CT decreases with increasing AL but not with refractive error.
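As an illustration of the agreement statistics reported here, the sketch below computes a coefficient of repeatability and a coefficient of variability for paired readings in the usual Bland-Altman style; the abstract does not give the study's exact formulas or data, so the function and the example values are purely illustrative:

```python
import numpy as np

def repeatability_metrics(visit1, visit2):
    """Rough sketch of agreement metrics for paired CT readings
    (one value per subject per visit), following one common
    Bland-Altman convention rather than the study's own formulas."""
    v1 = np.asarray(visit1, dtype=float)
    v2 = np.asarray(visit2, dtype=float)
    diff = v1 - v2
    means = (v1 + v2) / 2.0
    cor = 1.96 * np.std(diff, ddof=1)                    # coefficient of repeatability
    cov = 100.0 * np.std(diff, ddof=1) / np.mean(means)  # coefficient of variability, %
    return cor, cov

# Example with invented subfoveal CT readings (micrometres):
cor, cov = repeatability_metrics([310, 295, 402, 350], [305, 300, 398, 356])
print(f"CoR = {cor:.1f} um, CoV = {cov:.2f}%")
```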

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: To assess the effect of using different risk calculation tools on how general practitioners and practice nurses evaluate the risk of coronary heart disease with clinical data routinely available in patients' records. DESIGN: Subjective estimates of the risk of coronary heart disease and the results of four different methods of risk calculation were compared with each other and with a reference standard calculated with the Framingham equation; calculations were based on a sample of patients' records, randomly selected from groups at risk of coronary heart disease. SETTING: General practices in central England. PARTICIPANTS: 18 general practitioners and 18 practice nurses. MAIN OUTCOME MEASURES: Agreement of results of risk estimation and risk calculation with the reference calculation; agreement of general practitioners with practice nurses; sensitivity and specificity of the different methods of risk calculation to detect patients at high or low risk of coronary heart disease. RESULTS: Only a minority of patients' records contained all of the risk factors required for the formal calculation of the risk of coronary heart disease (concentrations of high density lipoprotein (HDL) cholesterol were present in only 21%). Agreement of risk calculations with the reference standard was moderate (kappa = 0.33 to 0.65 for practice nurses and 0.33 to 0.65 for general practitioners, depending on calculation tool), with a trend towards underestimation of risk. Moderate agreement was seen between the risks calculated by general practitioners and practice nurses for the same patients (kappa = 0.47 to 0.58). The British charts gave the most sensitive results for risk of coronary heart disease (practice nurses 79%, general practitioners 80%), and they also gave the most specific results for practice nurses (100%), whereas the Sheffield table was the most specific method for general practitioners (89%). CONCLUSIONS: Routine calculation of the risk of coronary heart disease in primary care is hampered by poor availability of data on risk factors. General practitioners and practice nurses are able to evaluate the risk of coronary heart disease with only moderate accuracy. Data about risk factors need to be collected systematically, to allow the use of the most appropriate calculation tools.
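Agreement in this study is summarized with Cohen's kappa between each calculation tool and the Framingham reference. A minimal sketch of that statistic for binary high/low risk classifications follows; the example labels are invented rather than taken from the study's data:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two sets of
    categorical classifications, e.g. 'high' vs 'low' CHD risk from a
    calculation tool versus the Framingham reference."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    labels = np.union1d(a, b)
    p_observed = np.mean(a == b)
    p_expected = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
    return (p_observed - p_expected) / (1.0 - p_expected)

# Invented example classifications for six patients:
tool       = ["high", "low", "low",  "high", "low", "low"]
framingham = ["high", "low", "high", "high", "low", "low"]
print(f"kappa = {cohens_kappa(tool, framingham):.2f}")
```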

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: To evaluate theoretically three previously published formulae that use intra-operative aphakic refractive error to calculate intraocular lens (IOL) power, without requiring pre-operative biometry. The formulae are as follows: IOL power (D) = aphakic refraction × 2.01 [Ianchulev et al., J. Cataract Refract. Surg. 31 (2005) 1530]; IOL power (D) = aphakic refraction × 1.75 [Mackool et al., J. Cataract Refract. Surg. 32 (2006) 435]; IOL power (D) = 0.07x² + 1.27x + 1.22, where x = aphakic refraction [Leccisotti, Graefes Arch. Clin. Exp. Ophthalmol. 246 (2008) 729]. METHODS: Gaussian first-order calculations were used to determine the relationship between intra-operative aphakic refractive error and the IOL power required for emmetropia in a series of schematic eyes incorporating varying corneal powers, pre-operative crystalline lens powers, axial lengths and post-operative IOL positions. The three previously published formulae, based on empirical data, were then compared in terms of the IOL power errors that arose in the same schematic eye variants. RESULTS: An inverse relationship exists between theoretical ratio and axial length. Corneal power and initial lens power have little effect on calculated ratios, whilst final IOL position has a significant impact. None of the three empirically derived formulae is universally accurate, but each is able to predict IOL power precisely in certain theoretical scenarios. The formulae derived by Ianchulev et al. and Leccisotti are most accurate for posterior IOL positions, whereas the Mackool et al. formula is most reliable when the IOL is located more anteriorly. CONCLUSION: Final IOL position was found to be the chief determinant of IOL power errors. Although the A-constants of IOLs are known and may be accurate, a variety of factors can still influence the final IOL position and lead to undesirable refractive errors. Optimum results using these novel formulae would be achieved in myopic eyes.
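The three published formulae quoted above translate directly into code. The sketch below simply evaluates them side by side; the aphakic refraction values in the example loop are illustrative and not drawn from the paper:

```python
def iol_power_ianchulev(aphakic_refraction):
    return 2.01 * aphakic_refraction            # Ianchulev et al. (2005)

def iol_power_mackool(aphakic_refraction):
    return 1.75 * aphakic_refraction            # Mackool et al. (2006)

def iol_power_leccisotti(aphakic_refraction):
    x = aphakic_refraction
    return 0.07 * x**2 + 1.27 * x + 1.22        # Leccisotti (2008)

# Compare the three formulae over an illustrative range of
# intra-operative aphakic refractions (dioptres):
for x in (8.0, 10.0, 12.0, 14.0):
    print(x,
          round(iol_power_ianchulev(x), 2),
          round(iol_power_mackool(x), 2),
          round(iol_power_leccisotti(x), 2))
```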

Relevance:

20.00%

Publisher:

Abstract:

The English writing system is notoriously irregular in its orthography at the phonemic level. It was therefore proposed that focusing beginner-spellers’ attention on sound-letter relations at the sub-syllabic level might improve spelling performance. This hypothesis was tested in Experiments 1 and 2 using a ‘clue word’ paradigm to investigate the effect of an analogy teaching intervention (versus no intervention) on the spelling performance of an experimental group and controls. Overall, the results showed the intervention to be effective in improving spelling, and the effect to be enduring. Experiment 3 demonstrated greater application of analogy in spelling when the clue words, which participants used to spell test words by analogy, remained in view during testing. A series of regression analyses, with spelling entered as the criterion variable and age, analogy and phonological plausibility (PP) as predictors, showed both analogy and PP to be highly predictive of spelling. Experiment 4 showed that children could use analogy to improve their spelling even without intervention, by comparing their performance in spelling words presented in analogous categories or in random lists. Consideration of children’s patterns of analogy use at different points of development showed the three age groups to use similar patterns of analogy, but contrasting analogy patterns for spelling different words. This challenges stage theories of analogy use in literacy. Overall, the most salient units used in analogy were the rime and, to a slightly lesser degree, the onset-vowel and vowel. Finally, Experiment 5 showed analogy and phonology to be fairly equally influential in spelling, but analogy to be more influential than phonology in reading. Five separate experiments therefore found analogy to be highly influential in spelling. Experiment 5 also considered the role of memory and attention in literacy attainment. The important implication of this research is that analogy, rather than a purely phonics-based strategy, is instrumental in correct spelling in English.

Relevance:

20.00%

Publisher:

Abstract:

We develop a theoretical method to calculate jitter statistics of interacting solitons. Applying this approach, we have derived the non-Gaussian probability density function and calculated the bit-error rate as a function of noise level, initial separation and phase difference between solitons.
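Once a timing-jitter probability density function is available, the bit-error rate follows from the probability mass falling outside the detector's acceptance window. The sketch below shows only that generic final step, with a Gaussian stand-in for the density; the non-Gaussian PDF actually derived in the paper is not reproduced here:

```python
import numpy as np
from scipy.integrate import quad

def bit_error_rate(jitter_pdf, window_half_width):
    """Probability that the pulse arrival time falls outside the
    acceptance window, given a (possibly non-Gaussian) jitter PDF."""
    inside, _ = quad(jitter_pdf, -window_half_width, window_half_width)
    return 1.0 - inside

# Illustration with a Gaussian stand-in for the jitter PDF:
sigma = 0.2
gaussian_pdf = lambda t: np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
print(bit_error_rate(gaussian_pdf, 0.5))
```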

Relevance:

20.00%

Publisher:

Abstract:

A set of 38 epitopes and 183 non-epitopes, which bind to alleles of the HLA-A3 supertype, was subjected to a combination of comparative molecular similarity indices analysis (CoMSIA) and soft independent modeling of class analogy (SIMCA). During the process of T cell recognition, T cell receptors (TCR) interact with the central section of the bound nonamer peptide; thus only positions 4-8 were considered in the study. The derived model distinguished 82% of the epitopes and 73% of the non-epitopes after cross-validation in five groups. The overall preference from the model is for polar amino acids with high electron density and the ability to form hydrogen bonds. These so-called “aggressive” amino acids are flanked by small-sized residues, which enable such residues to protrude from the binding cleft and take an active role in TCR-mediated T cell recognition. Combinations of “aggressive” and “passive” amino acids in the middle part of epitopes constitute a putative TCR binding motif.
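The evaluation protocol, cross-validation in five groups with accuracy reported separately for epitopes and non-epitopes, can be sketched generically as below; a logistic regression on random descriptors stands in for the CoMSIA/SIMCA model, which requires 3D molecular field data not available here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

def per_class_cv_accuracy(X, y, n_groups=5):
    """Cross-validation in n_groups groups, with accuracy reported
    separately for non-epitopes (y = 0) and epitopes (y = 1)."""
    hits = {0: 0, 1: 0}
    totals = {0: 0, 1: 0}
    folds = StratifiedKFold(n_splits=n_groups, shuffle=True, random_state=0)
    for train, test in folds.split(X, y):
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        pred = model.predict(X[test])
        for cls in (0, 1):
            mask = y[test] == cls
            hits[cls] += int(np.sum(pred[mask] == cls))
            totals[cls] += int(np.sum(mask))
    return {cls: hits[cls] / totals[cls] for cls in (0, 1)}

# Random descriptors standing in for the molecular similarity fields;
# class sizes match the data set described in the abstract (38 vs. 183).
rng = np.random.default_rng(0)
X = rng.normal(size=(221, 10))
y = np.array([1] * 38 + [0] * 183)
print(per_class_cv_accuracy(X, y))
```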

Relevance:

20.00%

Publisher:

Abstract:

When a regenerative process is represented as semi-regenerative, we derive formulae that enable basic characteristics associated with the first occurrence time to be calculated from the corresponding characteristics of the semi-regenerative process. Recursive equations, integral equations, and Monte Carlo algorithms are proposed for solving the problem in practice.
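As a rough illustration of the Monte Carlo route mentioned above, the sketch below estimates the mean first occurrence time under the simplifying assumptions that cycles are i.i.d. and that the event is detected at the end of the cycle in which it occurs; this is a generic toy model, not the paper's algorithm:

```python
import random

def mc_first_occurrence_time(cycle_length_sampler, event_prob, n_runs=50_000):
    """Estimate the mean time until the event of interest first occurs,
    assuming the process regenerates after i.i.d. cycles and the event
    happens within any given cycle with probability event_prob (the
    event time is approximated by the end of that cycle)."""
    total = 0.0
    for _ in range(n_runs):
        t = 0.0
        while True:
            t += cycle_length_sampler()
            if random.random() < event_prob:
                break
        total += t
    return total / n_runs

# Illustration: exponential cycle lengths with mean 2, event probability 0.1,
# so the estimate should be close to 2 / 0.1 = 20.
print(mc_first_occurrence_time(lambda: random.expovariate(0.5), 0.1))
```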

Relevance:

20.00%

Publisher:

Abstract:

The purpose is to develop expert systems in which reasoning by analogy is used. Knowledge “closeness” problems are known to emerge frequently in such systems when knowledge is represented by different production rules. To determine a degree of closeness for production rules, a distance between predicates is introduced. Different types of distances between two predicate value distribution functions are considered for the case when the predicates are “true”. Asymptotic properties and interrelations of the distances are studied. Predicate value distribution functions are estimated by empirical distribution functions, and a procedure is proposed for this purpose. The adequacy of the obtained distribution functions is tested on the basis of the statistical χ²-criterion, and a testing mechanism is discussed. For parametric families of distribution functions, a theorem is proved by which the predicate closeness determination is replaced by the simpler procedure of measuring Euclidean distances between distribution function parameters. The proposed distance measurement apparatus may be applied in expert systems where reasoning is performed by analogy.
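A minimal sketch of the two ingredients described above follows: a distance between the empirical distribution functions of two predicates' observed values (here the sup-norm variant, one of several distance types the paper considers), and the parametric shortcut in which closeness is measured as a Euclidean distance between fitted distribution parameters. All numbers in the example are invented:

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical distribution function of `sample` evaluated on `grid`."""
    s = np.sort(np.asarray(sample, dtype=float))
    return np.searchsorted(s, grid, side="right") / len(s)

def kolmogorov_distance(sample_a, sample_b):
    """Sup-norm distance between the empirical distribution functions of
    the observed values of two predicates."""
    grid = np.union1d(sample_a, sample_b)
    return float(np.max(np.abs(ecdf(sample_a, grid) - ecdf(sample_b, grid))))

def parametric_shortcut(params_a, params_b):
    """For parametric families, closeness is measured as the Euclidean
    distance between fitted distribution parameters (e.g. mean and
    standard deviation), in the spirit of the theorem mentioned above."""
    return float(np.linalg.norm(np.asarray(params_a, dtype=float)
                                - np.asarray(params_b, dtype=float)))

# Invented observations and fitted (mean, std) parameters for two predicates:
print(kolmogorov_distance([1.2, 0.7, 1.9, 1.1], [1.0, 2.2, 1.8, 2.5]))
print(parametric_shortcut((1.22, 0.45), (1.87, 0.62)))
```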

Relevance:

20.00%

Publisher:

Abstract:

The development of methods and tools for modeling human (common sense) reasoning by analogy in intelligent decision support systems is considered. Special attention is paid to modeling reasoning by structural analogy that takes context into account. The possibility of estimating the obtained analogies with allowance for context is also studied. This work was supported by RFBR.