970 results for Offer calculation


Relevance: 20.00%

Abstract:

Purpose. To evaluate the repeatability and reproducibility of subfoveal choroidal thickness (CT) calculations performed manually using optical coherence tomography (OCT). Methods. The CT was imaged in vivo at each of two visits in 11 healthy volunteers (mean age, 35.72 ± 13.19 years) using spectral domain OCT. CT was manually measured after applying ImageJ processing filters on 15 radial subfoveal scans. Radial scans were spaced 12° apart and each contained 2500 A-scans. The coefficient of variability, coefficient of repeatability (CoR), coefficient of reproducibility, and intraclass correlation coefficient determined the reproducibility and repeatability of the calculation. Axial length (AL) and mean spherical equivalent refractive error were measured with the IOLMaster and an open-view autorefractor to study their potential relationship with CT. Results. The within-visit and between-visit coefficient of variability, CoR, coefficient of reproducibility, and intraclass correlation coefficient were 0.80, 2.97%, 2.44%, and 99%, respectively. The subfoveal CT correlated significantly with AL (R = -0.60, p = 0.05). Conclusions. The subfoveal CT could be measured manually in vivo using OCT, and the readings obtained from the healthy subjects evaluated were repeatable and reproducible. It is proposed that OCT could be a useful instrument for in vivo assessment and monitoring of CT changes in retinal disease. The preliminary results suggest a negative correlation between subfoveal CT and AL, such that CT decreases with increasing AL but does not vary with refractive error.
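The abstract quotes a coefficient of repeatability and an intraclass correlation coefficient; the sketch below shows one common way these are computed for two repeated measurements per subject (a Bland-Altman CoR and a one-way random-effects ICC). The CT readings are invented for illustration, and the paper's exact statistical definitions may differ.

```python
import numpy as np

def repeatability_metrics(visit1, visit2):
    """Bland-Altman coefficient of repeatability and a one-way random-effects ICC
    for two repeated measurements per subject (illustrative definitions only)."""
    visit1 = np.asarray(visit1, dtype=float)
    visit2 = np.asarray(visit2, dtype=float)
    diffs = visit1 - visit2
    cor = 1.96 * diffs.std(ddof=1)  # coefficient of repeatability

    data = np.column_stack([visit1, visit2])
    n, k = data.shape
    grand_mean = data.mean()
    ms_between = k * ((data.mean(axis=1) - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    return cor, icc

# Invented subfoveal CT readings (um) from two visits, for illustration only
ct_visit1 = [310, 285, 342, 298, 327, 301, 289, 354, 316, 299, 332]
ct_visit2 = [305, 290, 338, 300, 331, 297, 293, 350, 320, 295, 336]
print(repeatability_metrics(ct_visit1, ct_visit2))
```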

Relevance: 20.00%

Abstract:

OBJECTIVE: To assess the effect of using different risk calculation tools on how general practitioners and practice nurses evaluate the risk of coronary heart disease with clinical data routinely available in patients' records. DESIGN: Subjective estimates of the risk of coronary heart disease and the results of four different methods of risk calculation were compared with each other and with a reference standard calculated with the Framingham equation; calculations were based on a sample of patients' records, randomly selected from groups at risk of coronary heart disease. SETTING: General practices in central England. PARTICIPANTS: 18 general practitioners and 18 practice nurses. MAIN OUTCOME MEASURES: Agreement of the results of risk estimation and risk calculation with the reference calculation; agreement of general practitioners with practice nurses; sensitivity and specificity of the different methods of risk calculation for detecting patients at high or low risk of coronary heart disease. RESULTS: Only a minority of patients' records contained all of the risk factors required for formal calculation of the risk of coronary heart disease (concentrations of high density lipoprotein (HDL) cholesterol were present in only 21%). Agreement of risk calculations with the reference standard was moderate (kappa = 0.33 to 0.65 for practice nurses and 0.33 to 0.65 for general practitioners, depending on the calculation tool), with a trend towards underestimation of risk. Moderate agreement was seen between the risks calculated by general practitioners and practice nurses for the same patients (kappa = 0.47 to 0.58). The British charts gave the most sensitive results for risk of coronary heart disease (practice nurses 79%, general practitioners 80%); they also gave the most specific results for practice nurses (100%), whereas the Sheffield table was the most specific method for general practitioners (89%). CONCLUSIONS: Routine calculation of the risk of coronary heart disease in primary care is hampered by poor availability of data on risk factors. General practitioners and practice nurses are able to evaluate the risk of coronary heart disease with only moderate accuracy. Data on risk factors need to be collected systematically to allow use of the most appropriate calculation tools.
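As a worked illustration of the outcome measures used here (kappa agreement, and sensitivity/specificity of a "high risk" classification against the Framingham-based reference), the following is a minimal sketch; the 0/1 risk categories are hypothetical and are not drawn from the study.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two sets of ratings."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    observed = np.mean(a == b)
    expected = sum(np.mean(a == c) * np.mean(b == c) for c in np.union1d(a, b))
    return (observed - expected) / (1 - expected)

def sensitivity_specificity(predicted_high, reference_high):
    """Sensitivity/specificity of a 'high CHD risk' call against a reference standard."""
    pred = np.asarray(predicted_high, dtype=bool)
    ref = np.asarray(reference_high, dtype=bool)
    sensitivity = (pred & ref).sum() / ref.sum()
    specificity = (~pred & ~ref).sum() / (~ref).sum()
    return sensitivity, specificity

# Hypothetical 'high risk' classifications from a risk chart vs. the Framingham reference
chart_high = [1, 0, 1, 1, 0, 0, 1, 0]
framingham_high = [1, 0, 1, 0, 0, 0, 1, 1]
print(cohens_kappa(chart_high, framingham_high))
print(sensitivity_specificity(chart_high, framingham_high))
```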

Relevance: 20.00%

Abstract:

PURPOSE: To evaluate theoretically three previously published formulae that use the intra-operative aphakic refractive error to calculate intraocular lens (IOL) power without requiring pre-operative biometry. The formulae are as follows: IOL power (D) = aphakic refraction × 2.01 [Ianchulev et al., J. Cataract Refract. Surg. 31 (2005) 1530]; IOL power (D) = aphakic refraction × 1.75 [Mackool et al., J. Cataract Refract. Surg. 32 (2006) 435]; IOL power (D) = 0.07x² + 1.27x + 1.22, where x = aphakic refraction [Leccisotti, Graefes Arch. Clin. Exp. Ophthalmol. 246 (2008) 729]. METHODS: Gaussian first-order calculations were used to determine the relationship between intra-operative aphakic refractive error and the IOL power required for emmetropia in a series of schematic eyes incorporating varying corneal powers, pre-operative crystalline lens powers, axial lengths and post-operative IOL positions. The three previously published formulae, based on empirical data, were then compared in terms of the IOL power errors that arose in the same schematic eye variants. RESULTS: An inverse relationship exists between the theoretical ratio and axial length. Corneal power and initial lens power have little effect on the calculated ratios, whilst final IOL position has a significant impact. None of the three empirically derived formulae is universally accurate, but each is able to predict IOL power precisely in certain theoretical scenarios. The formulae derived by Ianchulev et al. and Leccisotti are most accurate for posterior IOL positions, whereas the Mackool et al. formula is most reliable when the IOL is located more anteriorly. CONCLUSION: Final IOL position was found to be the chief determinant of IOL power errors. Although the A-constants of IOLs are known and may be accurate, a variety of factors can still influence the final IOL position and lead to undesirable refractive errors. Optimum results using these novel formulae would be achieved in myopic eyes.
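The three published regressions quoted in the abstract translate directly into code; a brief sketch comparing their outputs for a given intra-operative aphakic refraction (the +10 D input is only an example):

```python
def iol_power_ianchulev(aphakic_refraction):
    """Ianchulev et al. (2005): IOL power (D) = aphakic refraction x 2.01."""
    return 2.01 * aphakic_refraction

def iol_power_mackool(aphakic_refraction):
    """Mackool et al. (2006): IOL power (D) = aphakic refraction x 1.75."""
    return 1.75 * aphakic_refraction

def iol_power_leccisotti(aphakic_refraction):
    """Leccisotti (2008): IOL power (D) = 0.07x^2 + 1.27x + 1.22, x = aphakic refraction."""
    x = aphakic_refraction
    return 0.07 * x ** 2 + 1.27 * x + 1.22

# Compare the three estimates for an example intra-operative aphakic refraction of +10 D
for formula in (iol_power_ianchulev, iol_power_mackool, iol_power_leccisotti):
    print(formula.__name__, round(formula(10.0), 2))
```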

Relevance: 20.00%

Abstract:

The adjuvanticity of liposomes can be directed through formulation to develop a safe yet potent vaccine candidate. With the addition of the cationic lipid dimethyldioctadecylammonium bromide (DDA) to stable neutral distearoylphosphatidylcholine (DSPC):cholesterol (Chol) liposomes, vesicle size is reduced while protein entrapment increases. The addition of the immunomodulator trehalose 6,6-dibehenate (TDB) to either the neutral or cationic liposomes did not affect the physicochemical characteristics of these liposome vesicles. However, the protective immune response, as indicated by the amount of IFN-γ production, increases considerably when TDB is present. High levels of IFN-γ were observed for cationic liposomes; however, there was a marked reduction in IFN-γ release over time. Conversely, for neutral liposomes containing TDB, although the initial amount of IFN-γ was slightly lower than for the cationic equivalent, the overall protective immune responses of these neutral liposomes were effectively maintained over time, generating good levels of protection. To that end, although the addition of DSPC and Chol reduced the protective immunity of DDA:TDB liposomes, relatively high protection was observed for the neutral counterpart, DSPC:Chol:TDB, which may offer an effective neutral alternative to the DDA:TDB cationic system, especially for the delivery of either zwitterionic (neutral) or cationic molecules or antigens.

Relevance: 20.00%

Abstract:

A method is proposed to offer privacy in computer communications, using symmetric product block ciphers. The security protocol involves a cipher negotiation stage, in which two communicating parties privately select a cipher from a public cipher space. The cipher negotiation process includes an on-line cipher evaluation stage, in which the cryptographic strength of the proposed cipher is estimated. The cryptographic strength of the ciphers is measured by confusion and diffusion, and a method is proposed to describe these two properties quantitatively. For the calculation of confusion and diffusion a number of parameters are defined, such as the confusion and diffusion matrices and the marginal diffusion. These parameters involve computationally intensive calculations that are performed off-line, before any communication takes place. Once calculated, they are used to obtain estimation equations, which allow fast, on-line evaluation of the confusion and diffusion of the negotiated cipher. A technique proposed in this thesis describes how to calculate the parameters and how to use the results for fast estimation of confusion and diffusion for any cipher instance within the defined cipher space.
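The thesis's specific confusion and diffusion matrices are not defined in this summary; as a loosely related illustration, the sketch below estimates diffusion in the avalanche sense (average fraction of output bits flipped per single-bit input change) for any block-cipher-like function. The toy_cipher is a made-up mixing function used only to exercise the estimator, not a cipher from the thesis.

```python
import secrets

def diffusion_estimate(cipher, key, block_bits=64, trials=200):
    """Avalanche-style diffusion estimate: the average fraction of ciphertext bits
    that change when a single, randomly chosen plaintext bit is flipped.
    `cipher(plaintext_int, key) -> ciphertext_int` is any block-cipher-like function."""
    total = 0.0
    for _ in range(trials):
        p = secrets.randbits(block_bits)
        flipped_bit = 1 << secrets.randbelow(block_bits)
        c1, c2 = cipher(p, key), cipher(p ^ flipped_bit, key)
        total += bin(c1 ^ c2).count("1") / block_bits
    return total / trials

def toy_cipher(p, key, rounds=8):
    """A made-up 64-bit mixing function, used only to exercise the estimator."""
    mask = (1 << 64) - 1
    x = p
    for r in range(rounds):
        x = (x ^ key ^ r) & mask
        x = ((x << 13) | (x >> 51)) & mask       # 64-bit rotate left by 13
        x = (x * 0x9E3779B97F4A7C15 + r) & mask  # multiplicative mixing
    return x

print(diffusion_estimate(toy_cipher, key=0x0123456789ABCDEF))  # ~0.5 indicates strong diffusion
```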

Relevance: 20.00%

Abstract:

The aim of this study was to determine whether an ophthalmophakometric technique could offer a feasible means of investigating ocular component contributions to residual astigmatism in human eyes. Current opinion was gathered on the prevalence, magnitude and source of residual astigmatism. It emerged that a comprehensive evaluation of the astigmatic contributions of the eye's internal ocular surfaces and their respective axial separations (effectivity) had not been carried out to date. An ophthalmophakometric technique was developed to measure astigmatism arising from the internal ocular components. Procedures included the measurement of refractive error (infra-red autorefractometry), anterior corneal surface power (computerised video keratography), axial distances (A-scan ultrasonography) and the powers of the posterior corneal surface in addition to both surfaces of the crystalline lens (multi-meridional still flash ophthalmophakometry). Computing schemes were developed to yield the required biometric data. These included (1) calculation of crystalline lens surface powers in the absence of Purkinje images arising from its anterior surface, (2) application of meridional analysis to derive spherocylindrical surface powers from notional powers calculated along four pre-selected meridians, (3) application of astigmatic decomposition and vergence analysis to calculate the contributions to residual astigmatism of ocular components with obliquely related cylinder axes, and (4) calculation of the effect of random experimental errors on the calculated ocular component data. A complete set of biometric measurements was taken from both eyes of 66 undergraduate students. Effectivity due to corneal thickness made the smallest cylinder power contribution (up to 0.25 DC) to residual astigmatism, followed by contributions of the anterior chamber depth (up to 0.50 DC) and crystalline lens thickness (up to 1.00 DC). In each case astigmatic contributions were predominantly direct. More astigmatism arose from the posterior corneal surface (up to 1.00 DC) and both crystalline lens surfaces (up to 2.50 DC). The astigmatic contributions of the posterior corneal and lens surfaces were found to be predominantly inverse, whilst direct astigmatism arose from the anterior lens surface. Very similar results were found for right versus left eyes and for males versus females. Repeatability was assessed on 20 individuals. The ophthalmophakometric method was found to be prone to considerable accumulated experimental error. However, these errors are random in nature, so group-averaged data were found to be reasonably repeatable. A further confirmatory study carried out on 10 individuals demonstrated that biometric measurements made with and without cycloplegia did not differ significantly.
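The study's actual computing schemes are not spelled out in this abstract; a standard way to combine obliquely crossed astigmatic contributions, in the spirit of the astigmatic-decomposition step mentioned, is the power-vector representation sketched below. The example surface powers are arbitrary and not the study's data.

```python
import math

def power_vector(sphere, cyl, axis_deg):
    """Decompose a spherocylinder S/C x axis into the power vector (M, J0, J45),
    so that obliquely crossed astigmatic contributions can be summed component-wise."""
    theta = math.radians(axis_deg)
    M = sphere + cyl / 2.0
    J0 = -(cyl / 2.0) * math.cos(2.0 * theta)
    J45 = -(cyl / 2.0) * math.sin(2.0 * theta)
    return M, J0, J45

def to_spherocylinder(M, J0, J45):
    """Convert a power vector back to sphere/cylinder/axis (minus-cylinder form)."""
    cyl = -2.0 * math.hypot(J0, J45)
    sphere = M - cyl / 2.0
    axis = (math.degrees(math.atan2(J45, J0)) / 2.0) % 180.0
    return sphere, cyl, axis

# Combine two arbitrary, obliquely crossed astigmatic contributions
a = power_vector(0.00, -1.00, 20)
b = power_vector(0.00, -0.75, 100)
combined = tuple(x + y for x, y in zip(a, b))
print(to_spherocylinder(*combined))
```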

Relevance: 20.00%

Abstract:

We develop a theoretical method to calculate jitter statistics of interacting solitons. Applying this approach, we have derived the non-Gaussian probability density function and calculated the bit-error rate as a function of noise level, initial separation and phase difference between solitons.

Relevance: 20.00%

Abstract:

For a regenerative process that is represented as semi-regenerative, we derive formulae that allow basic characteristics associated with the first occurrence time to be calculated from the corresponding characteristics of the semi-regenerative process. Recursive equations, integral equations, and Monte-Carlo algorithms are proposed for solving the problem in practice.
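No specifics of the recursive or integral equations are given in the abstract; the sketch below shows only the generic Monte-Carlo route to the first occurrence time of a regenerative process, with a hypothetical cycle-length and event model chosen purely for illustration.

```python
import random

def mean_first_occurrence_time(cycle_length, event_in_cycle, n_runs=20_000, seed=1):
    """Monte-Carlo estimate of the mean first-occurrence time for a regenerative
    process with i.i.d. cycles. `cycle_length(rng)` draws a cycle duration;
    `event_in_cycle(rng, L)` returns the event time within the cycle, or None."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        elapsed = 0.0
        while True:
            L = cycle_length(rng)
            t = event_in_cycle(rng, L)
            if t is not None:
                total += elapsed + t
                break
            elapsed += L
    return total / n_runs

# Hypothetical model: exponential cycles; in each cycle the event occurs with
# probability 0.1, at a time uniformly distributed over the cycle.
print(mean_first_occurrence_time(
    cycle_length=lambda rng: rng.expovariate(1.0),
    event_in_cycle=lambda rng, L: rng.uniform(0, L) if rng.random() < 0.1 else None,
))
```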

Relevance: 20.00%

Abstract:

The paper is dedicated to questions of modelling and substantiating super-resolution measuring-calculating systems within the conception "device + PC = new possibilities". The authors develop a new mathematical method for solving multi-criteria optimization problems, based on the physico-mathematical formalism of reduction of fuzzy, distorted measurements. It is shown that the decisive role is played by the mathematical properties of the physical models of the measured object, the surroundings, the measuring components of the measuring-calculating system and their interaction, as well as by the developed mathematical method for processing and interpreting the measurement results.

Relevance: 20.00%

Abstract:

Дойчин Бояджиев, Галена Пеловска - This paper proposes an optimized algorithm that is faster than the previously described accelerated (modified STS) difference scheme for an age-structured population model with diffusion. While preserving the approximation of the modified STS algorithm, the computation time is reduced almost twofold. This makes the optimized method preferable for problems with nonlinearity or higher dimensionality.

Relevance: 20.00%

Abstract:

This article shows the social importance of the subsistence minimum in Georgia and presents the methodology of its calculation. We propose ways of improving the calculation of the subsistence minimum in Georgia and of extending it to other developing countries. The weights of food and non-food expenditures in the subsistence minimum basket are essential in these calculations. The daily consumption value of the minimum food basket has also been calculated. Average consumer expenditures on food and the shares of the other expenditures are considered in dynamics. Our methodology of subsistence minimum calculation is applied to the case of Georgia; however, it can be used for similar purposes with data from other developing countries where social stability has been achieved and social inequalities need to be addressed. ACM Computing Classification System (1998): H.5.3, J.1, J.4, G.3.
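As an illustration of the food-share logic referred to above (the weights of food and non-food expenditures in the basket), here is a minimal sketch; the cost and share figures are invented and are not the article's data.

```python
def subsistence_minimum(daily_food_basket_cost, food_share, days_per_month=30):
    """Monthly subsistence minimum from the cost of the minimum food basket and
    the weight (share) of food in total subsistence expenditure."""
    if not 0 < food_share <= 1:
        raise ValueError("food_share must lie in (0, 1]")
    monthly_food_cost = daily_food_basket_cost * days_per_month
    return monthly_food_cost / food_share  # non-food is covered by scaling up the food cost

# Invented figures for illustration only (not the article's data)
print(subsistence_minimum(daily_food_basket_cost=4.20, food_share=0.70))
```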

Relevance: 20.00%

Abstract:

The Leontief input-output model is widely used to determine the ecological footprint of consumption in a region or a country. It is able to capture spillover environmental effects along the supply chain, so its popularity is increasing in ecology-related economic research. These studies are static, however, and dynamic investigations have been neglected. The dynamic Leontief model makes it possible to include capital and inventory investment in the footprint calculation, projecting future growth of GDP and environmental impacts. We present a new calculation method to determine the effect of capital accumulation on the ecological footprint. Keywords: Dynamic Leontief model, Dynamic ecological footprint, Environmental management, Allocation method
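A minimal sketch of the environmentally extended Leontief calculation underlying such footprint studies, with capital investment crudely added to final demand; this is an illustration, not the authors' allocation method, and all coefficients are made up.

```python
import numpy as np

def ecological_footprint(A, intensity, final_demand, capital_investment=None):
    """Environmentally extended Leontief calculation: footprint attributable to
    final demand, optionally with capital investment added to demand.
    A: (n, n) technical coefficient matrix; intensity: direct footprint per unit
    of sectoral output; final_demand / capital_investment: length-n vectors."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(final_demand, dtype=float)
    if capital_investment is not None:
        y = y + np.asarray(capital_investment, dtype=float)  # crude capital extension
    x = np.linalg.solve(np.eye(A.shape[0]) - A, y)           # total output, x = (I - A)^-1 y
    return float(np.asarray(intensity, dtype=float) @ x)

# Two-sector illustration with made-up coefficients (footprint in gha per unit output)
A = [[0.10, 0.30],
     [0.20, 0.05]]
print(ecological_footprint(A, intensity=[0.8, 0.3], final_demand=[100, 50],
                           capital_investment=[10, 5]))
```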

Relevance: 20.00%

Abstract:

The adoption of the new directive known as Solvency II creates a new environment for calculating the solvency capital requirement of insurance companies in the European Union. By modelling insurance companies, the study analyses the impact of certain characteristics of the insurance portfolio on the solvency capital requirement in a theoretical model in which the capital values can be calculated under the Solvency II rules. The model includes an insurance and a financial risk module, with the solvency capital calculated for the two risk types separately and jointly in a common model (for comparison with the Solvency II results). Based on the theoretical results, the capital values calculated in these two cases can differ, and the results also make it possible to study the factors underlying the differences.
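As an illustration of why the separate and joint treatments can differ, the sketch below aggregates two module capital requirements with the square-root, correlation-based formula used in the Solvency II standard-formula approach; the SCR values and the 0.25 correlation are illustrative only, not taken from the study.

```python
import numpy as np

def aggregate_scr(module_scrs, corr):
    """Square-root aggregation of module capital requirements:
    SCR = sqrt( sum_ij corr_ij * SCR_i * SCR_j )."""
    s = np.asarray(module_scrs, dtype=float)
    corr = np.asarray(corr, dtype=float)
    return float(np.sqrt(s @ corr @ s))

# Illustrative requirements for an insurance and a financial (market) risk module
scr_insurance, scr_financial = 80.0, 60.0
corr = [[1.00, 0.25],
        [0.25, 1.00]]  # illustrative correlation between the two modules
print(scr_insurance + scr_financial)                        # separate calculation, simply summed
print(aggregate_scr([scr_insurance, scr_financial], corr))  # joint calculation with diversification
```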