982 results for Predicted genotypic values
Abstract:
AIM: To estimate the prevalence of primary angle closure glaucoma (PACG) in European derived populations.
METHOD: Systematic review and modelling of PACG prevalence data from population studies. PACG was defined according to the ISGEO definition requiring structural and/or functional evidence of glaucomatous optic neuropathy. Prevalence estimates were applied to the 2010 United Nations projected population figures to estimate case numbers.
RESULTS: The prevalence of PACG in those 40 years or more is 0.4% (95% CI 0.3% to 0.5%). Age-specific prevalence values are 0.02% (CI 0.00 to 0.08) for those 40-49 years, 0.60% (0.27 to 1.00) for those 50-59 years, 0.20% (0.06 to 0.42) for those 60-69 years and 0.94% (0.63 to 1.35) for those 70 years and older. Three-quarters of all cases occur in female subjects (3.25 female to 1 male; CI 1.76 to 5.94).
CONCLUSION: This analysis provides a current evidence-based estimate of PACG prevalence in European derived populations and suggests there are 130,000 people in the UK, 1.60 million people in Europe and 581,000 people in the USA with PACG today. Accounting for ageing population structures, cases are predicted to increase by 19% in the UK, 9% in Europe and 18% in the USA within the next decade. PACG is more common than previously thought, and all primary glaucoma cases should be considered to be PACG until the anterior chamber angle is shown to be open on gonioscopy.
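To make the arithmetic behind the case-number estimates concrete, here is a minimal sketch that applies the age-specific prevalence values above to a set of population counts. The population figures in the code are hypothetical placeholders, not the UN 2010 projections used in the study.

```python
# Illustrative only: applies the age-specific prevalence estimates quoted in the
# abstract to hypothetical population counts per age band.

prevalence = {          # proportion of each age band with PACG (from the abstract)
    "40-49": 0.0002,
    "50-59": 0.0060,
    "60-69": 0.0020,
    "70+":   0.0094,
}

population = {           # hypothetical head counts per age band (placeholders)
    "40-49": 9_000_000,
    "50-59": 8_000_000,
    "60-69": 6_500_000,
    "70+":   7_500_000,
}

cases = {band: prevalence[band] * population[band] for band in prevalence}

for band, n in cases.items():
    print(f"{band}: {n:,.0f} expected cases")
print(f"Total expected PACG cases (40+): {sum(cases.values()):,.0f}")
```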
Abstract:
Objectives: To assess whether open angle glaucoma (OAG) screening meets the UK National Screening Committee criteria, to compare screening strategies with case finding, to estimate test parameters, to model estimates of cost and cost-effectiveness, and to identify areas for future research. Data sources: Major electronic databases were searched up to December 2005. Review methods: Screening strategies were developed by wide consultation. Markov submodels were developed to represent screening strategies. Parameter estimates were determined by systematic reviews of epidemiology, economic evaluations of screening, and effectiveness (test accuracy, screening and treatment). Tailored highly sensitive electronic searches were undertaken. Results: Most potential screening tests reviewed had an estimated specificity of 85% or higher. No test was clearly most accurate, with only a few, heterogeneous studies for each test. No randomised controlled trials (RCTs) of screening were identified. Based on two treatment RCTs, early treatment reduces the risk of progression. Extrapolating from this, and assuming accelerated progression with advancing disease severity, without treatment the mean time to blindness in at least one eye was approximately 23 years, compared to 35 years with treatment. Prevalence would have to be about 3-4% in 40 year olds with a screening interval of 10 years to approach cost-effectiveness. It is predicted that screening might be cost-effective in a 50-year-old cohort at a prevalence of 4% with a 10-year screening interval. General population screening at any age, thus, appears not to be cost-effective. Selective screening of groups with higher prevalence (family history, black ethnicity) might be worthwhile, although this would only cover 6% of the population. Extension to include other at-risk cohorts (e.g. myopia and diabetes) would include 37% of the general population, but the prevalence is then too low for screening to be considered cost-effective. Screening using a test with initial automated classification followed by assessment by a specialised optometrist, for test positives, was more cost-effective than initial specialised optometric assessment. The cost-effectiveness of the screening programme was highly sensitive to the perspective on costs (NHS or societal). In the base-case model, the NHS costs of visual impairment were estimated as £669. If annual societal costs were £8800, then screening might be considered cost-effective for a 40-year-old cohort with 1% OAG prevalence assuming a willingness to pay of £30,000 per quality-adjusted life-year. Of lesser importance were changes to estimates of attendance for sight tests, incidence of OAG, rate of progression and utility values for each stage of OAG severity. Cost-effectiveness was not particularly sensitive to the accuracy of screening tests within the ranges observed. However, a highly specific test is required to reduce large numbers of false-positive referrals. The findings that population screening is unlikely to be cost-effective are based on an economic model whose parameter estimates have considerable uncertainty, in particular, if rate of progression and/or costs of visual impairment are higher than estimated then screening could be cost-effective. Conclusions: While population screening is not cost-effective, the targeted screening of high-risk groups may be. 
Procedures for identifying those at risk and for quality-assuring the programme, as well as adequate service provision for those who screen positive, would all be needed. Glaucoma detection can be improved by increasing attendance for eye examination, and by improving the performance of current testing, either by refining practice or by adding a technology-based first assessment, the latter being the more cost-effective option. This has implications for any future organisational changes in community eye-care services. Further research should aim to develop and provide quality data to populate the economic model, by conducting a feasibility study of interventions to improve detection, by obtaining further data on the costs of blindness, risk of progression and health outcomes, and by conducting an RCT of interventions to improve the uptake of glaucoma testing. © Queen's Printer and Controller of HMSO 2007. All rights reserved.
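The cost-effectiveness judgements above ultimately reduce to comparing an incremental cost-effectiveness ratio (ICER) against a willingness-to-pay threshold such as the £30,000 per QALY figure quoted. The sketch below illustrates only that decision rule; the cost and QALY inputs are hypothetical placeholders, not outputs of the Markov submodels.

```python
# Minimal sketch of the cost-effectiveness decision rule: compare the ICER of
# screening versus case finding against a willingness-to-pay threshold.
# All numeric inputs below are hypothetical placeholders.

def icer(cost_screening, cost_case_finding, qaly_screening, qaly_case_finding):
    """Incremental cost per quality-adjusted life-year gained."""
    return (cost_screening - cost_case_finding) / (qaly_screening - qaly_case_finding)

WTP_THRESHOLD = 30_000  # pounds per QALY, as quoted in the abstract

ratio = icer(cost_screening=1_250, cost_case_finding=400,
             qaly_screening=14.02, qaly_case_finding=14.00)  # hypothetical per-person values

verdict = "cost-effective" if ratio <= WTP_THRESHOLD else "not cost-effective"
print(f"ICER: £{ratio:,.0f} per QALY -> {verdict} at £{WTP_THRESHOLD:,}/QALY")
```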
Abstract:
Purpose: The authors sought to quantify neighboring and distant interpoint correlations of threshold values within the visual field in patients with glaucoma. Methods: Visual fields of patients with confirmed or suspected glaucoma were analyzed (n = 255). One eye per patient was included. Patients were examined using Program 32 of the Octopus 1-2-3. Linear regression analysis between each location and the rest of the points of the visual field was performed, and the correlation coefficient was calculated. The degree of correlation was categorized as high (r > 0.66), moderate (0.66 ≥ r > 0.33), or low (r ≤ 0.33). The standard error of threshold estimation was calculated. Results: Most locations of the visual field had high or moderate correlations with neighboring points and with distant locations corresponding to the same nerve fiber bundle. Locations of the visual field had low correlations with those of the opposite hemifield, with the exception of locations temporal to the blind spot. The standard error of threshold estimation increased from 0.6 to 0.9 dB as r decreased by 0.1. Conclusion: Locations of the visual field have the highest interpoint correlation with neighboring points and with distant points in areas corresponding to the distribution of the retinal nerve fiber layer. The quantification of interpoint correlations may be useful in the design and interpretation of visual field tests in patients with glaucoma.
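A minimal sketch of the interpoint correlation analysis described above: given a matrix of threshold values (one row per eye, one column per test location), compute the pairwise correlations between locations and bin them into the high/moderate/low categories used in the abstract. The data and the number of test locations are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
thresholds = rng.normal(25, 5, size=(255, 74))   # hypothetical: 255 eyes x 74 test locations (dB)

r = np.corrcoef(thresholds, rowvar=False)        # location-by-location correlation matrix

def category(r_value):
    """Bin a correlation coefficient using the cut-offs quoted in the abstract."""
    if r_value > 0.66:
        return "high"
    if r_value > 0.33:
        return "moderate"
    return "low"

# Correlation of location 0 with location 1 (placeholder indices)
print(category(r[0, 1]), f"(r = {r[0, 1]:.2f})")
```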
Abstract:
We predicted that the probability of egg occurrence of the salamander Salamandrina perspicillata depended on stream features and predation by the native crayfish Austropotamobius fulcisianus and the introduced trout Salmo trutta. We assessed the presence of S. perspicillata at 54 sites within a natural reserve of southern Tuscany, Italy. Generalized linear models with binomial errors were constructed using egg presence/absence and altitude, stream mean size and slope, electrical conductivity, water pH and temperature, and a predation factor, defined according to the presence/absence of crayfish and trout. Some competing models also included an autocovariate term, which estimated how much the response variable at any one sampling point reflected response values at surrounding points. The resulting models were compared using Akaike's information criterion. Model selection led to a subset of 14 models with ΔAICc < 7 (i.e., models ranging from substantial support to considerably less support), and all but one of these included an effect of predation. Models with the autocovariate term had considerably more support than those without the term. According to multimodel inference, the presence of trout and crayfish reduced the probability of egg occurrence from a mean level of 0.90 (SE limits: 0.98-0.55) to 0.12 (SE limits: 0.34-0.04). The presence of crayfish alone had no detectable effects (SE limits: 0.86-0.39). The results suggest that introduced trout have a detrimental effect on the reproductive output of S. perspicillata and confirm the fundamental importance of distinguishing the roles of endogenous and exogenous forces that act on population distribution.
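A minimal sketch of the model-comparison approach described above: a binomial GLM for egg presence/absence including a predation term, compared with a reduced model via the small-sample corrected AIC (AICc). The data are simulated placeholders and the autocovariate term is omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 54                                            # number of sampled sites, as in the abstract
data = pd.DataFrame({
    "eggs":      rng.integers(0, 2, n),           # egg presence/absence (placeholder)
    "altitude":  rng.normal(400, 100, n),         # placeholder stream covariates
    "slope":     rng.normal(10, 3, n),
    "predators": rng.integers(0, 2, n),           # trout and/or crayfish present (placeholder)
})

def aicc(fit):
    """Small-sample corrected AIC (AICc)."""
    k = fit.df_model + 1                          # parameters including the intercept
    return fit.aic + (2 * k * (k + 1)) / (fit.nobs - k - 1)

m_full    = smf.glm("eggs ~ altitude + slope + predators", data,
                    family=sm.families.Binomial()).fit()
m_reduced = smf.glm("eggs ~ altitude + slope", data,
                    family=sm.families.Binomial()).fit()

print(f"Delta AICc (reduced - full): {aicc(m_reduced) - aicc(m_full):.2f}")
```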
Abstract:
Triose phosphate isomerase (TPI) catalyses the interconversion of dihydroxyacetone phosphate and glyceraldehyde 3-phosphate, a reaction in the glycolytic pathway. TPI from the common liver fluke, Fasciola hepatica, has been cloned, sequenced and recombinantly expressed in Escherichia coli. The protein has a monomeric molecular mass of approximately 28 kDa. Crosslinking and gel filtration experiments demonstrated that the enzyme exists predominantly as a dimer in solution. F. hepatica TPI is predicted to have a β-barrel structure and key active site residues (Lys-14, His-95 and Glu-165) are conserved. The enzyme shows remarkable stability to both proteolytic degradation and thermal denaturation. The melting temperature, estimated by thermal scanning fluorimetry, was 67 °C and this temperature was increased in the presence of either dihydroxyacetone phosphate or glyceraldehyde 3-phosphate. Kinetic studies showed that F. hepatica TPI demonstrates Michaelis-Menten kinetics in both directions, with Km values for dihydroxyacetone phosphate and glyceraldehyde 3-phosphate of 2.3 mM and 0.66 mM respectively. Turnover numbers were estimated at 25,000 s⁻¹ for the conversion of dihydroxyacetone phosphate and 1900 s⁻¹ for the conversion of glyceraldehyde 3-phosphate. Phosphoenolpyruvate acts as a weak inhibitor of the enzyme. F. hepatica TPI has many features in common with mammalian TPI enzymes (e.g. β-barrel structure, homodimeric nature, high stability and rapid kinetic turnover). Nevertheless, the recent successful identification of specific inhibitors of TPI from other parasites suggests that small differences in structure and biochemical properties could be exploited in the development of novel, species-specific inhibitors.
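A short worked example of the Michaelis-Menten kinetics reported above, using the Km and turnover numbers quoted in the abstract; the enzyme and substrate concentrations are hypothetical values chosen only to illustrate the calculation.

```python
def michaelis_menten(s, kcat, km, enzyme_conc):
    """Reaction rate v = kcat * [E] * [S] / (Km + [S]); all concentrations in M."""
    return kcat * enzyme_conc * s / (km + s)

E = 1e-9   # M, hypothetical enzyme concentration
S = 1e-3   # M, hypothetical substrate concentration (1 mM)

# DHAP -> GAP direction: kcat ~ 25,000 s^-1, Km ~ 2.3 mM (from the abstract)
v_dhap = michaelis_menten(s=S, kcat=25_000, km=2.3e-3, enzyme_conc=E)
# GAP -> DHAP direction: kcat ~ 1,900 s^-1, Km ~ 0.66 mM (from the abstract)
v_gap  = michaelis_menten(s=S, kcat=1_900, km=0.66e-3, enzyme_conc=E)

print(f"v(DHAP, 1 mM) = {v_dhap:.2e} M/s")
print(f"v(GAP,  1 mM) = {v_gap:.2e} M/s")
```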
Abstract:
In this paper, a novel method for modelling a scaled vehicle–barrier crash test similar to the 20° angled barrier test specified in EN 1317 is reported. The intended application is for proof-of-concept evaluation of novel roadside barrier designs, and as a cost-effective precursor to full-scale testing or detailed computational modelling. The method is based on the combination of the conservation of energy law and the equation of motion of a spring mass system representing the impact, and shows, for the first time, the feasibility of applying classical scaling theories to evaluation of roadside barrier design. The scaling method is used to set the initial velocity of the vehicle in the scaled test and to provide scaling factors to convert the measured vehicle accelerations in the scaled test to predicted full-scale accelerations. These values can then be used to calculate the Acceleration Severity Index score of the barrier for a full-scale test. The theoretical validity of the method is demonstrated by comparison to numerical simulations of scaled and full-scale angled barrier impacts using multibody analysis implemented in the crash simulation software MADYMO. Results show a maximum error of 0.3% ascribable to the scaling method.
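The Acceleration Severity Index mentioned above can be computed from vehicle acceleration traces. The sketch below follows the usual EN 1317 formulation (50 ms moving averages of the accelerations expressed in g, divided by limit values of 12 g, 9 g and 10 g), which is assumed here rather than taken from the paper, and it uses a synthetic acceleration pulse.

```python
import numpy as np

def asi(ax, ay, az, dt, window_s=0.05, limits=(12.0, 9.0, 10.0)):
    """Return the maximum ASI over the record; inputs are accelerations in g."""
    w = max(1, int(round(window_s / dt)))
    kernel = np.ones(w) / w
    averaged = [np.convolve(a, kernel, mode="same") for a in (ax, ay, az)]
    asi_t = np.sqrt(sum((a / lim) ** 2 for a, lim in zip(averaged, limits)))
    return asi_t.max()

dt = 0.001                                        # 1 kHz sampling (placeholder)
t = np.arange(0.0, 0.3, dt)
ax = 8.0 * np.exp(-((t - 0.1) / 0.03) ** 2)       # synthetic longitudinal pulse, in g
ay = 4.0 * np.exp(-((t - 0.1) / 0.03) ** 2)       # synthetic lateral pulse, in g
az = np.zeros_like(t)

print(f"ASI = {asi(ax, ay, az, dt):.2f}")
```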
Abstract:
Aims. In a recent measurement, Meléndez & Barbuy (2009, A&A, 497, 611) report accurate log gf values for 142 important astrophysical lines with wavelengths in the range 4000 Å to 8000 Å. Their results include both solar and laboratory measurements. In this paper, we describe a theoretical study of these lines. Methods. The CIV3 structure codes, combined with our "fine-tuning" extrapolation process, are used to undertake a large-scale CI calculation involving the lowest 262 fine-structure levels belonging to the 3d⁶4s, 3d⁷, 3d⁵4s², 3d⁶4p, and 3d⁵4s4p configurations. Results. We find that many of the 142 transitions are very weak intercombination lines. Other transitions are weak because the dominant configurations in the two levels differ by two orbitals. Conclusions. The comparison between our log gf values and the experimental values generally shows good agreement for most of these transitions, with our theoretical values agreeing slightly more closely with the solar than with the laboratory measurements. A detailed analysis of the small number of transitions for which the agreement between theory and experiment is not as good shows that such disagreements largely arise from severe cancellation due to CI mixing.
Abstract:
This paper considers the use of non-economic considerations in the Article 101(3) analysis of industrial restructuring agreements, using the Commission's Decisions in Synthetic Fibres, Stichting Baksteen, and the recent UK Dairy Initiative as examples. I argue that, contrary to the Commission's recent economics-based approach, there is room for non-economic considerations to be taken into account within the framework of the European Treaties. The competition law issue is whether the provisions of Article 101(3) can save such agreements.
While I further argue that there is legal room for non-economic considerations to be taken into account in evaluating these restructuring agreements, it is not clear who the appropriate arbiter of these considerations should be, given the institutional limitations of courts (which have no democratic mandate), specialised competition agencies (which may be too technocratic in focus) and legislatures (which are susceptible to capture by rent-seeking interest groups).
Abstract:
Objective: The aim of this research is to use finite element analysis (FEA) to quantify the effect of sample shape, and of the imperfections induced during sample manufacture, on the bond strength and modes of failure of dental adhesive systems in the microtensile test. Using the FEA predictions of the effect of individual parameters, the expected variation and spread of microtensile bond strength results for different sample geometries are estimated. Methods: The stress distributions predicted by FEA for three different sample shapes (hourglass, stick and dumbbell) are used to predict the strength for different fracture modes. Parameters such as the adhesive thickness, an uneven interface between the adhesive and the composite or dentin, misalignment of the loading axis, and the existence of flaws such as cracks induced while shaping the samples or bubbles created during application of the adhesive are considered. Microtensile experiments are performed in parallel to measure bond strength and modes of failure, and these are compared with the FEA results. Results: The relative bond strengths and standard deviations measured through the microtensile tests for the specimens with different geometries confirm the findings of the FEA. The hourglass-shaped samples show lower tensile bond strength and standard deviation than the stick- and dumbbell-shaped samples. ANOVA confirms that there is no significant difference between the dumbbell and stick geometry results, and that both differ markedly from the hourglass values. Induced flaws in the adhesive and misalignment of the angle of load application have a significant effect on the microtensile bond strength. With a higher-modulus adhesive, the differences between the bond strengths of the three sample geometries increase. Significance: The results clarify the importance of the sample geometry chosen for measuring bond strength and quantify the effect of imperfections on the bond strength for each sample geometry through a systematic and comprehensive study. They explain the large spread of microtensile results reported by researchers working in different labs and the need for standardization of the test method and sample shape used to evaluate the dentin-adhesive bonding system. © 2007 Academy of Dental Materials.
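A minimal sketch of the between-geometry comparison mentioned above: a one-way ANOVA on microtensile bond strengths grouped by sample shape. The strength values are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical bond strengths (MPa) for the three sample geometries
hourglass = rng.normal(30, 4, 15)
stick     = rng.normal(42, 7, 15)
dumbbell  = rng.normal(41, 7, 15)

f_stat, p_value = stats.f_oneway(hourglass, stick, dumbbell)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```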
Abstract:
Drilling of Ti6Al4V is investigated experimentally and numerically. A 3D finite element model was developed based on a Lagrangian approach using the commercial finite element software ABAQUS/Explicit. The complex 3D drill geometry is included in the model. The drilling process simulations are performed for combinations of three cutting speeds and four feed rates. The effects of the cutting parameters on the induced thrust force and torque are predicted by the developed model. For validation purposes, experimental trials were performed under conditions similar to the simulations. The forces and torques measured during the experiments are compared with the results of the finite element analysis, and the agreement between the experimental and FE values for force and torque is very good. Moreover, the surface roughness of the holes was measured to characterize the machining outcome. Copyright © 2013 Inderscience Enterprises Ltd.
Abstract:
The OSCAR test, a clinical device that uses counterphase flicker photometry, is believed to be sensitive to the relative numbers of long-wavelength and middle-wavelength cones in the retina, as well as to individual variations in the spectral positions of the photopigments. As part of a population study of individual variations in perception, we obtained OSCAR settings from 1058 participants. We report the distribution characteristics for this cohort. A randomly selected subset of participants was tested twice at an interval of at least one week: the test-retest reliability (Spearman's rho) was 0.80. In a whole-genome association analysis we found a provisional association with a single nucleotide polymorphism (rs16844995). This marker is close to the gene RXRG, which encodes a nuclear receptor, retinoid X receptor γ. This nuclear receptor is already known to have a role in the differentiation of cones during the development of the eye, and we suggest that polymorphisms in or close to RXRG influence the relative probability with which long-wave and middle-wave opsin genes are expressed in human cones.
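A minimal sketch of the test-retest reliability check described above: Spearman's rho between first and second OSCAR settings for the retested subset. The settings below are simulated placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_retested = 100                                          # hypothetical retested subset size
first  = rng.normal(0, 1, n_retested)                     # OSCAR settings, visit 1 (placeholder)
second = 0.8 * first + 0.6 * rng.normal(0, 1, n_retested) # correlated retest values (placeholder)

rho, p = stats.spearmanr(first, second)
print(f"Spearman's rho = {rho:.2f} (p = {p:.2e})")
```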
Abstract:
We present Maxwellian-averaged effective collision strengths for the electron-impact excitation of S III over a wide range of electron temperatures of astrophysical importance, log Te (K) = 3.0-6.0. The calculation incorporates 53 fine-structure levels arising from the six configurations 3s²3p², 3s3p³, 3s²3p3d, 3s²3p4s, 3s²3p4p, and 3s²3p4d, giving rise to 1378 individual lines, and is undertaken using the recently developed RMATRX II plus FINE95 suite of codes. A detailed comparison is made with a previous R-matrix calculation and significant differences are found for some transitions. The atomic data are subsequently incorporated into the modeling code CLOUDY to generate line intensities for a range of plasma parameters, with emphasis on allowed ultraviolet and extreme-ultraviolet emission lines detected from the Io plasma torus. Electron density-sensitive line ratios are calculated with the present atomic data and compared with those from CHIANTI v7.1, as well as with Io plasma torus spectra obtained by the Far Ultraviolet Spectroscopic Explorer and the Extreme Ultraviolet Explorer. The present line intensities are found to agree well with the observational results and provide a noticeable improvement on the values predicted by CHIANTI.
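For reference, the Maxwellian-averaged effective collision strength referred to above is conventionally defined as follows (a standard form, not quoted from this paper):

```latex
\Upsilon_{ij}(T_e) = \int_{0}^{\infty} \Omega_{ij}(E_j)\,
\exp\!\left(-\frac{E_j}{k T_e}\right)\,
\mathrm{d}\!\left(\frac{E_j}{k T_e}\right)
```

where Ω_ij(E_j) is the collision strength for the transition i → j, E_j is the kinetic energy of the scattered electron, k is the Boltzmann constant and T_e is the electron temperature.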
Abstract:
The environmental quality of land can be assessed by calculating relevant threshold values, which differentiate between concentrations of elements resulting from geogenic and diffuse anthropogenic sources and concentrations generated by point sources of elements. A simple process allowing the calculation of these typical threshold values (TTVs) was applied across a region of highly complex geology (Northern Ireland) to six elements of interest: arsenic, chromium, copper, lead, nickel and vanadium. Three methods for identifying domains (areas where a readily identifiable factor can be shown to control the concentration of an element) were used: k-means cluster analysis, boxplots and empirical cumulative distribution functions (ECDFs). The ECDF method was most efficient at determining areas of both elevated and reduced concentrations and was used to identify domains in this investigation. Two statistical methods for calculating normal background concentrations (NBCs) and upper limits of geochemical baseline variation (ULBLs), currently used in conjunction with legislative regimes in the UK and Finland respectively, were applied within each domain. The NBC methodology was constructed to run within a specific legislative framework, and its use on this soil geochemical data set was influenced by the presence of skewed distributions and outliers. In contrast, the ULBL methodology was found to calculate more appropriate TTVs that were generally more conservative than the NBCs. TTVs indicate what a "typical" concentration of an element would be within a defined geographical area and should be considered alongside the risk that each of the elements poses in these areas to determine potential risk to receptors.
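A minimal sketch of the ECDF step used for domain identification above: build the empirical cumulative distribution function of an element's soil concentrations and read off summary percentiles (breaks between sub-populations would be identified from changes in the ECDF's shape). The concentration data are synthetic placeholders.

```python
import numpy as np

def ecdf(values):
    """Return sorted values and their empirical cumulative probabilities."""
    x = np.sort(values)
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

rng = np.random.default_rng(4)
# Hypothetical nickel concentrations (mg/kg) from two contrasting bedrock domains
background = rng.lognormal(mean=3.0, sigma=0.4, size=300)
elevated   = rng.lognormal(mean=4.2, sigma=0.4, size=100)

x, y = ecdf(np.concatenate([background, elevated]))
print(f"Median: {x[np.searchsorted(y, 0.50)]:.1f} mg/kg, "
      f"95th percentile: {x[np.searchsorted(y, 0.95)]:.1f} mg/kg")
```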