901 results for Measures of Contradiction
Abstract:
Purpose: To determine whether the ‘through-focus’ aberrations of patients implanted with multifocal and accommodative intraocular lenses (IOLs) can be used to provide rapid and reliable measures of their subjective range of clear vision. Methods: Eyes that had been implanted with concentric (n = 8), segmented (n = 10) or accommodating (n = 6) intraocular lenses (mean age 62.9 ± 8.9 years; range 46-79 years) for over a year underwent simultaneous monocular subjective (electronic logMAR test chart at 4 m with letters randomised between presentations) and objective (Aston open-field aberrometer) defocus curve testing for levels of defocus from +1.50 to -5.00 DS in -0.50 DS steps, in a randomised order. Pupil size and ocular aberration (a combination of the patient’s and the defocus-inducing lens aberrations) at each level of blur were measured by the aberrometer. Visual acuity was measured subjectively at each level of defocus to determine the traditional defocus curve. Objective acuity was predicted using image quality metrics. Results: The range of clear focus differed between the three IOL types (F = 15.506, P = 0.001) as well as between subjective and objective defocus curves (F = 6.685, P = 0.049). There was no statistically significant difference between subjective and objective defocus curves in the segmented or concentric ring MIOL groups (P > 0.05). However, a difference was found between the two measures in the accommodating IOL group (P < 0.001). Mean delta logMAR (predicted minus measured logMAR) across all target vergences was -0.06 ± 0.19 logMAR. Predicted logMAR defocus curves for the multifocal IOLs did not show a near vision addition peak, unlike the subjective measurement of visual acuity. However, there was a strong positive correlation between measured and predicted logMAR for all three IOLs (Pearson’s correlation: P < 0.001). Conclusions: Current subjective procedures are lengthy and do not enable important additional measures, such as defocus curves under different luminance or contrast levels, to be assessed, which may limit our understanding of MIOL performance in real-world conditions. In general, objective aberrometry measures correlated well with the subjective assessment, indicating the relative robustness of this technique in evaluating post-operative success with segmented and concentric ring MIOLs.
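As background not drawn from the abstract itself: aberrometer output at each blur level is often summarised by converting the Zernike defocus coefficient into an equivalent defocus in diopters. One standard relation (which may or may not be part of the image quality metrics used in this study) is

\[ M = \frac{4\sqrt{3}\, c_2^0}{r^2}, \]

where \(c_2^0\) is the Zernike defocus coefficient in micrometres, \(r\) is the pupil radius in millimetres, and \(M\) is the equivalent defocus in diopters; the abstract's delta logMAR is then simply predicted minus measured acuity at each target vergence.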
Abstract:
For the past several years, U.S. colleges and universities have faced increased pressure to improve retention and graduation rates. At the same time, educational institutions have placed a greater emphasis on enrolling more students in STEM (science, technology, engineering and mathematics) programs and producing more STEM graduates. The resulting problem faced by educators involves finding new ways to support the success of STEM majors, regardless of their pre-college academic preparation. The purpose of my research study was to use first-year STEM majors’ math SAT scores, unweighted high school GPA, math placement test scores, and the highest level of math taken in high school to develop models for predicting those who were likely to pass their first math and science courses. In doing so, the study aimed to provide a strategy for improving the passing rates of first-year students attempting STEM-related courses. The study sample included 1018 first-year STEM majors who had entered the same large, public, urban, Hispanic-serving research university in the Southeastern U.S. between 2010 and 2012. The research design used hierarchical logistic regression to determine the significance of the four independent variables in models for predicting success in math and science. The resulting data indicated that the overall model of predictors (which included all four predictor variables) was statistically significant both for predicting which students passed their first math course and for predicting which students passed their first science course. Individually, all four predictor variables were statistically significant for predicting those who had passed math, with unweighted high school GPA and the highest math taken in high school accounting for the largest amount of unique variance; these two variables also improved the regression model’s percentage of correct predictions for that dependent variable. The only variable found to be statistically significant for predicting those who had passed science was students’ unweighted high school GPA. Overall, the results of my study are offered as a contribution to the literature on predicting first-year student success, especially within the STEM disciplines.
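A minimal sketch, purely illustrative, of a blockwise (hierarchical) logistic regression of the kind described above; the file name, column names, and the grouping of predictors into blocks are hypothetical assumptions, not the study's actual data or model specification.

```python
# Hedged sketch: hierarchical (blockwise) logistic regression with a
# likelihood-ratio test between nested blocks. All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

df = pd.read_csv("stem_cohort.csv")  # hypothetical file

# Block 1: prior-achievement measures only
m1 = smf.logit("passed_math ~ sat_math + hs_gpa_unweighted", data=df).fit(disp=False)

# Block 2: add placement score and highest high-school math course taken
m2 = smf.logit(
    "passed_math ~ sat_math + hs_gpa_unweighted + placement_score + C(highest_hs_math)",
    data=df,
).fit(disp=False)

# Improvement contributed by block 2, assessed with a likelihood-ratio test
lr_stat = 2 * (m2.llf - m1.llf)
df_diff = m2.df_model - m1.df_model
print("LR chi2 =", lr_stat, "p =", chi2.sf(lr_stat, df_diff))
print(m2.summary())
```

The nested comparison mirrors the hierarchical entry of predictors; the same pattern would apply to the science-course outcome.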
Abstract:
An assessment of the sustainability of the Irish economy has been carried out using three methodologies, enabling comparison and evaluation of the advantages and disadvantages of each, and potential synergies among them. The three measures chosen were economy-wide Material Flow Analysis (MFA), environmentally extended input-output (EE-IO) analysis and the Ecological Footprint (EF). The research aims to assess the sustainability of the Irish economy using these methods and to draw conclusions on their effectiveness in policy making both individually and in combination. A theoretical description discusses the methods and their respective advantages and disadvantages and sets out a rationale for their combined application. The application of the methods in combination has provided insights into measuring the sustainability of a national economy and generated new knowledge on the collective application of these methods. The limitations of the research are acknowledged and opportunities to address these and build on and extend the research are identified. Building on previous research, it is concluded that a complete picture of sustainability cannot be provided by a single method and/or indicator.
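For reference, and as standard background rather than the exact variant applied in this work, environmentally extended input-output (EE-IO) analysis typically rests on the Leontief quantity model

\[ x = (I - A)^{-1} y, \qquad e = \hat{f}\,(I - A)^{-1} y, \]

where \(A\) is the matrix of technical coefficients, \(y\) the final demand vector, \(x\) total output, \(f\) the vector of environmental pressures (for example material use or emissions) per unit of output, \(\hat{f}\) its diagonalisation, and \(e\) the pressures embodied in final demand.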
Abstract:
The inclusion of non-ipsative measures of party preference (in essence, ratings for each of the parties in a political system) has become established practice in mass surveys conducted for election studies. These measures exist in different forms, known as thermometer ratings or feeling scores, likes and dislikes scores, or support propensities. Usually only one of these is included in a single survey, which makes it difficult to assess the relative merits of each. The questionnaire of the Irish National Election Study 2002 (INES2002) contained three different batteries of non-ipsative party preferences. This paper investigates some of the properties of these different indicators. We focus in particular on two phenomena. First, we examine the relationship between non-ipsative preferences and the choices actually made on the ballot. In Ireland this relationship is more revealing than in most other countries owing to the electoral system (STV), which allows voters to cast multiple ordered votes for candidates from different parties. Second, we investigate the latent structure of each of the batteries of party preferences and the relationships between them. We conclude that the three instruments are not interchangeable, that they measure different orientations, and that one of them, the propensity to vote for a party, is by far preferable if the purpose of the study is the explanation of voters’ actual choice behaviour. This finding has important ramifications for the design of election study questionnaires.
Abstract:
Across the nation, librarians work with caregivers and children to encourage engagement in their early literacy programs. However, these early literacy programs that libraries provide have been left mostly undocumented by research, especially through quantitative methods. Valuable Initiatives in Early Learning that Work Successfully (VIEWS2) was designed to test new ways to measure the effectiveness of these early literacy programs for young children (birth to kindergarten), leveraging a mixed methods, quasi-experimental design. Using two innovative tools, researchers collected data at 120 public library storytimes in the first year of research, observing approximately 1,440 children ranging from birth to 60 months of age. Analysis of year-one data showed a correlation between the early literacy content of the storytime program and children’s outcomes in terms of early literacy behaviors. These findings demonstrate that young children who attend public library storytimes are responding to the early literacy content in the storytime programs.
Abstract:
Trillas et al. (1999, Soft computing, 3 (4), 197–199) and Trillas and Cubillo (1999, On non-contradictory input/output couples in Zadeh's CRI proceeding, 28–32) introduced the study of contradiction in the framework of fuzzy logic because of the significance of avoiding contradictory outputs in inference processes. Later, the study of contradiction in the framework of Atanassov's intuitionistic fuzzy sets (A-IFSs) was initiated by Cubillo and Castiñeira (2004, Contradiction in intuitionistic fuzzy sets proceeding, 2180–2186). The axiomatic definition of contradiction measure was stated in Castiñeira and Cubillo (2009, International journal of intelligent systems, 24, 863–888). Likewise, the concept of continuity of these measures was formalized through several axioms. To be precise, they defined continuity when the sets ‘are increasing’, denominated continuity from below, and continuity when the sets ‘are decreasing’, or continuity from above. The aim of this paper is to provide some geometrical construction methods for obtaining contradiction measures in the framework of A-IFSs and to study what continuity properties these measures satisfy. Furthermore, we show the geometrical interpretations motivating the measures.
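As background (standard definitions only, not the paper's specific construction methods): an Atanassov intuitionistic fuzzy set on a universe \(X\) is

\[ A = \{\langle x, \mu_A(x), \nu_A(x)\rangle : x \in X\}, \qquad \mu_A(x) + \nu_A(x) \le 1, \]

so each element is represented by a point of the triangle \(T = \{(u, v) \in [0,1]^2 : u + v \le 1\}\), and geometrical constructions of contradiction measures operate on this representation. Under the standard intuitionistic negation, which interchanges \(\mu_A\) and \(\nu_A\), self-contradiction roughly corresponds to \(\mu_A(x) \le \nu_A(x)\) for every \(x\); the precise definitions used in the cited papers should be consulted for the exact formulation.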
Abstract:
Forecasts of volatility and correlation are important inputs into many practical financial problems. Broadly speaking, there are two ways of generating forecasts of these variables. Firstly, time-series models apply a statistical weighting scheme to historical measurements of the variable of interest. The alternative methodology extracts forecasts from the market-traded value of option contracts. An efficient options market should be able to produce superior forecasts, as it utilises a larger information set comprising not only historical information but also the market equilibrium expectation of options market participants. While much research has been conducted into the relative merits of these approaches, this thesis extends the literature along several lines through three empirical studies. Firstly, it is demonstrated that there are statistically significant benefits to taking the volatility risk premium into account when using implied volatility for univariate volatility forecasting. Secondly, high-frequency option-implied measures are shown to lead to superior forecasts of the stochastic component of intraday volatility, which in turn lead to superior forecasts of total intraday volatility. Finally, realised and option-implied measures of equicorrelation are shown to dominate measures based on daily returns.
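As illustration, and with sign and scaling conventions that vary across studies: realized variance over day \(t\) is built from \(M\) intraday returns, and the volatility risk premium referred to above is commonly taken as the gap between option-implied variance and expected realized variance,

\[ RV_t = \sum_{i=1}^{M} r_{t,i}^2, \qquad VRP_t = \sigma^2_{\mathrm{IV},t} - \mathbb{E}_t[RV_{t+1}], \]

which is the quantity a forecaster would correct implied volatility by before using it as a forecast of future realized volatility.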
Abstract:
Recent association studies in multiple sclerosis (MS) have identified and replicated several single nucleotide polymorphism (SNP) susceptibility loci including CLEC16A, IL2RA, IL7R, RPL5, CD58, CD40 and chromosome 12q13–14 in addition to the well established allele HLA-DR15. There is potential that these genetic susceptibility factors could also modulate MS disease severity, as demonstrated previously for the MS risk allele HLA-DR15. We investigated this hypothesis in a cohort of 1006 well characterised MS patients from South-Eastern Australia. We tested the MS-associated SNPs for association with five measures of disease severity incorporating disability, age of onset, cognition and brain atrophy. We observed trends towards association between the RPL5 risk SNP and time between first demyelinating event and relapse, and between the CD40 risk SNP and symbol digit test score. No associations were significant after correction for multiple testing. We found no evidence for the hypothesis that these new MS disease risk-associated SNPs influence disease severity.
Abstract:
PURPOSE Accurate monitoring of prevalence and trends in population levels of physical activity is fundamental to the planning of health promotion and disease-prevention strategies. Test-retest reliability (repeatability) was assessed for four self-report measures of physical activity commonly used in population surveys: the Active Australia survey (AA, N=356), the short form of the International Physical Activity Questionnaire (IPAQ-S, N=104), the physical activity items in the Behavioral Risk Factor Surveillance System (BRFSS, N=127) and the physical activity items in the Australian National Health Survey (NHS, N=122). METHODS Percent agreement and Kappa statistics were used to assess the reliability of classification of activity status (where ‘active’ = at least 150 minutes of activity per week) and sedentariness (where ‘sedentary’ = reporting no physical activity). Intraclass correlations (ICCs) were used to assess agreement on minutes of activity reported for each item of each survey and on total minutes reported in each survey. RESULTS Percent agreement scores for both activity status and sedentariness were very good on all four instruments. Overall, the percent agreement between repeated surveys was between 73% (NHS) and 87% (IPAQ) for the criterion measure of achieving 150 minutes per week, and between 77% (NHS) and 89% (IPAQ) for the criterion of being sedentary. Corresponding Kappa statistics ranged from 0.46 (NHS) to 0.61 (AA) for activity status and from 0.20 (BRFSS) to 0.52 (AA) for sedentariness. For the individual items, ICCs were highest for walking (0.45 to 0.56) and vigorous activity (0.22 to 0.64) and lowest for the moderate-activity questions (0.16 to 0.44). CONCLUSION All four measures provide acceptable levels of test-retest reliability for assessing both activity status and sedentariness, and moderate reliability for assessing total minutes of activity. Supported by the Australian Commonwealth Department of Health and Ageing.
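A minimal sketch, with made-up data, of the two classification-agreement statistics named in the METHODS section (percent agreement and Cohen's kappa) for a binary active/not-active classification at test and retest; the surveys' own scoring rules are not reproduced here.

```python
# Hedged sketch: test-retest agreement for a binary "active" classification.
# The example responses are fabricated for illustration only.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# 1 = classified as active (>= 150 min/week), 0 = not, at test and retest
test   = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 1])
retest = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1])

percent_agreement = np.mean(test == retest) * 100
kappa = cohen_kappa_score(test, retest)  # kappa = (p_o - p_e) / (1 - p_e)

print(f"percent agreement = {percent_agreement:.0f}%")
print(f"kappa = {kappa:.2f}")
```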
Abstract:
Recent advances in diffusion-weighted MRI (DWI) have enabled studies of complex white matter tissue architecture in vivo. To date, the underlying influence of genetic and environmental factors in determining central nervous system connectivity has not been widely studied. In this work, we introduce new scalar connectivity measures based on a computationally-efficient fast-marching algorithm for quantitative tractography. We then calculate connectivity maps for a DTI dataset from 92 healthy adult twins and decompose the genetic and environmental contributions to the variance in these metrics using structural equation models. By combining these techniques, we generate the first maps to directly examine genetic and environmental contributions to brain connectivity in humans. Our approach is capable of extracting statistically significant measures of genetic and environmental contributions to neural connectivity.
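As background (the generic classical twin model, not necessarily the exact structural equation model fitted in this work): the phenotypic variance of a connectivity measure is decomposed into additive genetic (A), common environmental (C), and unique environmental (E) components,

\[ \sigma^2_P = a^2 + c^2 + e^2, \qquad r_{MZ} = a^2 + c^2, \qquad r_{DZ} = \tfrac{1}{2}a^2 + c^2, \]

from which the Falconer approximation \(h^2 \approx 2(r_{MZ} - r_{DZ})\) gives a quick estimate of heritability from monozygotic and dizygotic twin correlations.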
Abstract:
With concerns over climate change and the escalation in worldwide population, sustainable development is attracting more and more attention from academia, policy makers, and businesses around the world. Sustainable manufacturing is an indispensable means of achieving sustainable development, since manufacturing is one of the main energy consumers and greenhouse gas contributors. In previous research on production planning of manufacturing systems, environmental factors were rarely considered. This paper investigates the production planning problem under performance measures of economy and environment with respect to seru production systems, a new type of manufacturing system praised as Double E (ecology and economy) in Japanese manufacturing industries. We propose a mathematical model with two objectives, minimizing carbon dioxide emissions and makespan, for processing all product types in a seru production system. To solve this mathematical model, we develop an algorithm based on the non-dominated sorting genetic algorithm II (NSGA-II). The computational results and analysis of three numerical examples confirm the effectiveness of our proposed algorithm. © 2014 Elsevier Ltd. All rights reserved.
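A minimal sketch of the non-dominated sorting step at the heart of NSGA-II, applied to hypothetical (carbon dioxide emission, makespan) pairs; it illustrates only how candidate production plans are ranked into Pareto fronts, not the paper's full model or algorithm.

```python
# Hedged sketch: fast ranking of bi-objective solutions into Pareto fronts,
# the core operation of NSGA-II. Both objectives are minimized.
from typing import List, Tuple

def dominates(a: Tuple[float, float], b: Tuple[float, float]) -> bool:
    """a dominates b if it is no worse in both objectives and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points: List[Tuple[float, float]]) -> List[List[int]]:
    """Return indices of solutions grouped into successive Pareto fronts."""
    remaining = set(range(len(points)))
    fronts: List[List[int]] = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts

# Example: hypothetical (CO2 emission in kg, makespan in minutes) per candidate plan
plans = [(120, 50), (100, 60), (90, 80), (130, 45), (125, 65), (110, 55)]
print(non_dominated_sort(plans))  # first list is the Pareto-optimal front
```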
Abstract:
Numerous measures are used in the literature to describe the grain-size distribution of sediments. Consideration of these measures indicates that parameters computed from quartiles may not be as significant as those based on more rigorous statistical concepts. In addition, the lack of standardization of descriptive measures has resulted in limited application of the findings from one locality to another. The use of five parameters that serve as approximate graphic analogies to the moment measures commonly employed in statistics is recommended. The parameters are computed from five percentile diameters obtained from the cumulative size-frequency curve of a sediment. They include the mean (or median) diameter, standard deviation, kurtosis, and two measures of skewness, the second measure being sensitive to skew properties of the "tails" of the sediment distribution. If the five descriptive measures are listed for a sediment, it is possible to compute the five percentile diameters on which they are based (φ5, φ16, φ50, φ84, and φ95), and hence five significant points on the cumulative curve of the sediment. This increases the value of the data listed for a sediment in a report, and in many cases eliminates the necessity of including the complete mechanical analysis of the sediment. The degree of correlation of the graphic parameters with the corresponding moment measures decreases as the distribution becomes more skewed. However, for a fairly wide range of distributions, the first three moment measures can be ascertained from the graphic parameters with about the same degree of accuracy as is obtained by computing rough moment measures.
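For orientation, one widely cited set of graphic parameters computed from these five percentiles takes the form below; the exact expressions adopted in the paper may differ in detail.

\[ Md_\phi = \phi_{50}, \qquad M_\phi = \tfrac{1}{2}(\phi_{16} + \phi_{84}), \qquad \sigma_\phi = \tfrac{1}{2}(\phi_{84} - \phi_{16}), \]

\[ \alpha_\phi = \frac{M_\phi - Md_\phi}{\sigma_\phi}, \qquad \alpha_{2\phi} = \frac{\tfrac{1}{2}(\phi_{5} + \phi_{95}) - Md_\phi}{\sigma_\phi}, \qquad \beta_\phi = \frac{\tfrac{1}{2}(\phi_{95} - \phi_{5}) - \sigma_\phi}{\sigma_\phi}, \]

giving the median, graphic mean, graphic standard deviation, two skewness measures (the second sensitive to the tails), and kurtosis, respectively.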