971 results for Measures of Noncompactness


Relevance:

100.00%

Publisher:

Abstract:

This thesis proposes that, despite many experimental studies of thinking and the development of models of thinking, such as Bruner's (1966) enactive, iconic and symbolic developmental modes, the imagery and inner verbal strategies used by children need further investigation to establish a coherent theoretical basis from which to create experimental curricula for the direct improvement of those strategies. Five hundred and twenty-three first-, second- and third-year comprehensive school children were tested on 'recall' imagery, using a modified Betts Imagery Test, and on dual-coding processes (Paivio, 1971, p. 179), using the P/W Visual/Verbal Questionnaire, which measures 'applied imagery' and inner verbalising. Three lines of investigation were pursued:

1. An investigation (a) of hypothetical representational strategy differences between boys and girls, and (b) of the extent to which strategies change with increasing age.

2. The second- and third-year children's use of representational processes was taken separately and compared with performance measures of perception, field independence, creativity, self-sufficiency and self-concept.

3. The second- and third-year children were categorised into four dual-coding strategy groups: (a) High Visual/High Verbal; (b) Low Visual/High Verbal; (c) High Visual/Low Verbal; (d) Low Visual/Low Verbal. These groups were compared on the same performance measures.

The main result indicates that a hierarchy of dual-coding strategy use can be identified that is significantly related (.01, Binomial Test) to success or failure on the performance measures: the High Visual/High Verbal group registered the highest scores, the Low Visual/High Verbal and High Visual/Low Verbal groups registered intermediate scores, and the Low Visual/Low Verbal group registered the lowest scores. Subsidiary results indicate that boys' use of visual strategies declines, and their use of verbal strategies increases, with age, while girls' use of recall imagery strategies increases with age. Educational implications of the main result are discussed, the establishment of experimental curricula is proposed, and further research is suggested.
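The hierarchy result above rests on a binomial (sign) test at the .01 level. As a minimal illustrative sketch, the exact two-sided version of that test can be computed directly; the counts used below (18 of 20 comparisons) are purely hypothetical, not figures from the thesis:

```python
from math import comb

def binomial_test_two_sided(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: the probability, under
    H0: P(success) = p, of an outcome at least as extreme as k
    successes in n trials (sum over all outcomes whose probability
    does not exceed that of the observed one)."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return sum(pr for pr in probs if pr <= observed + 1e-12)

# Hypothetical example: the High Visual/High Verbal group outscores the
# Low Visual/Low Verbal group on 18 of 20 performance comparisons.
p_value = binomial_test_two_sided(18, 20)
print(p_value < 0.01)  # significant at the .01 level used in the thesis
```

With separable counts like these the p-value falls well below .01, which is the kind of result the thesis reports for its strategy-group hierarchy.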


Lutein and zeaxanthin are lipid-soluble antioxidants found within the macula region of the retina. Links have been suggested between increased levels of these carotenoids and reduced risk for age-related macular disease (ARMD). Therefore, the effect of lutein-based supplementation on retinal and visual function in people with early stages of ARMD (age-related maculopathy, ARM) was assessed using multi-focal electroretinography (mfERG), contrast sensitivity and distance visual acuity. A total of fourteen participants were randomly allocated to either receive a lutein-based oral supplement (treated group) or no supplement (non-treated group). There were eight participants aged between 56 and 81 years (65·50 (sd 9·27) years) in the treated group and six participants aged between 61 and 83 years (69·67 (sd 7·52) years) in the non-treated group. Sample sizes provided 80 % power at the 5 % significance level. Participants attended for three visits (0, 20 and 40 weeks). At 60 weeks, the treated group attended a fourth visit following 20 weeks of supplement withdrawal. No changes were seen between the treated and non-treated groups during supplementation. Although not clinically significant, mfERG ring 3 N2 latency (P = 0·041) and ring 4 P1 latency (P = 0·016) increased, and a trend for reduction of mfERG amplitudes was observed in rings 1, 3 and 4 on supplement withdrawal. The statistically significant increase in mfERG latencies and the trend for reduced mfERG amplitudes on withdrawal are encouraging and may suggest a potentially beneficial effect of lutein-based supplementation in ARM-affected eyes. Copyright © 2012 The Authors.


Recent research has highlighted several job characteristics salient to employee well-being and behavior for which no adequate, generally applicable measures exist. These include timing and method control, monitoring and problem-solving demand, and production responsibility. This article reports an attempt to develop measures of these constructs, with encouraging results. Confirmatory factor analyses applied to data from two samples of shop-floor employees showed a consistent fit to a common five-factor measurement model. Scales corresponding to each of the dimensions showed satisfactory internal and test–retest reliabilities. As expected, the scales also discriminated between employees in different jobs and between employees working with contrasting technologies.
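Internal-consistency reliability of the kind reported above is commonly summarized with Cronbach's alpha. The article's own computations are not shown here, so the following is an illustrative sketch with invented item scores for a hypothetical "timing control" scale:

```python
from statistics import variance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for a multi-item scale. `items` holds one list of
    scores per item, each of length n_respondents (sample variance)."""
    k = len(items)
    sum_item_vars = sum(variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Invented data: 4 respondents answering a 3-item scale on a 1-5 format
timing_control = [[4, 2, 5, 3], [4, 1, 5, 3], [5, 2, 4, 3]]
print(round(cronbach_alpha(timing_control), 2))
```

Values above roughly 0.7 are conventionally read as satisfactory internal consistency, which is the standard the abstract's scales are said to meet.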


Due to copyright restrictions, this item is only available for consultation at Aston University Library and Information Services with prior arrangement.


We tested the hypothesis that the differences in performance between developmental dyslexics and controls on visual tasks are specific to the detection of dynamic stimuli. We found that dyslexics were less sensitive than controls to coherent motion in dynamic random dot displays. However, their sensitivity on control measures of static visual form coherence was not significantly different from that of controls. This dissociation in dyslexics' performance on measures thought to tap the sensitivity of different extrastriate visual areas provides evidence for an impairment specific to the detection of dynamic properties of global stimuli, perhaps resulting from selective deficits in dorsal stream functions. © 2001 Lippincott Williams & Wilkins.


Context: Many large organizations juggle an application portfolio that contains different applications fulfilling similar tasks. In an effort to reduce operating costs, they attempt to consolidate such applications. Before applications can be consolidated, the work done with them must be harmonized, a practice known as process harmonization.

Objective: The increased interest in process harmonization calls for measures that quantify the extent to which processes have been harmonized and that uncover the factors of interest when harmonizing processes. Currently, such measures do not exist. This study therefore develops and validates a measurement model to quantify the level of process harmonization in an organization.

Method: The measurement model was developed by means of a literature study and structured interviews. It was subsequently validated through a survey, using factor analysis and correlations with known related constructs.

Results: A valid and reliable measurement model was developed. The factors found to constitute process harmonization are the technical design of the business process and its data, the resources that execute the process, and the information systems used in the process. In addition, strong correlations were found between process harmonization and process standardization, and between process complexity and process harmonization.

Conclusion: The measurement model can be used by practitioners, because it shows them the factors that must be taken into account when harmonizing processes and provides a means to quantify the extent to which they have succeeded in harmonizing their processes. It can also be used by researchers to conduct further empirical research on process harmonization.


Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated in parallel lines. One example of this configuration is a manufacturing facility equipped to assemble and test web servers; a typical web server assembly line features multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they handle serial and various parallel-processing configurations with multiple product classes, as well as job circulation due to random part failures. In addition, correction terms obtained via regression analysis were added to the approximations to minimize the error between the analytical approximations and the simulation models. Markovian and general-type manufacturing systems were studied, with multiple product classes, job circulation due to failures, and fork-join systems to model parallel processing. In both the Markovian and general cases, the approximations without correction terms performed quite well for one- and two-product problem instances. However, the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to handle more than two products. Numerical comparisons showed that the approximations perform remarkably well when the correction factors are used: on average, the flow time error was reduced from 38.19% to 5.59% in the Markovian case, and from 26.39% to 7.23% in the general case.

All the equations in the analytical formulations were implemented as a set of MATLAB scripts. With this set, operations managers of web server assembly lines, or of manufacturing and service systems with similar characteristics, can estimate various system performance measures and make judicious decisions, especially in setting delivery due dates, capacity planning, and bottleneck mitigation.
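The dissertation's actual approximations (multi-class fork-join networks with regression-fitted correction terms) are not reproduced in this abstract. The sketch below only illustrates the general pattern: an analytical flow-time formula (here the elementary M/M/1 result, not the dissertation's model) scaled by a hypothetical multiplicative correction factor of the sort one might fit by regressing simulated flow times on the analytical values:

```python
def mm1_flow_time(arrival_rate: float, service_rate: float) -> float:
    """Mean flow (sojourn) time in an M/M/1 queue: W = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("utilization must be below 1 for a stable queue")
    return 1.0 / (service_rate - arrival_rate)

def corrected_flow_time(arrival_rate: float, service_rate: float,
                        correction: float) -> float:
    """Analytical approximation scaled by a correction factor of the kind
    fitted against simulation output (the factor here is hypothetical)."""
    return correction * mm1_flow_time(arrival_rate, service_rate)

print(mm1_flow_time(0.5, 1.0))  # 2.0 time units at 50% utilization
```

The dissertation's reported gains (e.g., average flow-time error dropping from 38.19% to 5.59% in the Markovian case) come from fitting such correction terms per station type rather than the single global factor shown here.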


The goal of mangrove restoration projects should be to improve the community structure and ecosystem function of degraded coastal landscapes. This requires the ability to forecast how mangrove structure and function will respond to prescribed changes in site conditions, including hydrology, topography, and geophysical energies. Global, regional, and local factors can explain gradients of regulators (e.g., salinity, sulfides), resources (nutrients, light, water), and hydroperiod (frequency and duration of flooding) that collectively account for the stressors producing diverse patterns of mangrove properties across a variety of environmental settings. Simulation models of hydrology, nutrient biogeochemistry, and vegetation dynamics have been developed to forecast patterns in mangroves in the Florida Coastal Everglades. These models provide insight into mangrove response to specific restoration alternatives, testing causal mechanisms of system degradation. We propose that these models can also assist in selecting performance measures for monitoring programs that evaluate project effectiveness; this selection process in turn improves model development and calibration for forecasting mangrove response to restoration alternatives. Hydrologic performance measures include soil regulators (particularly soil salinity), the surface topography of the mangrove landscape, and hydroperiod, including both the frequency and the duration of flooding. Estuarine performance measures should include the salinity of the bay, tidal amplitude, and the conditions of freshwater discharge (included in the salinity value). The most important performance measures from the mangrove biogeochemistry model are soil resources (bulk density, total nitrogen, and phosphorus) and soil accretion. Mangrove ecology performance measures should include forest dimension analysis (transects and/or plots), sapling recruitment, leaf area index, and faunal relationships.

Estuarine ecology performance measures should include the habitat function of mangroves, which can be evaluated with the growth rates of key species, habitat suitability analysis, the isotope abundance of indicator species, and bird censuses. The list of performance measures can be modified according to the model output used to define the scientific goals during the restoration planning process, reflecting the specific goals of the project.


Context: Accurately determining hydration status is a preventative measure for exertional heat illnesses (EHI). Objective: To determine the validity of various field measures of urine specific gravity (Usg) compared with laboratory instruments. Design: Observational research design comparing measures of hydration status: urine reagent strips (URS) and a urine color (Ucol) chart against a refractometer. Setting: The athletic training room of a Division I-A collegiate American football team. Participants: Trial 1 involved urine samples of 69 veteran football players (age = 20.1 ± 1.2 yr; body mass = 229.7 ± 44.4 lb; height = 72.2 ± 2.1 in). Trial 2 involved samples from 5 football players (age = 20.4 ± 0.5 yr; body mass = 261.4 ± 39.2 lb; height = 72.3 ± 2.3 in). Interventions: We administered the Heat Illness Index Score (HIIS) Risk Assessment to identify athletes at risk for EHI (Trial 1). For individuals "at risk" (Trial 2), we collected urine samples before and after 15 days of pre-season "two-a-day" practices in a hot, humid environment (mean on-field WBGT = 28.84 ± 2.36 °C). Main Outcome Measures: Urine samples were immediately analyzed for Usg using a refractometer, Diascreen 7® (URS1), Multistix® (URS2), and Chemstrip10® (URS3). Ucol was measured using a Ucol chart. We calculated descriptive statistics for all main measures and Pearson correlations to assess relationships between the refractometer, each URS, and Ucol, and we transformed the Ucol data to Z-scores for comparison with the refractometer. Results: In Trial 1, we found a moderate relationship (r = 0.491, p < .01) between URS1 (1.020 ± 0.006) and the refractometer (1.026 ± 0.010). In Trial 2, we found marked relationships for Ucol (5.6 ± 1.6 shades, r = 0.619, p < 0.01), URS2 (1.019 ± 0.008, r = 0.712, p < 0.01), and URS3 (1.022 ± 0.007, r = 0.689, p < 0.01) compared with the refractometer (1.028 ± 0.008).

Conclusions: Our findings indicate that URS results were inconsistent between manufacturers, suggesting that practitioners use a clinical refractometer to accurately determine Usg and monitor hydration status.
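The validity figures above are Pearson product-moment correlations between paired device readings. A self-contained sketch of that computation, with the paired Usg values invented for illustration (they are not the study's data):

```python
from math import sqrt

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired Usg readings: refractometer vs. a reagent strip
refractometer = [1.010, 1.015, 1.020, 1.025, 1.030]
reagent_strip = [1.010, 1.010, 1.020, 1.020, 1.030]
print(round(pearson_r(refractometer, reagent_strip), 3))
```

A strip whose readings track the refractometer closely yields r near 1; the study's field devices ranged from moderate (r = 0.491) to marked (r = 0.712) agreement.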



Purpose: To determine whether the 'through-focus' aberrations of patients implanted with multifocal or accommodative intraocular lenses (IOLs) can be used to provide rapid and reliable measures of their subjective range of clear vision. Methods: Eyes that had been implanted for over a year with a concentric (n = 8), segmented (n = 10) or accommodating (n = 6) IOL (mean age 62.9 ± 8.9 years; range 46-79 years) underwent simultaneous monocular subjective (electronic logMAR test chart at 4 m with letters randomised between presentations) and objective (Aston open-field aberrometer) defocus-curve testing for levels of defocus from +1.50 to -5.00 DS in -0.50 DS steps, presented in a randomised order. Pupil size and ocular aberration (a combination of the patient's aberrations and those of the defocus-inducing lens) were measured by the aberrometer at each level of blur. Visual acuity was measured subjectively at each level of defocus to determine the traditional defocus curve, and objective acuity was predicted using image quality metrics. Results: The range of clear focus differed between the three IOL types (F = 15.506, P = 0.001) as well as between subjective and objective defocus curves (F = 6.685, P = 0.049). There was no statistically significant difference between subjective and objective defocus curves in the segmented or concentric-ring MIOL groups (P > 0.05); however, a difference was found between the two measures in the accommodating IOL group (P < 0.001). Mean delta logMAR (predicted minus measured logMAR) across all target vergences was -0.06 ± 0.19 logMAR. Predicted logMAR defocus curves for the multifocal IOLs did not show a near-vision addition peak, unlike the subjective measurements of visual acuity. However, there was a strong positive correlation between measured and predicted logMAR for all three IOLs (Pearson's correlation: P < 0.001).

Conclusions: Current subjective procedures are lengthy and do not allow important additional measures, such as defocus curves under different luminance or contrast levels, to be assessed, which may limit our understanding of MIOL performance in real-world conditions. In general, the objective aberrometry measures correlated well with the subjective assessment, indicating the relative robustness of this technique in evaluating post-operative success with segmented and concentric-ring MIOLs.
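The 'range of clear focus' compared above is conventionally read off a defocus curve by thresholding acuity at a criterion level and interpolating between the sampled defocus steps. A sketch of that analysis, where the 0.3 logMAR cut-off and the sample curve are assumptions for illustration, not values from the study:

```python
def range_of_clear_vision(defocus: list[float], acuity: list[float],
                          criterion: float = 0.3) -> float:
    """Width (in dioptres) of the defocus range over which acuity is at or
    better than `criterion` logMAR, interpolating linearly between the
    sampled defocus levels (lower logMAR = better acuity)."""
    points = []
    for i, (d, a) in enumerate(zip(defocus, acuity)):
        if a <= criterion:
            points.append(d)
        if i > 0 and (acuity[i - 1] - criterion) * (a - criterion) < 0:
            # acuity crosses the criterion between samples i-1 and i
            frac = (criterion - acuity[i - 1]) / (a - acuity[i - 1])
            points.append(defocus[i - 1] + frac * (d - defocus[i - 1]))
    return max(points) - min(points) if points else 0.0

# Invented defocus curve sampled in 0.50 DS steps
defocus_levels = [1.0, 0.5, 0.0, -0.5, -1.0]
acuities = [0.5, 0.3, 0.0, 0.3, 0.5]
print(range_of_clear_vision(defocus_levels, acuities))  # 1.0 D
```

Running the same calculation on subjective and on metric-predicted acuities gives the paired ranges whose difference the study tests across IOL types.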


For the past several years, U.S. colleges and universities have faced increased pressure to improve retention and graduation rates. At the same time, educational institutions have placed greater emphasis on enrolling more students in STEM (science, technology, engineering and mathematics) programs and producing more STEM graduates. The resulting problem faced by educators is finding new ways to support the success of STEM majors, regardless of their pre-college academic preparation. My study used first-year STEM majors' math SAT scores, unweighted high school GPA, math placement test scores, and the highest level of math taken in high school to develop models for predicting which students were likely to pass their first math and science courses. In doing so, it aimed to provide a strategy for improving the passing rates of first-year students attempting STEM-related courses. The study sample comprised 1018 first-year STEM majors who had entered the same large, public, urban, Hispanic-serving research university in the Southeastern U.S. between 2010 and 2012. The research design used hierarchical logistic regression to determine the significance of the four independent variables in predicting success in math and science. The overall model of predictors (all four variables) was statistically significant for predicting both the students who passed their first math course and those who passed their first science course. Individually, all four predictor variables were statistically significant for predicting who passed math, with unweighted high school GPA and the highest math taken in high school accounting for the largest amounts of unique variance.

Those two variables also improved the regression model's percentage of correct predictions for that dependent variable. The only variable that was statistically significant for predicting who passed science was unweighted high school GPA. I offer these results as my contribution to the literature on predicting first-year student success, especially within the STEM disciplines.
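Hierarchical logistic regression of this kind is normally run in a statistics package, and the study's coefficients are not reported in this abstract. As a minimal illustration of the underlying model only, here is a plain gradient-descent logistic regression on invented predictor data (the values, scaling, and outcomes are hypothetical, not the study's):

```python
from math import exp

def sigmoid(z: float) -> float:
    # clamp to avoid math range errors for extreme arguments
    if z < -60.0:
        return 0.0
    if z > 60.0:
        return 1.0
    return 1.0 / (1.0 + exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Logistic regression fitted by stochastic gradient descent.
    X: list of feature rows; y: list of 0/1 outcomes."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi) -> bool:
    """Predict pass (True) / fail (False) at the 0.5 probability threshold."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5

# Hypothetical predictors: [unweighted HS GPA, highest HS math level 1-4]
X = [[2.0, 1], [2.5, 1], [3.0, 2], [3.5, 3], [3.8, 4], [4.0, 4]]
y = [0, 0, 0, 1, 1, 1]  # passed first math course?
w, b = fit_logistic(X, y)
print(predict(w, b, [3.9, 4]))
```

The hierarchical aspect of the study's design, entering blocks of predictors in stages and testing the improvement in fit at each stage, is not shown here; the sketch fits a single block.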