36 results for proximity query, collision test, distance test, data compression, triangle test
Abstract:
In many models of edge analysis in biological vision, the initial stage is a linear 2nd derivative operation. Such models predict that adding a linear luminance ramp to an edge will have no effect on the edge's appearance, since the ramp has no effect on the 2nd derivative. Our experiments did not support this prediction: adding a negative-going ramp to a positive-going edge (or vice-versa) greatly reduced the perceived blur and contrast of the edge. The effects on a fairly sharp edge were accurately predicted by a nonlinear multi-scale model of edge processing [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision], in which a half-wave rectifier comes after the 1st derivative filter. But we also found that the ramp affected perceived blur more profoundly when the edge blur was large, and this greater effect was not predicted by the existing model. The model's fit to these data was much improved when the simple half-wave rectifier was replaced by a threshold-like transducer [May, K. A. & Georgeson, M. A. (2007). Blurred edges look faint, and faint edges look sharp: The effect of a gradient threshold in a multi-scale edge coding model. Vision Research, 47, 1705-1720.]. This modified model correctly predicted that the interaction between ramp gradient and edge scale would be much larger for blur perception than for contrast perception. In our model, the ramp narrows an internal representation of the gradient profile, leading to a reduction in perceived blur. This in turn reduces perceived contrast because estimated blur plays a role in the model's estimation of contrast. Interestingly, the model predicts that analogous effects should occur when the width of the window containing the edge is made narrower. 
This has already been confirmed for blur perception; here, we further support the model by showing a similar effect for contrast perception. © 2007 Elsevier Ltd. All rights reserved.
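The gradient-rectification mechanism described above can be illustrated with a minimal numpy sketch (this is not the authors' model; the blur scale, ramp gradient, and width measure are all illustrative assumptions): the 1st-derivative response to a Gaussian-blurred edge is a Gaussian bump, an opposing ramp shifts that profile down by a constant, and half-wave rectification then leaves a narrower positive profile, which such a model would read out as reduced blur.

```python
import numpy as np

# Gradient profile of a Gaussian-blurred edge: a Gaussian bump of scale sigma.
x = np.linspace(-8.0, 8.0, 1601)
sigma = 1.0                                   # illustrative edge blur
grad_edge = np.exp(-x**2 / (2 * sigma**2))    # 1st-derivative response to the edge

ramp_gradient = 0.3                           # constant gradient of an opposing ramp
grad_with_ramp = grad_edge - ramp_gradient    # the ramp shifts the whole profile down

# Half-wave rectification applied after the 1st-derivative filter.
rectified = np.maximum(grad_with_ramp, 0.0)

def width(profile, frac=0.5):
    """Number of samples where the profile exceeds frac * its peak."""
    return int(np.count_nonzero(profile >= frac * profile.max()))

# The opposing ramp narrows the rectified gradient profile.
print(width(np.maximum(grad_edge, 0.0)), width(rectified))
```

Running this shows the rectified profile is narrower than the unperturbed one, consistent with the reduction in perceived blur the abstract reports.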
Abstract:
Background: Age-related macular degeneration (AMD) is the leading cause of visual disability in people over 60 years of age in the developed world. The success of treatment deteriorates with increased latency of diagnosis. The purpose of this study was to determine the reliability of the macular mapping test (MMT), and to investigate its potential as a screening tool. Methods: The study population comprised 31 healthy eyes of 31 participants. To assess reliability, four MMT measurements were taken in two sessions separated by one hour by two practitioners, with reversal of order in the second session. MMT readings were also taken from 17 eyes affected by age-related maculopathy (ARM) and 12 affected by AMD. Results: For the normal cohort, average MMT scores ranged from 85.5 to 100.0 MMT points. Scores ranged from 79.0 to 99.0 for the ARM group and from 9.0 to 92.0 for the AMD group. MMT scores were reliable to within ±7.0 points. The difference between AMD-affected eyes and controls was significant (z = 3.761, p < 0.001). The difference between ARM-affected eyes and controls was not significant (z = -0.216, p = 0.829). Conclusion: The reliability data show that a change of 14 points or more is required to indicate a clinically significant change. This value is required for use of the MMT as an outcome measure in clinical trials. Although there was no difference between MMT scores from ARM-affected eyes and controls, the MMT has the advantage over the Amsler grid in that it uses a letter target, has a peripheral fixation aid, and provides a numerical score. This score could be beneficial in office and home monitoring of AMD progression, as well as an outcome measure in clinical research. © 2005 Bartlett et al; licensee BioMed Central Ltd.
Abstract:
Tonal, textural and contextual properties are used in manual photointerpretation of remotely sensed data. This study used these three attributes to produce a lithological map of semi-arid northwest Argentina by semi-automatic computer classification of remotely sensed data. Three different types of satellite data were investigated: LANDSAT MSS, TM and SIR-A imagery. Supervised classification procedures using tonal features alone produced poor results. LANDSAT MSS produced classification accuracies in the range of 40 to 60%, while accuracies of 50 to 70% were achieved using LANDSAT TM data. The addition of SIR-A data increased the classification accuracy. The increased accuracy of TM over MSS is due to the better discrimination of geological materials afforded by the middle-infrared bands of the TM sensor. The maximum likelihood classifier consistently produced classification accuracies 10 to 15% higher than either the minimum-distance-to-means or the decision tree classifier; this improved accuracy was obtained at the cost of greatly increased processing time. A new type of classifier, the spectral shape classifier, which is computationally as fast as a minimum-distance-to-means classifier, is described. However, the results for this classifier were disappointing, being lower in most cases than those of the minimum-distance or decision tree procedures. The classification results using only tonal features were felt to be unacceptably poor, so textural attributes were investigated. Texture is an important attribute used by photogeologists to discriminate lithology. In the case of TM data, texture measures were found to increase the classification accuracy by up to 15%. However, in the case of the LANDSAT MSS data the use of texture measures did not provide any significant increase in classification accuracy.
For TM data, it was found that second-order texture, especially the SGLDM-based measures, produced the highest classification accuracy. Contextual post-processing was found to increase classification accuracy and improve the visual appearance of the classified output by removing isolated misclassified pixels, which tend to clutter classified images. Simple contextual features, such as mode filters, were found to outperform more complex features such as gravitational filters or minimal-area replacement methods. Generally, the larger the size of the filter, the greater the increase in accuracy. Production rules were used to build a knowledge-based system which used tonal and textural features to identify sedimentary lithologies in each of the two test sites. The knowledge-based system was able to identify six out of ten lithologies correctly.
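The minimum-distance-to-means rule mentioned above can be sketched in a few lines (the class signatures and band values here are invented for illustration; the study itself used LANDSAT and SIR-A bands): each pixel is assigned to the class whose mean spectral signature is nearest in Euclidean distance.

```python
import numpy as np

# Hypothetical mean spectral signature per lithology class:
# one row per class, one column per band (values are illustrative only).
class_means = np.array([
    [42.0, 55.0, 61.0],   # e.g. sandstone
    [30.0, 33.0, 29.0],   # e.g. shale
    [70.0, 68.0, 75.0],   # e.g. limestone
])

def min_distance_classify(pixels, means):
    """Assign each pixel to the class whose mean signature is nearest
    in Euclidean distance (the minimum-distance-to-means rule)."""
    # pixels: (n, bands); means: (k, bands) -> distances: (n, k)
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)

pixels = np.array([[41.0, 54.0, 60.0],
                   [68.0, 70.0, 74.0]])
print(min_distance_classify(pixels, class_means))  # → [0 2]
```

The maximum likelihood classifier the abstract compares against differs in weighting each band by the class covariance, which is where its extra processing time comes from.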
Abstract:
Objectives - Powdered and granulated particulate materials make up most of the ingredients of pharmaceuticals and are often at risk of undergoing unwanted agglomeration, or caking, during transport or storage. This is particularly acute when bulk powders are exposed to extreme swings in temperature and relative humidity, which is now common as drugs are produced and administered in increasingly hostile climates and are stored for longer periods of time prior to use. This study explores the possibility of using a uniaxial unconfined compression test to compare the strength of caked agglomerates exposed to different temperatures and relative humidities. This is part of a longer-term study to construct a protocol to predict the caking tendency of a new bulk material from individual particle properties. The main challenge is to develop techniques that provide repeatable results yet are simple enough to be useful to a wide range of industries. Methods - Powdered sucrose, a major pharmaceutical ingredient, was poured into a split die and exposed to high and low relative humidity cycles at room temperature. The typical ranges were 20–30% for the lower value and 70–80% for the higher value. The outer die casing was then removed and the resultant agglomerate was subjected to an unconfined compression test using a plunger fitted to a Zwick compression tester. Force against displacement was logged so that the dynamics of failure as well as the failure load of the sample could be recorded. The experimental matrix included varying the number of humidity cycles, the difference between the maximum and minimum relative humidity, the heights and diameters of the samples, and the particle size. Results - Trends showed that the tensile strength of the agglomerates increased with the number of cycles and also with more extreme swings in relative humidity.
This agrees with previous work on alternative methods of measuring the tensile strength of sugar agglomerates formed from humidity cycling (Leaper et al 2003). Conclusions - The results show that, at the very least, the uniaxial tester is a good comparative tester for examining the caking tendency of powdered materials, with a simple arrangement and operation that are compatible with the requirements of industry. However, further work is required to continue to optimize the height/diameter ratio during tests.
Abstract:
This research addressed the question "Which factors predict the effectiveness of healthcare teams?" by assessing the psychometric properties of a new measure of team functioning, using data collected from 797 team members in 61 healthcare teams. This measure is the Aston Team Performance Inventory (ATPI), developed by West, Markiewicz and Dawson (2005) and based on the IPO (input-process-output) model. The ATPI was pilot tested to examine its reliability in the Jordanian cultural context: a sample of five teams comprising 3-6 members each was randomly selected from the Jordan Red Crescent health centers in Amman. Factors that predict team effectiveness were then explored in a Jordanian sample (1622 members in 277 teams, with 255 leaders, from healthcare teams in hospitals in Amman) using self-report and leader-rating measures adapted from work by West, Borrill et al (2000) to determine team effectiveness and innovation from the leaders' point of view. The results demonstrate the validity and reliability of the measures for use in healthcare settings. Team effort and skills and leader managing had the strongest associations with team processes in terms of team objectives, reflexivity, participation, task focus, creativity and innovation. Team inputs in terms of task design, team effort and skills, and organizational support were associated with team effectiveness and innovation, whereas team resources were associated only with team innovation. Team objectives had the strongest mediated and direct association with team effectiveness, whereas task focus had the strongest mediated and direct association with team innovation. Finally, among leadership variables, leader managing had the strongest association with team effectiveness and innovation. The theoretical and practical implications of this thesis are that team effectiveness and innovation are influenced by multiple factors that must all be taken into account.
The key factors managers need to ensure are in place for effective teams are team effort and skills, organizational support and team objectives. To conclude, the application of these findings to healthcare teams in Jordan will help improve their team effectiveness, and thus the healthcare services that they provide.
Abstract:
The properties of statistical tests for hypotheses concerning the parameters of the multifractal model of asset returns (MMAR) are investigated, using Monte Carlo techniques. We show that, in the presence of multifractality, conventional tests of long memory tend to over-reject the null hypothesis of no long memory. Our test addresses this issue by jointly estimating long memory and multifractality. The estimation and test procedures are applied to exchange rate data for 12 currencies. Among the nested model specifications that are investigated, in 11 out of 12 cases, daily returns are most appropriately characterized by a variant of the MMAR that applies a multifractal time-deformation process to NIID returns. There is no evidence of long memory.
Abstract:
Aim: Contrast sensitivity (CS) provides important information on visual function. This study aimed to assess differences in clinical expediency of the CS increment-matched new back-lit and original paper versions of the Melbourne Edge Test (MET) in determining the CS of the visually impaired. Methods: The back-lit and paper MET were administered to 75 visually impaired subjects (28-97 years). Two versions of the back-lit MET acetates were used to match the CS increments with the paper-based MET. Measures of CS were repeated after 30 min and again in the presence of a focal light source directed onto the MET. Visual acuity was measured with a Bailey-Lovie chart and subjects rated how much difficulty they had with face and vehicle recognition. Results: The back-lit MET gave a significantly higher CS than the paper-based version (14.2 ± 4.1 dB vs 11.3 ± 4.3 dB, p < 0.001). A significantly higher reading resulted with repetition of the paper-based MET (by 1.0 ± 1.7 dB, p < 0.001), but this was not evident with the back-lit MET (by 0.1 ± 1.4 dB, p = 0.53). The MET readings were increased by a focal light source in both the back-lit (by 0.3 ± 0.81 dB, p < 0.01) and paper-based (by 1.2 ± 1.7 dB, p < 0.001) versions. CS as measured by the back-lit and paper-based versions of the MET was significantly correlated with patients' perceived ability to recognise faces (r = 0.71 and r = 0.85, respectively; p < 0.001) and vehicles (r = 0.67 and r = 0.82, respectively; p < 0.001), and with distance visual acuity (both r = -0.64; p < 0.001). Conclusions: The CS increment-matched back-lit MET gives higher CS values than the old paper-based test by approximately 3 dB, and is more repeatable and less affected by external light sources. Clinically, the MET score provides information on patient difficulties with visual tasks, such as recognising faces. © 2005 The College of Optometrists.
Abstract:
We proposed and tested a multilevel model, underpinned by empowerment theory, that examines the processes linking high-performance work systems (HPWS) and performance outcomes at the individual and organizational levels of analyses. Data were obtained from 37 branches of 2 banking institutions in Ghana. Results of hierarchical regression analysis revealed that branch-level HPWS relates to empowerment climate. Additionally, results of hierarchical linear modeling that examined the hypothesized cross-level relationships revealed 3 salient findings. First, experienced HPWS and empowerment climate partially mediate the influence of branch-level HPWS on psychological empowerment. Second, psychological empowerment partially mediates the influence of empowerment climate and experienced HPWS on service performance. Third, service orientation moderates the psychological empowerment-service performance relationship such that the relationship is stronger for those high rather than low in service orientation. Last, ordinary least squares regression results revealed that branch-level HPWS influences branch-level market performance through cross-level and individual-level influences on service performance that emerges at the branch level as aggregated service performance. © 2011 American Psychological Association.
Abstract:
Astrocytes are essential for neuronal function and survival, so both cell types were included in a human neurotoxicity test-system to assess the protective effects of astrocytes on neurons, compared with a culture of neurons alone. The human NT2.D1 cell line was differentiated to form either a co-culture of post-mitotic NT2.N neuronal (TUJ1, NF68 and NSE positive) and NT2.A astrocytic (GFAP positive) cells (∼2:1 NT2.A:NT2.N), or an NT2.N mono-culture. Cultures were exposed to human toxins, for 4 h at sub-cytotoxic concentrations, in order to compare levels of compromised cell function and thus evidence of an astrocytic protective effect. Functional endpoints examined included assays for cellular energy (ATP) and glutathione (GSH) levels, generation of hydrogen peroxide (H2O2) and caspase-3 activation. Generally, the NT2.N/A co-culture was more resistant to toxicity, maintaining superior ATP and GSH levels and sustaining smaller significant increases in H2O2 levels compared with neurons alone. However, the pure neuronal culture showed a significantly lower level of caspase activation. These data suggest that besides their support for neurons through maintenance of ATP and GSH and control of H2O2 levels, following exposure to some substances, astrocytes may promote an apoptotic mode of cell death. Thus, it appears the use of astrocytes in an in vitro predictive neurotoxicity test-system may be more relevant to human CNS structure and function than neuronal cells alone. © 2007 Elsevier Ltd. All rights reserved.
Abstract:
Background: The Melbourne Edge Test (MET) is a portable forced-choice edge detection contrast sensitivity (CS) test. The original externally illuminated paper test has been superseded by a backlit version. The aim of this study was to establish normative values for age and to assess change with visual impairment. Method: The MET was administered to 168 people with normal vision (18-93 years old) and 93 patients with visual impairment (39-97 years old). Distance visual acuity (VA) was measured with a logMAR chart. Results: In eyes without disease, MET CS was stable until the age of 50 years (23.8 ± 0.7 dB), after which it decreased at a rate of ≈1.5 dB per decade. Compared with normative values, people with low vision were found to have significantly reduced CS, which could not be totally accounted for by reduced VA. Conclusions: The MET provides a quick and easy measure of CS, which highlights a reduction in visual function that may not be detectable using VA measurements. © 2004 The College of Optometrists.
Abstract:
The sign test is a simple non-parametric test which can be used on paired data, i.e., two related samples, matched samples, or repeated measurements on the same sample. It was developed by Wilcoxon before the more powerful and familiar ‘Wilcoxon signed-rank test’ described in a previous statnote. This statnote describes the use of the sign test with reference to two scenarios: (1) to compare the cleanliness of two hospital wards as assessed by a sample of observers and (2) to compare bacterial contamination on cloths and sponges from a domestic kitchen.
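A minimal implementation of the sign test described here might look as follows (the ward scores are invented for illustration): ties are discarded, the positive differences are counted, and an exact two-sided p-value is taken from the tail of a Binomial(n, 0.5) distribution, doubled.

```python
from math import comb

def sign_test(x, y):
    """Two-sided exact sign test for paired samples x and y.
    Ties (zero differences) are discarded, as is conventional."""
    diffs = [a - b for a, b in zip(x, y)]
    n_pos = sum(d > 0 for d in diffs)
    n = sum(d != 0 for d in diffs)
    # Two-sided p-value: probability under Binomial(n, 0.5) of a count
    # at least as extreme as the observed one, doubled and capped at 1.
    k = min(n_pos, n - n_pos)
    p = sum(comb(n, i) for i in range(k + 1)) * 2 / 2**n
    return n_pos, n, min(p, 1.0)

# Hypothetical cleanliness scores for two wards from 10 observers.
ward_a = [7, 5, 6, 8, 6, 7, 5, 6, 7, 6]
ward_b = [5, 4, 6, 6, 5, 5, 4, 5, 6, 5]
print(sign_test(ward_a, ward_b))  # → (9, 9, 0.00390625)
```

With nine of the nine non-tied observers scoring ward A higher, the exact p-value is 2/512, so the wards would be judged to differ in cleanliness.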
Abstract:
Cochran's Q-test is a non-parametric analysis which can be applied to a two-way design in which the data are binomial and can take only two possible outcomes, e.g., 0 or 1, alive or dead, present or absent, clean or dirty, infected or non-infected; it is an extension of the binomial tests introduced in Statnote 39. This statnote describes the application of the test to the analysis of the changes which occur in the fungal flora of forestry nursery beds after two different sterilization procedures.
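Cochran's Q statistic for a subjects-by-treatments binary table can be sketched as follows (the nursery-bed data below are invented for illustration); the statistic is referred to a chi-squared distribution with k - 1 degrees of freedom, where k is the number of treatments.

```python
import numpy as np

def cochrans_q(data):
    """Cochran's Q statistic for a (subjects x treatments) 0/1 matrix.
    Returns (Q, degrees of freedom); Q is compared with a chi-squared
    distribution on k - 1 degrees of freedom."""
    data = np.asarray(data)
    k = data.shape[1]            # number of treatments/conditions
    col = data.sum(axis=0)       # successes per treatment
    row = data.sum(axis=1)       # successes per subject
    T = data.sum()               # grand total of successes
    q = (k - 1) * (k * (col**2).sum() - T**2) / (k * T - (row**2).sum())
    return float(q), k - 1

# Hypothetical presence (1) / absence (0) of a fungus in 8 nursery beds
# under three conditions (untreated, treatment A, treatment B).
beds = [[1, 0, 0],
        [1, 1, 0],
        [1, 0, 0],
        [1, 0, 1],
        [1, 0, 0],
        [1, 1, 0],
        [0, 0, 0],
        [1, 0, 0]]
q, df = cochrans_q(beds)
print(round(q, 3), df)  # → 8.857 2
```

Here Q ≈ 8.86 on 2 degrees of freedom exceeds the 5% chi-squared critical value of 5.99, so in this made-up example the treatments would be judged to differ.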
Abstract:
We report an extension of the procedure devised by Weinstein and Shanks (Memory & Cognition 36:1415-1428, 2008) to study false recognition and priming of pictures. Participants viewed scenes with multiple embedded objects (seen items), then studied the names of these objects and the names of other objects (read items). Finally, participants completed a combined direct (recognition) and indirect (identification) memory test that included seen items, read items, and new items. In the direct test, participants recognized pictures of seen and read items more often than new pictures. In the indirect test, participants' speed at identifying those same pictures was improved for pictures that they had actually studied, and also for falsely recognized pictures whose names they had read. These data provide new evidence that a false-memory induction procedure can elicit memory-like representations that are difficult to distinguish from "true" memories of studied pictures. © 2012 Psychonomic Society, Inc.
Abstract:
Aim: To evaluate OneTouch® Verio™ test strip performance at hypoglycaemic blood glucose (BG) levels (<3.9 mmol/L [<70 mg/dL]) across seven clinical studies. Methods: Trained clinical staff performed duplicate capillary BG monitoring system tests on 700 individuals with type 1 and type 2 diabetes using blood from a single fingerstick lancing. BG reference values were obtained using a YSI 2300 STAT™ Glucose Analyzer. The number and percentage of BG values within ±0.83 mmol/L (±15 mg/dL) and ±0.56 mmol/L (±10 mg/dL) of reference values were calculated at BG concentrations of <3.9 mmol/L (<70 mg/dL), <3.3 mmol/L (<60 mg/dL), and <2.8 mmol/L (<50 mg/dL). Results: At BG concentrations <3.9 mmol/L (<70 mg/dL), 674/674 (100%) of meter results were within ±0.83 mmol/L (±15 mg/dL) and 666/674 (98.8%) were within ±0.56 mmol/L (±10 mg/dL) of reference values. At BG concentrations <3.3 mmol/L (<60 mg/dL) and <2.8 mmol/L (<50 mg/dL), 358/358 (100%) and 270/270 (100%) were within ±0.56 mmol/L (±10 mg/dL) of reference values, respectively. Conclusion: In this analysis of data from seven independent studies, OneTouch Verio test strips provided highly accurate results at hypoglycaemic BG levels. © 2012 Elsevier Ireland Ltd.
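The accuracy figures above are percentage-within-tolerance calculations, which can be sketched as follows (the paired readings are invented, and `pct_within` is a hypothetical helper, not part of any meter software):

```python
import numpy as np

def pct_within(meter, reference, tol):
    """Percentage of meter readings within ±tol of the reference values."""
    meter = np.asarray(meter, dtype=float)
    reference = np.asarray(reference, dtype=float)
    ok = np.abs(meter - reference) <= tol
    return 100.0 * float(ok.mean())

# Hypothetical paired readings (mmol/L) at hypoglycaemic levels.
ref   = [3.1, 2.6, 3.8, 3.4, 2.9]
meter = [3.0, 2.7, 3.5, 3.9, 2.9]
print(pct_within(meter, ref, 0.83))  # within ±0.83 mmol/L (±15 mg/dL)
```

In the study itself this tally was computed at each tolerance (±0.83 and ±0.56 mmol/L) within each BG stratum.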
Abstract:
Chlamydia is a common sexually transmitted infection that has potentially serious consequences unless detected and treated early. The health service in the UK offers clinic-based testing for chlamydia but uptake is low. Identifying the predictors of testing behaviours may inform interventions to increase uptake. Self-tests for chlamydia may facilitate testing and treatment in people who avoid clinic-based testing. Self-testing and being tested by a health care professional (HCP) involve two contrasting contexts that may influence testing behaviour. However, little is known about how predictors of behaviour differ as a function of context. In this study, theoretical models of behaviour were used to assess factors that may predict intention to test in two different contexts: self-testing and being tested by a HCP. Individuals searching for or reading about chlamydia testing online were recruited using Google Adwords. Participants completed an online questionnaire that addressed previous testing behaviour and measured constructs of the Theory of Planned Behaviour and Protection Motivation Theory, which propose a total of eight possible predictors of intention. The questionnaire was completed by 310 participants. Sufficient data for multiple regression were provided by 102 and 118 respondents for self-testing and testing by a HCP respectively. Intention to self-test was predicted by vulnerability and self-efficacy, with a trend-level effect for response efficacy. Intention to be tested by a HCP was predicted by vulnerability, attitude and subjective norm. Thus, intentions to carry out two testing behaviours with very similar goals can have different predictors depending on test context. We conclude that interventions to increase self-testing should be based on evidence specifically related to test context.