857 results for change-of-direction patterns


Abstract:

Aim: To measure the influence of spherical intraocular lens implantation and conventional myopic laser in situ keratomileusis on peripheral ocular aberrations. Setting: Visual & Ophthalmic Optics Laboratory, School of Optometry & Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Australia. Methods: Peripheral aberrations were measured using a modified commercial Hartmann-Shack aberrometer across 42° × 32° of the central visual field in 6 subjects after spherical intraocular lens (IOL) implantation and in 6 subjects after conventional laser in situ keratomileusis (LASIK) for myopia. The results were compared with those of age-matched emmetropic and myopic control groups. Results: The IOL group showed a greater rate of quadratic change of spherical equivalent refraction across the visual field, higher spherical aberration, and greater rates of change of higher-order root-mean-square (RMS) aberrations and total RMS aberrations across the visual field than its emmetropic control group. However, coma trends were similar for the two groups. The LASIK group had a greater rate of quadratic change of spherical equivalent refraction across the visual field, higher spherical aberration, the opposite trend in coma across the field, and greater higher-order RMS and total RMS aberrations than its myopic control group. Conclusion: Spherical IOL implantation and conventional myopic LASIK increase peripheral ocular aberrations. Both cause a considerable increase in spherical aberration across the visual field, and LASIK reverses the sign of the rate of change in coma across the field relative to the other groups. Keywords: refractive surgery, LASIK, IOL implantation, aberrations, peripheral aberrations
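For readers unfamiliar with the RMS metrics quoted above: with an orthonormal Zernike expansion of the wavefront, total and higher-order RMS are simply root-sum-of-squares of subsets of the coefficients. A minimal Python sketch with hypothetical coefficient values (the index split assumes OSA single-index ordering; none of these numbers come from the study):

import numpy as np

# Hypothetical Zernike coefficients in microns, OSA single-index ordering.
# j = 0..2 (piston, tilts) are excluded from RMS totals by convention;
# j = 3..5 are second-order terms; j >= 6 are the higher-order terms.
c = np.array([0.0, 0.0, 0.0,      # piston, tilt x, tilt y
              -0.85, 0.10, 0.05,  # second-order (defocus, astigmatism)
              0.02, 0.03, 0.01, 0.02, 0.01, 0.00, 0.12])  # higher-order terms

total_rms = np.sqrt(np.sum(c[3:] ** 2))  # total RMS wavefront error
ho_rms = np.sqrt(np.sum(c[6:] ** 2))     # higher-order RMS only

print(f"total RMS = {total_rms:.3f} um, higher-order RMS = {ho_rms:.3f} um")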

Abstract:

This thesis aimed to investigate the way in which distance runners modulate their speed, in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler shift and by change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100 m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m·s⁻¹). Speed was slightly underestimated on the curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS position/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49 m, while 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 ± 0.34 m, range 0.69-2.10 m). Non-differential GPS thus provided highly accurate estimates of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with its reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long-distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners’ individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections.
Group-level speed was highly predictable using a modified gradient factor (r² = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners’ speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds. Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968 m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476 m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492 m) was completed without pacing. Goals for the Intervention trial were based on findings from study two, using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption, with only one runner showing a change of more than 10%. Group-level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, as gauged by a low root-mean-square error across subsections and gradients. Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall time. This suggests that for some runners the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners’ speeds only on uphills, speed on the level was systematically influenced by preceding gradients, and there was much larger individual variation on downhill sections. Individuals adopted distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption.
Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners’ range of uphill-downhill speeds and improved performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to improve the effectiveness of the suggested strategy.
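As a concrete illustration of the Δ GPS position/time method used in the first study (the Doppler-shift speeds come directly from the receiver), successive fixes can be converted to speed via a great-circle distance. A minimal Python sketch with hypothetical 1 Hz fixes:

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude fixes."""
    R = 6371000.0  # mean Earth radius (m)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def speeds_from_fixes(fixes):
    """fixes: list of (t_seconds, lat, lon) tuples -> list of speed estimates (m/s)."""
    return [haversine_m(la0, lo0, la1, lo1) / (t1 - t0)
            for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:])]

# Hypothetical 1 Hz fixes (not data from the thesis)
fixes = [(0, -27.4770, 153.0280), (1, -27.4770, 153.0281), (2, -27.4771, 153.0282)]
print(speeds_from_fixes(fixes))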

Abstract:

The concept of radar was developed for the estimation of the distance (range) and velocity of a target from a receiver. The distance measurement is obtained by measuring the time taken for the transmitted signal to propagate to the target and return to the receiver. The target's velocity is determined by measuring the Doppler-induced frequency shift of the returned signal caused by the rate of change of the time-delay from the target. As researchers further developed conventional radar systems it became apparent that additional information was contained in the backscattered signal and that this information could in fact be used to describe the shape of the target itself. This is because a target can be considered to be a collection of individual point scatterers, each of which has its own velocity and time-delay. Delay-Doppler parameter estimation of each of these point scatterers thus corresponds to a mapping of the target's range and cross-range, producing an image of the target. Much research has been done in this area since the early radar imaging work of the 1960s. At present, radar imaging falls into two main categories: the first relates to the case where the backscattered signal is considered to be deterministic; the second to the case where the backscattered signal is of a stochastic nature. In both cases the information which describes the target's scattering function is extracted by the use of the ambiguity function, a function which correlates the backscattered signal in time and frequency with the transmitted signal. In practical situations, it is often necessary to have the transmitter and the receiver of the radar system sited at different locations. The problem in these situations is that a reference signal must then be present in order to calculate the ambiguity function. This causes an additional problem in that detailed phase information about the transmitted signal is then required at the receiver. It is this latter problem which has led to the investigation of radar imaging using time-frequency distributions. As will be shown in this thesis, the phase information about the transmitted signal can be extracted from the backscattered signal using time-frequency distributions. The principal aim of this thesis was the development, and subsequent discussion, of the theory of radar imaging using time-frequency distributions. Consideration is first given to the case where the target is diffuse, i.e. where the backscattered signal has temporal stationarity and a spatially white power spectral density. The complementary situation is also investigated, i.e. where the target is no longer diffuse, but some degree of correlation exists between the time-frequency points. Computer simulations are presented to demonstrate the concepts and theories developed in the thesis. For the proposed radar system to be practically realisable, both the time-frequency distributions and the associated algorithms developed must be able to be implemented in a timely manner. For this reason an optical architecture is proposed. This architecture is specifically designed to obtain the required time and frequency resolution when using laser radar imaging. The complex light amplitude distributions produced by this architecture have been computer simulated using an optical compiler.
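For reference, the (narrowband) ambiguity function mentioned above correlates the received signal against time- and frequency-shifted copies of a reference signal. A minimal discrete Python sketch, using a linear FM chirp as a stand-in waveform (not the thesis's actual signals):

import numpy as np

def ambiguity_surface(s, fs, delays, dopplers):
    """|chi(tau, nu)| for a sampled signal s: correlate s against a delayed,
    Doppler-shifted copy of itself (a matched reference is assumed available)."""
    t = np.arange(len(s)) / fs
    chi = np.empty((len(delays), len(dopplers)), dtype=complex)
    for i, tau in enumerate(delays):
        s_delayed = np.roll(s, int(round(tau * fs)))  # circular shift; adequate for a sketch
        for j, nu in enumerate(dopplers):
            chi[i, j] = np.sum(s * np.conj(s_delayed) * np.exp(1j * 2 * np.pi * nu * t))
    return np.abs(chi)

# Hypothetical transmit waveform: a linear FM chirp sampled at 1 kHz.
fs = 1000.0
t = np.arange(0, 0.5, 1 / fs)
s = np.exp(1j * np.pi * 200.0 * t ** 2)
A = ambiguity_surface(s, fs,
                      delays=np.linspace(-0.05, 0.05, 41),
                      dopplers=np.linspace(-50.0, 50.0, 41))
print(A.shape, A.argmax())  # the surface peaks at zero delay and zero Doppler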

Abstract:

The use of feedback technologies, in the form of products such as Smart Meters, is increasingly seen as the means by which 'consumers' can be made aware of their patterns of resource consumption and then use this enhanced awareness to change their behaviour to reduce the environmental impacts of their consumption. These technologies tend to be single-resource focused (e.g. on electricity consumption only), with their functionality defined by persons other than end-users (e.g. electricity utilities). This paper presents initial findings of end-users' experiences with a multi-resource feedback technology, within the context of sustainable housing. It proposes that an understanding of user context, supply chain management and market diffusion issues is an important design consideration that contributes to technology 'success'.

Abstract:

Experimental observations of cell migration often describe the presence of mesoscale patterns within motile cell populations. These patterns can take the form of cells moving as aggregates or in chain-like formation. Here we present a discrete model capable of producing mesoscale patterns. These patterns are formed by biasing movements to favor a particular configuration of agent–agent attachments using a binding function f(K), where K is the scaled local coordination number. This discrete model corresponds to a nonlinear diffusion equation, in which the nonlinear diffusivity D(C) is related to the binding function f. The nonlinear diffusion equation supports a range of solutions which can be either smooth or discontinuous. Aggregation patterns can be produced with the discrete model, and we show that there is a transition between the presence and absence of aggregation depending on the sign of D(C). A combination of simulation and analysis shows that both the existence of mesoscale patterns and the validity of the continuum model depend on the form of f. Our results suggest that there may be no formal continuum description of a motile system with strong mesoscale patterns.
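The paper's exact update rules are not reproduced here, but the mechanism can be sketched: an exclusion-process random walk in which an agent's motility is modulated by a binding function of its scaled coordination number K. A minimal Python sketch under those assumptions (the specific form of f is hypothetical, not the one analysed in the paper):

import numpy as np

rng = np.random.default_rng(0)

def scaled_coordination(grid, i, j):
    """K: fraction of occupied von Neumann neighbours (periodic boundaries)."""
    L = grid.shape[0]
    nbrs = [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]
    return sum(grid[a, b] for a, b in nbrs) / 4.0

def f(K, alpha=3.0):
    """Hypothetical binding function: strong agent-agent attachments suppress movement."""
    return np.exp(-alpha * K)

def step(grid):
    """One random-sequential update sweep with simple exclusion."""
    L = grid.shape[0]
    occupied = np.argwhere(grid == 1)
    for i, j in occupied[rng.permutation(len(occupied))]:
        if rng.random() > f(scaled_coordination(grid, i, j)):
            continue  # movement attempt aborted by binding
        di, dj = ((-1, 0), (1, 0), (0, -1), (0, 1))[rng.integers(4)]
        ti, tj = (i + di) % L, (j + dj) % L
        if grid[ti, tj] == 0:  # exclusion: at most one agent per site
            grid[i, j], grid[ti, tj] = 0, 1

grid = (rng.random((50, 50)) < 0.3).astype(int)  # 30% initial occupancy
for _ in range(200):
    step(grid)

With a decreasing f, well-connected agents rarely move, so clusters persist and grow, which is the qualitative aggregation behaviour the abstract describes.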

Abstract:

Background: Clinical practice and clinical research have made a concerted effort to move beyond the use of clinical indicators alone and embrace patient-focused care through the use of patient-reported outcomes such as health-related quality of life. However, unless patients give consistent consideration to the health states that give meaning to the measurement scales used to evaluate these constructs, longitudinal comparison of these measures may be invalid. This study aimed to investigate whether patients give consideration to a standard health state rating scale (EQ-VAS) and whether consideration of good and poor health state descriptors immediately changes their self-report. Methods: A randomised crossover trial was implemented amongst hospitalised older adults (n = 151). Patients were asked to consider descriptions of extremely good (Description A) and poor (Description B) health states. The EQ-VAS was administered as a self-report at baseline, after the first descriptors (A or B), then again after the remaining descriptors (B or A respectively). At baseline patients were also asked if they had considered either EQ-VAS anchor. Results: Overall, 106/151 (70%) participants changed their self-evaluation by ≥5 points on the 100-point VAS, with a mean (SD) change of +4.5 (12) points (p < 0.001). A total of 74/151 (49%) participants did not consider the best-health VAS anchor; of the 77 who did, 59 (77%) thought the good health descriptors were more extreme (better) than they had previously considered. Similarly, 85/151 (56%) participants did not consider the worst-health anchor; of the 66 who did, 63 (95%) thought the poor health descriptors were more extreme (worse) than they had previously considered. Conclusions: Health state self-reports may not be well considered. An immediate significant shift in response can be elicited by exposure to a mere description of an extreme health state despite no actual change in underlying health state occurring. Caution should be exercised in research and clinical settings when interpreting subjective patient-reported outcomes that are dependent on brief anchors for meaning. Trial Registration: Australian and New Zealand Clinical Trials Registry (#ACTRN12607000606482) http://www.anzctr.org.au

Abstract:

In recent years several scientific Workflow Management Systems (WfMSs) have been developed with the aim of automating large-scale scientific experiments. Many offerings have been developed, but none has yet been accepted as a standard. In this paper we propose a pattern-based evaluation of three of the most widely used scientific WfMSs: Kepler, Taverna and Triana. The aim is to compare them with traditional business WfMSs, emphasizing the strengths and deficiencies of both kinds of systems. Moreover, a set of new patterns is defined from the analysis of the three considered systems.

Abstract:

Autonomous underwater vehicles (AUVs) are increasingly used in both military and civilian applications. These vehicles are limited mainly by the intelligence we give them and the life of their batteries, and research is active on extending vehicle autonomy in both respects. Our intent is to give the vehicle the ability to adapt its behavior under different mission scenarios (emergency maneuvers versus long-duration monitoring). This involves a search for optimal trajectories minimizing time, energy or a combination of both. Despite some success stories in AUV control, optimal control is still a very underdeveloped area. Adaptive control research has contributed to cost minimization problems, but vehicle design has been the driving force for advancement in optimal control research. We look to advance the development of optimal control theory by expanding the motions along which AUVs travel. Traditionally, AUVs have taken the role of performing long data-gathering missions in the open ocean with little to no interaction with their surroundings (MacIver et al., 2004): the AUV is used to find the shipwreck, and the remotely operated vehicle (ROV) handles the exploration up close. AUV mission profiles of this sort are best suited to a torpedo-shaped AUV (Bertram and Alvarez, 2006), since straight lines and minimal (0°-30°) angular displacements are all that are necessary to perform the transects and grid lines for these applications. However, the torpedo-shaped AUV lacks the ability to perform low-speed maneuvers in cluttered environments, such as autonomous exploration close to the seabed and around obstacles (MacIver et al., 2004). Thus, we consider an agile vehicle capable of movement in six degrees of freedom without any preference of direction.
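One generic way to pose the time-energy trade-off described above (a sketch of a standard optimal-control formulation, not necessarily the exact cost functional used in this work) is:

\[
  \min_{u(\cdot)} \; J(u) = \int_{0}^{T} \left( \gamma + (1-\gamma)\,\lVert u(t) \rVert^{2} \right) dt,
  \qquad \gamma \in [0,1],
\]

subject to the vehicle dynamics \( \dot{x}(t) = g\bigl(x(t), u(t)\bigr) \) with free final time \(T\): \(\gamma = 1\) recovers the time-minimal problem and \(\gamma = 0\) the energy-minimal one.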

Abstract:

In recent years there has been widespread interest in patterns, perhaps provoked by a realisation that they constitute a fundamental brain activity and underpin many artificial intelligence systems. Theorised concepts of spatial patterns including scale, proportion, and symmetry, as well as social and psychological understandings, are being revived through digital/parametric means of visualisation and production. The effect of pattern as an ornamental device has also changed from applied styling to mediated dynamic effect. The interior has also seen patterned motifs applied to wall coverings, linen, furniture and artefacts with the effect of enhancing aesthetic appreciation, or in some cases causing psychological and/or perceptual distress (Rodemann 1999).

While much of this work concerns a repeating array of surface treatment, Philip Ball’s The Self-Made Tapestry: Pattern Formation in Nature (1999) suggests a number of ways that patterns are present at the macro and micro level, both in their formation and disposition. Unlike the conventional notion of a pattern being the regular repetition of a motif (geometrical or pictorial), he suggests that in nature they are not necessarily restricted to a repeating array of identical units, but also include those that are similar rather than identical (Ball 1999, 9). From his observations Ball argues that they need not necessarily all be the same size, but do share similar features that we recognise as typical. Examples include self-organized patterns on a grand scale such as sand dunes, or fractal networks caused by rivers on hills and mountains, through to patterns of flow observed in both scientific experiments and the drawings of Leonardo da Vinci.

Abstract:

While Business Process Management (BPM) is an established discipline, the increased adoption of BPM technology in recent years has introduced new challenges. One challenge concerns dealing with the ever-growing complexity of business process models. Mechanisms for dealing with this complexity can be classified into two categories: 1) those that are solely concerned with the visual representation of the model and 2) those that change its inner structure. While significant attention is paid to the latter category in the BPM literature, this paper focuses on the former category. It presents a collection of patterns that generalize and conceptualize various existing mechanisms to change the visual representation of a process model. Next, it provides a detailed analysis of the degree of support for these patterns in a number of state-of-the-art languages and tools. This paper concludes with the results of a usability evaluation of the patterns conducted with BPM practitioners.

Abstract:

The combination of alcohol and driving is a major health and economic burden in most industrialised countries. The total cost of crashes for Australia in 1996 was estimated at approximately 15 billion dollars, and the costs for fatal crashes were about 3 billion dollars (BTE, 2000). According to the Bureau of Infrastructure, Transport and Regional Development and Local Government (2009; BITRDLG), the overall cost of road fatality crashes for 2006 was $3.87 billion, with a single fatal crash costing an estimated $2.67 million. A major contributing factor to crashes involving serious injury is alcohol intoxication while driving. It is well documented that consumption of liquor impairs judgment of speed and distance and increases involvement in higher-risk behaviours (Waller, Hansen, Stutts, & Popkin, 1986a; Waller et al., 1986b). Waller et al. (1986a; b) assert that liquor impairs psychomotor function and therefore renders the driver impaired in a crisis situation. This impairment includes vision (degraded), information processing (slowed), steering, and performing two tasks at once in congested traffic (Moskowitz & Burns, 1990). As blood alcohol concentration (BAC) levels increase, the risk of crashing and of fatality increases exponentially (Department of Transport and Main Roads, 2009; DTMR). According to Compton et al. (2002), as cited in the Department of Transport and Main Roads (2009), the probability-based crash risk is five times higher at a BAC of 0.10 than at a BAC of 0.00. The injury patterns sustained also tend to be more severe when liquor is involved, especially injuries to the brain (Waller et al., 1986b). Single and Rohl (1997) reported that 30% of all fatal crashes in Australia where alcohol involvement was known were associated with a BAC above the legal limit of 0.05 g/100 mL. Alcohol-related crashes therefore contribute a third of the total cost of fatal crashes (i.e. $1 billion annually), and crashes where alcohol is involved are more likely to result in death or serious injury (ARRB Transport Research, 1999). It is a major concern that a drug capable of such impairment is the most available and popular drug in Australia (Australian Institute of Health and Welfare, 2007; AIHW). According to the AIHW (2007), 89.9% of the approximately 25,000 Australians over the age of 14 surveyed had consumed alcohol at some point in time, and 82.9% had consumed liquor in the previous year. This study found that 12.1% of individuals admitted to driving a motor vehicle whilst intoxicated. In general, males consumed more liquor in all age groups. In Queensland there were 21,503 road crashes in 2001, involving 324 fatalities, with alcohol and/or drugs the largest contributing factor (Road Traffic Report, 2001); in 2004 there were 23,438 road crashes, involving 289 fatalities, with alcohol and/or drugs again the largest contributing factor (DTMR, 2009). Although measures such as random breath testing have been effective in reducing the road toll (Watson, Fraine & Mitchell, 1995), the recidivist drink driver remains a serious problem. These findings were later supported by research from Leal, King, and Lewis (2006). This Queensland study found that of the 24,661 drink drivers intercepted in 2004, 3,679 (14.9%) were recidivists with multiple drink driving convictions in the previous three years (Leal et al., 2006).
The legal definition of the term “recidivist” is consistent with the Transport Operations (Road Use Management) Act (1995) and is assigned to individuals who have been charged with multiple drink driving offences in the previous five years. In Australia relatively little attention has been given to prevention programs that target high-risk repeat drink drivers. However, over the last ten years a rehabilitation program specifically designed to reduce recidivism among repeat drink drivers has been operating in Queensland. The program, formally known as the “Under the Limit” drink driving rehabilitation program (UTL), was designed and implemented by the research team at the Centre for Accident Research and Road Safety in Queensland, with funding from the Federal Office of Road Safety and the Institute of Criminology (see Sheehan, Schonfeld & Davey, 1995). By 2009 over 8500 drink driving offenders had been referred to the program (Australian Institute of Criminology, 2009).

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, one of the most commonly reported eye health problems, is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence no “gold standard” test is currently available to assess tear film integrity, and improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film; the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, in which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern’s local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear, and it clearly showed a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, the HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation,
while the LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric ring pattern into an image of quasi-straight lines from which a block statistic is extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully understand the HSV measurement and the instrument’s potential limitations. Of special interest was the instrument’s sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods; a set of guidelines is therefore proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to the considerations for selecting an appropriate model order so that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could be a useful clinical tool for assessing tear film surface quality in the future.
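As an illustration of the polar-unwrapping block metric described above, a minimal Python/OpenCV sketch (the function name, block size, summary statistic and file name are hypothetical, not the thesis's actual routine):

import cv2
import numpy as np

def tfsq_block_metric(frame, center, max_radius, block=16):
    """TFSQ-style metric sketch: unwrap the Placido ring pattern to polar
    coordinates (rings become quasi-straight lines), then summarise local
    pattern disturbance with per-block statistics."""
    polar = cv2.warpPolar(frame, (512, 512), center, max_radius,
                          cv2.WARP_POLAR_LINEAR)
    h, w = polar.shape[:2]
    stats = [np.std(polar[y:y + block, x:x + block])
             for y in range(0, h - block + 1, block)
             for x in range(0, w - block + 1, block)]
    # A smooth tear film gives regular stripes and a tight distribution of
    # block statistics; surface irregularities broaden it.
    return float(np.mean(stats))

frame = cv2.imread("placido_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
score = tfsq_block_metric(frame,
                          center=(frame.shape[1] // 2, frame.shape[0] // 2),
                          max_radius=200)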

Abstract:

Infrared spectroscopy has been used to study the adsorption of paranitrophenol on mono-, di- and tri-alkyl surfactant intercalated montmorillonite. Organoclays were obtained by the cationic exchange of mono-, di- and tri-alkyl-chain surfactants [hexadecyltrimethylammonium bromide (HDTMAB), dimethyldioctadecylammonium bromide (DDOAB), methyltrioctadecylammonium bromide (MTOAB)] for sodium ions in an aqueous solution with Na-montmorillonite. Upon formation of the organoclay, the properties change from strongly hydrophilic to strongly hydrophobic. This change in surface properties is observed as a decrease in intensity of the OH stretching vibrations assigned to water in the cation hydration sphere of the montmorillonite. As the cation is replaced by the surfactant molecules, the paranitrophenol replaces the surfactant molecules in the clay interlayer. Bands attributed to CH stretching and bending vibrations change for the surfactant intercalated montmorillonite, and strong changes occur in the HCH deformation modes of the methyl groups of the surfactant. These changes are attributed to the methyl groups locking into the siloxane surface of the montmorillonite, a concept supported by changes in the SiO stretching bands of the montmorillonite siloxane surface. This study demonstrates that paranitrophenol will penetrate into the untreated clay interlayer and replace the intercalated surfactant in surfactant-modified clay, resulting in a change in the arrangement of the intercalated surfactant.

Abstract:

It is a big challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences because of the large number of terms, patterns, and noise. Most existing popular text mining and classification methods have adopted term-based approaches; however, these all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based methods should perform better than term-based ones in describing user preferences, yet many experiments do not support this hypothesis. The innovative technique presented in this paper makes a breakthrough on this difficulty. The technique discovers both positive and negative patterns in text documents as higher-level features and uses them to accurately weight low-level features (terms) based on their specificity and their distributions in the higher-level features. Substantial experiments using this technique on Reuters Corpus Volume 1 and TREC topics show that the proposed approach significantly outperforms both state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machine, and pattern-based methods, on precision, recall and F-measure.
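For reference, the Okapi BM25 baseline mentioned above scores a document for a query as a sum of IDF-weighted, length-normalised term frequencies. A minimal Python sketch of the standard scoring formula (with the common +1 smoothing inside the logarithm to keep the IDF non-negative):

import math

def bm25_score(query_terms, doc_tf, doc_len, avg_doc_len, df, n_docs,
               k1=1.2, b=0.75):
    """Okapi BM25 score of one document for a bag-of-words query.
    doc_tf: term -> frequency in this document; df: term -> document frequency."""
    score = 0.0
    for t in query_terms:
        f = doc_tf.get(t, 0)
        if f == 0:
            continue  # term absent from the document contributes nothing
        idf = math.log((n_docs - df[t] + 0.5) / (df[t] + 0.5) + 1)
        norm = f * (k1 + 1) / (f + k1 * (1 - b + b * doc_len / avg_doc_len))
        score += idf * norm
    return score

# Hypothetical toy usage
print(bm25_score(["pattern", "mining"],
                 {"pattern": 3, "mining": 1}, doc_len=120, avg_doc_len=100,
                 df={"pattern": 40, "mining": 15}, n_docs=1000))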

Abstract:

Gaining an improved understanding of people diagnosed with schizophrenia has the potential to influence priorities for therapy. Psychosis is commonly understood through the perspective of the medical model. However, the experience of social context surrounding psychosis is not well understood. In this research project we used a phenomenological methodology with a longitudinal design to interview 7 participants across a 12-month period to understand the social experiences surrounding psychosis. Eleven themes were explicated and divided into two phases of the illness experience: (a) transition into emotional shutdown included the experiences of not being acknowledged, relational confusion, not being expressive, detachment, reliving the past, and having no sense of direction; and (b) recovery from emotional shutdown included the experiences of being acknowledged, expression, resolution, independence, and a sense of direction. The experiential themes provide clinicians with new insights to better assess vulnerability, and have the potential to inform goals for therapy.