975 results for “limitations of therapy”


Relevance:

90.00%

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, most power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified, and the power utility must then apply appropriate damping control strategies.

In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial change to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. A key limitation of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on the rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about one minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy therefore imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model determined from the power system under consideration), and a detection threshold is then set on the basis of this statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.

The Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for every mode within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the change. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The alarm threshold is based on the simple Chi-squared PDF of a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so provides some discrimination in this regard.

The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
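The abstract gives no implementation detail, but the whiteness test at the heart of the KID is easy to illustrate. The Python sketch below is an assumption-laden illustration, not the thesis's method: the function name, the significance level and the use of a plain periodogram are all choices made here for clarity. It flags a change when normalised innovation-spectrum bins exceed a Chi-squared threshold.

```python
import numpy as np
from scipy.stats import chi2

def kid_alarm(innovations, alpha=0.01):
    """Flag a model change when the Kalman innovation is no longer white.

    Under the white-noise (valid-model) hypothesis, each interior bin of
    the normalised innovation periodogram is ~ Chi-squared with 2 d.o.f.
    """
    x = np.asarray(innovations, dtype=float)
    n = len(x)
    var = np.var(x)
    power = np.abs(np.fft.rfft(x - x.mean()))**2      # one-sided spectrum
    normalised = 2.0 * power / (n * var)              # ~ chi2(2) per bin
    threshold = chi2.ppf(1.0 - alpha, df=2)           # per-bin alarm level
    suspects = np.flatnonzero(normalised[1:] > threshold) + 1  # skip DC bin
    freqs = np.fft.rfftfreq(n)[suspects]              # normalised frequencies
    return suspects.size > 0, freqs                   # alarm flag + locations
```

The frequencies returned by such a test play the dual role described above: raising the alarm and pointing to the modal frequency responsible. A practical detector would also need to correct the threshold for the multiple comparisons made across bins.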

Relevance:

90.00%

Abstract:

In an environment where it has become increasingly difficult to attract consumer attention, marketers have begun to explore alternative forms of marketing communication. One such form that has emerged is product placement, which has more recently appeared in electronic games. Given changes in media consumption and the growth of the games industry, it is not surprising that games are being exploited as a medium for promotional content. Other market developments are also facilitating and encouraging their use, in terms of both the insertion of brand messages into video games and the creation of brand-centred environments, labelled ‘advergames’. However, while there is much speculation concerning the beneficial outcomes for marketers, there remains a lack of academic work in this area and little empirical evidence of the actual effects of this form of promotion on game players. Only a handful of studies in the literature have explored the influence of game placements on consumers. The majority have studied their effect on brand awareness, largely demonstrating that players can recall placed brands. Further, most research conducted to date has focused on computer and online games, yet consoles represent the dominant platform for play (Taub, 2004). Finally, advergames have largely been neglected, particularly those in a console format. Widening the gap in the literature is the fact that insufficient academic attention has been given to product placement as a marketing communication strategy overall, and to games in general. The unique nature of the strategy also makes it difficult to apply existing literature to this context.

To address a significant need for information in both the academic and business domains, the current research investigates the effects of brand and product placements in video games and advergames on consumer attitude to the brand and corporate image. It was conducted in two stages. Stage one was a pilot study. It explored the effects of use-simulated and peripheral placements in video games on players’ and observers’ attitudinal responses, and whether these are influenced by involvement with a product category or skill level in the game. The ability of gamers to recall placed brands was also examined. A laboratory experiment was employed with a small sample of sixty adult subjects drawn from an Australian east-coast university, some of whom were exposed to a console video game on a television set. The major finding of study one is that placements in a video game have no effect on gamers’ attitudes, although they are recalled. For stage two of the research, a field experiment was conducted with a large, random sample of 350 student respondents to investigate the effects on players of brand and product placements in handheld video games and advergames. The constructs of brand attitude and corporate image were again tested, along with several potential confounds. Consistent with the pilot, the results demonstrate that product placement in electronic games has no effect on players’ brand attitudes or corporate image, even when allowing for their involvement with the product category, skill level in the game, or skill level in relation to the medium. Age and gender also have no impact. However, the more interactive a player perceives the game to be, the more favourable their attitude to the placed brand and the corporate image of the brand manufacturer. In other words, when controlling for perceived interactivity, players experienced more favourable attitudes, but the effect was so weak that it probably lacks practical significance. It is suggested that this result can be explained by the existence of excitation transfer, rather than any processing of placed brands.

The current research provides strong empirical evidence that brand and product placements in games do not produce strong attitudinal responses. It appears that the nature of the game medium, the game playing experience and product placement itself impose constraints on gamer motivation, opportunity and ability to process these messages, thereby precluding their impact on attitude to the brand and corporate image. Since this is the first study to investigate the ability of video game and advergame placements to facilitate these deeper consumer responses, further research across different contexts is warranted. Nevertheless, the findings have important theoretical and managerial implications. This investigation makes a number of valuable contributions. First, it is relevant to current marketing practice and presents findings that can help guide promotional strategy decisions. It also presents a comprehensive review of the games industry and associated activities in the marketplace, relevant for marketing practitioners. Theoretically, it contributes new knowledge concerning product placement, including how it should be defined, its classification within the existing communications framework, and its dimensions and effects. This is extended to include brand-centred entertainment. The thesis also presents the most comprehensive analysis available in the literature of how placements appear in games. In the consumer behaviour discipline, the research builds on theory concerning attitude formation, through application of MacInnis and Jaworski’s (1989) Integrative Attitude Formation Model. With regard to the games literature, the thesis provides a structured framework for the comparison of games with different media types; it advances understanding of the game medium, its characteristics and the game playing experience; and it provides insight into console and handheld games specifically, as well as interactive environments generally. This study is the first to test the effects of interactivity in a game environment, and it presents a modified scale that can be used in future research. Methodologically, it addresses the limitations of prior research through execution of a field experiment and observation with a large sample, making this the largest study of product placement in games available in the literature. Finally, the current thesis offers comprehensive recommendations that will provide structure and direction for future study in this important field.

Relevance:

90.00%

Abstract:

Inspection of solder joints has been a critical process in the electronic manufacturing industry to reduce manufacturing cost, improve yield, and ensure product quality and reliability. The solder joint inspection problem is more challenging than many other visual inspection tasks because of the variability in the appearance of solder joints. Although much research has been conducted and various techniques have been developed to classify defects in solder joints, these methods require complex illumination systems for image acquisition and complicated classification algorithms. An important stage of the analysis is to select the right method for the classification. Better inspection technologies are needed to fill the gap between available inspection capabilities and industry needs. This dissertation aims to provide a solution that can overcome some of the limitations of current inspection techniques. This research proposes a two-step automatic solder joint classification system. The “front-end” inspection system includes illumination normalisation, localisation and segmentation. The illumination normalisation approach can effectively and efficiently eliminate the effect of uneven illumination while keeping the properties of the processed image. The “back-end” inspection involves the classification of solder joints using the Log Gabor filter and classifier fusion. Five different levels of solder quality, with respect to the amount of solder paste, have been defined. The Log Gabor filter has been demonstrated to achieve high recognition rates and to be resistant to misalignment. Further testing demonstrates the advantage of the Log Gabor filter over both the Discrete Wavelet Transform and the Discrete Cosine Transform. Classifier score fusion is analysed for improving the recognition rate. Experimental results demonstrate that the proposed system improves performance and robustness in terms of classification rates. The proposed system does not need any special illumination system, and the images are acquired with an ordinary digital camera; indeed, the choice of suitable features allows one to overcome the problems posed by a simple illumination system. The new system proposed in this research can be incorporated in the development of an automated, non-contact, non-destructive and low-cost solder joint quality inspection system.
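The abstract names the Log Gabor filter as the core of the back-end feature extraction. As a rough illustration of how such a filter is typically built (the centre frequency, bandwidth ratio and feature statistics below are assumed values, not those of the dissertation), the radial transfer function can be constructed in the Fourier domain, where its zero DC response falls out naturally:

```python
import numpy as np

def log_gabor_response(image, f0=0.1, sigma_ratio=0.55):
    """Filter a 2-D image with the radial component of a Log Gabor filter.

    f0 is the centre frequency in cycles/pixel; sigma_ratio controls the
    bandwidth. Log Gabor filters have no DC component, one reason they
    suit texture features such as solder-paste appearance.
    """
    rows, cols = image.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    radius[0, 0] = 1.0  # avoid log(0) at the DC term
    lg = np.exp(-(np.log(radius / f0))**2 / (2 * np.log(sigma_ratio)**2))
    lg[0, 0] = 0.0      # explicitly zero the DC component
    return np.fft.ifft2(np.fft.fft2(image) * lg)

# Magnitude statistics of the complex response could then feed a
# classifier, e.g. features = [abs(r).mean(), abs(r).std()] per joint.
```

Score-level fusion, as described in the abstract, would then combine the outputs of several such classifiers rather than their raw features.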

Relevance:

90.00%

Abstract:

This article provides an overview of the concept of vulnerability through the lens of the U.S. federal regulations for the protection of human subjects of research. General issues that emerge for nurse researchers working with regulated vulnerable populations are identified. Points of current controversy in the application of the regulations, and in current discourse about vulnerable groups, are highlighted. Suggestions are given for negotiating the tension between federally regulated human subject requirements and the realities of research with vulnerable subjects. The limitations of the “vulnerable” designation as a protection for human subjects are also discussed.

Relevance:

90.00%

Abstract:

In this study, the feasibility of difference imaging for improving the contrast of electronic portal imaging device (EPID) images is investigated. The difference imaging technique consists of the acquisition of two EPID images (with and without the placement of an additional layer of attenuating medium on the surface of the EPID) and the subtraction of one of these images from the other. The resulting difference image shows improved contrast, compared to a standard EPID image, since it is generated by lower-energy photons. Results of this study show that, firstly, this method can produce images exhibiting greater contrast than is seen in standard megavoltage EPID images and that, secondly, the optimal thickness of attenuating material for producing a maximum contrast enhancement may vary with phantom thickness and composition. Further studies of the possibilities and limitations of the difference imaging technique, and the physics behind it, are therefore recommended.
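The arithmetic of the technique is a single image subtraction. A minimal sketch, assuming two co-registered EPID acquisitions as arrays (the function name and the display rescaling are illustrative, not from the study):

```python
import numpy as np

def difference_image(epid_open, epid_attenuated):
    """Subtract the attenuated acquisition from the open-field one.

    The added attenuating layer preferentially removes lower-energy
    photons, so the difference is dominated by that low-energy
    component, which carries more subject contrast at megavoltage
    energies.
    """
    diff = epid_open.astype(np.float64) - epid_attenuated.astype(np.float64)
    rng = np.ptp(diff)                 # rescale to [0, 1] for display only
    return (diff - diff.min()) / rng if rng else diff
```

The study's second finding suggests the attenuator thickness used for the second acquisition would need tuning against phantom thickness and composition.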

Relevance:

90.00%

Abstract:

For decades, the development, construction, track ownership and operation of mainline railways in China have been overseen by state-owned authorities. Since the mid-1990s, mainline railway management has undergone a series of revamps to revitalise the intra-modal competitiveness of railway transportation and to steer it toward modern business management. With rapid economic growth, the large-scale expansion of the mainline network and rising expectations of service, the mainline railways in China require further restructuring. A sustainable approach to ensuring business viability and service quality over the next few decades is therefore an imminent challenge. This paper reviews the operations and management of mainline railways in China and discusses the possibility of introducing an open-access market. Drawing on the experience of railway open-access markets outside China, the discussion covers the need for, and feasibility of, an open-access railway market in China, and the suitability and limitations of different models. Particular consideration is given to the unique characteristics of the mainline railways in China, where development across neighbouring regions is unbalanced, freight and passenger services face similarly heavy demand, and high-speed trains run in mixed traffic with low-speed services.

Relevance:

90.00%

Abstract:

The burden of rising health care expenditure has created a demand for information regarding the clinical and economic outcomes associated with complementary and alternative medicines. Meta-analyses of randomized controlled trials have found Hypericum perforatum preparations to be superior to placebo and similarly effective to standard antidepressants in the acute treatment of mild to moderate depression. A clear advantage over antidepressants has been demonstrated in terms of lower rates of adverse effects, lower treatment withdrawal rates and good compliance, key variables affecting the cost-effectiveness of a given form of therapy. The most important risk associated with its use is potential interaction with other drugs, but this may be mitigated by using extracts with low hyperforin content. As the indirect costs of depression are more than five times the direct treatment costs, and given the rising cost of pharmaceutical antidepressants, the comparatively low cost of Hypericum perforatum extract makes it worthy of consideration in the economic evaluation of treatments for mild to moderate depression.

Relevance:

90.00%

Abstract:

There is a need for educational frameworks for computer ethics education. This discussion paper presents an approach to developing students’ moral sensitivity, an awareness of morally relevant issues, in project-based learning (PjBL). The proposed approach is based on a study of IT professionals’ levels of awareness of ethics, labelled My world, The corporate world, A shared world, The client’s world and The wider world. We give recommendations for how instructors may use the levels to stimulate students’ thinking, and how the levels may be taken into account in managing a project course and in an IS department. Limitations of the recommendations are assessed and issues for discussion are raised.

Relevance:

90.00%

Abstract:

Drivers are known to be optimistic about their risk of crash involvement, believing that they are less likely to be involved in a crash than other drivers. However, little comparative research has been conducted among other road users. In addition, optimism about crash risk is conceptualised as applying only to an individual’s assessment of his or her personal risk of crash involvement. The possibility that the self-serving nature of optimism about safety might generalise to the group level, as a cyclist or a pedestrian, becoming group-serving rather than self-serving, has been overlooked in relation to road safety. This study analysed a subset of data collected as part of a larger research project on the visibility of pedestrians, cyclists and road workers, focusing on a set of questionnaire items administered to 406 pedestrians, 838 cyclists and 622 drivers. The items related to safety in various scenarios involving drivers, pedestrians and cyclists, allowing predictions about group differences in agreement with the items to be derived on the assumption that the results would exhibit group-serving bias. Analysis of the responses indicated that specific hypotheses about group-serving interpretations of safety and responsibility were supported in 22 of the 26 comparisons. When the nine comparisons relevant to low lighting conditions were considered separately, seven were found to be supported. The findings have implications for public education and for the likely acceptance of messages which are inconsistent with the current assumptions and expectations of pedestrians and cyclists. They also suggest that research into group-serving interpretations of safety, even for temporary roles rather than enduring groups, could be fruitful. Further, there is an implication that gains in safety can be made by better educating road users about the limitations of their visibility and the ramifications for their own road safety, particularly in low light.

Relevance:

90.00%

Abstract:

Purpose - This paper seeks to examine the complex relationships between urban planning, infrastructure management and sustainable urban development, and to illustrate why there is an urgent need for local governments to develop a robust planning support system that integrates advanced urban computer modelling tools to facilitate better infrastructure management and improve knowledge sharing between the community, urban planners, engineers and decision makers.

Design/methodology/approach - The methods used in this paper include a literature review and observations from practical project cases.

Originality/value - This paper provides insight into how the planning support system established by Brisbane City Council has significantly improved the effectiveness of urban planning, infrastructure management and community engagement through better knowledge management processes.

Practical implications - This paper presents a practical framework for setting up a functional planning support system within local government. The integration of the Brisbane Urban Growth model, Virtual Brisbane and the Brisbane Economic Activity Monitoring (BEAM) database has proven initially successful in providing a dynamic platform that helps elected officials, planners and engineers understand the limitations of the local environment, its urban systems and the planning implications for the city. With Brisbane's planning support system, planners and decision makers are able to deliver better planning outcomes, policy and infrastructure that adequately address local needs and achieve sustainable spatial forms.

Relevance:

90.00%

Abstract:

This paper investigates the use of time-frequency techniques to assist in the estimation of power system modes which are resolvable by a Discrete Fourier Transform (DFT). The limitations of linear estimation techniques in the presence of large disturbances which excite system non-linearities, particularly the swing equation non-linearity, are shown. Where a non-linearity manifests itself as time-varying modal frequencies, the Wigner-Ville Distribution (WVD) is used to describe the variation in modal frequencies and to construct a window over which standard linear estimation techniques can be applied. The error obtained, even in the presence of multiple resolvable modes, is better than 2%.
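As a sketch of the core tool (not the paper's code), the discrete Wigner-Ville Distribution of an analytic signal can be computed from its instantaneous autocorrelation; the ridge of the resulting time-frequency image tracks a time-varying modal frequency and so indicates where a quasi-stationary window for linear estimation can be placed. The example signal and all parameter values are assumptions for illustration.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x.

    Returns an (n_time, n_freq) array. Frequency bin m corresponds to
    m / (2 n) cycles per sample, because the kernel uses lag steps of 2.
    """
    x = np.asarray(x, dtype=complex)
    n = len(x)
    wvd = np.empty((n, n))
    for t in range(n):
        kmax = min(t, n - 1 - t)                  # largest symmetric lag at t
        k = np.arange(-kmax, kmax + 1)
        acf = np.zeros(n, dtype=complex)
        acf[k % n] = x[t + k] * np.conj(x[t - k])  # instantaneous ACF
        wvd[t] = np.fft.fft(acf).real              # Hermitian ACF -> real WVD
    return wvd

# Example: a decaying sinusoid with a slowly drifting frequency,
# mimicking a mode whose frequency varies after a large disturbance.
t = np.arange(512)
x = np.exp(-t / 400) * np.exp(2j * np.pi * (0.05 + 1e-5 * t) * t)
ridge = wigner_ville(x).argmax(axis=1)             # dominant bin per instant
```

Once the ridge is approximately flat over an interval, that interval can serve as the quasi-stationary window in which a standard linear estimator is run.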

Relevance:

90.00%

Abstract:

Aims: This exploratory pilot study investigated Mindfulness-based Role-play (MBRP) supervision to find out how therapists would experience the approach and to what extent they would find it useful, particularly in relation to empathy toward clients. Method: Thirteen therapists participated in a workshop introducing mindfulness and MBRP supervision, and subsequently had one individual MBRP supervision session. Data collection and analysis: Qualitative data were collected through semi-structured interviews and analysed, with regard to participants' supervision experiences, using a modified version of the Consensual Qualitative Research method. Findings: Participants predominantly had positive emotional and cognitive responses to their supervision experiences. The main supervision outcomes were empathy with the client's emotional experience, enhanced awareness of functioning as a therapist, and thoughts about how to proceed in therapy. A subset of participants also reported observed effects in therapy with clients. Conclusions: Even taking into account the methodological limitations of the study, these findings are promising and suggest that further research into the MBRP supervision approach is warranted.

Relevance:

90.00%

Abstract:

Objective: Regeneration of osseous defects by a tissue-engineering or cell-delivery approach provides a novel means of treatment utilizing cell biology, materials science and molecular biology. The concept that in vitro expanded mesenchymal stem cells (MSCs) can induce new bone formation has been demonstrated in some small animal models. However, contradictory results have been reported regarding the regenerative capacity of MSCs after ex vivo expansion, owing to a lack of understanding of the in vivo microenvironment for MSC differentiation.

Methods: In our laboratory, tissue-derived and bone marrow-derived MSCs have been investigated for their osteogenic capacity. Cell morphology and proliferation were studied by microscopy, confocal microscopy, FACS and cell counting. Cell differentiation and matrix formation were analysed by matrix staining, quantitative PCR and immunohistochemistry. A SCID skull defect model was used for cell transplantation studies.

Results: Tissue-derived and bone marrow-derived MSCs showed similar characteristics in cell surface marker expression, mesenchymal lineage differentiation potential and cell population doubling. MSCs from both sources could initiate new bone formation after delivery into critical-size bone defects. The bone-forming cells derived from both the transplanted cells and endogenous host cells. Interestingly, the majority of cells differentiated osteogenically in vitro did not form new bone directly, even though MSCs synthesized mineralized matrix in vitro. Furthermore, no new bone formation was detected when MSCs were transplanted subcutaneously.

Conclusion: This study unveiled the limitations of MSC delivery in bone regeneration and proposes that the in vivo microenvironment needs to be optimized for MSC delivery in osteogenesis.

Relevance:

90.00%

Abstract:

Although transport-related social exclusion has been identified through zonal accessibility measures in the recent past, the debate has shifted from zonal to individual-level measures. One way to identify disadvantaged individuals is to measure the size of their participation in society (their activity spaces). After reviewing the existing literature, this paper identifies two approaches to measuring activity spaces. One approach is based on the time-geographic potential path area (PPA) concept; the size of the PPA has largely been used as an indicator of the size of potential activity spaces and consequently of individual accessibility. The limitations of the PPA concept are identified in this paper, and it is argued that it cannot be applied as a measure of social exclusion. The other approach is based on individuals’ actual travel-activity participation, called actual activity spaces. The size of actual activity spaces is a potentially good measure of social exclusion. However, the indicators used to measure the size of actual activity spaces are multidimensional, representing different aspects of social exclusion, so the development of a unified approach is important. This paper develops a participation index (PI) from the different dimensions of actual activity spaces encountered, as sketched below. A framework has also been developed to operationalise the concept in GIS. The framework will, on the one hand, visualise individuals’ actual travel behaviour in real geographic space; on the other hand, it will calculate the size of their participation in society.
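The paper does not spell out how the PI combines its dimensions. The sketch below shows one plausible construction under stated assumptions: the three indicators, their normalising constants and the weights are all illustrative choices, with the actual activity space measured by the convex hull of visited locations.

```python
import numpy as np
from scipy.spatial import ConvexHull

def participation_index(points_xy, weights=(0.5, 0.3, 0.2)):
    """Combine three activity-space indicators into one score in [0, 1].

    points_xy: (n, 2) array of visited activity locations in projected
    coordinates (e.g. metres); at least three non-collinear points are
    required. Indicators used here: hull area, number of unique
    locations, and mean distance from the centroid.
    """
    pts = np.asarray(points_xy, dtype=float)
    hull_area = ConvexHull(pts).volume       # in 2-D, .volume is the area
    n_places = len(np.unique(pts, axis=0))
    spread = np.linalg.norm(pts - pts.mean(axis=0), axis=1).mean()
    # Normalise each indicator against a reference maximum (placeholders
    # here; a real study would use maxima observed in the population).
    indicators = np.array([hull_area / 1e8, n_places / 50, spread / 1e4])
    indicators = np.clip(indicators, 0.0, 1.0)
    return float(np.dot(weights, indicators))
```

In a GIS implementation, the same point set would also drive the visualisation of actual travel behaviour that the framework describes.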

Relevance:

90.00%

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, one of the most commonly reported eye health problems, is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability, and the range of non-invasive methods that has been investigated also presents limitations. Hence no “gold standard” test is currently available to assess tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance and is the main motivation for the work described in this thesis.

In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The light is reflected at the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern becomes irregular owing to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for evaluating all the dynamic phases of the tear film; the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics.

A set of novel routines was purposely developed to quantify changes in the reflected pattern and to extract a time-series estimate of TFSQ from the video recording. The routine extracts a maximised area of analysis from each frame of the video recording, and a TFSQ metric is calculated within this area. Initially, two metrics based on Gabor-filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern’s local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assessing the tear film and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to show a clear difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ.

Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing tear break-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV.

The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity for quantifying the build-up (formation) phase of the tear film cycle. For that reason an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistic is extracted. This metric has shown better sensitivity under low pattern disturbance and has improved the performance of the ROC curves.

Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase offers insight into the dynamics of this initial phase.

Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, existing techniques for modeling tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods; a set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to the selection of an appropriate model order to ensure that the true derivative of the signal is accurately represented.

The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV method has shown good performance in a broad range of conditions (i.e., contact lens wearers, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality.
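As an illustration of the polar-transform metric described above, the sketch below remaps a Placido-ring frame so the concentric circles become quasi-straight lines, then scores pattern regularity by block statistics. The grid sizes, interpolation order and choice of per-block standard deviation are assumptions for illustration, not the thesis's exact routine.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_block_metric(image, center, n_r=128, n_theta=256, block=16):
    """Unwrap concentric rings into quasi-straight stripes, then score
    pattern regularity as the mean per-block standard deviation.

    A smooth tear film yields straight, regular stripes (low score);
    a degraded surface distorts them (higher score).
    """
    cy, cx = center
    r_max = min(cy, cx, image.shape[0] - cy, image.shape[1] - cx) - 1
    r = np.linspace(1, r_max, n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    polar = map_coordinates(image.astype(float), coords, order=1)
    # Tile the polar image into square blocks and average their variability.
    h = (n_r // block) * block
    w = (n_theta // block) * block
    blocks = polar[:h, :w].reshape(h // block, block, w // block, block)
    return float(blocks.std(axis=(1, 3)).mean())
```

Tracking this score frame by frame yields a TFSQ time series of the kind modelled in the final part of the thesis.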