969 results for Coordinates.
Abstract:
To navigate successfully in a previously unexplored environment, a mobile robot must be able to estimate the spatial relationships of the objects of interest accurately. A Simultaneous Localization and Mapping (SLAM) system employs its sensors to incrementally build a map of its surroundings and to localize itself in that map simultaneously. The aim of this research project is to develop a SLAM system suitable for self-propelled household lawnmowers. The proposed bearing-only SLAM system requires only an omnidirectional camera and some inexpensive landmarks. The main advantage of an omnidirectional camera is its panoramic view of all the landmarks in the scene. Placing landmarks in a lawn field to define the working domain is much easier and more flexible than installing the perimeter wire required by existing autonomous lawnmowers. The common approach of existing bearing-only SLAM methods relies on a motion model for predicting the robot's pose and a sensor model for updating the pose. In the motion model, the error in the estimates of object positions accumulates, due mainly to wheel slippage. Quantifying the uncertainty of object positions accurately is therefore a fundamental requirement. In bearing-only SLAM, the Probability Density Function (PDF) of a landmark's position should be uniform along the observed bearing. Existing methods that approximate the PDF with a Gaussian estimate do not satisfy this uniformity requirement. This thesis introduces both geometric and probabilistic methods to address the above problems. The main novel contributions of this thesis are:
1. A bearing-only SLAM method that does not require odometry. The proposed method relies solely on the sensor model (landmark bearings only) without relying on the motion model (odometry). The uncertainty of the estimated landmark positions depends on the vision error only, instead of the combination of both odometry and vision errors.
2. The transformation of the spatial uncertainty of objects. This thesis introduces a novel method for translating the spatial uncertainty of objects estimated in a moving frame attached to the robot into the global frame attached to the static landmarks in the environment.
3. The characterization of an improved PDF for representing landmark position in bearing-only SLAM. The proposed PDF is expressed in polar coordinates, and the marginal probability on range is constrained to be uniform. Compared to a PDF estimated from a mixture of Gaussians, the PDF developed here has far fewer parameters and can easily be adopted in a probabilistic framework, such as a particle filtering system.
The main advantages of the proposed bearing-only SLAM system are its lower production cost and flexibility of use. The system can also be adopted in other domestic robots, such as vacuum cleaners or robotic toys, when the terrain is essentially 2D.
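As a sketch of how a range-uniform PDF of this kind could be used in a particle filtering system, the hypothetical routine below samples landmark position particles along one observed bearing, with a uniform marginal on range and Gaussian bearing (vision) noise. All names and parameters are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def sample_landmark_hypotheses(robot_xy, bearing, r_min, r_max,
                               n=1000, sigma_bearing=0.02):
    """Sample landmark position particles along one observed bearing.

    The range marginal is uniform on [r_min, r_max], matching the
    uniformity constraint described above; bearing noise is modelled
    as Gaussian (vision error only, no odometry term).
    """
    r = np.random.uniform(r_min, r_max, n)                      # uniform range
    theta = bearing + np.random.normal(0.0, sigma_bearing, n)   # noisy bearing
    x = robot_xy[0] + r * np.cos(theta)
    y = robot_xy[1] + r * np.sin(theta)
    return np.column_stack((x, y))
```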
Abstract:
This thesis investigates the problem of robot navigation using only landmark bearings. The proposed system allows a robot to move to a ground target location specified by the sensor values observed at that ground target position. The control actions are computed based on the difference between the current landmark bearings and the target landmark bearings. No Cartesian coordinates with respect to the ground are computed by the control system; the robot navigates using solely information from the bearing sensor space. Most existing robot navigation systems require a ground frame (a 2D Cartesian coordinate system) in order to navigate from a ground point A to a ground point B. The commonly used sensors, such as laser range scanners, sonar, infrared, and vision, do not directly provide the 2D ground coordinates of the robot. Existing systems use the sensor measurements to localise the robot with respect to a map, a set of 2D coordinates of the objects of interest. It is more natural to navigate between the points in the sensor space corresponding to A and B without requiring the Cartesian map and the localisation process. Research on animals has revealed how insects are able to exploit very limited computational and memory resources to successfully navigate to a desired destination without computing Cartesian positions. For example, a honeybee balances the left and right optical flows to navigate in a narrow corridor. Unlike many other ants, Cataglyphis bicolor does not secrete pheromone trails in order to find its way home, but instead uses the sun as a compass to keep track of its home direction vector. The home vector can be inaccurate, so the ant also uses landmark recognition. More precisely, it takes snapshots and compass headings of some landmarks. To return home, the ant tries to line up the landmarks exactly as they were before it started wandering. This thesis introduces a navigation method based on reflex actions in sensor space. The sensor vector is made of the bearings of some landmarks, and the reflex action is a gradient descent with respect to the distance in sensor space between the current sensor vector and the target sensor vector. Our theoretical analysis shows that, except for some fully characterized pathological cases, any point is reachable from any other point by reflex action in the bearing sensor space, provided the environment contains three landmarks and is free of obstacles. The trajectories of a robot using reflex navigation, like those of other image-based visual control strategies, do not necessarily correspond to the shortest paths on the ground, because it is the sensor error that is minimized, not the moving distance on the ground. However, we show that the use of a sequence of waypoints in sensor space can address this problem. In order to identify relevant waypoints, we train a Self Organising Map (SOM) from a set of observations uniformly distributed with respect to the ground. This SOM provides a sense of location to the robot and allows a form of path planning in sensor space. The proposed navigation system is analysed theoretically and evaluated both in simulation and in experiments on a real robot.
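A minimal sketch of one reflex step in bearing sensor space, assuming known landmark positions for simulation purposes: the squared sensor-space error between current and target bearings is descended numerically. The thesis analyses the controller analytically; the finite-difference gradient here is purely illustrative.

```python
import numpy as np

def angle_diff(a, b):
    """Smallest signed difference between two angle arrays (radians)."""
    return (a - b + np.pi) % (2 * np.pi) - np.pi

def reflex_step(pose_xy, landmarks, target_bearings, step=0.05, eps=1e-4):
    """One reflex action: descend the squared error in bearing space."""
    def error(p):
        bearings = np.arctan2(landmarks[:, 1] - p[1], landmarks[:, 0] - p[0])
        return np.sum(angle_diff(bearings, target_bearings) ** 2)

    # Finite-difference gradient of the sensor-space error (illustrative).
    g = np.array([(error(pose_xy + d) - error(pose_xy - d)) / (2 * eps)
                  for d in (np.array([eps, 0.0]), np.array([0.0, eps]))])
    return pose_xy - step * g / (np.linalg.norm(g) + 1e-12)
```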
Abstract:
Computer vision is much more than a technique to sense and recover environmental information from a UAV. It should play a main role in UAVs' functionality because of the large amount of information that can be extracted, its possible uses and applications, and its natural connection to human-driven tasks, given that vision is our main interface to world understanding. Our current research focuses on the development of techniques that allow UAVs to maneuver in spaces using visual information as their main input source. This task involves the creation of techniques that allow a UAV to maneuver towards features of interest whenever a GPS signal is not reliable or sufficient, e.g. when signal dropouts occur (which usually happens in urban areas, when flying through terrestrial urban canyons, or when operating on remote planetary bodies), or when tracking or inspecting visual targets, including moving ones, without knowing their exact UTM coordinates. This paper also investigates visual servoing control techniques that use the velocity and position of suitable image features to compute the references for flight control. This paper aims to give a global view of the main aspects related to the research field of computer vision for UAVs, clustered in four main active research lines: visual servoing and control, stereo-based visual navigation, image processing algorithms for detection and tracking, and visual SLAM. Finally, the results of applying these techniques in several applications are presented and discussed: this study encompasses power line inspection, mobile target tracking, stereo distance estimation, mapping, and positioning.
Abstract:
Cibachrome photographic prints, digital print, acrylic sheet, wire and card. Total size: 170 cm x 119 cm x 15 cm. The work employs terrestrial photography and cartographic techniques to represent a diversity of landscape types surrounding West Mt Barren in Western Australia. Sixteen views of West Mt Barren were captured over a two-week period from map grid coordinates in the surrounding landscape. These images are configured in Many Mount Barrens in a manner which corresponds to the positions from which they were recorded. The work was exhibited at Eyes wide open (curators K Bradby and A Brandenberg), held at Gorepani Gallery, Albany WA, to coincide with the first national screening on SBS Television of the Western Australian documentary A Million Acres a Year (Rijavec, F, Harrison, N & Bradby, K (directors), Snakewood Films & Film Australia, c2003).
Abstract:
Previous work has shown that amplitude and direction are two independently controlled parameters of aimed arm movements, and performance therefore suffers when they must be decomposed into Cartesian coordinates. We now compare decomposition into different coordinate systems. Subjects pointed at visual targets in 2-D with a cursor, using a two-axis joystick or two single-axis joysticks. In the latter case, joystick axes were aligned with the subjects' body axes, were rotated by -45°, or were oblique (i.e., one axis was in an egocentric frame and the other was rotated by -45°). Cursor direction always corresponded to joystick direction. We found that, compared with the two-axis joystick, responses with single-axis joysticks were slower and less accurate when the axes were oriented egocentrically; the deficit was even more pronounced when the axes were rotated and was most pronounced when they were oblique. This confirms that decomposition of motor commands is computationally demanding and documents that this demand is lowest for egocentric, higher for rotated, and highest for oblique coordinates. We conclude that most current vehicles use computationally demanding man-machine interfaces.
Abstract:
This thesis is about the derivation of the addition law on an arbitrary elliptic curve and efficiently adding points on the curve using the derived addition law. The outcomes of this research guarantee practical speedups in higher-level operations which depend on point additions. In particular, the contributions immediately find applications in cryptology. Mastered by 19th-century mathematicians, the study of the theory of elliptic curves has been active for decades. Elliptic curves over finite fields made their way into public key cryptography in the late 1980s with independent proposals by Miller [Mil86] and Koblitz [Kob87]. Elliptic Curve Cryptography (ECC), following Miller's and Koblitz's proposals, employs the group of rational points on an elliptic curve in building discrete logarithm based public key cryptosystems. Starting from the late 1990s, the emergence of the ECC market has boosted the research in computational aspects of elliptic curves. This thesis falls into this same area of research, where the main aim is to speed up the addition of rational points on an arbitrary elliptic curve (over a field of large characteristic). The outcomes of this work can be used to speed up applications which are based on elliptic curves, including cryptographic applications in ECC. The aforementioned goals of this thesis are achieved in five main steps. As the first step, this thesis brings together several algebraic tools in order to derive the unique group law of an elliptic curve. This step also includes an investigation of recent computer algebra packages relating to their capabilities. Although the group law is unique, its evaluation can be performed using abundant (in fact, infinitely many) formulae. As the second step, this thesis pursues the finding of the best formulae for efficient addition of points. In the third step, the group law is stated explicitly by handling all possible summands. The fourth step presents the algorithms to be used for efficient point additions. In the fifth and final step, optimized software implementations of the proposed algorithms are presented in order to show that the theoretical speedups of step four can be obtained in practice. In each of the five steps, this thesis focuses on five forms of elliptic curves over finite fields of large characteristic. These forms and their defining equations are:
(a) Short Weierstrass form, y^2 = x^3 + ax + b,
(b) Extended Jacobi quartic form, y^2 = dx^4 + 2ax^2 + 1,
(c) Twisted Hessian form, ax^3 + y^3 + 1 = dxy,
(d) Twisted Edwards form, ax^2 + y^2 = 1 + dx^2y^2,
(e) Twisted Jacobi intersection form, bs^2 + c^2 = 1, as^2 + d^2 = 1.
These forms are the most promising candidates for efficient computations and are thus considered in this work. Nevertheless, the methods employed in this thesis are capable of handling arbitrary elliptic curves. From a high-level point of view, the following outcomes are achieved in this thesis.
- Related literature results are brought together and further revisited. For most of the cases, several missed formulae, algorithms, and efficient point representations are discovered.
- Analogies are made among all studied forms. For instance, it is shown that two sets of affine addition formulae are sufficient to cover all possible affine inputs as long as the output is also an affine point in any of these forms. In the literature, many special cases, especially interactions with points at infinity, were omitted from discussion. This thesis handles all of the possibilities.
- Several new point doubling/addition formulae and algorithms are introduced, which are more efficient than the existing alternatives in the literature. Most notably, the speeds of the extended Jacobi quartic, twisted Edwards, and Jacobi intersection forms are improved. New unified addition formulae are proposed for the short Weierstrass form. New coordinate systems are studied for the first time.
- An optimized implementation is developed using a combination of generic x86-64 assembly instructions and the plain C language. The practical advantages of the proposed algorithms are supported by computer experiments.
- All formulae presented in the body of this thesis are checked for correctness using computer algebra scripts, together with details on register allocations.
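For orientation, the sketch below shows textbook affine point addition on a short Weierstrass curve over a prime field, handling the identity and inverse special cases inline. It is a baseline illustration only, not one of the thesis's optimized coordinate systems or unified formulae.

```python
def ec_add(P, Q, a, p):
    """Affine addition on y^2 = x^3 + ax + b over GF(p).

    Points are (x, y) tuples; None denotes the point at infinity.
    """
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                    # P + (-P) = identity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p)   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p)          # chord slope
    lam %= p
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)
```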
Abstract:
This paper presents a formulation of image-based visual servoing (IBVS) for a spherical camera whose coordinates are parameterized in terms of colatitude and longitude: IBVSSph. The image Jacobian is derived, and simulation results are presented for canonical rotational and translational motion as well as for general motion. Problems with large rotations that affect the planar perspective form of IBVS are not present on the sphere, whereas the desirable robustness properties of IBVS are shown to be retained. We also describe a structure from motion (SfM) system based on camera-centric spherical coordinates and show how a recursive estimator can be used to recover structure. The spherical formulations for IBVS and SfM are particularly suitable for platforms, such as aerial and underwater robots, that move in SE(3).
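For context, a generic IBVS control step drives the feature error to zero with a pseudo-inverse control law; the sketch below shows only that standard law. The paper's spherical image Jacobian is derived in the paper itself and is not reproduced here.

```python
import numpy as np

def ibvs_velocity(s, s_star, J, gain=0.5):
    """Camera velocity screw (6-vector) from the current feature error.

    s, s_star : stacked feature coordinates (for a spherical camera,
                colatitude/longitude pairs); J : image Jacobian at s.
    Implements the standard law v = -gain * pinv(J) @ (s - s_star).
    """
    return -gain * np.linalg.pinv(J) @ (s - s_star)
```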
Abstract:
This paper introduces the application of a sensor network to navigate a flying robot. We have developed distributed algorithms and efficient geographic routing techniques to incrementally guide one or more robots to points of interest, based on sensor gradient fields or along paths defined in terms of Cartesian coordinates. The robot itself is an integral part of the localization process, which establishes the positions of sensors that are not known a priori. We use this system in a large-scale outdoor experiment with Mote sensors to guide an autonomous helicopter along a path encoded in the network. A simple handheld device, using this same environmental infrastructure, is used to guide humans.
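A minimal sketch of guidance by a sensor gradient field, under the assumption that each node stores a hop count to the point of interest and the robot greedily moves toward the in-range node with the smallest count; the paper's distributed routing protocol is more elaborate than this.

```python
import math

def next_waypoint(position, node_positions, hop_counts, radio_range):
    """Return the position of the best in-range node, or None.

    position : (x, y) of the robot; node_positions : dict id -> (x, y);
    hop_counts : dict id -> hops to the goal (hypothetical data layout).
    """
    in_range = [n for n in node_positions
                if math.dist(position, node_positions[n]) <= radio_range]
    if not in_range:
        return None
    best = min(in_range, key=lambda n: hop_counts[n])  # steepest descent
    return node_positions[best]
```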
Abstract:
This paper proposes a train movement model with fixed runtime that can be employed to find feasible control strategies for a single train along an inter-city railway line. The objective of the model is to minimize arrival delays at each station along the line. Train movement, however, is a typical nonlinear problem under complex running environments and varying requirements. A heuristic algorithm is developed in this paper to solve the problem, and the simulation results show that the train can overcome disturbances caused by delays and coordinate its operation strategies to ensure punctual arrival at the destination. The developed algorithm can also be used to evaluate the running reliability of trains under scheduled timetables.
Abstract:
Many cities worldwide face the prospect of major transformation as the world moves towards a global information order. In this new era, urban economies are being radically altered by dynamic processes of economic and spatial restructuring. The result is the creation of ‘informational cities’, or, by their newer and more popular name, ‘knowledge cities’. For the last two centuries, social production had been primarily understood and shaped by neo-classical economic thought that recognized only three factors of production: land, labor, and capital. Knowledge, education, and intellectual capacity were secondary, if not incidental, factors. Human capital was assumed to be either embedded in labor or just one of numerous categories of capital. In recent decades, it has become apparent that knowledge is sufficiently important to deserve recognition as a fourth factor of production. Knowledge and information, and the social and technological settings for their production and communication, are now seen as keys to development and economic prosperity. The rise of knowledge-based opportunity has, in many cases, been accompanied by a concomitant decline in traditional industrial activity. The replacement of physical commodity production by more abstract forms of production (e.g. information, ideas, and knowledge) has, however paradoxically, reinforced the importance of central places and led to the formation of knowledge cities. Knowledge is produced, marketed, and exchanged mainly in cities. Knowledge cities therefore aim to assist decision-makers in making their cities compatible with the knowledge economy and thus able to compete with other cities. Knowledge cities enable their citizens to foster knowledge creation, knowledge exchange, and innovation. They also encourage the continuous creation, sharing, evaluation, renewal, and updating of knowledge. To compete nationally and internationally, cities need knowledge infrastructures (e.g. universities, research and development institutes); a concentration of well-educated people; technological, mainly electronic, infrastructure; and connections to the global economy (e.g. international companies and finance institutions for trade and investment). Moreover, they must possess the people and things necessary for the production of knowledge and, as importantly, function as breeding grounds for talent and innovation. The economy of a knowledge city creates high value-added products using research, technology, and brainpower. The private and public sectors value knowledge, spend money on its discovery and dissemination and, ultimately, harness it to create goods and services. Although many cities call themselves knowledge cities, currently only a few cities around the world (e.g. Barcelona, Delft, Dublin, Montreal, Munich, and Stockholm) have earned that label. Many other cities aspire to the status of knowledge city through urban development programs that target knowledge-based urban development; examples include Copenhagen, Dubai, Manchester, Melbourne, Monterrey, Singapore, and Shanghai.
Knowledge-Based Urban Development
To date, the development of most knowledge cities has proceeded organically, as a dependent and derivative effect of global market forces. Urban and regional planning has responded slowly, and sometimes not at all, to the challenges and the opportunities of the knowledge city. That is changing, however. Knowledge-based urban development potentially brings both economic prosperity and a sustainable socio-spatial order.
Its goal is to produce and circulate abstract work. The globalization of the world in the last decades of the twentieth century was a dialectical process. On the one hand, as the tyranny of distance was eroded, economic networks of production and consumption were constituted at a global scale. At the same time, spatial proximity remained as important as ever, if not more so, for knowledge-based urban development. Mediated by information and communication technology, personal contact, and the medium of tacit knowledge, organizational and institutional interactions are still closely associated with spatial proximity. The clustering of knowledge production is essential for fostering innovation and wealth creation. The social benefits of knowledge-based urban development extend beyond aggregate economic growth. On the one hand is the possibility of a particularly resilient form of urban development, secured in a network of connections anchored at local, national, and global coordinates. On the other hand, quality of place and life, defined by the level of public services (e.g. health and education) and by the conservation and development of the cultural, aesthetic, and ecological values that give cities their character and attract or repel the creative class of knowledge workers, is a prerequisite for successful knowledge-based urban development. The goal is a secure economy in a human setting: in short, smart growth or sustainable urban development.
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
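To make the comparison of matching metrics concrete, here is a minimal sketch of the Sum of Absolute Differences metric and a winner-takes-all disparity search; the window size, search range, and boundary handling are illustrative choices, not the report's implementation.

```python
import numpy as np

def sad(left, right, row, col, disparity, half=3):
    """SAD between a left window and the disparity-shifted right window.

    Assumes both windows lie fully inside the images (no border handling).
    """
    wl = left[row - half:row + half + 1, col - half:col + half + 1]
    wr = right[row - half:row + half + 1,
               col - disparity - half:col - disparity + half + 1]
    return np.sum(np.abs(wl.astype(np.int32) - wr.astype(np.int32)))

def best_disparity(left, right, row, col, d_max=64, half=3):
    """Winner-takes-all: the disparity minimising SAD over the search range."""
    return min(range(d_max), key=lambda d: sad(left, right, row, col, d, half))
```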
Abstract:
Maps are used to represent three-dimensional space and are integral to a range of everyday experiences. They are increasingly used in mathematics, being prominent both in school curricula and as a form of assessing students' understanding of mathematical ideas. In order to successfully interpret maps, students need to understand that maps represent space, have their own perspective and scale, and use their own sets of symbols and texts. Despite the increased prevalence of maps in society and school, there is evidence to suggest that students have difficulty interpreting them. This study investigated 43 primary-aged students' (aged 9-12 years) verbal and gestural behaviours as they engaged with and solved map tasks. Within a multiliteracies framework that focuses on spatial, visual, linguistic, and gestural elements, the study investigated how students interpret map tasks. Specifically, the study sought to understand the skills and approaches students used to solve map tasks and the gestural behaviours they utilised as they engaged with them. The investigation was undertaken using the Knowledge Discovery in Data (KDD) design. The design of this study capitalised on existing research data to carry out a more detailed analysis of students' interpretation of map tasks. Video data from an existing data set were reorganised according to two distinct episodes, Task Solution and Task Explanation, and analysed within the multiliteracies framework. Content Analysis was used with these data and, through anticipatory data reduction techniques, patterns of behaviour were identified in relation to each specific map task by looking at task solution, task correctness, and gesture use. The findings of this study revealed that students had a relatively sound understanding of general mapping knowledge, such as identifying landmarks and using keys, compass points, and coordinates. However, their understanding of mathematical concepts pertinent to map tasks, including location, direction, and movement, was less developed. Successful students were able to interpret the map tasks and apply relevant mathematical understanding to navigate the spatial demands of the tasks, while the unsuccessful students were only able to interpret and understand basic map conventions. In terms of gesture use, the more difficult the task, the more likely students were to exhibit gestural behaviours to solve it. The most common form of gestural behaviour was deictic, that is, a pointing gesture. Deictic gestures not only aided the students' capacity to explain how they solved the map tasks, but were also a tool which assisted them to navigate and monitor their spatial movements when solving the tasks. A number of implications for theory, learning and teaching, and test and curriculum design arise from the study. From a theoretical perspective, the findings suggest that gesturing is an important element of multimodal engagement in mapping tasks. In terms of teaching and learning, implications include the need for students to utilise gesturing techniques when first faced with new or novel map tasks. As students become more proficient in solving such tasks, they should be encouraged to move beyond a reliance on such gesture use in order to progress to more sophisticated understandings of map tasks. Additionally, teachers need to provide students with opportunities to interpret and attend to multiple modes of information when interpreting map tasks.
Abstract:
A solvothermal route for the preparation of crystalline lithium niobate from Li2CO3 and Nb2O5 is developed. Oxalic acid is employed as the solvent, which coordinates with niobium oxide to stimulate the main reaction. Scanning electron microscopy images show that the as-prepared sample displays a cubic morphology. X-ray diffraction and the IR spectrum of the as-prepared sample indicate that the sample is well crystallized.
Abstract:
To analyse mechanotransduction resulting from tensile loading under defined conditions, various devices for in vitro cell stimulation have been developed. This work aimed to determine the strain distribution on the membrane of a commercially available device and its consistency with rising cycle numbers, as well as the amount of strain transferred to adherent cells. The strains and their behaviour within the stimulation device were determined using digital image correlation (DIC). The strain transferred to cells was measured on eGFP-transfected bone marrow-derived cells imaged with a fluorescence microscope. The analysis was performed by determining the coordinates of prominent positions on the cells, calculating vectors between the coordinates, and measuring their length changes with increasing applied tensile strain. The stimulation device was found to apply homogeneous (mean of standard deviations approx. 2% of mean strain) and reproducible strains in the central well area. However, on average, only half of the applied strain was transferred to the bone marrow-derived cells. Furthermore, the strain measured within the device increased significantly with an increasing number of cycles, while the membrane's Young's modulus decreased, indicating permanent changes in the material during extended use. Thus, strain magnitudes do not match the system readout, and results require careful interpretation, especially at high cycle numbers.
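A minimal sketch of the described cell-strain analysis, assuming tracked point coordinates are already available: vectors are formed between chosen point pairs, and the relative length change is reported as engineering strain. All names and the data layout are illustrative, not the study's actual pipeline.

```python
import numpy as np

def transferred_strain(points_before, points_after, pairs):
    """Mean engineering strain over chosen point pairs.

    points_before, points_after : (N, 2) arrays of tracked coordinates;
    pairs : list of (i, j) index pairs between which vectors are formed.
    """
    strains = []
    for i, j in pairs:
        l0 = np.linalg.norm(points_before[j] - points_before[i])
        l1 = np.linalg.norm(points_after[j] - points_after[i])
        strains.append((l1 - l0) / l0)  # relative length change
    return float(np.mean(strains))
```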
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance and is the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular, due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines has been purposely developed to quantify the changes of the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, within which a metric of the TFSQ is calculated. Initially, two metrics, based on Gabor filter and Gaussian gradient-based techniques, were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics have helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, the HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while the LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT).
The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique, identified during the former clinical study, was its lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis was transformed from Cartesian to polar coordinates, converting the concentric-ring pattern into an image of quasi-straight lines from which a block statistic was extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was undertaken to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase has provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series have been reported in this thesis. Over the years, different functions have been used to model the time series as well as to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, with considerations for selecting the appropriate model order to ensure that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis, and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal, and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality in the future.
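As an illustration of the polar transformation and block-processing idea behind the extra TFSQ metric, the sketch below remaps a ring image so the concentric circles become quasi-straight lines and then averages a simple per-block statistic. The use of OpenCV's warpPolar and the block standard deviation are assumptions made for illustration, not the exact routines developed in the thesis.

```python
import cv2
import numpy as np

def polar_block_metric(image, center, radius, block=16):
    """Mean per-block standard deviation of the polar-unwrapped pattern."""
    # Unwrap: concentric rings become quasi-straight lines (360 angular rows).
    polar = cv2.warpPolar(image, (radius, 360), center, radius,
                          cv2.WARP_POLAR_LINEAR)
    h, w = polar.shape[:2]
    # Block statistic: local standard deviation over non-overlapping blocks.
    stats = [np.std(polar[r:r + block, c:c + block])
             for r in range(0, h - block + 1, block)
             for c in range(0, w - block + 1, block)]
    return float(np.mean(stats))
```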