905 results for Algorithms
Abstract:
Monte Carlo (MC) methods are widely used in signal processing, machine learning and stochastic optimization. A well-known class of MC methods is that of Markov Chain Monte Carlo (MCMC) algorithms. In this work, we introduce a novel parallel interacting MCMC scheme, where the parallel chains share information using another MCMC technique working on the entire population of current states. These parallel "vertical" chains are led by random-walk proposals, whereas the "horizontal" MCMC uses an independent proposal, which can be easily adapted by making use of all the generated samples. Numerical results show the advantages of the proposed sampling scheme in terms of mean absolute error, as well as robustness w.r.t. initial values and parameter choice.
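The vertical/horizontal structure described in the abstract can be sketched in a few lines. This is an illustrative toy, not the authors' exact scheme: the target is a 1-D standard Gaussian, and the "horizontal" independence proposal is adapted from the population mean and variance of the current states.

```python
import math
import random


def log_target(x):
    # Toy stand-in for the real posterior: a standard Gaussian
    return -0.5 * x * x


def mh_step(x, log_p, proposal_draw, log_q_ratio=0.0):
    # One Metropolis-Hastings step; log_q_ratio = log q(x) - log q(y)
    y = proposal_draw(x)
    if math.log(random.random()) < log_p(y) - log_p(x) + log_q_ratio:
        return y
    return x


def interacting_mcmc(n_chains=8, n_iters=2000, step=1.0, seed=0):
    random.seed(seed)
    states = [random.uniform(-5, 5) for _ in range(n_chains)]
    samples = []
    for _ in range(n_iters):
        # "Vertical" moves: independent random-walk MH on each chain
        states = [mh_step(x, log_target, lambda x: x + random.gauss(0, step))
                  for x in states]
        # "Horizontal" move: an independence sampler whose Gaussian proposal
        # is adapted to the current population of states
        mu = sum(states) / n_chains
        sd = math.sqrt(sum((x - mu) ** 2 for x in states) / n_chains + 1e-6)
        for i in range(n_chains):
            x = states[i]
            y = random.gauss(mu, sd)
            # independence-proposal correction log q(x) - log q(y)
            log_q = 0.5 * ((y - mu) / sd) ** 2 - 0.5 * ((x - mu) / sd) ** 2
            if math.log(random.random()) < log_target(y) - log_target(x) + log_q:
                states[i] = y
        samples.extend(states)
    return samples
```

Note that continually adapting the horizontal proposal from the chain states makes this an adaptive scheme; the sketch ignores the technical conditions needed for its validity.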
Abstract:
This paper is framed within the problem of analyzing the rationality of the components of two classical geometric constructions, namely the offset and the conchoid to an algebraic plane curve, and, in the affirmative case, of actually computing parametrizations. We recall some of the basic definitions and main properties of offsets (see [13]) and conchoids (see [15]), as well as the algorithms for parametrizing their rational components (see [1] and [16], respectively). Moreover, we implement the basic ideas as two packages for the computer algebra system Maple to analyze the rationality of conchoids and offset curves, together with the corresponding help pages. In addition, we present a brief atlas in which the offsets and conchoids of several algebraic plane curves are obtained, their rationality is analyzed, and parametrizations are provided using the created packages.
Abstract:
Genetic algorithms (GA) have been used for the minimization of the aerodynamic drag of a train subject to a front wind. The growing weight of external aerodynamic drag in the total resistance a train experiences as cruise speed increases highlights the interest of this study. A complete description of the methodology required for this optimization method is introduced here, detailing the parameterization of the geometry to be optimized and the metamodel used to speed up the optimization process. A reduction of about 25% of the initial aerodynamic drag is obtained in this study, which confirms GA as a suitable method for this optimization problem. The evolution of the nose shape is consistent with the literature. The advantage of using metamodels is underlined by the information about the whole design space extracted from them. The influence of each design variable on the objective function is analyzed by means of an ANOVA test.
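The optimization loop described above (a GA driven by a cheap metamodel instead of direct CFD runs) can be sketched roughly as follows. Everything here is illustrative: the quadratic `surrogate_drag` function stands in for the actual drag metamodel, and the design variables, population size and operators are assumptions, not the paper's setup.

```python
import random


def surrogate_drag(x):
    # Toy stand-in for the metamodel's drag prediction: a smooth bowl
    # whose (hypothetical) optimum design is at (0.3, 0.7, 0.5)
    opt = (0.3, 0.7, 0.5)
    return sum((xi - oi) ** 2 for xi, oi in zip(x, opt))


def genetic_minimize(f, n_vars=3, pop=40, gens=60, pm=0.2, seed=1):
    random.seed(seed)
    # Normalized design variables in [0, 1]
    P = [[random.random() for _ in range(n_vars)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=f)
        elite = scored[: pop // 2]            # truncation selection
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_vars)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pm:           # Gaussian mutation, clamped
                i = random.randrange(n_vars)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        P = elite + children
    return min(P, key=f)
```

Because each evaluation queries the surrogate rather than a CFD solver, the GA can afford thousands of evaluations, which is exactly the point of combining it with a metamodel.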
Abstract:
PAMELA (Phased Array Monitoring for Enhanced Life Assessment) SHM™ System is an integrated embedded system based on ultrasonic guided waves, consisting of several electronic devices and one system manager controller. The data collected by all PAMELA devices in the system must be transmitted to the controller, which is responsible for carrying out the advanced signal processing to obtain SHM maps. PAMELA devices consist of hardware based on a Virtex 5 FPGA with a PowerPC 440 running an embedded Linux distribution. Therefore, PAMELA devices, in addition to being able to perform tests and transmit the collected data to the controller, are capable of local data processing or pre-processing (reduction, normalization, pattern recognition, feature extraction, etc.). Local data processing decreases the data traffic over the network and reduces the CPU load of the external computer. PAMELA devices can even run autonomously, performing scheduled tests and communicating with the controller only when structural damage is detected or when programmed to do so. Each PAMELA device integrates a software management application (SMA) that allows the developer to download his or her own algorithm code and add new data processing algorithms to the device. The development of the SMA is done in a virtual machine with an Ubuntu Linux distribution that includes all the software tools necessary to perform the entire development cycle. The Eclipse IDE (Integrated Development Environment) is used to develop the SMA project and to write the code of each data processing algorithm. This paper presents the developed software architecture and describes the steps necessary to add new data processing algorithms to the SMA in order to increase the processing capabilities of PAMELA devices. An example of basic damage index estimation using the delay-and-sum algorithm is provided.
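The delay-and-sum damage-index idea mentioned at the end can be sketched as follows. The sensor layout, sampling rate and the synthetic single-echo signals below are illustrative assumptions, not PAMELA's actual implementation: for each pixel, every actuator-sensor signal is sampled at the time of flight through that pixel, so echoes from a real scatterer add up coherently only at the scatterer's location.

```python
import math


def delay_and_sum(paths, grid, c, fs):
    # paths: list of (actuator_xy, sensor_xy, sampled_signal)
    # For each grid point, sum each signal at the actuator -> pixel -> sensor
    # time of flight; the result is a damage-index map over the grid.
    image = []
    for gx, gy in grid:
        acc = 0.0
        for (ax, ay), (sx, sy), sig in paths:
            tof = (math.hypot(gx - ax, gy - ay) +
                   math.hypot(gx - sx, gy - sy)) / c
            k = int(round(tof * fs))
            if 0 <= k < len(sig):
                acc += abs(sig[k])
        image.append(acc)
    return image


def synthetic_path(actuator, sensor, scatterer, c, fs, n):
    # One scattered echo: a unit pulse at the through-scatterer time of flight
    ax, ay = actuator
    sx, sy = sensor
    dx, dy = scatterer
    tof = (math.hypot(dx - ax, dy - ay) + math.hypot(dx - sx, dy - sy)) / c
    sig = [0.0] * n
    k = int(round(tof * fs))
    if 0 <= k < n:
        sig[k] = 1.0
    return (actuator, sensor, sig)
```

On a 1 m plate with one actuator, three corner sensors and a simulated scatterer, the brightest pixel of the resulting map falls at the scatterer position.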
Abstract:
Nowadays, devices that monitor the health of structures consume considerable power and need a long time to acquire, process, and send information about the structure to the main processing unit. To shorten this time, fast electronic devices are beginning to be used to accelerate the processing. In this paper, some hardware algorithms implemented in a programmable logic device are described. The goal of this implementation is to accelerate the processing and to reduce the amount of information that has to be sent. By reaching this goal, the time the processor needs to handle all the information is reduced, and so is the power consumption.
Abstract:
The family of Boosting algorithms represents a type of classification and regression approach that has proven to be very effective in Computer Vision problems, such as the detection, tracking and recognition of faces, people, deformable objects and actions. The first and most popular algorithm, AdaBoost, was introduced in the context of binary classification. Since then, many works have been proposed to extend it to more general domains: multi-class, multi-label, cost-sensitive, etc. Our interest is centered on extending AdaBoost to two problems in the multi-class field, considering it a first step for upcoming generalizations.
In this dissertation we propose two Boosting algorithms for multi-class classification based on new generalizations of the concept of margin. The first of them, PIBoost, is conceived to tackle the multi-class problem by solving many binary sub-problems. We use a vectorial codification to represent class labels and a multi-class exponential loss function to evaluate classifier responses. This representation produces a set of margin values that provide a range of penalties for failures and rewards for successes. The stagewise optimization of this model introduces an asymmetric Boosting procedure whose costs depend on the number of classes separated by each weak learner. In this way the Boosting procedure takes class imbalances into account when building the ensemble. The resulting algorithm is a well-grounded method that canonically extends the original AdaBoost. The second algorithm proposed, BAdaCost, is conceived for multi-class problems endowed with a cost matrix. Motivated by the few cost-sensitive extensions of AdaBoost to the multi-class field, we propose a new margin that, in turn, yields a new loss function appropriate for evaluating costs. Since BAdaCost generalizes the SAMME, Cost-Sensitive AdaBoost and PIBoost algorithms, we consider it a canonical extension of AdaBoost to this kind of problem. We additionally suggest a simple procedure to compute cost matrices that improve the performance of Boosting in standard and unbalanced problems. A set of experiments is carried out to demonstrate the effectiveness of both methods against other relevant Boosting algorithms in their respective areas. In the experiments we resort to benchmark data sets used in the Machine Learning community, firstly for minimizing classification errors and secondly for minimizing costs. In addition, we successfully applied BAdaCost to a segmentation task, a particular problem in the presence of imbalanced data.
We conclude the thesis by discussing the future improvements encompassed in our framework, given both its applicability and its theoretical flexibility.
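For reference, SAMME (one of the algorithms the abstract says BAdaCost generalizes) can be sketched with decision stumps in a few lines: the only change w.r.t. binary AdaBoost is the extra log(K-1) term in the weak-learner weight. This is a minimal illustration on toy 1-D data, not the thesis implementation.

```python
import math


def stump_train(X, y, w, n_classes):
    # Exhaustive search over 1-D thresholds; each side of the split
    # predicts its weighted majority class.
    best = None
    for t in sorted(set(X)):
        votes_lo = [0.0] * n_classes
        votes_hi = [0.0] * n_classes
        for xi, yi, wi in zip(X, y, w):
            (votes_lo if xi <= t else votes_hi)[yi] += wi
        c_lo = max(range(n_classes), key=votes_lo.__getitem__)
        c_hi = max(range(n_classes), key=votes_hi.__getitem__)
        err = sum(wi for xi, yi, wi in zip(X, y, w)
                  if (c_lo if xi <= t else c_hi) != yi)
        if best is None or err < best[0]:
            best = (err, t, c_lo, c_hi)
    return best


def samme_fit(X, y, n_classes, rounds=10):
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, t, c_lo, c_hi = stump_train(X, y, w, n_classes)
        if err >= 1.0 - 1.0 / n_classes:   # weak learner no better than chance
            break
        err = max(err, 1e-12)
        # SAMME weight: binary AdaBoost's alpha plus log(K - 1)
        alpha = math.log((1.0 - err) / err) + math.log(n_classes - 1.0)
        ensemble.append((alpha, t, c_lo, c_hi))
        # Reweighting: boost the misclassified points, then normalize
        w = [wi * math.exp(alpha) if (c_lo if xi <= t else c_hi) != yi else wi
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble


def samme_predict(ensemble, x, n_classes):
    score = [0.0] * n_classes
    for alpha, t, c_lo, c_hi in ensemble:
        score[c_lo if x <= t else c_hi] += alpha
    return max(range(n_classes), key=score.__getitem__)
```

A single stump can only distinguish two labels, but a few boosted rounds already separate three classes on a 1-D line, which is the point of the ensemble.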
Abstract:
One of the most common faults in synchronous generators is the ground fault, in both the stator winding and the excitation winding. In case of fault, the insulation level between the active parts of either of these windings and ground lowers considerably or even disappears. The detection of ground faults in both windings is a widely researched topic. The fault current is typically limited intentionally to a reduced level, which allows ground faults to be detected easily while avoiding damage to the generator. After the detection and confirmation of the existence of a ground fault, the fault must be located along the winding in order to repair the machine.
Then, the rotor has to be extracted, which is a very complex and expensive operation. Moreover, limiting the fault current means that the insulation failure is not visually detectable, because there is no visible damage in the generator. Therefore, laborious techniques have to be applied to locate the fault accurately. In order to reduce the repair time, and therefore the time that the generator is out of service, any information about the approximate location of the fault would be very useful. The main objective of this doctoral thesis has been the development of new algorithms and methods to estimate the location of ground faults in the stator and rotor windings of synchronous generators. Regarding the excitation winding, a new method for locating ground faults in the excitation winding of synchronous machines with static excitation has been presented. This method can even distinguish whether the fault is in the excitation winding or in any other component of the excitation system: controlled rectifier, excitation transformer, etc. In case of a ground fault in the rotor winding, this method provides an estimate of the fault location. However, in order to calculate the location, the value of the fault resistance is necessary. Therefore, a new fault-resistance estimation algorithm is also presented in this text. Finally, a new fault detection algorithm based on a directional criterion is described to complement the fault location method. This algorithm takes into account the influence of the capacitance to ground of the system, which has a remarkable impact on the accuracy of the fault location. Regarding the stator winding, a new fault-location algorithm has been presented for the stator winding of synchronous generators. This algorithm is applicable to generators whose ground-fault protection is based on low-frequency injection. A general algorithm, which takes every parameter of the system into account, has been presented. Moreover, a simplified version of the algorithm has been proposed for generators with an especially low capacitance to ground. This simplified algorithm could be easily implemented in protective relays. The proposed methods and algorithms have been tested in a 5 kVA laboratory generator, as well as in a 106 MVA synchronous generator, with satisfactory and promising results.
Abstract:
Nowadays, a lot of applications use digital images. For example, face recognition is used to detect and tag people in photographs and for security control, and many applications can be found in smart cities, such as speed control on roads and highways and cameras at traffic lights to detect drivers running a red light. Digital images are also used in medicine: X-rays, scanners, etc. These applications depend on the quality of the image obtained. A good camera is expensive, and the image obtained also depends on external factors such as light. To make these applications work properly, image enhancement is as important as, for example, a good face detection algorithm. Image enhancement can also be used on ordinary photographs, for pictures taken in bad light conditions, or simply to improve the contrast of an image. There are applications for smartphones that allow users to apply filters or change the brightness, colour or contrast of their pictures. This project compares four different techniques for image enhancement. After applying one of these techniques, an image makes better use of the whole available dynamic range. Some of the algorithms are designed for grey-scale images and others for colour images. Matlab is used to develop and present the final results. The algorithms are the Successive Mean Quantization Transform (SMQT), histogram equalization (using both the built-in Matlab function and our own implementation), and the V transform. As conclusions, the histogram equalization algorithm is the simplest of all, produces a wide spread of grey levels, and is not suitable for colour images. The V transform is a good option for colour images; it is linear and requires little computational power. The SMQT is non-linear, insensitive to gain and bias, and can extract the structure of the data.
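Histogram equalization, the simplest of the techniques compared above, can be sketched in plain Python. The project uses Matlab; this is an illustrative re-implementation of the classic CDF-stretching mapping for grey-scale images.

```python
def equalize_histogram(img, levels=256):
    # img: 2-D list of integer grey levels in [0, levels)
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function of the grey levels
    cdf = []
    run = 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    # Classic mapping: stretch the CDF over the full dynamic range
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [[lut[p] for p in row] for row in img]
```

Applied to an image whose pixels occupy only a narrow band of grey levels, the output spans the full 0-255 range, which is exactly the "better use of the whole available dynamic range" described above.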
Abstract:
The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has somewhat stagnated in the past few years. Consequently, new computer vision algorithms will need to be parallel to run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPU).
These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them interesting for running scientific computations. In this thesis, we explore two computer vision applications with a high computational complexity that precludes them from running in real time on traditional uniprocessors. However, we show that by parallelizing subtasks and implementing them on a GPU, both applications attain their goals of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually used in 3D TV. By using a backward mapping approach with a depth inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. In addition, we show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes featuring multimodal backgrounds, but have not been so popular due to their huge computational and memory complexity. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the position of reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal.
Finally, we propose a general method for the approximation of arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging their texture filtering units, normally unused for numerical computation. Our proposal features a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function to minimize the approximation error.
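The piecewise linear scheme can be sketched on the CPU: the interpolation below is exactly what a GPU texture filtering unit would perform in hardware for free. This sketch assumes uniform knots (whereas the thesis derives a quasi-optimal partition), and for a function with bounded second derivative the classical bound max error ≤ h²·max|f''|/8 applies, where h is the knot spacing.

```python
def make_pwl(f, a, b, n):
    # Sample f at n + 1 uniform knots over [a, b]; evaluation then
    # mimics the linear interpolation of a GPU texture fetch.
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]

    def approx(x):
        t = (x - a) / (b - a) * n      # position in "texel" coordinates
        i = min(int(t), n - 1)
        frac = t - i
        return ys[i] * (1 - frac) + ys[i + 1] * frac

    return approx
```

For f(x) = x² on [0, 1] with n = 16 segments, the bound gives a maximum error of h²·2/8 = 1/1024 ≈ 0.001, which the sketch attains.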
Abstract:
Several basic olfactory tasks must be solved by highly olfactory animals, including background suppression, multiple object separation, mixture separation, and source identification. The large number N of classes of olfactory receptor cells (hundreds or thousands) permits the use of computational strategies and algorithms that would not be effective in a stimulus space of low dimension. A model of the patterns of olfactory receptor responses, based on the broad distribution of olfactory thresholds, is constructed. Representing one odor from the viewpoint of another then allows a common description of the most important basic problems and shows how to solve them when N is large. One possible biological implementation of these algorithms uses action potential timing and adaptation as the hardware features that are responsible for effective neural computation.
Abstract:
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate yes-or-no decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science.
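A classic, textbook example of such a performance guarantee (illustrative here, not one of the specific results the abstract surveys) is the matching-based approximation for minimum vertex cover: taking both endpoints of every uncovered edge yields a cover provably at most twice the optimal size, because any cover must contain at least one endpoint of each chosen edge.

```python
def vertex_cover_2approx(edges):
    # Greedily build a maximal matching and take both endpoints of
    # every matched edge; the result is a vertex cover of size at most
    # twice the minimum.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

On a path 1-2-3-4-5 the algorithm returns {1, 2, 3, 4}, twice the optimal cover {2, 4}: the factor-2 guarantee is tight for this method, yet it runs in linear time where the exact problem is NP-hard.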
Abstract:
Paper presented at EVACES 2011, 4th International Conference on Experimental Vibration Analysis for Civil Engineering Structures, Varenna (Lecco), Italy, October 3-5, 2011.
Abstract:
Phase equilibrium data regression is an unavoidable task necessary to obtain appropriate parameter values for any model to be used in separation equipment design for chemical process simulation and optimization. The accuracy of this process depends on different factors, such as the experimental data quality, the selected model and the calculation algorithm. The present paper summarizes the results and conclusions achieved in our research on the capabilities and limitations of the existing GE models and on strategies that can be included in the correlation algorithms to improve convergence and avoid inconsistencies. The NRTL model has been selected as a representative local composition model. New capabilities of this model, but also several relevant limitations, have been identified, and some examples of the application of a modified NRTL equation are discussed. Furthermore, a regression algorithm has been developed that allows the advisable simultaneous regression of all the condensed-phase equilibrium regions present in ternary systems at constant T and P. It includes specific strategies designed to avoid some of the pitfalls frequently found in commercial regression tools for phase equilibrium calculations. Most of the proposed strategies are based on the geometrical interpretation of the lowest common tangent plane equilibrium criterion, which allows an unambiguous comprehension of the behavior of the mixtures. The paper aims to show all of this work as a whole, in order to reveal the efforts that must still be devoted to overcoming the difficulties that remain in the phase equilibrium data regression problem.
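For context, the NRTL model referred to above computes activity coefficients from binary interaction parameters; a minimal sketch of the standard textbook binary form follows (the default non-randomness parameter α = 0.3 is a common assumption, not a value from the paper, and the modified NRTL equation discussed above is not reproduced here).

```python
import math


def nrtl_gammas(x1, tau12, tau21, alpha=0.3):
    # Standard binary NRTL activity coefficients (gamma1, gamma2):
    #   ln g1 = x2^2 [ tau21 (G21/(x1 + x2 G21))^2 + tau12 G12/(x2 + x1 G12)^2 ]
    # and symmetrically for ln g2, with Gij = exp(-alpha * tauij).
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (G21 / (x1 + x2 * G21)) ** 2
                       + tau12 * G12 / (x2 + x1 * G12) ** 2)
    ln_g2 = x1 ** 2 * (tau12 * (G12 / (x2 + x1 * G12)) ** 2
                       + tau21 * G21 / (x1 + x2 * G21) ** 2)
    return math.exp(ln_g1), math.exp(ln_g2)
```

Two standard sanity checks: with both interaction parameters zero the mixture is ideal (both coefficients equal 1), and at infinite dilution ln γ1 reduces to τ21 + τ12 G12.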
Abstract:
We present an algorithm to process images of reflected Placido rings captured by a commercial videokeratoscope. Raw data are obtained with no Cartesian-to-polar coordinate conversion, thus avoiding interpolation and the associated numerical artifacts. The method provides a characteristic equation for the device and is able to process around 6 times more corneal data than the commercial software. Our proposal allows complete control over the whole process, from the capture of corneal images to the computation of curvature radii.