966 results for Front-Tracking Method
Abstract:
To avoid additional hardware deployment, indoor localization systems have to be designed so that they rely on existing infrastructure only. Besides processing measurements between nodes, the localization procedure can incorporate all available environment information. In order to enhance the performance of Wi-Fi-based localization systems, the solution presented in this paper also considers negative information. An indoor tracking method inspired by Kalman filtering is also proposed.
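For context, here is a minimal sketch of the kind of Kalman-filter-style tracker the abstract alludes to, assuming a constant-velocity motion model and Wi-Fi position fixes as the only measurements. It is a generic illustration, not the paper's algorithm, and the handling of negative information is not shown.

```python
# Minimal constant-velocity Kalman filter sketch for 2D indoor tracking.
# Illustrative reconstruction only; the paper's "negative information"
# handling is not represented here.
import numpy as np

dt = 1.0                                        # assumed update interval (s)
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])   # state: [x, y, vx, vy]
H = np.hstack([np.eye(2), np.zeros((2, 2))])    # a Wi-Fi fix observes position only
Q = 0.1 * np.eye(4)                             # assumed process noise
R = 4.0 * np.eye(2)                             # assumed measurement noise (m^2)

def kf_step(x, P, z):
    """One predict/update cycle given a position measurement z = [x, y]."""
    x, P = F @ x, F @ P @ F.T + Q               # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ (z - H @ x)                     # update with the innovation
    P = (np.eye(4) - K @ H) @ P
    return x, P
```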
Abstract:
Master's dissertation in Interactive Media (Média Interativos)
Abstract:
Since the years preceding the Second World War, aircraft tracking has been a core interest of both military and civil aviation. Over subsequent decades, advances in technology and radar configuration allowed users to deploy radar in numerous fields, such as over-the-horizon radar, ballistic missile early-warning systems and forward-scatter fences. The latter was arranged in a bistatic configuration. The bistatic radar has continuously re-emerged over the last eighty years for its intriguing capabilities and its challenging configuration and formulation. The bistatic radar arrangement is used as the basis of all the analyses presented in this work. The aircraft tracking method based on VHF Doppler-only information, developed in the first part of this study, relies solely on Doppler frequency readings and the time instants of their appearance. The corresponding inverse problem is solved by utilising a multistatic radar scenario with two receivers and one transmitter and using their frequency readings as the basis for aircraft trajectory estimation. The quality of the resulting trajectory is then compared with ground-truth information based on ADS-B data. The second part of the study deals with the development of a method for instantaneous Doppler curve extraction from a VHF time-frequency representation of the transmitted signal, in a three-receiver, one-transmitter configuration, based on a priori knowledge of the probability density function of the first-order derivative of the Doppler shift and on a system of blocks for identifying, classifying and predicting the Doppler signal. The extraction capabilities of this set-up are tested with a recorded TV signal and simulated synthetic spectrograms. Further analyses are devoted to more comprehensive testing of the capabilities of the extraction method. Beyond testing the method, aircraft classification is performed on the extracted Bistatic Radar Cross Section profiles and on the correlation between them for different types of aircraft. In order to properly estimate the profiles, the ADS-B aircraft location information is adjusted based on the extracted Doppler frequency and then used for Bistatic Radar Cross Section estimation. The classification is based on seven types of aircraft grouped by their size into three classes.
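As a reference for the Doppler-only formulation described above, here is a hedged sketch of the forward model: the bistatic Doppler shift seen at a receiver is the negative time derivative of the total transmitter-target-receiver path length divided by the carrier wavelength. The geometry, carrier frequency and sampling interval below are illustrative assumptions, not values from the study.

```python
# Sketch of the bistatic Doppler forward model used in Doppler-only tracking.
import numpy as np

C = 3e8
F_CARRIER = 100e6              # assumed VHF carrier (Hz)
LAM = C / F_CARRIER            # carrier wavelength (m)

def bistatic_doppler(traj, tx, rx, dt):
    """traj: (N, 3) target positions sampled every dt seconds."""
    r_tx = np.linalg.norm(traj - tx, axis=1)       # transmitter-target range
    r_rx = np.linalg.norm(traj - rx, axis=1)       # target-receiver range
    total_path = r_tx + r_rx
    return -np.gradient(total_path, dt) / LAM      # Doppler shift in Hz

# Example: straight, level flight past a ground transmitter/receiver pair.
t = np.arange(0.0, 60.0, 0.5)
traj = np.stack([t * 200.0 - 6000.0,
                 np.full_like(t, 5000.0),
                 np.full_like(t, 3000.0)], axis=1)
fd = bistatic_doppler(traj,
                      tx=np.array([0.0, 0.0, 0.0]),
                      rx=np.array([20000.0, 0.0, 0.0]),
                      dt=0.5)
```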
Abstract:
Data from four recent reanalysis projects [ECMWF, NCEP-NCAR, NCEP-Department of Energy (DOE), NASA] have been diagnosed at the scale of synoptic weather systems using an objective feature tracking method. The tracking statistics indicate that, overall, the reanalyses correspond very well in the Northern Hemisphere (NH) lower troposphere, although differences in the spatial distribution of mean intensities show that the ECMWF reanalysis is systematically stronger in the main storm track regions but weaker around major orographic features. A direct comparison of the track ensembles indicates a number of systems with a broad range of intensities that compare well among the reanalyses. In addition, a number of small-scale weak systems are found that have no correspondence among the reanalyses or that only correspond upon relaxing the matching criteria, indicating possible differences in location and/or temporal coherence. These are distributed throughout the storm tracks, particularly in the regions known for small-scale activity, such as secondary development regions and the Mediterranean. For the Southern Hemisphere (SH), agreement is found to be generally less consistent in the lower troposphere, with significant differences in both track density and mean intensity. The number of systems that correspond between the various reanalyses is considerably reduced, and those that do not match span a broad range of storm intensities. Relaxing the matching criteria indicates that there is a larger degree of uncertainty in both the location of systems and their intensities compared with the NH. At upper-tropospheric levels, significant differences in the level of activity occur between the ECMWF reanalysis and the other reanalyses in both the NH and SH winters. This occurs due to a lack of coherence in the apparent propagation of the systems in ERA15 and appears most acute above 500 hPa. This is probably due to the use of optimal interpolation data assimilation in ERA15. Also shown are results from applying the same techniques to diagnose tropical easterly wave activity. Results indicate that the wave activity is sensitive not only to the resolution and assimilation methods used but also to the model formulation.
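To make the notion of "corresponding" tracks concrete, here is a small illustrative sketch of a track-matching test of the kind used when comparing track ensembles: two tracks are taken to match if they share enough time steps and their mean separation at common times stays below a threshold. The thresholds and the haversine separation measure are assumptions for illustration, not the study's actual matching criteria.

```python
# Illustrative track-matching test between two storm-track datasets.
import numpy as np

R_EARTH = 6371.0  # km

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    d = np.sin((lat2 - lat1) / 2) ** 2 + \
        np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * R_EARTH * np.arcsin(np.sqrt(d))

def tracks_match(track_a, track_b, max_sep_km=500.0, min_overlap=4):
    """track_*: dict mapping time step -> (lat, lon) of the system centre."""
    common = sorted(set(track_a) & set(track_b))
    if len(common) < min_overlap:              # require temporal overlap
        return False
    seps = [great_circle_km(*track_a[t], *track_b[t]) for t in common]
    return float(np.mean(seps)) < max_sep_km   # require spatial agreement
```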
Abstract:
An increasing number of neuroscience experiments are using virtual reality to provide a more immersive and less artificial experimental environment. This is particularly useful for navigation and three-dimensional scene-perception experiments. Such experiments require accurate real-time tracking of the observer's head in order to render the virtual scene. Here, we present data on the accuracy of a commonly used six-degrees-of-freedom tracker (Intersense IS900) when it is moved in ways typical of virtual reality applications. We compared the reported location of the tracker with its location computed by an optical tracking method. When the tracker was stationary, the root mean square error in spatial accuracy was 0.64 mm. However, we found that errors increased over ten-fold (up to 17 mm) when the tracker moved at speeds common in virtual reality applications. We demonstrate that the errors we report here are predominantly due to inaccuracies of the IS900 system rather than the optical tracking against which it was compared. (c) 2006 Elsevier B.V. All rights reserved.
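A minimal sketch of the accuracy metric quoted above: the root-mean-square spatial error is the RMS of the per-sample Euclidean distances between synchronised tracker and optical-reference positions. Array names and millimetre units are assumptions for illustration.

```python
# RMS spatial error between a tracker and an optical reference.
import numpy as np

def rms_spatial_error_mm(tracker_xyz, optical_xyz):
    """Both arrays are (N, 3) positions in millimetres, sampled at common times."""
    err = np.linalg.norm(tracker_xyz - optical_xyz, axis=1)   # per-sample distance
    return float(np.sqrt(np.mean(err ** 2)))
```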
Abstract:
Purpose - The purpose of this paper is to develop a novel unstructured simulation approach for injection molding processes described by the Hele-Shaw model. Design/methodology/approach - The scheme involves dual dynamic meshes with active and inactive cells determined from an initial background pointset. The quasi-static pressure solution in each timestep for this evolving unstructured mesh system is approximated using a control volume finite element method formulation coupled to a corresponding modified volume of fluid method. The flow is considered to be isothermal and non-Newtonian. Findings - Supporting numerical tests and performance studies for polystyrene described by Carreau, Cross, Ellis and Power-law fluid models are conducted. Results for the present method are shown to be comparable to those from other methods for both Newtonian fluid and polystyrene fluid injected in different mold geometries. Research limitations/implications - With respect to the methodology, the background pointset infers a mesh that is dynamically reconstructed here, and there are a number of efficiency issues and improvements that would be relevant to industrial applications. For instance, one can use the pointset to construct special bases and invoke a so-called "meshless" scheme using the basis. This would require some interesting strategies to deal with the dynamic point enrichment of the moving front that could benefit from the present front treatment strategy. There are also issues related to mass conservation and fill-time errors that might be addressed by introducing suitable projections. The general question of "rate of convergence" of these schemes requires analysis. Numerical results here suggest first-order accuracy and are consistent with the approximations made, but theoretical results are not available yet for these methods. Originality/value - This novel unstructured simulation approach involves dual meshes with active and inactive cells determined from an initial background pointset: local active dual patches are constructed "on-the-fly" for each "active point" to form a dynamic virtual mesh of active elements that evolves with the moving interface.
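The fluid models named in the Findings are generalised Newtonian viscosity laws, i.e. the viscosity depends only on the local shear rate. A hedged sketch of three of them follows (the Ellis model, which expresses viscosity in terms of shear stress, is omitted); parameter values would come from fitted polystyrene rheology data and are not given here.

```python
# Generalised Newtonian viscosity laws: eta as a function of shear rate gamma_dot.
def power_law(gamma_dot, k, n):
    """eta = k * gamma_dot**(n-1); shear thinning for n < 1."""
    return k * gamma_dot ** (n - 1.0)

def carreau(gamma_dot, eta0, eta_inf, lam, n):
    """Smooth transition from eta0 at low shear rate to a power-law region."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

def cross(gamma_dot, eta0, eta_inf, lam, m):
    """Alternative shear-thinning law with exponent m."""
    return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * gamma_dot) ** m)
```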
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Indoor positioning has become an emerging research area because of huge commercial demand for location-based services in indoor environments. Channel State Information (CSI), as fine-grained physical-layer information, has recently been proposed to achieve high positioning accuracy by using range-based methods, e.g., trilateration. In this work, we propose to fuse the CSI-based ranges and the velocity estimated from inertial sensors by an enhanced particle filter to achieve highly accurate tracking. The algorithm relies on enhanced ranging methods and further mitigates the remaining ranging errors by a weighting technique. Additionally, we provide an efficient method to estimate the velocity based on inertial sensors. The algorithms are designed in a network-based system, which uses rather cheap commercial devices as anchor nodes. We evaluate our system in a complex environment along three different moving paths. Our proposed tracking method can achieve 1.3 m mean accuracy and 2.2 m accuracy at the 90th percentile, which is more accurate and stable than pedestrian dead reckoning and range-based positioning.
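A hedged sketch of the fusion idea described above: particles are propagated with the inertially estimated velocity and weighted by the agreement between their anchor distances and the CSI-based ranges. This is a generic range/velocity particle filter under assumed noise levels, not the paper's enhanced weighting scheme.

```python
# Generic particle filter fusing range measurements with a velocity estimate.
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, velocity, dt, anchors, ranges, sigma_r=1.0):
    """particles: (N, 2) positions; anchors: (M, 2); ranges: (M,) range estimates in m."""
    # propagate with the inertial velocity plus assumed process noise
    particles = particles + velocity * dt + rng.normal(0.0, 0.3, particles.shape)
    # weight by agreement between particle-anchor distances and measured ranges
    d = np.linalg.norm(particles[:, None, :] - anchors[None, :, :], axis=2)  # (N, M)
    loglik = -0.5 * np.sum(((d - ranges) / sigma_r) ** 2, axis=1)
    weights = weights * np.exp(loglik - loglik.max())
    weights /= weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights, particles.T @ weights   # last item: position estimate
```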
Abstract:
This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well when the dimensionality of the feature space grows, which creates significant limitations when tracking multiple objects. Alternatively, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, in this study a likelihood model that combines appearance analysis with information from motion parallax is introduced. Regarding the motion model, a new interaction treatment is defined based on Markov random fields (MRF) that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space compared to traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is proven to outperform existing methods and to successfully handle challenging situations in the test sequences.
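To illustrate the detection descriptor, here is a sketch of pooling gradient orientations over concentric rectangles; the ring layout, bin count and normalisation are assumptions rather than the paper's exact design, but the output is the kind of compact feature vector an SVM would consume.

```python
# Gradient-orientation descriptor pooled over concentric rectangular rings.
import numpy as np

def concentric_rect_descriptor(patch, n_rings=4, n_bins=8):
    """patch: 2-D grayscale array (e.g. a candidate vehicle window)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.mod(np.arctan2(gy, gx), np.pi)            # unsigned orientation
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    # ring index: normalised Chebyshev-like distance from the patch centre
    ring = np.maximum(np.abs(yy - cy) / (h / 2.0), np.abs(xx - cx) / (w / 2.0))
    ring = np.minimum((ring * n_rings).astype(int), n_rings - 1)
    desc = np.zeros(n_rings * n_bins)
    for r in range(n_rings):
        mask = ring == r
        hist, _ = np.histogram(ori[mask], bins=n_bins, range=(0.0, np.pi),
                               weights=mag[mask])
        desc[r * n_bins:(r + 1) * n_bins] = hist / (hist.sum() + 1e-9)
    return desc
```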
Abstract:
Mono-camera tracking systems have proved their capabilities for moving-object trajectory analysis and scene monitoring, but their robustness and semantic possibilities are strongly limited by their local and monocular nature and are often insufficient for realistic surveillance applications. This thesis is aimed at extending the possibilities of moving-object tracking systems to a higher level of scene understanding. The proposed extension comprises two separate directions. The first one is local, since it is aimed at enriching the inferred positions of the moving objects within the area of the monitored scene directly covered by the cameras of the system; this task is achieved through the development of a multi-camera system for robust 3D tracking, able to provide 3D tracking information of multiple simultaneous moving objects from the observations reported by a set of calibrated cameras with semi-overlapping fields of view. The second extension is global, as it is aimed at providing the local observations performed within the field of view of one camera with a global context relating them to a much larger scene; to this end, an automatic camera positioning system relying only on observed object trajectories and a scene map is designed. The two lines of research in this thesis are addressed using Bayesian estimation as a general unifying framework. Its suitability for these two applications is justified by the flexibility and versatility of that stochastic framework, which allows the combination of multiple sources of information about the parameters to estimate in a natural and elegant way, while addressing the uncertainty associated with those sources through the inclusion of observation models designed to this end. In addition, it opens multiple possibilities for the creation of different numerical methods for achieving satisfactory and efficient practical solutions to each addressed application.
The proposed multi-camera 3D tracking method is specifically designed to work on schematic descriptions of the observations performed by each camera of the system: this choice allows the use of unspecific off-the-shelf 2D detection and/or tracking subsystems running independently at each sensor, and makes the proposal suitable for real surveillance networks with moderate computational and transmission capabilities. The robust combination of such noisy, incomplete and possibly unreliable schematic descriptors relies on a Bayesian association method, based on geometry and color, whose results allow the tracking of the targets in the scene with a particle filter. The main features exhibited by the proposal are, first, a remarkable accuracy in terms of target 3D positioning and, second, a great recovery ability after tracking losses due to insufficient input data.
The proposed system for visual-based camera self-positioning uses the observations of moving objects and a schematic map of the passable areas of the environment to infer the absolute sensor position. To this end, a new Bayesian framework combining trajectory observations and map-induced dynamic models for moving objects is designed, an approach to camera positioning not previously addressed in the literature. This task is divided into two sub-tasks, ambiguity analysis with approximate position estimation on the one hand and position refining on the other, since each requires the design of specific sampling algorithms to correctly exploit the discriminative features of the developed framework. The system, designed and demonstrated for camera positioning in urban traffic environments, can also be applied to different environments and to sensors of other modalities after certain adaptations.
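As an illustration of the geometry-plus-colour association stage, here is a hedged sketch scoring one 2D observation against one 3D target hypothesis: a reprojection term is combined with a Bhattacharyya colour-histogram term. The projection function, noise scales and the specific likelihood form are assumptions for illustration, not the thesis' formulation.

```python
# Geometry + colour association score for one observation/target pair.
import numpy as np

def association_loglik(obs_uv, obs_hist, target_xyz, target_hist,
                       project, sigma_px=10.0, sigma_col=0.2):
    """project: camera-specific function mapping world XYZ to pixel (u, v)."""
    du = np.asarray(obs_uv) - np.asarray(project(target_xyz))
    geo = -0.5 * np.dot(du, du) / sigma_px ** 2          # reprojection term
    bc = np.sum(np.sqrt(np.asarray(obs_hist) * np.asarray(target_hist)))
    col = -(1.0 - bc) / sigma_col ** 2                   # Bhattacharyya colour term
    return geo + col
```

In a full system this score would be evaluated for every observation/target pair in each camera, and the resulting association would feed the particle-filter update of the 3D tracks.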
Abstract:
Aircraft tracking plays a key role in the Sense-and-Avoid system of Unmanned Aerial Vehicles (UAVs). This paper presents a novel robust visual tracking algorithm for UAVs in mid-air to track an arbitrary aircraft at real-time frame rates, together with a unique evaluation system. The visual algorithm mainly consists of an adaptive discriminative visual tracking method, a Multiple-Instance (MI) learning approach, a Multiple-Classifier (MC) voting mechanism and a Multiple-Resolution (MR) representation strategy, and is called the Adaptive M3 tracker, i.e. AM3. In this tracker, the importance of the test sample is integrated to improve tracking stability, accuracy and real-time performance. The experimental results show that the algorithm is more robust, efficient and accurate than existing state-of-the-art trackers, overcoming the problems generated by challenging situations such as obvious appearance change, varying surrounding illumination, partial aircraft occlusion, motion blur, rapid pose variation, onboard mechanical vibration, low computation capacity and delayed information communication between UAVs and the Ground Station (GS). To the best of our knowledge, this is the first work to present such a tracker for online learning and tracking of an arbitrary aircraft/intruder from UAVs.
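A minimal sketch of a multiple-classifier voting step in the spirit of the MC mechanism mentioned above: each classifier scores the candidate windows and the highest weighted vote selects the new target location. The classifier interface and weights are illustrative assumptions, not the AM3 formulation.

```python
# Weighted voting over candidate windows by an ensemble of classifiers.
import numpy as np

def vote_for_target(candidates, classifiers, weights):
    """candidates: (N, D) feature vectors; classifiers: callables returning (N,) scores."""
    scores = np.zeros(len(candidates))
    for clf, w in zip(classifiers, weights):
        s = clf(candidates)
        scores += w * (s - s.min()) / (np.ptp(s) + 1e-9)   # normalise each voter
    return int(np.argmax(scores)), scores                  # index of the winning window
```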
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Phase change problems arise in many practical applications such as air-conditioning and refrigeration, thermal energy storage systems and the thermal management of electronic devices. The physical phenomena in such applications are complex and often difficult to study in detail using experimental techniques alone. Efforts to improve computational techniques for analyzing two-phase flow problems with phase change are therefore gaining momentum. The development of numerical methods for multiphase flow has generally been motivated by the need to account more accurately for (a) large topological changes such as phase breakup and merging, (b) sharp representation of the interface and its discontinuous properties and (c) accurate and mass-conserving motion of the interface. In addition to these considerations, numerical simulation of multiphase flow with phase change introduces additional challenges related to discontinuities in the velocity and temperature fields. Moreover, the velocity field is no longer divergence free. For phase change problems, the focus of developmental efforts has thus been on numerically attaining a proper conservation of energy across the interface, in addition to the accurate treatment of the fluxes of mass and momentum conservation and the associated interface advection. Among the initial efforts related to the simulation of bubble growth in film boiling applications, the work in \cite{Welch1995} was based on an interface tracking method using a moving unstructured mesh. That study considered moderate interfacial deformations. A similar problem was subsequently studied using moving, boundary-fitted grids \cite{Son1997}, again for regimes of relatively small topological changes. A hybrid interface tracking method with a moving interface grid overlapping a static Eulerian grid was developed \cite{Juric1998} for the computation of a range of phase change problems including three-dimensional film boiling \cite{esmaeeli2004computations}, multimode two-dimensional pool boiling \cite{Esmaeeli2004} and film boiling on horizontal cylinders \cite{Esmaeeli2004a}. The handling of interface merging and pinch-off, however, remains a challenge with methods that explicitly track the interface. As large topological changes are crucial for phase change problems, attention has turned in recent years to front capturing methods utilizing implicit interfaces, which are more effective in treating complex interface deformations. The VOF (Volume of Fluid) method was adopted in \cite{Welch2000} to simulate the one-dimensional Stefan problem and the two-dimensional film boiling problem. The approach employed a specific model for mass transfer across the interface involving a mass source term within cells containing the interface. This VOF-based approach was further coupled with the level set method in \cite{Son1998}, employing a smeared-out Heaviside function to avoid the numerical instability related to the source term. A coupled level set and volume of fluid method with a diffuse interface approach was used for film boiling with water and R134a at near-critical pressure conditions \cite{Tomar2005}. The effects of superheat and saturation pressure on the frequency of bubble formation were analyzed with this approach. The work in \cite{Gibou2007} used the ghost fluid and level set methods for phase change simulations.
A similar approach was adopted in \cite{Son2008} to study various boiling problems including three-dimensional film boiling on a horizontal cylinder, nucleate boiling in a microcavity \cite{lee2010numerical} and flow boiling in a finned microchannel \cite{lee2012direct}. The work in \cite{tanguy2007level} also used the ghost fluid method and proposed an improved algorithm based on enforcing continuity and a divergence-free condition for the extended velocity field. The work in \cite{sato2013sharp} employed a multiphase model based on volume fraction with an interface sharpening scheme and derived a phase change model based on the local interface area and mass flux. Among the front capturing methods, sharp interface methods have been found to be particularly effective both for implementing sharp jumps and for resolving the interfacial velocity field. However, sharp velocity jumps render the solution susceptible to erroneous oscillations in pressure and also lead to spurious interface velocities. To implement phase change, the work in \cite{Hardt2008} employed point mass source terms derived from a physical basis for the evaporating mass flux. To avoid numerical instability, the authors smeared the mass source by solving a pseudo time-step diffusion equation. This measure, however, led to mass conservation issues due to non-symmetric integration over the distributed mass source region. The problem of spurious pressure oscillations related to point mass sources was also investigated in \cite{Schlottke2008}. Although that method is based on the VOF, the large pressure peaks associated with a sharp mass source were observed to be similar to those for the interface tracking method. Such spurious pressure fluctuations are undesirable because their effect is transmitted globally in incompressible flow. Hence, the pressure field arising from phase change needs to be computed with greater accuracy than is reported in the current literature. The accuracy of interface advection in the presence of an interfacial mass flux (mass flux conservation) has been discussed in \cite{tanguy2007level,tanguy2014benchmarks}. The authors found that the method of extending one phase's velocity to the entire domain, suggested by Nguyen et al. in \cite{nguyen2001boundary}, suffers from a lack of mass flux conservation when the density difference is high. To improve the solution, the authors impose a divergence-free condition on the extended velocity field by solving a constant-coefficient Poisson equation. The approach has shown good results for an enclosed bubble or droplet but is not general for more complex flows and requires the additional solution of a linear system of equations. In the current thesis, an improved approach that addresses both the numerical oscillation of pressure and the spurious interface velocity field is presented, featuring (i) continuous velocity and density fields within a thin interfacial region, (ii) temporal velocity correction steps to avoid an unphysical pressure source term, and (iii) a general mass flux projection correction for improved mass flux conservation. The pressure and temperature gradient jump conditions are treated sharply. A series of one-dimensional and two-dimensional problems are solved to verify the performance of the new algorithm. Two-dimensional and cylindrical film boiling problems are also demonstrated and show good qualitative agreement with experimental observations and heat transfer correlations.
Finally, a study of Taylor bubble flow with heat transfer and phase change in a small vertical tube in axisymmetric coordinates is carried out using the new multiphase phase-change method.
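For reference, here is a hedged sketch of the interface jump conditions that the phase-change schemes surveyed above must enforce: the evaporating mass flux follows from the jump in conductive heat flux across the front, and the density difference then produces a jump in normal velocity, which is why the velocity field is not divergence free at the interface. The property values are rough water/steam numbers at atmospheric pressure, used only for illustration, and sign conventions vary between formulations.

```python
# Interface jump conditions for evaporation (Stefan condition + velocity jump).
H_FG = 2.26e6                  # latent heat of vaporisation (J/kg)
RHO_L, RHO_V = 958.0, 0.6      # liquid / vapour densities (kg/m^3)
K_L, K_V = 0.68, 0.025         # thermal conductivities (W/m K)

def interface_mass_flux(dTdn_liquid, dTdn_vapour):
    """m'' = (k_v dT/dn|_v - k_l dT/dn|_l) / h_fg, with the interface normal
    taken from liquid to vapour; a positive value means net evaporation."""
    return (K_V * dTdn_vapour - K_L * dTdn_liquid) / H_FG

def normal_velocity_jump(m_flux):
    """[u . n] = m'' (1/rho_v - 1/rho_l): the source the pressure solve must see."""
    return m_flux * (1.0 / RHO_V - 1.0 / RHO_L)
```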