971 results for Flow process


Relevance: 30.00%

Abstract:

This study aimed to investigate the effects of pectinase treatment of acai pulp on cross-flow microfiltration (CFMF) performance and on the phytochemical and functional characteristics of its compounds. Fouling mechanisms were analysed using resistance-in-series and blocking law models. The enzymatic treatment was conducted using Ultrazym(R) AFPL (Novozymes A/S) at 500 mg kg⁻¹ of acai pulp for 30 min at 35 °C. Before microfiltration, untreated and enzyme-treated acai pulps were diluted in distilled water (1:3, w/v). CFMF runs were conducted using commercial α-alumina (α-Al2O3) ceramic membranes (Andritz AG, Austria) with 0.2 µm and 0.8 µm pore sizes and 0.0047 m² of filtration area. The microfiltration unit was operated in batch mode for 120 min at 25 °C under a transmembrane pressure of ΔP = 100 kPa and a cross-flow velocity of 3 m s⁻¹ in turbulent flow. The highest permeate flux and accumulated permeate volume were obtained using enzyme-treated pulp and the 0.2 µm pore size membrane, with steady flux values exceeding 100 L h⁻¹ m⁻². For the 0.8 µm pore size membrane, the estimated total resistance after microfiltration of enzyme-treated acai pulp was 21% lower than for the untreated pulp; for the 0.2 µm pore size membrane, it was 18% lower. Cake filtration was the dominant mechanism in the early stages of most of the CFMF runs; after approximately 20 min, however, intermediate pore blocking and complete pore blocking also contributed to the overall fouling. The reduction of the antioxidant capacity of the permeates obtained after microfiltration of the enzyme-treated pulp was higher (p < 0.01) than that obtained using untreated pulp. For total polyphenols, on the contrary, the permeates obtained after microfiltration of the enzyme-treated pulp showed a lower mean reduction (p < 0.01) than those from the untreated pulp. The results show that the enzymatic treatment had a positive effect on the CFMF of acai pulp. (C) 2012 Elsevier Ltd. All rights reserved.
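
The resistance-in-series analysis referred to above is conventionally based on Darcy's law, J = ΔP/(μ·R). The short Python sketch below shows how membrane, cake and irreversible fouling resistances could be separated from flux measurements; all flux values are hypothetical placeholders, not data from the study.

# Resistance-in-series estimate for cross-flow microfiltration (illustrative sketch).
# Darcy's law: J = dP / (mu * R)  =>  R = dP / (mu * J).
# All flux values below are hypothetical placeholders, not data from the study.

MU_WATER = 0.89e-3   # Pa*s, viscosity of water at 25 degC
DP = 100e3           # Pa, transmembrane pressure (100 kPa, as in the study)

def resistance(flux_lmh, dp=DP, mu=MU_WATER):
    """Hydraulic resistance (1/m) from a permeate flux given in L h^-1 m^-2."""
    j = flux_lmh / (1000.0 * 3600.0)   # convert L h^-1 m^-2 to m s^-1
    return dp / (mu * j)

r_m    = resistance(600.0)          # hypothetical clean-water flux of the new membrane
r_tot  = resistance(100.0)          # steady flux during microfiltration of the pulp
r_irr  = resistance(250.0) - r_m    # hypothetical water flux after rinsing -> irreversible fouling
r_cake = (r_tot - r_m) - r_irr      # remaining (reversible) cake resistance

print(f"R_m = {r_m:.2e}  R_cake = {r_cake:.2e}  R_irr = {r_irr:.2e}  (all in 1/m)")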

Relevance: 30.00%

Abstract:

Continuous enzymatic interesterification is an alternative to chemical interesterification as a lipid modification technology that is economically viable for large-scale use. A blend of 70% lard and 30% soybean oil was submitted to continuous enzymatic interesterification in a glass tubular bioreactor at flow rates ranging from 0.5 to 4.5 mL/min. The original mixture and the reaction products obtained were examined to determine melting and crystallization behavior by DSC, and analyzed for regiospecific fatty acid distribution. Continuous enzymatic interesterification changed the mixture, forming a new triacylglycerol composition, as verified by the DSC curves and by the variation in enthalpy of melting. The regiospecific distribution of fatty acids was changed by flow variations in the reactor. In the continuous reaction, the flow rate of 4.5 mL/min was more advantageous than slower flow rates, reducing acyl migration and increasing process productivity. (C) 2011 Elsevier B.V. All rights reserved.
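
For orientation, the effect of flow rate on contact time in a packed tubular bioreactor can be illustrated with a simple residence-time calculation; the 15 mL void volume below is a hypothetical figure, not a value reported in the study.

# Mean residence time = void volume / volumetric flow rate (illustrative only).
VOID_VOLUME_ML = 15.0   # hypothetical free volume of the packed bed, mL

for flow_ml_min in (0.5, 1.5, 3.0, 4.5):
    residence_min = VOID_VOLUME_ML / flow_ml_min
    print(f"{flow_ml_min:.1f} mL/min  ->  mean residence time ~ {residence_min:.0f} min")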

Relevance: 30.00%

Abstract:

OBJECTIVE: To compare the effects of glimepiride and metformin on vascular reactivity, hemostatic factors, and glucose and lipid profiles in patients with type 2 diabetes. METHODS: A prospective study was performed in 16 patients with uncontrolled type 2 diabetes previously treated with dietary intervention alone. The participants were randomized to metformin or glimepiride therapy. After four months, the patients were crossed over, with no washout period, to the alternative treatment for an additional four-month period on similar dosage schedules. The following variables were assessed before and after four months of each treatment: 1) fasting glycemia, insulin, catecholamines, lipid profiles and HbA1 levels; 2) t-PA and PAI-1 (antigen and activity), platelet aggregation, and fibrinogen and plasminogen levels; and 3) the flow indices of the carotid and brachial arteries. In addition, at the end of each period, a 12-hour metabolic profile was obtained after fasting and every 2 hours thereafter. RESULTS: Both therapies resulted in similar decreases in fasting glucose, triglyceride and norepinephrine levels, and both increased the fibrinolytic factor plasminogen while decreasing t-PA activity. Metformin caused lower insulin and pro-insulin levels and higher glucagon levels, and increased systolic carotid diameter and blood flow. Neither metformin nor glimepiride affected endothelium-dependent or endothelium-independent vasodilation of the brachial artery. CONCLUSIONS: Glimepiride and metformin were effective in improving glucose and lipid profiles and norepinephrine levels. Metformin afforded more protection against macrovascular diabetes complications, increased systolic carotid artery diameter and total and systolic blood flow, and decreased insulin levels. As both therapies increased plasminogen levels but reduced t-PA activity, a coagulation process was likely still ongoing.

Relevance: 30.00%

Abstract:

Solar reactors can be attractive for photodegradation processes owing to their lower electrical energy demand. The performance of a solar reactor under two flow configurations, i.e., plug flow and mixed flow, is compared on the basis of experimental results with a pilot-scale solar reactor. Aqueous solutions of phenol were used as a model for industrial wastewater containing organic contaminants. Batch experiments were carried out under clear sky, resulting in removal rates in the range of 96-100%. The dissolved organic carbon removal rate was simulated by an empirical model based on neural networks, which was fitted to the experimental data with a correlation coefficient of 0.9856. This approach made it possible to estimate the effects of process variables that could not be evaluated from the experiments. Simulations with different reactor configurations indicated relevant aspects for the design of solar reactors.
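
As an illustration of this kind of empirical modelling step, the sketch below fits a small feed-forward neural network to synthetic dissolved-organic-carbon (DOC) removal data; the input variables, their ranges, the functional form of the synthetic response and the network size are all assumptions for demonstration, not the authors' model or data.

# Illustrative empirical neural-network model for DOC removal (toy data only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical process variables: accumulated UV dose and initial phenol concentration.
n = 200
uv_dose = rng.uniform(0.0, 50.0, n)     # kJ/L, assumed range
phenol0 = rng.uniform(10.0, 100.0, n)   # mg/L, assumed range
X = np.column_stack([uv_dose, phenol0])

# Synthetic DOC removal (%) with saturating behaviour plus noise, for illustration only.
y = 100.0 * (1.0 - np.exp(-0.08 * uv_dose)) * (1.0 - 0.002 * phenol0) + rng.normal(0.0, 2.0, n)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print("R^2 on the training data:", round(r2_score(y, model.predict(X)), 4))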

Relevance: 30.00%

Abstract:

[EN] We analyze the discontinuity preserving problem in TV-L1 optical flow methods. These methods typically create rounded effects at flow boundaries, which then do not coincide with object contours. A simple strategy to overcome this problem consists in inhibiting the diffusion at high image gradients. In this work, we first introduce a general framework for TV regularizers in optical flow and relate it to some standard approaches. Our survey covers several methods that use decreasing functions to mitigate the diffusion at image contours. However, this kind of strategy may produce instabilities in the estimation of the optical flow. Hence, we study the problem of instabilities and show that it actually arises from an ill-posed formulation. From this study, different schemes to solve the problem can be derived. One of these consists in separating the pure TV process from the mitigating strategy; this has been used in previous work, and we show here that it performs well. Furthermore, we propose two alternatives to avoid the instability problems: (i) a fully automatic approach that solves the problem based on the information of the whole image; (ii) a semi-automatic approach that takes into account the image gradients in a close neighborhood, adapting the parameter at each position. In the experimental results, we present a detailed study and comparison of the different alternatives. These methods provide very good results, especially for sequences with a few dominant gradients. Additionally, a surprising effect of these approaches is that they can cope with occlusions. This can easily be achieved by using strong regularization and high penalization at image contours.
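
For reference, the image-driven weighting discussed above is commonly written into the TV-L1 energy in the following generic form (a standard formulation, not necessarily the exact energy analysed in this work):

E(u) = \int_{\Omega} \lambda \, \big| I_1(\mathbf{x} + u(\mathbf{x})) - I_0(\mathbf{x}) \big|
       + g\big(|\nabla I_0(\mathbf{x})|\big) \, |\nabla u(\mathbf{x})| \, \mathrm{d}\mathbf{x},
\qquad
g(s) = \exp\!\left(-\alpha \, s^{\beta}\right),

where u is the flow field, \lambda weights the data term against the regularizer, and \alpha, \beta control how strongly diffusion is inhibited at image contours; the instabilities discussed above arise precisely when g is allowed to approach zero at strong gradients.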

Relevance: 30.00%

Abstract:

The majority of carbonate reservoirs are oil-wet, which is an unfavorable condition for oil production. Generally, the total oil recovery after both primary and secondary recovery in an oil-wet reservoir is low, so the amount of oil still producible by enhanced oil recovery techniques is large. Alkaline substances have been shown to reverse rock wettability from oil-wet to water-wet, a favorable condition for oil production. However, the wettability reversal mechanism requires an uneconomically long aging period to reach the maximum reversal condition. Intermittent flow with an optimal pausing period is therefore combined with alkali flooding (the combination technique) to enhance the wettability reversal mechanism and, as a consequence, improve oil recovery. The aims of this study are to evaluate the efficiency of the combination technique and to study the parameters that affect the method. In order to implement alkali flooding, reservoir rock and fluid properties were gathered, e.g. interfacial tension of fluids, rock wettability, etc. Flooding efficiency curves obtained from core flooding were used as the main criterion for evaluating the performance of the technique. The combination technique improves oil recovery when the alkali concentration is lower than 1 wt% (where the wettability reversal mechanism is dominant). The soap plug that appears when high alkali concentrations are used is absent in this combination, as indicated by the absence of a drop in production rate. Moreover, the use of a low alkali concentration limits alkali loss. This combination probably also improves oil recovery in fractured carbonate reservoirs in which oil is produced uneconomically. The results of the current study indicate that the combination technique is an option that can improve production from carbonate reservoirs while consuming less alkali.

Relevance: 30.00%

Abstract:

Society's increasing aversion to technological risk requires the development of inherently safer and environmentally friendlier processes, while assuring the economic competitiveness of industrial activities. The different forms of impact (e.g. environmental, economic and societal) are frequently characterized by conflicting reduction strategies and must be taken into account holistically in order to identify the optimal solutions in process design. Although the literature reports an extensive discussion of strategies and specific principles, quantitative assessment tools are required to identify the marginal improvements of alternative design options, to allow trade-offs among contradictory aspects and to prevent the “risk shift”. In the present work a set of integrated quantitative tools for design assessment (i.e. a design support system) was developed. The tools were specifically dedicated to the implementation of sustainability and inherent safety in process and plant design activities, with respect to chemical and industrial processes in which substances dangerous to humans and the environment are used or stored. The tools were mainly devoted to application in the “conceptual” and “basic design” stages, when the project is still open to changes (owing to the large number of degrees of freedom) that may include strategies to improve sustainability and inherent safety. The set of developed tools covers different phases of the design activity throughout the lifecycle of a project (inventories, process flow diagrams, preliminary plant layout plans). The development of such tools makes a substantial contribution to filling the present gap in the availability of sound support for implementing safety and sustainability in the early phases of process design. The proposed decision support system is based on a set of leading key performance indicators (KPIs), which allow the assessment of the economic, societal and environmental impacts of a process (i.e. its sustainability profile). The KPIs are based on impact models (including complex ones), but are easy and quick to apply in practice. Their full evaluation is possible even from the limited data available during early process design. Innovative reference criteria were developed to compare and aggregate the KPIs on the basis of the actual site-specific impact burden and the sustainability policy. Particular attention was devoted to the development of reliable criteria and tools for the assessment of inherent safety in different stages of the project lifecycle. The assessment follows an innovative approach to the analysis of inherent safety, based on both the calculation of the expected consequences of potential accidents and the evaluation of the hazards related to equipment. The methodology overcomes several problems present in previously proposed methods for quantitative inherent safety assessment (use of arbitrary indexes, subjective judgement, built-in assumptions, etc.). A specific procedure was defined for the assessment of the hazards related to the formation of undesired substances in chemical systems undergoing “out of control” conditions. In the assessment of layout plans, “ad hoc” tools were developed to account for the hazard of domino escalation and for safety economics.
The effectiveness and value of the tools were demonstrated by applying them to a large number of case studies concerning different kinds of design activities (choice of materials; design of the process, of the plant and of the layout) and different types of processes/plants (chemical industry, storage facilities, waste disposal). An experimental survey (analysis of the thermal stability of isomers of nitrobenzaldehyde) provided the input data necessary to demonstrate the method for inherent safety assessment of materials.
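
As a schematic illustration of how normalized KPIs can be compared and aggregated into a single score, the sketch below combines hypothetical economic, environmental and societal indicators; the KPI names, reference burdens and weights are invented for illustration and do not reproduce the thesis' actual criteria.

# Illustrative aggregation of normalized key performance indicators (KPIs).
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    value: float       # impact estimated for the design option
    reference: float   # site-specific reference burden used for normalization
    weight: float      # relative importance set by the sustainability policy

def aggregate(kpis):
    """Weighted sum of normalized impacts; lower is better."""
    total_weight = sum(k.weight for k in kpis)
    return sum(k.weight * (k.value / k.reference) for k in kpis) / total_weight

option_a = [
    KPI("economic: annualized cost (M EUR/y)", 4.2, 5.0, 0.4),
    KPI("environmental: CO2 emissions (kt/y)", 18.0, 25.0, 0.3),
    KPI("societal: inherent safety index (-)", 0.6, 1.0, 0.3),
]
print(f"aggregated impact score: {aggregate(option_a):.3f}")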

Relevance: 30.00%

Abstract:

Aerosol particles and water vapour are two important constituents of the atmosphere. Their interaction, i.e. the condensation of water vapour on particles, brings about the formation of cloud, fog and raindrops, driving the water cycle on Earth and contributing to climate change. Understanding the roles of water vapour and aerosol particles in this interaction has become an essential part of understanding the atmosphere. In this work, heterogeneous nucleation on pre-existing aerosol particles by the condensation of water vapour in the flow of a capillary nozzle was investigated, including theoretical and numerical modelling as well as experiments on the condensation process. Based on the results of the theoretical and numerical modelling, a new nozzle condensation nucleus counter (Nozzle-CNC), which uses the capillary nozzle to create an expanding, water-saturated air flow, was proposed, and various experiments were carried out with it under different experimental conditions. Firstly, the air stream in the long capillary nozzle, with an inner diameter of 1.0 mm, was modelled as a steady, compressible and heat-conducting turbulent flow with the CFX-FLOW3D computational program. An adiabatic and isentropic cooling in the nozzle was found. A supersaturation can be created in the nozzle if the inlet flow is water saturated, and its value depends principally on the flow velocity or flow rate through the nozzle. Secondly, a model for particle condensational growth in the air stream was developed. An extended Mason diffusion growth equation was given, with a size correction for particles beyond the continuum regime and a correction for finite particle Reynolds number in an accelerating flow. The modelling results show rapid condensational growth of aerosol particles, especially of fine particles, in the nozzle stream. On the one hand, this may induce evident 'over-sizing' and 'over-numbering' effects in aerosol measurements, as nozzle designs are widely employed to produce accelerating and focused aerosol beams in instruments such as the optical particle counter (OPC) and the aerodynamic particle sizer (APS). On the other hand, it can be applied in constructing the Nozzle-CNC. Thirdly, based on the optimisation of the theoretical and numerical results, the new Nozzle-CNC was built, and experiments with the instrument were carried out under various conditions of flow rate, ambient temperature and fraction of aerosol in the total flow. An interesting exponential relation was found between the saturation in the nozzle and the number concentration of atmospheric nuclei, including hygroscopic nuclei (HN), cloud condensation nuclei (CCN) and traditionally measured atmospheric condensation nuclei (CN). This relation differs from the relation for the number concentration of CCN obtained by other researchers. The minimum detectable size of the Nozzle-CNC is 0.04 µm. Although further improvements are still needed, this Nozzle-CNC has several advantages over other CNCs: no condensation delay, since particles larger than the critical size grow simultaneously; low diffusion losses of particles; little water condensation on the inner wall of the instrument; adjustable saturation and therefore a wide counting range; and no need for the calibration required with non-water condensing substances.
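
For reference, the classical Mason diffusion growth equation, to which the size and Reynolds-number corrections mentioned above are applied, reads (standard cloud-physics form, not the extended equation derived in this work):

r \frac{\mathrm{d}r}{\mathrm{d}t} = \frac{S - 1}{F_k + F_d},
\qquad
F_k = \left( \frac{L_v}{R_v T} - 1 \right) \frac{L_v \rho_w}{k_a T},
\qquad
F_d = \frac{\rho_w R_v T}{D_v \, e_s(T)},

where r is the droplet radius, S the saturation ratio, L_v the latent heat of vaporisation, \rho_w the density of liquid water, k_a the thermal conductivity of air, D_v the diffusivity of water vapour, R_v the specific gas constant of water vapour, and e_s(T) the saturation vapour pressure.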

Relevance: 30.00%

Abstract:

In this work a generally applicable method for the preparation of mucoadhesive micropellets of 250 to 600 µm diameter is presented, using rotor processing without the use of electrolytes. The mucoadhesive micropellets were developed to combine the advantages of mucoadhesion and microparticles. It was possible to produce mucoadhesive micropellets based on the different mucoadhesive polymers Na-CMC, Na-alginate and chitosan. These micropellets are characterized by a lower friability (6 to 17%) compared with industrially produced cellulose pellets (Cellets®, 41.5%). They show high tapped density and can be manufactured in high yields. The most influential process variables are the water content at the end of the spraying period (determined by the amount of liquid binder, the spraying rate, the inlet air temperature, the airflow and the humidity of the inlet air) and the addition of the liquid binder (determined by the spraying rate, the rotor speed and the type of rotor disc). In a subsequent step a fluidized-bed coating process was developed. A stable process could be established in the Hüttlin Mycrolab®, in contrast to the Mini-Glatt® apparatus. To achieve enteric resistance, coating levels of 70% for Na-CMC micropellets, 85% for chitosan micropellets and 140% for Na-alginate micropellets, based on the amount of the starting micropellets, were necessary. Comparative dissolution experiments on the mucoadhesive micropellets were performed using the paddle apparatus with and without a sieve inlay, the basket apparatus, the reciprocating cylinder and the flow-through cell. The paddle apparatus and the modified flow-through cell method turned out to be suitable methods for the dissolution of mucoadhesive micropellets. All dissolution profiles showed an initial burst release followed by a slow, diffusion-controlled release. Depending on the method, the dissolution profiles changed from immediate release to slow release. The dissolution rate in the paddle apparatus was mainly influenced by the agitation rate, whereas the flow-through cell pattern was mainly influenced by the particle size. In addition, the logP and HLB values of different emulsifiers were correlated in order to transfer HLB values of excipients into logP values and logP values of APIs into HLB values; these experiments did not show promising results. Finally, it was shown that mucoadhesive micropellets can be manufactured successfully, resulting in a product characterized by enteric resistance combined with high yields and convincing morphology.

Relevance: 30.00%

Abstract:

The meaning of a place has commonly been tied to rootedness, or a sense of belonging to that setting. Nowadays, by contrast, people are more concerned with the possibilities of free movement and networks of communication. The meaning, as well as the materiality, of architecture has thus been dramatically altered by these forces. It is therefore significant to explore and redefine the sense and direction of architecture in the age of flow. In this dissertation we first review the gradually changing concept of "place-non-place" and its underlying technological basis. We then portray the transformation of the meaning of architecture as influenced by media, information technology and advanced methods of mobility at the dawn of the 21st century. Against this backdrop, there is a need to sort and analyze architectural practices in response to the triad of place, non-place and the space of flow, which we aim to achieve conclusively. We also trace the concept of flow in the process of formation and transformation of old cities. As a case study, we look at the Persian Bazaar from a socio-architectural point of view. In other words, based on Robert Putnam's theory of social capital, we link the social context of the Bazaar with the architectural configuration of cities. That is how we argue that "cities as flow" are not necessarily a new paradigm.

Relevance: 30.00%

Abstract:

This thesis analyses problems related to the applicability of Process Mining tools and techniques in business environments. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying circumstances where problems can emerge: data preparation, the actual mining, and the interpretation of results. Other problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigated the data preparation problem and proposed a solution for the identification of the "case-ids" whenever this field is not explicitly indicated. After that, we concentrated on problems at mining time and propose the generalization of a well-known control-flow discovery algorithm in order to exploit non-instantaneous events. The use of interval-based recording leads to an important improvement in performance. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for extending a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches. Two actual mining algorithms are proposed: the first adapts a frequency counting algorithm to the control-flow discovery problem; the second constitutes a framework of models which can be used for different kinds of streams (stationary versus evolving).
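
As one plausible instantiation of the frequency-counting idea mentioned above, the sketch below maintains approximate directly-follows counts over an event stream with bounded memory, in the style of Lossy Counting; it is a generic illustration, not the thesis' actual algorithm.

# Approximate directly-follows counting over an event stream (Lossy-Counting style sketch).
import math

class DirectlyFollowsCounter:
    def __init__(self, error=0.01):
        self.bucket_width = math.ceil(1.0 / error)   # events per bucket
        self.counts = {}          # (a, b) -> [count, max_error]
        self.last_activity = {}   # case id -> last activity seen for that case
        self.n_events = 0

    def observe(self, case_id, activity):
        self.n_events += 1
        bucket = math.ceil(self.n_events / self.bucket_width)   # current bucket (1-based)
        prev = self.last_activity.get(case_id)
        if prev is not None:
            pair = (prev, activity)
            if pair in self.counts:
                self.counts[pair][0] += 1
            else:
                self.counts[pair] = [1, bucket - 1]
        self.last_activity[case_id] = activity
        if self.n_events % self.bucket_width == 0:   # prune infrequent pairs at bucket boundaries
            self.counts = {p: c for p, c in self.counts.items() if c[0] + c[1] > bucket}

# Hypothetical usage on a stream of (case id, activity) events:
stream = [("c1", "A"), ("c2", "A"), ("c1", "B"), ("c2", "C"), ("c1", "C")]
dfc = DirectlyFollowsCounter(error=0.1)
for case_id, activity in stream:
    dfc.observe(case_id, activity)
print({pair: c[0] for pair, c in dfc.counts.items()})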

Relevance: 30.00%

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by the large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, and uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
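
As a simple illustration of the kind of alignment needed to account for transport delays and sensor lags, the sketch below estimates the delay between two transient signals by cross-correlation and shifts the lagged signal back into alignment; the signal names, sample rate and delay are synthetic assumptions, not measurements from the engine.

# Transport-delay estimation between two transient signals via cross-correlation (illustrative).
import numpy as np

def estimate_delay(reference, delayed):
    """Return the lag (in samples) at which `delayed` best matches `reference`."""
    ref = reference - reference.mean()
    dly = delayed - delayed.mean()
    corr = np.correlate(dly, ref, mode="full")   # lags from -(n-1) .. (n-1)
    return int(np.argmax(corr)) - (len(ref) - 1)

# Synthetic example: an opacity-like signal lagging a fuelling command by 12 samples.
rng = np.random.default_rng(1)
t = np.arange(500)
command = np.sin(2 * np.pi * t / 100.0)
opacity = np.roll(command, 12) + 0.05 * rng.standard_normal(t.size)

lag = estimate_delay(command, opacity)
aligned = np.roll(opacity, -lag)   # shift the lagged signal back into alignment
print("estimated transport delay:", lag, "samples")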

Relevance: 30.00%

Abstract:

Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous effort has been devoted over the last fifty years to the development of accurate models, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from various models often show a large error in comparison to experimental data. Thus, even nowadays, the process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude these polymers by trial and error is costly and time consuming. In order to reduce the time and experimental work required to optimize the process parameters and the geometry of the extruder channel for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years ago; that work had only limited success because of the computational power and mathematical algorithms available at the time. The dramatic improvement of computational power and mathematical knowledge now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC. In order to verify the numerical predictions from the full 3-D simulations, a detailed experimental study was performed, comprising Maddock screw-freezing experiments, Screw Simulator experiments and material characterization experiments. Maddock screw-freezing experiments were performed to visualize the melting profile along the single-screw extruder channel for different screw geometry configurations; these melting profiles were compared with the simulation results. Screw Simulator experiments were performed to collect shear stress and melting flux data for various polymers. Cone-and-plate viscometer experiments were performed to obtain the shear viscosity data needed in the simulations. An optimization code was developed to optimize two screw geometry parameters, namely the screw lead (pitch) and the channel depth in the metering section of a single-screw extruder, such that the output rate of the extruder was maximized without exceeding the maximum temperature specified at the exit of the extruder. This optimization code used a mesh partitioning technique to obtain the flow domain, and the simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.
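
The geometry optimization described above can be framed as a constrained maximization. The sketch below shows one way such a search could be set up, with toy surrogate functions standing in for the full three-dimensional two-phase flow simulation; the functional forms, bounds and temperature limit are assumptions for illustration only.

# Constrained screw-geometry optimization sketch: maximize output rate subject to an exit-temperature limit.
from scipy.optimize import minimize

T_MAX = 220.0   # degC, hypothetical allowable melt temperature at the die

def output_rate(x):
    lead, depth = x   # mm
    return 0.8 * lead * depth - 0.02 * (lead * depth) ** 1.5   # toy surrogate, kg/h

def exit_temperature(x):
    lead, depth = x
    return 180.0 + 0.4 * lead + 12.0 / depth                   # toy surrogate, degC

result = minimize(
    lambda x: -output_rate(x),    # maximize by minimizing the negative
    x0=[40.0, 4.0],               # initial screw lead and channel depth, mm
    bounds=[(20.0, 80.0), (2.0, 8.0)],
    constraints=[{"type": "ineq", "fun": lambda x: T_MAX - exit_temperature(x)}],
    method="SLSQP",
)
lead_opt, depth_opt = result.x
print(f"lead = {lead_opt:.1f} mm, depth = {depth_opt:.1f} mm, "
      f"rate = {output_rate(result.x):.1f} kg/h, T_exit = {exit_temperature(result.x):.1f} degC")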

Relevance: 30.00%

Abstract:

As the demand for miniature products and components continues to increase, the need for manufacturing processes that can provide them has also increased. To meet this need, successful macroscale processes are being scaled down and applied at the microscale. Unfortunately, many challenges have been encountered when directly scaling down macro processes. Initially, frictional effects were believed to be the largest challenge; in recent studies, however, the greatest challenge has been found to be size effects. Size effect is a broad term that largely refers to the thickness of the material being formed and to how this thickness directly affects the product dimensions and manufacturability. At the microscale, the thickness becomes critical because of the reduced number of grains. When surface contact between the forming tools and the material blank occurs at the macroscale, there is enough material (hundreds of layers of grains) across the blank thickness to compensate for material flow and the effect of grain orientation. At the microscale, there may be fewer than 10 grains across the blank thickness. With a decreased number of grains across the thickness, the influence of grain size, shape and orientation is significant. Any material defects (either naturally occurring or introduced during material preparation) play a significant role in altering the forming potential. To date, various micro metal forming and micro materials testing equipment setups have been constructed in the Michigan Tech lab. Initially, the research focus was to create a micro deep drawing setup with a view to building micro sensor encapsulation housings. The research focus then shifted to micro metal materials testing equipment, including the construction and testing of the following setups: a micro mechanical bulge test, a micro sheet tension test (testing micro tensile bars), a micro strain analysis (with the use of optical lithography and chemical etching) and a micro sheet hydroforming bulge test. Recently, the focus has shifted to the study of a micro tube hydroforming process, targeting fuel cell, medical and sensor encapsulation applications. While the tube hydroforming process is widely understood at the macroscale, the microscale process offers significant challenges in terms of size effects. Current work applies direct current to enhance formability in micro tube hydroforming; preliminary trials of adding direct current to various metal forming operations have shown remarkable results, and the focus of current research is to determine the validity of this process.

Relevance: 30.00%

Abstract:

Lava flow modeling can be a powerful tool in hazard assessments; however, the ability to produce accurate models is usually limited by a lack of high resolution, up-to-date Digital Elevation Models (DEMs). This is especially obvious in places such as Kilauea Volcano (Hawaii), where active lava flows frequently alter the terrain. In this study, we use a new technique to create high resolution DEMs on Kilauea using synthetic aperture radar (SAR) data from the TanDEM-X (TDX) satellite. We convert raw TDX SAR data into a geocoded DEM using GAMMA software [Werner et al., 2000]. This process can be completed in several hours and permits creation of updated DEMs as soon as new TDX data are available. To test the DEMs, we use the Harris and Rowland [2001] FLOWGO lava flow model combined with the Favalli et al. [2005] DOWNFLOW model to simulate the 3-15 August 2011 eruption on Kilauea's East Rift Zone. Results were compared with simulations using the older, lower resolution 2000 SRTM DEM of Hawaii. Effusion rates used in the model are derived from MODIS thermal infrared satellite imagery. FLOWGO simulations using the TDX DEM produced a single flow line that matched the August 2011 flow almost perfectly, but could not recreate the entire flow field due to the relatively high DEM noise level. The issues with short model flow lengths can be resolved by filtering noise from the DEM. Model simulations using the outdated SRTM DEM produced a flow field that followed a different trajectory to that observed. Numerous lava flows have been emplaced at Kilauea since the creation of the SRTM DEM, leading the model to project flow lines in areas that have since been covered by fresh lava flows. These results show that DEMs can quickly become outdated on active volcanoes, but our new technique offers the potential to produce accurate, updated DEMs for modeling lava flow hazards.
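
Satellite-derived effusion rates of the kind used above are commonly obtained from a heat-budget relation of the following general form, converting radiant power retrieved from MODIS thermal imagery into a time-averaged discharge rate (a standard formulation following Harris and co-workers, not necessarily the exact coefficients used in this study):

\mathrm{TADR} = \frac{Q_{\mathrm{rad}}}{\rho \left( c_p \, \Delta T + \varphi \, c_L \right)},

where Q_rad is the lava radiant power, \rho the lava density, c_p its specific heat capacity, \Delta T the temperature drop across the active flow, \varphi the mass fraction of crystals grown while cooling through \Delta T, and c_L the latent heat of crystallisation.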