922 results for Empirical Flow Models


Relevance:

90.00%

Publisher:

Abstract:

We consider the numerical treatment of the optical flow problem by evaluating the performance of the trust region method versus the line search method. To the best of our knowledge, the trust region method is studied here for the first time for variational optical flow computation. Four different optical flow models are used to test the performance of the proposed algorithm, combining linear and nonlinear data terms with quadratic and TV regularization. We show that trust region often performs better than line search, especially in the presence of nonlinearity and nonconvexity in the model.
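
As a point of reference (not the paper's implementation), the two step-control philosophies can be contrasted with SciPy on a small nonconvex test function such as Rosenbrock's: "trust-ncg" restricts each step to a region where the local quadratic model is trusted, while "BFGS" relies on a line search.

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

    x0 = np.array([-1.2, 1.0])  # standard starting point for the Rosenbrock test function

    # Line-search quasi-Newton: BFGS builds a Hessian approximation, step lengths come from a line search
    ls = minimize(rosen, x0, jac=rosen_der, method="BFGS")

    # Trust-region Newton-CG: each step is confined to a region where the quadratic model is trusted
    tr = minimize(rosen, x0, jac=rosen_der, hess=rosen_hess, method="trust-ncg")

    print("line search :", ls.nit, "iterations, f* =", ls.fun)
    print("trust region:", tr.nit, "iterations, f* =", tr.fun)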


Relevance:

90.00%

Publisher:

Abstract:

The thesis combines the valuation and behavioral economics literatures, which is not common in Finnish management accounting research. Furthermore, valuation is studied in a biotechnology context, and such studies are rather rare as well. The thesis examines valuation in the Finnish biotechnology industry. The concepts of behavioral finance are employed in the empirical part of the study to explore decision-makers' behavior in valuation processes. The main interest of this study is to explore how the subjectivity of a decision-maker affects valuation in the biotechnology industry. Valuation is studied from two perspectives. First, what is the best valuation model for biotechnology companies suggested by the valuation literature? Second, how is valuation done in practice in the biotechnology industry, and how does the decision-makers' subjectivity affect it? The literature review aims at identifying the best valuation model. Real options were found to be the most suitable valuation model for biotechnology companies, especially in the early stages of product development. The ability of real options to take the value of the inherent options into account yields the most theoretically correct valuations. The only disadvantage is the model's complexity when compared to other models, such as discounted cash flow models. The empirical part of the study consists of a case study, which examines the valuation practices of Finnish biotechnology companies. When it comes to the valuation models used in practice, it was found that the companies were using rather simple valuation models, for two reasons. First, the interviewees did not believe in the valuation models and, second, they were familiar neither with the most sophisticated models nor with all the theoretical aspects of the models they were using. The material for the study was collected through theme interviews with four CEOs of highly successful Finnish biotechnology companies. Strong signs of the decision-makers' subjectivity in valuation were observed. Most obvious were the signs of framing; furthermore, herding, excessive optimism, and overconfidence were present. All the behavioral concepts observed most likely have a severe effect on the valuation. As a result, the valuation can easily become overly optimistic, which leads to overvalued investments and to the continuation of already unprofitable projects. The evidence for framing was the strongest: if the product being valued is framed successfully, the risk of overvaluation is high, and a strong belief can justify almost any value.
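
A toy illustration (hypothetical figures, not taken from the thesis) of why the real-options view tends to value early-stage biotech projects above a plain discounted-cash-flow figure: the option to abandon after a failed trial truncates the downside.

    # Hypothetical two-stage drug project: pay I0 now for a trial that succeeds with probability p;
    # on success, a follow-on investment I1 unlocks a commercialisation value V (all present values).
    p, I0, I1, V = 0.3, 10.0, 40.0, 200.0   # illustrative figures, in millions

    # Plain DCF of the committed plan: both investments are treated as certain outflows
    dcf_value = -I0 - I1 + p * V

    # Real-options view: the second investment is made only after a successful trial,
    # so the failure branch costs I0 alone (the abandonment option truncates the downside)
    real_option_value = -I0 + p * max(V - I1, 0.0)

    print(f"plain DCF value  : {dcf_value:6.1f} M")          # 10.0 M with these figures
    print(f"real-option value: {real_option_value:6.1f} M")  # 38.0 M with these figures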

Relevance:

90.00%

Publisher:

Abstract:

Knowledge of slug flow characteristics is very important when designing pipelines and process equipment. When the intermittency typical of slug flow occurs, the fluctuations of the flow variables bring additional concern to the designer. Focusing on this subject, the present work reports experimental data on slug flow characteristics obtained in a large-size, large-scale facility. The results were compared with data provided by mechanistic slug flow models in order to verify their reliability when modelling actual flow conditions. Experiments were done with natural gas and with oil or water as the liquid phase. To compute the frequency and velocity of the slug cell and the lengths of the elongated bubble and liquid slug, two pressure transducers measured the pressure drop across the pipe diameter at different axial locations. A third pressure transducer measured the pressure drop between two axial locations 200 m apart. The experimental data were compared with results of Camargo's algorithm (1991, 1993), which uses the basics of Dukler & Hubbard's (1975) slug flow model, and with those calculated by the transient two-phase flow simulator OLGA.
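
One common way to post-process two such wall-pressure signals is sketched below (synthetic signals and assumed sampling parameters, not the facility's data): the slug translational velocity follows from the lag that maximizes the cross-correlation between the two transducers, and the slug frequency from the number of slug passages per unit time.

    import numpy as np

    fs = 1000.0        # sampling frequency, Hz (assumed)
    spacing = 2.0      # axial distance between the two transducers, m (assumed)

    # Synthetic transducer signals: slug passages at 0.5 Hz, the downstream trace delayed by 0.4 s
    t = np.arange(0.0, 60.0, 1.0 / fs)
    base = (np.sin(2 * np.pi * 0.5 * t) > 0.8).astype(float)
    rng = np.random.default_rng(0)
    p1 = base + 0.05 * rng.standard_normal(t.size)                           # upstream transducer
    p2 = np.roll(base, int(0.4 * fs)) + 0.05 * rng.standard_normal(t.size)   # downstream transducer

    # Lag of maximum cross-correlation between the mean-removed signals
    a, b = p1 - p1.mean(), p2 - p2.mean()
    corr = np.correlate(b, a, mode="full")
    lag = (np.argmax(corr) - (a.size - 1)) / fs        # seconds, positive when p2 lags p1

    slug_velocity = spacing / lag                               # translational velocity of the slug cell
    arrivals = (np.diff((p1 > 0.5).astype(int)) == 1).sum()    # rising edges = slug arrivals
    slug_frequency = arrivals / t[-1]

    print(f"slug cell velocity ~ {slug_velocity:.2f} m/s")
    print(f"slug frequency     ~ {slug_frequency:.2f} Hz")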

Relevance:

90.00%

Publisher:

Abstract:

With advances in information technology, economic and financial time series data have become increasingly available. However, when standard time series techniques are applied, this wealth of information comes with a dimensionality problem. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has grown steadily in popularity in economics since the 1990s. Given the availability of data and the computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, in view of the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the main macroeconomic aggregates and help with impulse response analysis? Finally, can factor analysis be applied to time-varying parameters? For instance, is there only a small number of sources of temporal instability of the coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, this thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA. This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability.

The first article analyzes the transmission of monetary policy in Canada using a factor-augmented vector autoregressive (FAVAR) model. Previous VAR-based studies have found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying monetary policy transmission and helps to correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse response functions for every indicator in the data set, producing the most comprehensive analysis to date of the effects of monetary policy in Canada.

Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bills and causes a recession. These shocks have a significant effect on measures of real activity, price indices, and leading and financial indicators. Unlike other studies, our identification procedure for the structural shock does not require timing restrictions between financial and macroeconomic factors. Moreover, it provides an interpretation of the factors without constraining their estimation.

In the third article we study the relationship between the VARMA and factor representations of vector stochastic processes and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot both follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two methods of reducing the number of parameters. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA part helps to forecast the main macroeconomic aggregates better than standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors gives coherent and precise results for the effects and transmission of monetary policy in the United States. In contrast to the FAVAR model used in that earlier study, where 510 VAR coefficients had to be estimated, we produce comparable results with only 84 parameters in the dynamic process of the factors.

The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment using a structural FAVARMA model. Within the theoretical framework of the financial accelerator developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance decomposition analysis reveals that this credit shock has a sizeable effect on various sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the US market. Finally, given the identification procedure for the structural shocks, we obtain economically interpretable factors.

The behaviour of economic agents and of the economic environment may vary over time (e.g. changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In the last article we show that the number of sources of temporal variability of the coefficients is probably very small, and we provide what is, to our knowledge, the first empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach, proposed in Stevanovic (2010), is applied within a standard VAR model with random coefficients (TVP-VAR). We find that a single factor explains most of the variability of the VAR coefficients, while the shock volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data including the recent financial crisis; the procedure now suggests two factors, and the behaviour of the coefficients shows a marked change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only five dynamic factors govern the temporal instability of almost 700 coefficients.
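
A compact sketch of the FAVAR idea described above (synthetic data standing in for the large macroeconomic panel, illustrative only): static factors are extracted from the standardized panel by principal components, and a low-order VAR is then fitted to the factors augmented with the observed policy variable.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(0)
    T, N, k = 300, 50, 3        # time periods, panel size, number of factors

    # Synthetic "data-rich" panel driven by k common factors (stand-in for the real macro panel)
    F_true = 0.1 * rng.standard_normal((T, k)).cumsum(axis=0)
    loadings = rng.standard_normal((N, k))
    X = F_true @ loadings.T + rng.standard_normal((T, N))
    policy_rate = 0.8 * F_true[:, 0] + 0.1 * rng.standard_normal(T)   # observed policy instrument

    # 1) Principal-component factors from the standardized panel
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    F_hat = Z @ Vt[:k].T / np.sqrt(N)

    # 2) Low-order VAR on (factors, policy rate): the transition equation of a FAVAR
    data = pd.DataFrame(np.column_stack([F_hat, policy_rate]),
                        columns=[f"F{i + 1}" for i in range(k)] + ["policy_rate"])
    favar = VAR(data).fit(maxlags=2)
    irf = favar.irf(24)          # impulse responses, 24 periods ahead
    print(favar.summary())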

Relevance:

90.00%

Publisher:

Abstract:

Real estate depreciation continues to be a critical issue for investors and the appraisal profession in the UK in the 1990s. Depreciation-sensitive cash flow models have been developed, but there is a real need to develop further empirical methodologies to determine rental depreciation rates for input into these models. Although building quality has been found to be an important explanatory variable in depreciation, it is very difficult to incorporate it into such models or to analyse it retrospectively. It is essential to examine previous depreciation research from real estate and economics in the USA and UK to understand the issues in constructing a valid and pragmatic way of calculating rental depreciation. Distinguishing between 'depreciation' and 'obsolescence' is important, and the pattern of depreciation in any study can be influenced by such factors as the type (longitudinal or cross-sectional) and timing of the study, and the market state. Longitudinal studies can analyse change more directly than cross-sectional studies. Any methodology for calculating rental depreciation rates should be formulated in the context of such issues as 'censored sample bias', 'lemons' and 'filtering', which have been highlighted in key US literature from the field of economic depreciation. Property depreciation studies in the UK have tended to overlook this literature, however. Although data limitations and constraints reduce the ability of empirical property depreciation work in the UK to consider these issues fully, 'averaging' techniques and ordinary least squares (OLS) regression can both provide a consistent way of calculating rental depreciation rates within a 'cohort' framework.
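
A minimal sketch of the OLS/'cohort' idea (synthetic data, not the paper's sample): within a cohort, regress log rental value on building age; the age coefficient converts directly into an annual rental depreciation rate.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    age = rng.uniform(0, 30, 200)        # building age in years within one cohort (synthetic)
    true_rate = 0.02                     # 2% p.a. depreciation used to simulate the rents
    log_rent = np.log(100.0) + np.log(1 - true_rate) * age + rng.normal(0, 0.05, age.size)

    # OLS of log rent on age within the cohort
    fit = sm.OLS(log_rent, sm.add_constant(age)).fit()

    annual_depreciation = 1.0 - np.exp(fit.params[1])   # convert the slope back to a yearly rate
    print(f"estimated rental depreciation: {annual_depreciation:.2%} per year")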

Relevance:

90.00%

Publisher:

Abstract:

The data of four networks that can be used in carrying out comparative studies with methods for transmission network expansion planning are given. These networks are of various types and different levels of complexity. The main mathematical formulations used in transmission expansion studies (transportation models, hybrid models, DC power flow models, and disjunctive models) are also summarised and compared. The main algorithm families are reviewed: analytical, combinatorial and heuristic approaches. Optimal solutions are not yet known for some of the four networks when more accurate models (e.g. the DC model) are used to represent the power flow equations; the state of the art with regard to this is also summarised. This should serve as a challenge to authors searching for new, more efficient methods.
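
For reference, the DC power flow formulation mentioned above reduces to a linear system B'θ = P in the bus voltage angles, with line flows recovered from angle differences; a small sketch on an illustrative 3-bus network follows.

    import numpy as np

    # Illustrative 3-bus case: bus 0 is the slack; lines given as (from, to, reactance in p.u.)
    lines = [(0, 1, 0.10), (0, 2, 0.20), (1, 2, 0.25)]
    P = np.array([0.0, 0.9, -0.9])   # net injections (p.u.): generation at bus 1, load at bus 2

    n = 3
    B = np.zeros((n, n))             # susceptance matrix B' of the DC model
    for i, j, x in lines:
        b = 1.0 / x
        B[i, i] += b
        B[j, j] += b
        B[i, j] -= b
        B[j, i] -= b

    # Solve B' * theta = P with the slack angle fixed at zero (drop row/column of bus 0)
    theta = np.zeros(n)
    theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

    # Line flows follow from the angle differences: f_ij = (theta_i - theta_j) / x_ij
    for i, j, x in lines:
        print(f"flow {i}->{j}: {(theta[i] - theta[j]) / x:+.3f} p.u.")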

Relevance:

90.00%

Publisher:

Abstract:

The primary challenge in groundwater and contaminant transport modeling is obtaining the data needed for constructing, calibrating and testing the models. Large amounts of data are necessary for describing the hydrostratigraphy in areas with complex geology. Increasingly, states are making spatial data available that can be used as input to groundwater flow models. The appropriateness of these data for large-scale flow systems has not been tested. This study focuses on modeling a plume of 1,4-dioxane in a heterogeneous aquifer system in Scio Township, Washtenaw County, Michigan. The analysis consisted of: (1) characterization of the hydrogeology of the area and construction of a conceptual model based on publicly available spatial data, (2) development and calibration of a regional flow model for the site, (3) conversion of the regional model to a more highly resolved local model, (4) simulation of the dioxane plume, and (5) evaluation of the model's ability to simulate field data and estimation of the possible dioxane sources and subsequent migration until maximum concentrations are at or below the Michigan Department of Environmental Quality's residential cleanup standard for groundwater (85 ppb). The MODFLOW-2000 and MT3D programs were used to simulate the groundwater flow and the development and movement of the 1,4-dioxane plume, respectively. MODFLOW simulates transient groundwater flow in a quasi-3-dimensional sense, subject to a variety of boundary conditions that can simulate recharge, pumping, and surface-water/groundwater interactions. MT3D simulates solute advection with groundwater flow (using the flow solution from MODFLOW), dispersion, source/sink mixing, and chemical reaction of contaminants. This modeling approach was successful at simulating the groundwater flows by calibrating recharge and hydraulic conductivities. The plume transport was adequately simulated using literature dispersivity and sorption coefficients, although the plume geometries were not well constrained.
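
The transport physics that MT3D resolves can be illustrated with a much simpler stand-in, shown below: a one-dimensional explicit finite-difference advection-dispersion scheme with linear retardation for a dioxane-like pulse (illustrative parameters, not the calibrated Scio Township model).

    import numpy as np

    # Illustrative 1D aquifer column (not the calibrated site model)
    L, nx = 2000.0, 200          # domain length (m) and number of cells
    dx = L / nx
    v = 0.3                      # average linear groundwater velocity (m/day)
    D = 5.0                      # longitudinal dispersion coefficient (m^2/day)
    R = 1.1                      # retardation factor (weak sorption, dioxane-like)
    dt = 0.5                     # time step (days), small enough for explicit stability

    c = np.zeros(nx)
    c[:5] = 1000.0               # initial source-zone concentration (ppb)

    for _ in range(int(3650 / dt)):          # ten years of transport
        grad = np.gradient(c, dx)            # first spatial derivative
        lap = np.gradient(grad, dx)          # second spatial derivative
        c = c + dt / R * (D * lap - v * grad)
        c[0] = 0.0                           # source removed after t = 0 (clean inflow boundary)

    peak = c.max()
    status = "above" if peak > 85.0 else "at or below"
    print(f"peak concentration after 10 years: {peak:.0f} ppb ({status} the 85 ppb criterion)")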

Relevance:

90.00%

Publisher:

Abstract:

Clinical oncologists and cancer researchers benefit from information on the vascularization or non-vascularization of solid tumors because of blood flow's influence on three popular treatment types: hyperthermia therapy, radiotherapy, and chemotherapy. The objective of this research is the development of a clinically useful tumor blood flow measurement technique. The designed technique is sensitive, has good spatial resolution, is non-invasive and presents no risk to the patient beyond his usual treatment (measurements are made subsequent only to normal patient treatment). Tumor blood flow was determined by measuring the washout of positron-emitting isotopes created through neutron therapy treatment. In order to do this, several technical and scientific questions were addressed first. These questions were: (1) What isotopes are created in tumor tissue when it is irradiated in a neutron therapy beam, and how much of each isotope is expected? (2) What are the chemical states of the isotopes that are potentially useful for blood flow measurements, and will those chemical states allow these or other isotopes to be washed out of the tumor? (3) How should isotope washout by blood flow be modeled in order to use the data most effectively? These questions have been answered through both theoretical calculation and measurement. The first question was answered through the measurement of macroscopic cross sections for the predominant nuclear reactions in the body. These results correlate well with an independent mathematical prediction of tissue activation and with measurements of mouse spleen neutron activation. The second question was addressed by performing cell suspension and protein precipitation techniques on neutron-activated mouse spleens. The third and final question was answered by using first physical principles to develop a model mimicking the blood flow system and measurement technique. In a final set of experiments, the above were applied to flow models and animals. The ultimate aim of this project is to apply its methodology to neutron therapy patients.
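
A sketch of the washout-modelling step (synthetic count data and a hypothetical flow constant): the measured activity of a flow-washed positron emitter decays with the sum of its physical decay constant and a flow-dependent washout constant, so fitting a single exponential and subtracting the known physical decay isolates the perfusion-related term.

    import numpy as np
    from scipy.optimize import curve_fit

    lam_phys = np.log(2) / 20.4     # physical decay constant of C-11 (half-life about 20.4 min)
    k_flow_true = 0.03              # hypothetical flow-related washout constant (1/min)

    t = np.linspace(0.0, 40.0, 80)  # minutes after irradiation
    rng = np.random.default_rng(2)
    activity = 5000 * np.exp(-(lam_phys + k_flow_true) * t) * rng.normal(1.0, 0.02, t.size)

    def model(t, A0, k_eff):        # single-exponential washout model A(t) = A0 * exp(-k_eff * t)
        return A0 * np.exp(-k_eff * t)

    (A0, k_eff), _ = curve_fit(model, t, activity, p0=(4000.0, 0.05))

    print(f"effective clearance constant : {k_eff:.4f} 1/min")
    print(f"flow-related washout constant: {k_eff - lam_phys:.4f} 1/min (true value {k_flow_true})")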

Relevance:

90.00%

Publisher:

Abstract:

The influence of climate on forest stand composition, development and growth is undeniable. Many studies have tried to quantify the effect of climatic variables on forest growth and yield. These works become especially important because there is a need to predict the effects of climate change on the development of forest ecosystems. One way of facing this problem is the inclusion of climatic variables in classic empirical growth models. The work has a twofold objective: (i) to identify the indicators which best describe the effect of climate on Pinus halepensis growth and (ii) to quantify that effect under several scenarios of rainfall decrease which are likely to occur in the Mediterranean area. A growth mixed model for P. halepensis including climatic variables is presented in this work. Growth estimates are based on data from the Spanish National Forest Inventory (SNFI). The best results are obtained for the indices including rainfall, or rainfall and temperature together, with annual precipitation, precipitation effectiveness, Emberger's index and free bioclimatic intensity standing out among them. The final model includes Emberger's index, free bioclimatic intensity and interactions between competition and climate indices. The results obtained show that a rainfall decrease of about 5% leads to a decrease in volume growth of 5.5-7.5%, depending on site quality.
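
A schematic version of such a climate-augmented growth mixed model (synthetic data and hypothetical variable names; the actual model is fitted to SNFI plot data): plot-level random intercepts plus fixed climate, competition and interaction terms, here with statsmodels MixedLM.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n_plots, n_trees = 60, 25
    plot = np.repeat(np.arange(n_plots), n_trees)
    plot_effect = rng.normal(0, 0.10, n_plots)[plot]                 # random plot effect (site)
    emberger = np.repeat(rng.uniform(30, 90, n_plots), n_trees)      # plot-level climate index
    basal_area = rng.uniform(5, 40, plot.size)                       # competition proxy (m2/ha)

    log_growth = (0.5 + 0.004 * emberger - 0.010 * basal_area
                  - 0.0001 * emberger * basal_area
                  + plot_effect + rng.normal(0, 0.05, plot.size))

    df = pd.DataFrame(dict(log_growth=log_growth, emberger=emberger,
                           basal_area=basal_area, plot=plot))

    # Mixed model: fixed climate, competition and interaction effects, random plot intercept
    m = smf.mixedlm("log_growth ~ emberger * basal_area", df, groups=df["plot"]).fit()
    print(m.summary())

    # Scenario analysis: lower the climate index to mimic reduced rainfall and compare predictions
    drier = df.assign(emberger=df.emberger * 0.95)
    change = (m.predict(drier) - m.predict(df)).mean()
    print(f"mean change in predicted log growth for a 5% lower index: {change:+.4f}")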

Relevance:

90.00%

Publisher:

Abstract:

As part of their development, the predictions of numerical wind flow models must be compared with measurements in order to estimate the uncertainty related to their use. Of course, the most rigorous such comparison is under blind conditions. The following paper includes a detailed description of three different wind flow models, all based on a Reynolds-averaged Navier-Stokes approach with two-equation k-ε closure, that were tested as part of the Bolund blind comparison (itself based on the Bolund experiment, which measured the wind around a small coastal island). The models are evaluated in terms of predicted normalized wind speed and turbulent kinetic energy at 2 m and 5 m above ground level for a westerly wind direction. Results show that all models predict the mean velocity reasonably well; however, accurate prediction of the turbulent kinetic energy remains a challenge.
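
A sketch of the kind of error metric used in such blind comparisons (hypothetical mast values, not the Bolund results): mean absolute deviation of the predicted normalized wind speed and turbulent kinetic energy from the measurements.

    import numpy as np

    # Hypothetical values at four masts, 5 m a.g.l., westerly sector (illustrative only)
    speed_meas = np.array([0.85, 1.10, 1.25, 0.70])     # measured U / U_ref
    speed_pred = np.array([0.88, 1.05, 1.30, 0.78])     # RANS k-epsilon prediction
    tke_meas = np.array([0.012, 0.020, 0.016, 0.030])   # measured k / U_ref^2
    tke_pred = np.array([0.010, 0.014, 0.013, 0.018])

    def mean_abs_pct_error(pred, meas):
        return 100.0 * np.mean(np.abs(pred - meas) / meas)

    print(f"speed error: {mean_abs_pct_error(speed_pred, speed_meas):.1f} %")
    print(f"TKE error  : {mean_abs_pct_error(tke_pred, tke_meas):.1f} %")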

Relevance:

90.00%

Publisher:

Abstract:

This doctoral dissertation has focused on the study of unsteady aerodynamic loads on bluff bodies. To this aim the following points have been identified and analyzed: -Characterization of the flow measured with different types of Pitot tubes and a hot wire anemometer under the unsteady flow conditions generated by a gust wind tunnel. -Design and integration of the experimental setups required to measure the internal and external wind loads acting on bluff bodies under gusty wind flow conditions. -Implementation of semi-empirical mathematical models based on potential flow and relevant phenomenological theories to simulate the experimental results. -Extraction and analysis, at various gusty flow conditions, of the influence of the parameters obtained from the developed theoretical models. -Empirical estimations proposed to find suitable values of the influencing parameters, by fitting the experimental and theoretically predicted results. The experiments were performed in an open-circuit, closed-test-section, low-speed wind tunnel with a new sinusoidal gust generator mechanism, designed and built at the Instituto de Microgravedad "Ignacio Da Riva" of the Universidad Politécnica de Madrid (IDR/UPM). The main characteristic of this wind tunnel is its ability to generate a flow with a uniform velocity profile and a sinusoidal time fluctuation of the speed. Experimental tests have been devoted to studying the effect of unsteady flows on bluff bodies lying on the ground. Two theoretical models have been proposed to determine the external and internal pressure loads, respectively. In order to meet the need of creating sinusoidal wind gusts to check the theoretical model predictions, flow speeds of up to 30 m/s and gust frequencies of up to 10 Hz have been obtained in the test section. The test section is 0.39 m × 0.54 m, which is suitable for experiments with testing models. It is shown that, in the range of parameters explored, the experimental results are in good agreement with the theoretical model predictions. Experimental tests have been performed to study the unsteady flow effects, which can help in clarifying the phenomenon of the external pressure loads on bluff bodies under gusty winds, and also to determine the internal pressure loads, which depend on the size of the venting holes of the building. Finally, the contribution of the unsteady flow terms in the theoretical models has been analyzed, and the pressure jumps due to the unsteady pressure losses through the venting holes have been characterized.
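
A minimal sketch of a single-opening internal-pressure balance of the kind such venting-hole models build on (a textbook quasi-steady orifice-discharge law coupled to isentropic compression of the cavity air, with illustrative parameters; not the thesis model itself).

    import numpy as np
    from scipy.integrate import solve_ivp

    rho, gamma, p_atm = 1.2, 1.4, 101325.0   # air density, heat-capacity ratio, ambient pressure
    V_int = 0.02         # internal volume of the model, m^3 (illustrative)
    A_hole = 5e-5        # venting-hole area, m^2 (illustrative)
    Cd = 0.6             # orifice discharge coefficient
    f_gust, dp_ext = 5.0, 150.0   # gust frequency (Hz) and external pressure amplitude (Pa)

    def p_ext(t):
        return dp_ext * np.sin(2 * np.pi * f_gust * t)   # sinusoidal external pressure load

    def dpi_dt(t, y):
        dp = p_ext(t) - y[0]
        # Quasi-steady flow through the orifice, balanced by isentropic compression of the cavity
        q = Cd * A_hole * np.sign(dp) * np.sqrt(2.0 * abs(dp) / rho)   # volume inflow, m^3/s
        return [gamma * p_atm / V_int * q]

    sol = solve_ivp(dpi_dt, (0.0, 1.0), [0.0], max_step=1e-3)
    swing = sol.y[0].max() - sol.y[0].min()
    print(f"internal pressure swing: {swing:.0f} Pa (external swing {2 * dp_ext:.0f} Pa)")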

Relevance:

90.00%

Publisher:

Abstract:

The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle. This causes the rotational axes of those layers to diverge slightly from each other, resulting in a wobble of the Earth's rotation axis comparable to nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from the VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of the models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and the step size of its shift, is sought through a thorough experimental analysis using real data. These analyses lead to the derivation of a model with a temporal resolution higher than that of the models currently available, with a sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth's rotation. In addition, according to our computations, empirical models determined with USNO Finals as a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than those based on IERS 08 C04 over the whole period of VLBI observations. The model is also validated through comparisons with other recognized models, and the level of agreement among them is satisfactory. Our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
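
A schematic of the sliding-window estimation described above (synthetic celestial pole offsets; the FCN period is assumed to be about 430 days, retrograde): within each 400-day window the amplitude and phase of the fixed-period sinusoid follow from a linear least-squares fit of its sine and cosine terms.

    import numpy as np

    P_fcn = 430.21                 # assumed oscillation period in days (the mode is retrograde)
    omega = 2 * np.pi / P_fcn

    # Synthetic daily celestial-pole-offset series with a slowly varying FCN amplitude
    t = np.arange(0.0, 7000.0)     # days
    amp_true = 150 + 80 * np.sin(2 * np.pi * t / 6000.0)       # microarcseconds
    rng = np.random.default_rng(4)
    dX = amp_true * np.cos(omega * t) + rng.normal(0, 60, t.size)

    window = 400                   # sliding-window width in days
    epochs, amps = [], []
    # The model uses a day-by-day shift; the window is moved every 50 days here to keep the demo short
    for start in range(0, t.size - window, 50):
        sl = slice(start, start + window)
        A = np.column_stack([np.cos(omega * t[sl]), np.sin(omega * t[sl])])
        coef, *_ = np.linalg.lstsq(A, dX[sl], rcond=None)
        epochs.append(t[sl].mean())
        amps.append(np.hypot(*coef))                           # amplitude from the two coefficients

    print(f"estimated FCN amplitude range: {min(amps):.0f}-{max(amps):.0f} microarcseconds")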

Relevance:

90.00%

Publisher:

Abstract:

The paper presents a new network-flow interpretation of Łukasiewicz's logic based on models with increased effectiveness. The results obtained show that the presented network-flow models may, in principle, work for multivalued logics with more than three states of the variables, i.e. with a finite set of states in the interval from 0 to 1. The described models make it possible to formulate various logical functions. If the results from a given model, contained in the obtained values of the arc flow functions, are used as input data for other models, then other sophisticated logical structures can be successfully interpreted in Łukasiewicz's logic. The obtained models allow Łukasiewicz's logic to be studied with the specific and effective methods of network-flow programming. In particular, the specific peculiarities of, and the results pertaining to, the function 'traffic capacity of the network arcs' can be used successfully. Based on the introduced network-flow approach it is possible to interpret other multivalued logics, such as those of E. Post, L. Brauer, Kolmogorov, etc.
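
For reference, the finite-valued Łukasiewicz connectives that such flow models encode can be written down directly (standard definitions only, not the network-flow construction itself); on the value set {0, 1/(n-1), ..., 1} they stay closed, as the small check below illustrates.

    from fractions import Fraction

    def luk_neg(x):          # Łukasiewicz negation
        return 1 - x

    def luk_impl(x, y):      # Łukasiewicz implication: min(1, 1 - x + y)
        return min(Fraction(1), 1 - x + y)

    def luk_and(x, y):       # strong conjunction: max(0, x + y - 1)
        return max(Fraction(0), x + y - 1)

    def luk_or(x, y):        # strong disjunction: min(1, x + y)
        return min(Fraction(1), x + y)

    # Five-valued example: states 0, 1/4, 1/2, 3/4, 1
    states = [Fraction(i, 4) for i in range(5)]
    for x in states:
        assert luk_neg(x) in states
        for y in states:
            for op in (luk_impl, luk_and, luk_or):
                assert op(x, y) in states        # the connectives stay inside the finite value set

    print(luk_impl(Fraction(3, 4), Fraction(1, 2)))   # -> 3/4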

Relevance:

80.00%

Publisher:

Abstract:

The objective of this paper is to develop and validate a mechanistic model for the degradation of phenol by the Fenton process. Experiments were performed in semi-batch operation, in which phenol, catechol and hydroquinone concentrations were measured. Using the methodology described in Pontes and Pinto [R.F.F. Pontes, J.M. Pinto, Analysis of integrated kinetic and flow models for anaerobic digesters, Chemical Engineering Journal 122 (1-2) (2006) 65-80], a stoichiometric model was first developed, with 53 reactions and 26 compounds, followed by the corresponding kinetic model. Sensitivity analysis was performed to determine the most influential kinetic parameters of the model, which were then estimated from the experimental results. The adjusted model was used to analyze the impact of the initial concentration and flow rate of reactants on the efficiency of the Fenton process in degrading phenol. Moreover, the model was applied to evaluate the treatment cost of wastewater contaminated with phenol in order to meet environmental standards.
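
A heavily simplified sketch of this kind of kinetic model (a handful of lumped reactions with illustrative rate constants, nothing like the paper's 53-reaction scheme), integrated for a semi-batch run in which H2O2 is fed at a constant rate.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Lumped reactions with illustrative second-order rate constants (M^-1 s^-1)
    k1 = 70.0       # Fe2+ + H2O2 -> Fe3+ + OH.
    k2 = 0.01       # Fe3+ + H2O2 -> Fe2+ + HO2.  (slow catalyst regeneration)
    k3 = 6.6e9      # OH. + phenol -> oxidation products (catechol/hydroquinone lumped together)
    k4 = 2.7e7      # OH. + H2O2  -> HO2. + H2O   (radical scavenging)
    feed_h2o2 = 2e-5   # semi-batch H2O2 feed rate, M/s

    def rhs(t, y):
        fe2, fe3, h2o2, oh, phenol = y
        r1 = k1 * fe2 * h2o2
        r2 = k2 * fe3 * h2o2
        r3 = k3 * oh * phenol
        r4 = k4 * oh * h2o2
        return [-r1 + r2,                       # Fe2+
                r1 - r2,                        # Fe3+
                -r1 - r2 - r4 + feed_h2o2,      # H2O2 (continuously fed)
                r1 - r3 - r4,                   # OH radical
                -r3]                            # phenol

    y0 = [1e-3, 0.0, 0.0, 0.0, 5e-3]            # initial Fe2+, Fe3+, H2O2, OH., phenol (M)
    sol = solve_ivp(rhs, (0.0, 3600.0), y0, method="LSODA", rtol=1e-6, atol=1e-12)
    print(f"phenol remaining after 1 h: {100 * sol.y[4, -1] / y0[4]:.1f} %")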