973 results for First order autoregressive model AR (1)
Abstract:
This work presents the cooling curves of 'prata' banana (Musa balbisiana Colla) and the values of the half-cooling and seven-eighths cooling times, computed from the dimensionless temperature ratio. The fruit was cooled in a forced-air system at 7 °C, with a relative humidity of 87.6 ± 3.8% and air velocities between 0.2 and 1 m/s. A completely randomized experimental design was applied, using a 2x2 factorial scheme (two factors, airflow rate and package, each at two levels) at a 10% significance level. The airflow rates were 1,933 and 1,160 m³/h, and the packages differed in the percentage of open area available for ventilation (40% and 3.2%). A significant difference in cooling time was found both between the two airflow rates and between the two packages. The shortest cooling time was obtained in the treatment combining the higher airflow rate (1,933 m³/h) with the package with the larger open area (40%); the longest, in the treatment combining the lower airflow rate (1,160 m³/h) with the package with 3.2% effective open area. The results show that cooling time depends to a large extent on the airflow rate and the type of package used. Mean cooling times ranged from 117 to 555 min, depending on the treatment applied. No significant difference in mass loss was found among the treatments.
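For reference, the dimensionless temperature ratio on which these cooling times are based is commonly defined (a standard precooling formulation, not spelled out in the abstract) as

$$ Y = \frac{T - T_a}{T_0 - T_a}, $$

where $T$ is the product temperature, $T_0$ its initial value, and $T_a$ the cooling-air temperature. The half-cooling time is the time at which $Y = 1/2$, and the seven-eighths cooling time the time at which $Y = 1/8$; under single-exponential (Newtonian) cooling, $Y = e^{-kt}$, the seven-eighths cooling time is exactly three half-cooling times, since $\ln 8 = 3 \ln 2$.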
Abstract:
The objective of this work was to evaluate the effect of 1-MCP applied at different times during cold storage (CS) and controlled atmosphere (CA) storage on the quality of 'Quioto' persimmon. The experimental design was completely randomized, with four replicates of 30 fruit, and the treatments were as follows: cold storage (CS); CS + 1-MCP (1000 ppb) at the beginning of storage; CS + 1-MCP (1000 ppb) at the end of storage; CA storage with 1 kPa O$_2$ and 5 kPa CO$_2$; and CA with 1 kPa O$_2$ and 5 kPa CO$_2$ + 1-MCP at the end of storage. Evaluations were carried out after two months of storage at -0.5 °C, plus five days of exposure of the fruit at 10 °C and three days at 20 °C. Under cold storage, 1-MCP applied either at the beginning or at the end of storage provided greater flesh firmness. For the parameters total soluble solids, decay, and skin browning, there was no statistical difference among treatments. It is concluded that the application of 1-MCP, whether at the beginning or at the end of storage, keeps flesh firmness high.
Abstract:
The three essays constituting this thesis focus on financing and cash management policy. The first essay aims to shed light on why firms issue debt so conservatively. In particular, it examines the effects of shareholder and creditor protection on capital structure choices. It starts by building a contingent claims model where financing policy results from a trade-off between tax benefits, contracting costs and agency costs. In this setup, controlling shareholders can divert part of the firms' cash flows as private benefits at the expense of minority shareholders. In addition, shareholders as a class can behave strategically at the time of default, leading to deviations from the absolute priority rule. The analysis demonstrates that investor protection is a first-order determinant of firms' financing choices and that conflicts of interest between firm claimholders may help explain the level and cross-sectional variation of observed leverage ratios. The second essay focuses on the practical relevance of agency conflicts. Despite the theoretical development of the literature on agency conflicts and firm policy choices, the magnitude of manager-shareholder conflicts is still an open question. This essay proposes a methodology for quantifying these agency conflicts. To do so, it examines the impact of managerial entrenchment on corporate financing decisions. It builds a dynamic contingent claims model in which managers do not act in the best interest of shareholders, but rather pursue private benefits at the expense of shareholders. Managers have discretion over financing and dividend policies. However, shareholders can remove the manager at a cost. The analysis demonstrates that entrenched managers restructure less frequently and issue less debt than is optimal for shareholders. I take the model to the data and use observed financing choices to provide firm-specific estimates of the degree of managerial entrenchment. Using structural econometrics, I find costs of control challenges of 2-7% on average (0.8-5% at the median). The estimates of the agency costs vary with variables that one expects to determine managerial incentives. In addition, these costs are sufficient to resolve the low- and zero-leverage puzzles and explain the time series of observed leverage ratios. Finally, the analysis shows that governance mechanisms significantly affect the value of control and firms' financing decisions. The third essay is concerned with the time trend in corporate cash holdings documented by Bates, Kahle and Stulz (BKS, 2003). BKS find that firms' cash holdings double from 10% to 20% over the 1980 to 2005 period. This essay provides an explanation of this phenomenon by examining the effects of product market competition on firms' cash holdings in the presence of financial constraints. It develops a real options model in which cash holdings may be used to cover unexpected operating losses and avoid inefficient closure. The model generates new predictions relating cash holdings to firm and industry characteristics such as the intensity of competition, cash flow volatility, or financing constraints. The empirical examination strongly supports the model's predictions. In addition, it shows that the time trend in cash holdings documented by BKS can be at least partly attributed to a competition effect.
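Schematically, the trade-off underlying the first essay's model can be summarized as follows (my summary notation, not the thesis's):

$$ V_L = V_U + \mathrm{TB} - \mathrm{CC} - \mathrm{AC}, $$

where $V_L$ and $V_U$ are levered and unlevered firm value, and $\mathrm{TB}$, $\mathrm{CC}$, $\mathrm{AC}$ denote the present values of tax benefits, contracting costs, and agency costs; financing policy then chooses the leverage that maximizes $V_L$.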
Abstract:
Fuzzy set theory and fuzzy logic are studied from a mathematical point of view. The main goal is to investigate common mathematical structures in various fuzzy logical inference systems and to establish a general mathematical basis for fuzzy logic when considered as a multi-valued logic. The study is composed of six distinct publications. The first paper deals with Mattila's LPC+Ch Calculus. This fuzzy inference system is an attempt to introduce linguistic objects into mathematical logic without defining these objects mathematically. LPC+Ch Calculus is analyzed from an algebraic point of view, and it is demonstrated that a suitable factorization of the set of well-formed formulae (in fact, of the Lindenbaum algebra) leads to a structure called an ET-algebra, introduced at the beginning of the paper. On its basis, all the theorems presented by Mattila and many others can be proved in a simple way, which is demonstrated in Lemmas 1 and 2 and Propositions 1-3. The conclusion critically discusses some other issues of LPC+Ch Calculus, especially that no formal semantics is given for it. In the second paper, Sanchez's characterization of the solvability of the relational equation $R \circ X = T$, where $R$, $X$, $T$ are fuzzy relations, $X$ is the unknown, and $\circ$ is the minimum-induced composition, is extended to compositions induced by more general products in a general value lattice. Moreover, the procedure also applies to systems of equations. In the third publication, common features in various fuzzy logical systems are investigated. It turns out that adjoint couples and residuated lattices are very often present, though not always explicitly expressed. Some minor new results are also proved. The fourth study concerns Novak's paper, in which Novak introduced first-order fuzzy logic and proved, among other things, the semantico-syntactical completeness of this logic. He also demonstrated that the algebra of his logic is a generalized residuated lattice. It is shown that the examination of Novak's logic can be reduced to the examination of locally finite MV-algebras. In the fifth paper, a multi-valued sentential logic with truth values in an injective MV-algebra is introduced and the axiomatizability of this logic is proved. The paper develops some ideas of Goguen and generalizes the results of Pavelka on the unit interval. Our proof of completeness is purely algebraic. A corollary of the Completeness Theorem is that fuzzy logic on the unit interval is semantically complete if, and only if, the algebra of truth values is a complete MV-algebra. The Compactness Theorem holds in our well-defined fuzzy sentential logic, while the Deduction Theorem and the Finiteness Theorem do not. Because of its generality and good behaviour, MV-valued logic can be regarded as a mathematical basis of fuzzy reasoning. The last paper is a continuation of the fifth study. The semantics and syntax of fuzzy predicate logic with truth values in an injective MV-algebra are introduced, and a list of universally valid sentences is established. The system is proved to be semantically complete. This proof is based on an idea utilizing some elementary properties of injective MV-algebras and MV-homomorphisms, and is purely algebraic.
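For context, Sanchez's classical result referred to above can be stated as follows (the standard formulation for the sup-min composition on $[0,1]$; the paper itself works in a more general value lattice). The equation $R \circ X = T$, with $(R \circ X)(u,w) = \sup_v \min(R(u,v), X(v,w))$, is solvable if and only if

$$ \hat{X}(v,w) = \inf_u \big( R(u,v) \rightarrow T(u,w) \big), \qquad a \rightarrow b = \begin{cases} 1 & \text{if } a \le b, \\ b & \text{otherwise}, \end{cases} $$

is itself a solution, in which case $\hat{X}$ is the greatest solution.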
Abstract:
This work compiles and studies, from an ethnopaleontological perspective, the contributions and influence of fossils on the toponymic heritage associated with karst cavities in the geographical area of the Balearic Islands. These are generally modern or recent microtoponyms, in the process of becoming popularized and/or traditional (neotoponyms), established by the scientists who study the caves and/or by cavers (learned coinages, or topocultismes). A distinction can be drawn between first-order speleotoponyms (referring to entire cavities) and second-order ones (referring to a sector within a cavity). A first approach is also made to toponyms referring to fossil-coal mines (anthracotoponyms).
Abstract:
PLFC is a first-order possibilistic logic dealing with fuzzy constants and fuzzily restricted quantifiers. The refutation proof method in PLFC is mainly based on a generalized resolution rule which allows an implicit graded unification among fuzzy constants. However, unification for precise object constants is classical. In order to use PLFC for similarity-based reasoning, in this paper we extend a Horn-rule sublogic of PLFC with similarity-based unification of object constants. The Horn-rule sublogic of PLFC we consider deals only with disjunctive fuzzy constants and is equipped with a simple and efficient version of the PLFC proof method. At the semantic level, it is extended by equipping each sort with a fuzzy similarity relation, and at the syntactic level, by fuzzily “enlarging” each non-fuzzy object constant in the antecedent of a Horn rule by means of a fuzzy similarity relation.
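In symbols, the syntactic extension can be sketched as follows (my paraphrase of the abstract's "enlarging" construction, not the paper's exact definitions): given a sort with domain $D$ equipped with a fuzzy similarity relation $S : D \times D \to [0,1]$, each non-fuzzy object constant $c$ occurring in the antecedent of a Horn rule is replaced by the fuzzy constant with membership function

$$ \mu_{S(c)}(x) = S(c, x), \quad x \in D, $$

so that unification with $c$ succeeds to the degree that the unified constant resembles $c$.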
Abstract:
Two graphs with adjacency matrices $\mathbf{A}$ and $\mathbf{B}$ are isomorphic if there exists a permutation matrix $\mathbf{P}$ for which the identity $\mathbf{P}^{\mathrm{T}} \mathbf{A} \mathbf{P} = \mathbf{B}$ holds. Multiplying through by $\mathbf{P}$ and relaxing the permutation matrix to a doubly stochastic matrix leads to the linear programming relaxation known as fractional isomorphism. We show that the levels of the Sherali--Adams (SA) hierarchy of linear programming relaxations applied to fractional isomorphism interleave in power with the levels of a well-known color-refinement heuristic for graph isomorphism called the Weisfeiler--Lehman algorithm, or, equivalently, with the levels of indistinguishability in a logic with counting quantifiers and a bounded number of variables. This tight connection has quite striking consequences. For example, it follows immediately from a deep result of Grohe in the context of logics with counting quantifiers that a fixed number of levels of SA suffice to determine isomorphism of planar and minor-free graphs. We also offer applications in both finite model theory and polyhedral combinatorics. First, we show that certain properties of graphs, such as that of having a flow circulation of a prescribed value, are definable in the infinitary logic with counting with a bounded number of variables. Second, we exploit a lower bound construction due to Cai, Fürer, and Immerman in the context of counting logics to give simple explicit instances that show that the SA relaxations of the vertex-cover and cut polytopes do not reach their integer hulls for up to $\Omega(n)$ levels, where $n$ is the number of vertices in the graph.
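Concretely, the relaxation described above asks for a doubly stochastic matrix $\mathbf{X}$ satisfying

$$ \mathbf{A}\mathbf{X} = \mathbf{X}\mathbf{B}, \qquad \mathbf{X}\mathbf{1} = \mathbf{1}, \qquad \mathbf{1}^{\mathrm{T}}\mathbf{X} = \mathbf{1}^{\mathrm{T}}, \qquad \mathbf{X} \ge 0, $$

a system of linear constraints; the graphs are fractionally isomorphic exactly when this linear program is feasible.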
Abstract:
Minimizing the risks of an investment portfolio without sacrificing expected returns is one of the key interests of an investor. Typically, portfolio diversification is achieved using two main strategies: investing in different classes of assets thought to have little or negative correlation, or investing in similar classes of assets in multiple markets through international diversification. This study investigates the integration of the Russian financial markets over the period from January 1, 2003 to December 28, 2007 using daily data. The aim is to test the intra-country and cross-country integration of the Russian stock and bond markets across seven countries. Short-run dynamics are tested with a vector autoregressive (VAR) model, and long-run cointegration with the Johansen cointegration test, which is an extension of the VAR framework. The empirical results of this study show that the Russian stock and bond markets are not integrated in the long run at either the intra-country or the cross-country level, which means that the markets are relatively segmented. The short-run dynamics are also relatively weak. This implies the presence of potential gains from diversification.
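A minimal sketch of this two-step methodology (not the thesis's actual code; the two series are synthetic random walks standing in for daily index levels):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(42)
n = 1250  # roughly five years of daily observations
data = pd.DataFrame({
    "russia": np.cumsum(rng.normal(size=n)),   # placeholder for a Russian index
    "foreign": np.cumsum(rng.normal(size=n)),  # placeholder for a foreign index
})

# Short-run dynamics: VAR fitted to first differences (returns)
returns = data.diff().dropna()
var_result = VAR(returns).fit(maxlags=5, ic="aic")
print(var_result.summary())

# Long-run relationship: Johansen trace test on the level series
johansen = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", johansen.lr1)
print("95% critical values:", johansen.cvt[:, 1])
```

If none of the trace statistics exceed their critical values, the null of no cointegration is not rejected, which is the "segmented markets" conclusion the abstract reports.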
Abstract:
The goal of this thesis is to develop the product costing model of a building-services prefabrication plant. The company needs product-level cost information especially for inventory valuation and for cost-based pricing. Developing the product costing model requires correcting the errors and deficiencies contained in the current model. The thesis reviews all the changes made to the product costing model, production stage by production stage. The new product costing model is based mainly on absorption costing and standard costing. The model computes a minimum full-cost value per product and per order. Following the cost-causation principle, the new model allocates to products many costs that were previously left entirely unallocated. The costs of insulated ventilation ducts and of water and heating pipes turned out to be lower than previously believed, whereas the costs of the element frames rose in the new model. A spreadsheet application was also built during the work for calculating the acquisition cost and the cost-based price of the elements included in an order.
Abstract:
The photodegradation of parathion in natural and deionised waters was studied under irradiation at two different wavelengths: 280 nm and 313 nm. The influence of humic acids was evaluated. The results demonstrated that the degradation occurred only through photochemical processes; chemical hydrolysis and biological processes can be neglected in this case. The addition of humic acids did not increase the photodegradation rate in either water sample (natural or deionised). In alkaline solutions the photodegradation rate was higher in deionised water than in natural waters. The degradation kinetics in all experiments followed a first-order pattern.
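For reference, a first-order pattern means the pollutant concentration $C$ decays as

$$ C(t) = C_0 e^{-kt}, \qquad \ln\frac{C_0}{C(t)} = kt, \qquad t_{1/2} = \frac{\ln 2}{k}, $$

so a plot of $\ln C$ against time is linear with slope $-k$ (standard kinetics, not specific to this study).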
Abstract:
Rosin is a natural product from pine forests and it is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids, and especially Ca- and Ca/Mg-resinates find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics was studied in order to model the nonlinear solution viscosity increase during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates to be used in printing inks. The concept of critical resinate concentration, $c_{crit}$, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in the solution. The concept was then used to explain the nonlinear solution viscosity increase during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media was obtained by acid value titration and by FTIR spectroscopic analyses using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to make partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line resinate process monitoring. In the kinetic studies, two main reaction steps were observed during the syntheses. First a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C. Rosin oil is formed during the decarboxylation reaction step, causing significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined based on the resinate concentration increase during the decarboxylation reaction step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses. Different decarboxylation mechanisms were proposed for the free and solvating rosin acids. The deduced kinetic model supported the analytical data of the syntheses over a wide resinate concentration region, over a wide range of viscosity values and at different reaction temperatures. In addition, the application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Due to a lower reaction temperature than in traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
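A minimal sketch of a PLS calibration model of the kind described above, predicting acid value from IR spectra (the spectra and reference values here are synthetic placeholders, not data from the thesis):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 60, 400
spectra = rng.normal(size=(n_samples, n_wavenumbers))                # absorbance matrix
acid_value = 2.0 * spectra[:, 150] + rng.normal(0, 0.1, n_samples)   # reference titrations

pls = PLSRegression(n_components=5)
print("cross-validated R^2:", cross_val_score(pls, spectra, acid_value, cv=5))

# After validation, fit on all samples and predict acid value for a new spectrum
pls.fit(spectra, acid_value)
predicted = pls.predict(spectra[:1])
```

In practice the number of latent components would be chosen by cross-validation, and the same pattern applies to the viscosity models mentioned in the abstract.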
Abstract:
The trans-dichlorobis(ethylenediamine)cobalt(III) chloride was synthesized in an undergraduate laboratory and its aquation reaction was carried out at different temperatures. This reaction follows pseudo-first-order kinetics and the rate constants, determined at 25, 35, 45, 55 and 70 °C, are $1.44 \times 10^{-3}$, $5.14 \times 10^{-3}$, $1.48 \times 10^{-2}$, $4.21 \times 10^{-2}$ and $2.21 \times 10^{-1}$ s$^{-1}$, respectively. The activation energy is $93.99 \pm 2.88$ kJ mol$^{-1}$.
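A quick consistency check of the reported activation energy (my own sketch, not from the article) via an Arrhenius fit, $\ln k = \ln A - E_a/(RT)$, using the rate constants quoted above:

```python
import numpy as np

T = np.array([25.0, 35.0, 45.0, 55.0, 70.0]) + 273.15        # temperatures, K
k = np.array([1.44e-3, 5.14e-3, 1.48e-2, 4.21e-2, 2.21e-1])  # rate constants, s^-1
R = 8.314  # gas constant, J mol^-1 K^-1

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)  # slope = -Ea / R
Ea_kJ = -slope * R / 1000.0
print(f"Ea = {Ea_kJ:.1f} kJ/mol")  # ~94-95 kJ/mol, consistent with 93.99 +/- 2.88
```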
Abstract:
Standard indirect inference (II) estimators take a given finite-dimensional statistic, $Z_n$, and then estimate the parameters by matching the sample statistic with the model-implied population moment. We here propose a novel estimation method that utilizes all available information contained in the distribution of $Z_n$, not just its first moment. This is done by computing the likelihood of $Z_n$ and then estimating the parameters either by maximizing the likelihood or by computing the posterior mean for a given prior on the parameters. These are referred to as the maximum indirect likelihood (MIL) and Bayesian indirect likelihood (BIL) estimators, respectively. We show that the IL estimators are first-order equivalent to the corresponding moment-based II estimator that employs the optimal weighting matrix. However, due to higher-order features of $Z_n$, the IL estimators are higher-order efficient relative to the standard II estimator. The likelihood of $Z_n$ will in general be unknown, and so simulated versions of the IL estimators are developed. Monte Carlo results for a structural auction model and a DSGE model show that the proposed estimators indeed have attractive finite-sample properties.
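In symbols (my transcription of the definitions given above, with $f_n(\cdot \mid \theta)$ the generally unknown density of $Z_n$ and $\pi$ a prior on the parameters):

$$ \hat{\theta}_{\mathrm{MIL}} = \arg\max_{\theta} \, \log f_n(Z_n \mid \theta), \qquad \hat{\theta}_{\mathrm{BIL}} = E[\theta \mid Z_n] = \frac{\int \theta \, f_n(Z_n \mid \theta) \, \pi(\theta) \, d\theta}{\int f_n(Z_n \mid \theta) \, \pi(\theta) \, d\theta}. $$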
Abstract:
The remediation of groundwater containing organochlorine compounds was evaluated using a reductive system with zero-valent iron, and the reductive process coupled with Fenton's reagent. The concentration of the individual target compounds reached up to 400 mg L$^{-1}$ in the sample. Marked reductions in the chlorinated compounds were observed in the reductive process. The degradation followed pseudo-first-order kinetics in terms of the contaminant and was dependent on the sample contact time with the solid reducing agent. An oxidative test with Fenton's reagent, followed by the reductive assay, showed that tetrachloroethylene was further reduced, up to three times the initial concentration. The destruction of chloroform, however, demands an additional treatment.
Abstract:
Three technologies were tested (TiO$_2$/UV, H$_2$O$_2$/UV, and TiO$_2$/H$_2$O$_2$/UV) for the degradation and color removal of a 25 mg L$^{-1}$ mixture of three acid dyes: Blue 9, Red 18, and Yellow 23. A low-speed rotating disc reactor (20 rpm) and an H$_2$O$_2$ concentration of 2.5 mmol L$^{-1}$ were used. The dyes did not significantly undergo photolysis, although they were all degraded by the studied advanced oxidation processes. With the TiO$_2$/H$_2$O$_2$/UV process, a strong synergism was observed (color removal reached 100%). Pseudo-first-order kinetic constants were estimated for all processes, as well as the respective apparent photonic efficiencies.
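A minimal sketch of how pseudo-first-order constants like these are typically estimated, via a log-linear fit of concentration against irradiation time (the data points below are illustrative, not from the study):

```python
import numpy as np

t = np.array([0.0, 10.0, 20.0, 30.0, 45.0, 60.0])  # irradiation time, min
C = np.array([25.0, 18.2, 13.3, 9.7, 6.0, 3.7])    # dye concentration, mg/L

slope, intercept = np.polyfit(t, np.log(C), 1)  # ln C = ln C0 - k*t
k = -slope
print(f"k = {k:.3f} min^-1, half-life = {np.log(2)/k:.1f} min")
```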