959 results for empirical testing
Abstract:
The morphology of Acheulean handaxes continues to be a subject of debate amongst Lower Palaeolithic archaeologists, with some arguing that many handaxes are over-engineered for a subsistence function alone. This study aims to provide an empirical foundation for these debates by testing the relationship between a range of morphological variables, including symmetry, and the effectiveness of handaxes for butchery. Sixty handaxes were used to butcher 30 fallow deer by both a professional and a non-professional butcher. Regression analysis on the resultant data set indicates that while frontal symmetry may explain a small amount of variance in the effectiveness of handaxes for butchery, a large percentage of variance remains unexplained by symmetry or any of the other morphological variables under consideration.
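As an illustration only (not the authors' code or data), a regression of the kind described could be set up as follows; the file name and column names are hypothetical placeholders:

import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per handaxe, with morphology and a butchery-effectiveness measure.
df = pd.read_csv("handaxe_butchery.csv")

# Hypothetical predictors: frontal symmetry plus other shape variables.
predictors = ["frontal_symmetry", "length_mm", "breadth_mm", "thickness_mm", "mass_g"]
X = sm.add_constant(df[predictors])
y = df["butchery_effectiveness"]   # e.g. carcass processed per unit time

model = sm.OLS(y, X).fit()
print(model.summary())             # coefficients for each morphological variable
print("R^2:", model.rsquared)      # a low R^2 would mirror the 'unexplained variance' finding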
Abstract:
Microbial processes in soil are moisture, nutrient and temperature dependent and, consequently, accurate calculation of soil temperature is important for modelling nitrogen processes. Microbial activity in soil occurs even at sub-zero temperatures so that, in northern latitudes, a method to calculate soil temperature under snow cover and in frozen soils is required. This paper describes a new and simple model to calculate daily values for soil temperature at various depths in both frozen and unfrozen soils. The model requires four parameters: average soil thermal conductivity, specific heat capacity of soil, specific heat capacity due to freezing and thawing, and an empirical snow parameter. Precipitation, air temperature and snow depth (measured or calculated) are needed as input variables. The proposed model was applied to five sites in different parts of Finland representing different climates and soil types. Observed soil temperatures at depths of 20 and 50 cm (September 1981-August 1990) were used for model calibration. The calibrated model was then tested using observed soil temperatures from September 1990 to August 2001. R^2 values for the calibration period varied between 0.87 and 0.96 at a depth of 20 cm and between 0.78 and 0.97 at 50 cm. R^2 values for the testing period were between 0.87 and 0.94 at a depth of 20 cm and between 0.80 and 0.98 at 50 cm. Thus, despite the simplifications made, the model was able to simulate soil temperature at these study sites. This simple model simulates soil temperature well in the uppermost soil layers where most of the nitrogen processes occur. The small number of parameters required means that the model is suitable for addition to catchment-scale models.
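A minimal sketch of a model of this general kind (not the paper's exact equations: the functional form, parameter names and values below are illustrative assumptions) shows how a daily soil-temperature update can combine relaxation toward air temperature, an extra apparent heat capacity near freezing, and exponential damping under snow:

import math

def step_soil_temperature(T_soil, T_air, snow_depth_m, depth_m=0.2,
                          k_t=0.8,      # soil thermal conductivity (W m-1 K-1), illustrative
                          c_a=1.0e6,    # volumetric heat capacity of soil (J m-3 K-1), illustrative
                          c_ice=4.0e6,  # extra apparent heat capacity from freezing/thawing, illustrative
                          f_s=3.0):     # empirical snow damping parameter (m-1), illustrative
    """One daily update of soil temperature at a given depth (illustrative form only)."""
    # Extra heat capacity matters only near 0 degC, where freezing and thawing occur.
    c_eff = c_a + (c_ice if -4.0 < T_soil < 0.0 else 0.0)
    # First-order relaxation of soil temperature toward air temperature, slower at greater depth.
    dt_s = 86400.0
    T_new = T_soil + dt_s * k_t / (c_eff * (2.0 * depth_m) ** 2) * (T_air - T_soil)
    # Snow cover insulates the soil: damp the deviation from 0 degC exponentially with snow depth.
    return T_new * math.exp(-f_s * snow_depth_m)

# Example: a cold winter day at 20 cm depth under 20 cm of snow.
print(step_soil_temperature(T_soil=-0.5, T_air=-15.0, snow_depth_m=0.20))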
Abstract:
This paper considers the effect of GARCH errors on the tests proposed by Perron (1997) for a unit root in the presence of a structural break. We assess the impact of degeneracy and integratedness of the conditional variance individually and find that, apart from in the limit, the testing procedure is insensitive to the degree of degeneracy but does exhibit an increasing over-sizing as the process becomes more integrated. When we consider the GARCH specifications that we are likely to encounter in empirical research, we find that the Perron tests are reasonably robust to the presence of GARCH and do not suffer from severe over- or under-rejection of a correct null hypothesis.
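A minimal Monte Carlo sketch of such a size experiment is given below. Perron's (1997) break-point unit-root test is not available in standard libraries, so the ADF test is used purely as a stand-in to show the mechanics; all parameter values are illustrative:

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)

def simulate_unit_root_with_garch(n=200, omega=0.05, alpha=0.3, beta=0.65):
    """Random walk whose innovations follow a GARCH(1,1); alpha+beta near 1 = near-integrated variance."""
    h = omega / max(1e-8, 1.0 - alpha - beta)        # start at the unconditional variance
    eps = np.empty(n)
    for t in range(n):
        h = omega + alpha * (eps[t - 1] ** 2 if t > 0 else h) + beta * h
        eps[t] = np.sqrt(h) * rng.standard_normal()
    return np.cumsum(eps)                            # true null: unit root

rejections = 0
n_reps = 500
for _ in range(n_reps):
    y = simulate_unit_root_with_garch()
    pval = adfuller(y, regression="c", autolag="AIC")[1]
    rejections += pval < 0.05

print(f"empirical size at nominal 5%: {rejections / n_reps:.3f}")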
Abstract:
The physical and empirical relationships used by microphysics schemes to control the rate at which vapor is transferred to ice crystals growing in supercooled clouds are compared with laboratory data to evaluate the realism of various model formulations. Ice crystal growth rates predicted from capacitance theory are compared with measurements from three independent laboratory studies. When the growth is diffusion-limited, the predicted growth rates are consistent with the measured values to within about 20% in 14 of the experiments analyzed, over the temperature range −2.5° to −22°C. Only two experiments showed significant disagreement with theory (growth rate overestimated by about 30%–40% at −3.7° and −10.6°C). Growth predictions using various ventilation factor parameterizations were also calculated and compared with supercooled wind tunnel data. It was found that neither of the standard parameterizations used for ventilation adequately described both needle and dendrite growth; however, by choosing habit-specific ventilation factors from previous numerical work it was possible to match the experimental data in both regimes. The relationships between crystal mass, capacitance, and fall velocity were investigated based on the laboratory data. It was found that for a given crystal size the capacitance was significantly overestimated by two of the microphysics schemes considered here, yet for a given crystal mass the growth rate was underestimated by those same schemes because of unrealistic mass/size assumptions. The fall speed for a given capacitance (controlling the residence time of a crystal in the supercooled layer relative to its effectiveness as a vapor sink, and the relative importance of ventilation effects) was found to be overpredicted by all the schemes in which fallout is permitted, implying that the modeled crystals reside for too short a time within the cloud layer and that the parameterized ventilation effect is too strong.
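For reference, the capacitance-theory growth equation on which such schemes are built is, in the standard cloud-physics textbook form (not a formula quoted from the paper),

\[
\frac{dm}{dt} \;=\; \frac{4\pi\, C\,(S_i - 1)\, F}
{\dfrac{L_s}{K_a T}\left(\dfrac{L_s}{R_v T} - 1\right) + \dfrac{R_v T}{e_{si}(T)\, D_v}},
\]

where m is the crystal mass, C the capacitance (a length set by crystal shape and size), S_i the saturation ratio with respect to ice, F the ventilation factor, L_s the latent heat of sublimation, K_a the thermal conductivity of air, D_v the diffusivity of water vapor, R_v the gas constant for water vapor, and e_si(T) the saturation vapor pressure over ice.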
Abstract:
The construction market around the world has witnessed the growing eminence of construction professional services (CPSs), such as urban planning, architecture, engineering, and consultancy, while the traditional contracting sector remains strong. Nowadays, it is not uncommon to see a design firm taking over the work of a traditional main contractor in overseeing the delivery of a project, or vice versa. Although the two sectors of contracting and CPS share the same purpose of materializing the built environment, they are as different as they are interrelated. Much has been said about the nexus between the two, but little has been done to articulate it using empirical evidence. This study examined the nexus between contracting and CPS businesses by proposing and testing lead-lag effects between the two sectors in the international market. A longitudinal panel data set composed of 23 top international contractors and CPS firms was adopted. Surprisingly, results of the panel data analyses show that CPS business does not have a significant positive causal effect on contracting as a downstream business, and vice versa. CPS and contracting subsidiaries, although within the same company, do not necessarily form a consortium to undertake the same project; rather, they often collaborate with other CPS or contracting counterparts to undertake projects. This paper provides valuable insights into the sophisticated nexus between contracting and CPS in the international construction market. It will support business executives' rational decision making when selecting proper contracting or CPS allies, or a proper mergers and acquisitions strategy in the international market. The paper also provides a fresh perspective through which researchers can better investigate the diversification strategies adopted by international contracting and CPS firms.
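As a simplified illustration of the lead-lag logic (not the paper's panel-data procedure), a pairwise Granger-causality check between the two revenue series could look like the following; the file and column names are hypothetical:

import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical yearly revenue series for the two sectors (columns: cps_rev, contracting_rev).
df = pd.read_csv("cps_contracting_revenue.csv")

# Does CPS revenue lead (Granger-cause) contracting revenue?
lead = grangercausalitytests(df[["contracting_rev", "cps_rev"]], maxlag=2)
# And the reverse direction: does contracting revenue lead CPS revenue?
lag = grangercausalitytests(df[["cps_rev", "contracting_rev"]], maxlag=2)

print(lead[2][0]["ssr_ftest"])  # (F statistic, p-value, df_denom, df_num) at lag 2
print(lag[2][0]["ssr_ftest"])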
Abstract:
The two-parameter Birnbaum-Saunders distribution has been used successfully to model fatigue failure times. Although censoring is typical in reliability and survival studies, little work has been published on the analysis of censored data for this distribution. In this paper, we address the issue of performing testing inference on the two parameters of the Birnbaum-Saunders distribution under type-II right censored samples. The likelihood ratio statistic and a recently proposed statistic, the gradient statistic, provide a convenient framework for statistical inference in such a case, since they do not require one to obtain, estimate or invert an information matrix, which is an advantage in problems involving censored data. An extensive Monte Carlo simulation study is carried out in order to investigate and compare the finite-sample performance of the likelihood ratio and the gradient tests. Our numerical results show evidence that the gradient test should be preferred. Further, we also consider the generalized Birnbaum-Saunders distribution under type-II right censored samples and present some Monte Carlo simulations for testing the parameters in this class of models using the likelihood ratio and gradient tests. Three empirical applications are presented. (C) 2011 Elsevier B.V. All rights reserved.
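In the usual notation (a generic statement of the two statistics, not specific to this paper), with θ̂ the unrestricted and θ̃ the restricted maximum likelihood estimates, ℓ the log-likelihood and U the score vector, the two statistics compared are

\[
LR = 2\left[\ell(\hat{\theta}) - \ell(\tilde{\theta})\right],
\qquad
S = U(\tilde{\theta})^{\top}\,(\hat{\theta} - \tilde{\theta}),
\]

both asymptotically chi-squared with degrees of freedom equal to the number of restrictions under the null; neither requires computing, estimating or inverting the information matrix.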
Abstract:
This paper studies a smooth-transition (ST) type of cointegration. The proposed ST cointegration allows for a regime-switching structure in a cointegrated system. It nests the linear cointegration developed by Engle and Granger (1987) and the threshold cointegration studied by Balke and Fomby (1997). We develop F-type tests to examine linear cointegration against ST cointegration in ST-type cointegrating regression models with or without time trends. The null asymptotic distributions of the tests are derived with stationary transition variables in ST cointegrating regression models. It is shown that our tests have nonstandard limiting distributions expressed in terms of standard Brownian motion when regressors are pure random walks, while they have standard asymptotic distributions when regressors contain random walks with nonzero drift. Finite-sample distributions of the tests are studied by Monte Carlo simulations. The small-sample results indicate that our F-type tests have better power when the system contains ST cointegration than when the system is linearly cointegrated. An empirical example for purchasing power parity (PPP) data (monthly US dollar, Italian lira and dollar-lira exchange rate from 1973:01 to 1989:10) illustrates the testing procedures in this paper. It is found that there is no linear cointegration in the system, but there exists ST-type cointegration in the PPP data.
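A common way to write such an ST cointegrating regression, shown here with a logistic transition function as an illustration (the paper's exact specification may differ and may include time trends), is

\[
y_t = \alpha' x_t + \theta' x_t\, G(s_t;\gamma,c) + u_t,
\qquad
G(s_t;\gamma,c) = \left[1 + \exp\{-\gamma\,(s_t - c)\}\right]^{-1},
\]

where s_t is a stationary transition variable; linear cointegration corresponds to the null that the regime-dependent part vanishes (θ = 0), which is what the F-type tests examine.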
Abstract:
In a case study about viewing habits in a Swedish audience I sampled 309 questionnaires; interviews with five focus groups were conducted together with ten in-depth individual interviews, discussing altogether fifteen favorite films and exploring specific scenes of idiosyncratic relevance. The outcome supports claims about viewers as active and playful (cf. Höijer 1998, Frampton 2006, Hoover 2006, Plantinga 2009). In line with mediatization theory I also argue that spiritual meaning making takes place through mediated experiences, and I support theories about fiction films as important sources for moral and spiritual reflection (Partridge 2004, Zillman 2005, Lynch 2007, Plantinga 2009). What Hjarvard calls the soft side of mediatization processes (2008) is illustrated by adults experiencing enchantment through favorite films (Jerslev 2006, Partridge 2008, Klinger 2008, Oliver & Hartmann 2010). Vernacular meaning making embedded in everyday life and spectators dealing with fiction narratives such as Gladiator, Amelie from Montmartre or Avatar highlight the need for a more nuanced understanding of elevated cinematic experiences. The reported impact of specific movies is analyzed through theories where cognition and affect are central aspects of spectators' engagement with a film (Tan 1996, Carroll 1999, Grodal 2009). Crucially important are theories of meaning making where viewers' detailed interpretations of specific scenes are embedded in high-level meaning making in which world-view issues and spectators' moral frameworks are activated (Zillman 2005, Andersson & Andersson 2005, Frampton 2006, Lynch 2007, Avila 2007, Axelson 2008, Plantinga 2009). Also relevant are results from a growing body of empirically oriented research in film studies with an interest in what happens to the flesh-and-blood spectator exposed to filmic narratives (Jerslev 2006, Klinger 2008, Barker 2009, Suckfüll 2010, Oliver & Hartmann 2010). Analyzing the qualitative results of my case study, I want to challenge the claim that the viewer has to suspend higher-order reflective cognitive structures in order to experience suture (Butler & Palesh 2004). What I find in my empirical examples are responses related to spectators' highest levels of mental activity, all anchored in the sensual-emotional apparatus (Grodal 2009). My outcome is in line with a growing number of empirical case studies which support conclusions that both thinking and behavior are affected by film watching (Marsh 2007, Suckfüll 2010, Oliver & Hartmann 2010, Axelson forthcoming). The presentation contributes to the development of concepts that combine aesthetic, affective and cognitive components in an investigation of spectators' moves from emotional evaluation of intra-text narration to extra-textual assessments, testing the narrative for larger significance in idiosyncratic ways (Bordwell & Thompson 1997, Marsh 2007, Johnston 2007, Bruun Vaage 2009, Axelson 2011). Several useful concepts have been suggested to embrace the complex interplay between affects, cognition and emotions when individuals respond to fictional narratives. Robert K. Johnston labels it “deepening gaze” (2007: 307) and “transformative viewing” (2007: 305). Philosopher Mitch Avila proposes “high cognition” (2007: 228) and Casper Thybjerg “higher meaning” (2008: 60). Torben Grodal talks about “feelings of deep meaning” (Grodal 2009: 149).
With a nod to Clifford Geertz, Craig Detweiler adopts “thick description” (2007: 47), as does Kutter Callaway, altering it to “thick interpretations” (Callaway 2013: 203). Frampton states it as a paradox: “affective intelligence” (Frampton 2006: 166). As a result of the empirical investigation, inspired by Geertz, Detweiler and Callaway, I advocate “thick viewing” to capture the viewing process in those specific moments of film experience when profound and intensified emotional interpretations take place.

The author: As a sociologist of religion, Tomas Axelson's research deals with people's use of mediated narratives to make sense of reality in a society characterized by individualization, mediatization and pluralized world views. He explores uses of fiction film as a resource in everyday life and is currently finishing his three-year project funded by the Swedish Research Council: Spectator engagement in film and utopian self-reflexivity. Moving Images and Moved Minds. http://www.du.se/sv/AVM/Personal/Tomas-Axelson

Bibliography
Axelson, T. (forthcoming 2014). Den rörliga bildens förmåga att beröra [1]. Stockholm: Liber.
Axelson, T. (in peer review). Vernacular Meaning Making. Examples of narrative impact in fiction film questioning the 'banal' notion in mediatization theory. Nordicom Review. Göteborg: Nordicom.
Axelson, T. (2011). Människans behov av fiktion. Den rörliga bildens förmåga att beröra människan på djupet [2]. Kulturella perspektiv, Volume 2. Article retrieved from www.kultmed.umu.se/digitalAssets/74/74304_axelson-22011.pdf
Axelson, T. (2010). Narration, Visualization and Mind. Movies in everyday life as a resource for utopian self-reflection. Paper presented at CMRC, 7th Conference of Media, Religion & Culture, Toronto, Canada, 9-13 August 2010.
Axelson, T. (2008). Movies and Meaning. Studying Audience, Favourite Films and Existential Matters. Particip@tions: Journal of Audience and Reception Studies, Volume 5 (1). Doctoral dissertation summary, Acta Universitatis Upsaliensis. Article retrieved from http://www.participations.org/Volume%205/Issue%201%20-%20special/5_01_axelson.htm
[1] English translation: Moving Images and Moved Minds.
[2] English translation: Our Need for Fiction. Deeply Moved by Moving Images. Cultural Perspectives.
Abstract:
Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic on free markets, since the customer is presumed to gravitate to a facility according to the distance to it and its attractiveness. The recently introduced gravity p-median model offers an extension to the p-median model that accounts for this. The model is therefore potentially interesting, although it has not yet been implemented and tested empirically. In this paper, we have implemented the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare-parts for the purpose of investigating its superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to the p-median model or gives unstable solutions due to a non-concave objective function.
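For reference, one standard way of writing the two objectives (generic notation, shown here only to contrast the models) is

\[
\min_{J:\,|J|=p}\; \sum_i w_i \min_{j \in J} d_{ij}
\qquad\text{vs.}\qquad
\min_{J:\,|J|=p}\; \sum_i w_i \sum_{j \in J} d_{ij}\, P_{ij},
\quad
P_{ij} = \frac{A_j\, e^{-\beta d_{ij}}}{\sum_{k \in J} A_k\, e^{-\beta d_{ik}}},
\]

where w_i is the demand at customer point i, d_ij the distance from i to candidate site j, A_j the attractiveness of a facility at j, and β a distance-decay parameter. The left objective is the classical p-median (nearest-facility) model; the right is the gravity p-median, whose probabilistic allocation P_ij is what makes the objective non-concave.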
Abstract:
While the simulation of flood risks originating from the overtopping of river banks is well covered within continuously evaluated programs to improve flood protection measures, flash flooding is not. Flash floods are triggered by short, local thunderstorm cells with high precipitation intensities. Small catchments have short response times and flow paths, and convective thunderstorm cells may result in potential flooding of endangered settlements. Assessing local flooding and flood pathways requires a detailed hydraulic simulation of the surface runoff. Hydrological models usually do not represent surface runoff at this level of detail; rather, empirical equations are applied for runoff detention. In turn, 2D hydrodynamic models usually do not allow distributed rainfall as input, nor do they implement the types of soil/surface interaction found in hydrological models. Several cases of local flash flooding during recent years have raised this issue both for practical reasons and as a research topic: closing the model gap between distributed rainfall and distributed runoff formation. Therefore, a 2D hydrodynamic model solving the depth-averaged flow equations with a finite-volume discretization was extended to accept direct rainfall, enabling it to simulate the associated runoff formation. The model itself is used as the numerical engine; rainfall is introduced via the modification of water levels at fixed time intervals. The paper not only deals with the general application of the software, but also tests the numerical stability and reliability of the simulation results. The tests are made using different artificial as well as measured rainfall series as input. Key parameters of the simulation, such as losses, roughness or the time intervals for water level manipulation, are tested regarding their impact on stability.
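A minimal sketch of this coupling idea, assuming a generic cell-based model (all names, the loss term and the grid are illustrative, not the paper's implementation): rainfall is introduced by adding the rain depth accumulated over one coupling interval, minus losses, to every cell's water level before the hydrodynamic solver advances.

import numpy as np

def apply_rainfall(water_level, rain_intensity_mm_h, interval_s, loss_fraction=0.2):
    """Raise water levels by the net rain depth fallen during one coupling interval."""
    depth_m = rain_intensity_mm_h / 1000.0 / 3600.0 * interval_s   # mm/h -> m per interval
    return water_level + depth_m * (1.0 - loss_fraction)           # crude loss/infiltration term

# Example: a 50 mm/h thunderstorm cell, 60 s coupling interval, on a 100 x 100 grid.
wl = np.zeros((100, 100))
for _ in range(30):                       # 30 minutes of rain
    wl = apply_rainfall(wl, 50.0, 60.0)
    # ... here the finite-volume solver would redistribute water between cells ...
print(wl.max())                           # about 0.02 m of net rain depth added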
Abstract:
The goal of this paper is to show the possibility of a non-monotone relation between coverage and risk, which has been considered in the literature on insurance models since the work of Rothschild and Stiglitz (1976). We present an insurance model where the insured agents have heterogeneity in risk aversion and in lenience (a prevention cost parameter). Risk aversion is described by a continuous parameter which is correlated with lenience and, for the sake of simplicity, we assume perfect correlation. In the case of positive correlation, the more risk averse agent has a higher cost of prevention, leading to a higher demand for coverage. Equivalently, the single crossing property (SCP) is valid and implies a positive correlation between coverage and risk in equilibrium. On the other hand, if the correlation between risk aversion and lenience is negative, not only may the SCP be broken, but also the monotonicity of contracts, i.e., the prediction that high (low) risk averse types choose full (partial) insurance. In both cases riskiness is monotonic in risk aversion, but in the latter case there are some coverage levels associated with two different risks (low and high), which implies that the ex-ante (with respect to the risk aversion distribution) correlation between coverage and riskiness may have either sign (even though the ex-post correlation is always positive). Moreover, using another instrument (a proxy for riskiness), we give a testable implication to disentangle single crossing from non single crossing under an ex-post zero correlation result: the monotonicity of coverage as a function of riskiness. Since, controlling for risk aversion (no asymmetric information), coverage is a monotone function of riskiness, this also gives a test for asymmetric information. Finally, we relate these theoretical results to empirical tests in the recent literature, especially the work of Dionne, Gouriéroux and Vanasse (2001). In particular, they found empirical evidence that seems to be compatible with asymmetric information and non single crossing in our framework. More generally, we build a hidden information model showing how omitted variables (asymmetric information) can bias the sign of the correlation of equilibrium variables conditioning on all observable variables. We show that this may be the case when the omitted variables have a non-monotonic relation with the observable ones. Moreover, because this non-monotonic relation is deeply related to the failure of the SCP in one-dimensional screening problems, the existing literature on asymmetric information does not capture this feature. Hence, our main result is to point out the importance of the SCP in testing predictions of hidden information models.
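For reference, the single crossing (Spence-Mirrlees) property referred to here can be stated in a generic textbook form (not the paper's exact notation): if a type-θ agent has preferences U(q, p; θ) over coverage q and premium p, SCP requires that the marginal willingness to pay for coverage be monotone in the type,

\[
\frac{\partial}{\partial \theta}\left(-\,\frac{\partial U/\partial q}{\partial U/\partial p}\right) > 0,
\]

so that indifference curves of any two types cross at most once and higher types demand weakly more coverage in equilibrium; the paper's point is that with negatively correlated risk aversion and lenience this condition can fail.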
Abstract:
It is well known that cointegration between the levels of two variables (labeled Yt and yt in this paper) is a necessary condition to assess the empirical validity of a present-value model (PV and PVM, respectively, hereafter) linking them. The work on cointegration has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model is orthogonal to the past. The basis of this result is the use of rational expectations in forecasting future values of variables in the PVM. If this condition fails, the present-value equation will not be valid, since it will contain an additional term capturing the (non-zero) conditional expected value of future error terms. Our article has a few novel contributions, but two stand out. First, in testing for PVMs, we advise splitting the restrictions implied by PV relationships into orthogonality conditions (or reduced rank restrictions) before additional tests on the values of parameters. We show that PV relationships entail a weak-form common feature relationship as in Hecq, Palm, and Urbain (2006) and in Athanasopoulos, Guillén, Issler and Vahid (2011), and also a polynomial serial-correlation common feature relationship as in Cubadda and Hecq (2001), which represent restrictions on dynamic models that allow several tests for the existence of PV relationships to be used. Because these relationships occur mostly with financial data, we propose tests based on generalized method of moments (GMM) estimates, where it is straightforward to propose robust tests in the presence of heteroskedasticity. We also propose a robust Wald test developed to investigate the presence of reduced rank models. Their performance is evaluated in a Monte Carlo exercise. Second, in the context of asset pricing, we propose applying a permanent-transitory (PT) decomposition based on Beveridge and Nelson (1981), which focuses on extracting the long-run component of asset prices, a key concept in modern financial theory as discussed in Alvarez and Jermann (2005), Hansen and Scheinkman (2009), and Nieuwerburgh, Lustig, and Verdelhan (2010). Here again we can exploit the results developed in the common cycle literature to easily extract permanent and transitory components under both long- and short-run restrictions. The techniques discussed herein are applied to long-span annual data on long- and short-term interest rates and on prices and dividends for the U.S. economy. In both applications we do not reject the existence of a common cyclical feature vector linking these two series. Extracting the long-run component shows the usefulness of our approach and highlights the presence of asset-pricing bubbles.
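In one standard formulation (generic notation, not necessarily the paper's), the PVM links the two series through

\[
Y_t = \theta(1-\lambda)\sum_{i=0}^{\infty} \lambda^{i}\, \mathbb{E}_t\, y_{t+i},
\qquad 0<\lambda<1,
\]

which implies the one-period recursion Y_t = θ(1-λ) y_t + λ E_t Y_{t+1}. The orthogonality condition referred to above is that the implied forecast error

\[
u_{t+1} = Y_{t+1} - \frac{1}{\lambda}\bigl[\,Y_t - \theta(1-\lambda)\,y_t\,\bigr]
\]

satisfies E[u_{t+1} z_t] = 0 for any variable z_t in the information set at time t, which is exactly the kind of moment condition a GMM-based test can exploit.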
Abstract:
The objective of this paper is to test for the optimality of consumption decisions at the aggregate level (representative consumer), taking into account popular deviations from the canonical CRRA utility model: rule-of-thumb and habit behavior. First, we show that rule-of-thumb behavior in consumption is observationally equivalent to behavior obtained from the optimizing model of King, Plosser and Rebelo (Journal of Monetary Economics, 1988), casting doubt on how reliable standard rule-of-thumb tests are. Second, although Carroll (2001) and Weber (2002) have criticized the linearization and testing of Euler equations for consumption, we provide a deeper critique directly applicable to current rule-of-thumb tests. Third, we show that there is no reason why return aggregation cannot be performed in the nonlinear setting of the Asset-Pricing Equation, since the latter is a linear function of individual returns. Fourth, aggregation of the nonlinear Euler equation forms the basis of a novel test of deviations from the canonical CRRA model of consumption in the presence of rule-of-thumb and habit behavior. We estimated 48 Euler equations using GMM, with encouraging results vis-à-vis the optimality of consumption decisions. At the 5% level, we only rejected optimality twice out of 48 times. The empirical test results show that we can still rely on the canonical CRRA model so prevalent in macroeconomics: out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant at the 5% level only twice, and the habit parameter to be statistically significant on four occasions. The main message of this paper is that proper return aggregation is critical to studying intertemporal substitution in a representative-agent framework. In this case, we find little evidence of a lack of optimality in consumption decisions, and deviations from the CRRA utility model along the lines of rule-of-thumb behavior and habit in preferences represent the exception, not the rule.
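For the canonical CRRA benchmark referred to above, the nonlinear Euler equation (Asset-Pricing Equation) has the familiar form (standard notation; the paper's encompassing specification adds habit and rule-of-thumb terms to this benchmark):

\[
\mathbb{E}_t\!\left[\beta\left(\frac{C_{t+1}}{C_t}\right)^{-\gamma} R_{j,t+1}\right] = 1,
\]

where C_t is aggregate consumption, R_{j,t+1} the gross return on asset j, β the discount factor and γ the coefficient of relative risk aversion; GMM moment conditions are obtained by multiplying the bracketed term minus one by instruments dated t or earlier.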
Abstract:
This paper tests the optimality of consumption decisions at the aggregate level taking into account popular deviations from the canonical constant-relative-risk-aversion (CRRA) utility model: rule of thumb and habit. First, based on the critique in Carroll (2001) and Weber (2002) of the linearization and testing strategies using Euler equations for consumption, we provide extensive empirical evidence of their inappropriateness, a drawback for standard rule-of-thumb tests. Second, we propose a novel approach to test for consumption optimality in this context: nonlinear estimation coupled with return aggregation, where rule-of-thumb behavior and habit are special cases of an all-encompassing model. We estimated 48 Euler equations using GMM. At the 5% level, we only rejected optimality twice out of 48 times. Moreover, out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant only twice. Hence, lack of optimality in consumption decisions represents the exception, not the rule. Finally, we found the habit parameter to be statistically significant on four occasions out of 24.
Abstract:
The goal of this paper is to show the possibility of a non-monotone relation between coverage and risk, which has been considered in the literature on insurance models since the work of Rothschild and Stiglitz (1976). We present an insurance model where the insured agents have heterogeneity in risk aversion and in lenience (a prevention cost parameter). Risk aversion is described by a continuous parameter which is correlated with lenience and, for the sake of simplicity, we assume perfect correlation. In the case of positive correlation, the more risk averse agent has a higher cost of prevention, leading to a higher demand for coverage. Equivalently, the single crossing property (SCP) is valid and implies a positive correlation between coverage and risk in equilibrium. On the other hand, if the correlation between risk aversion and lenience is negative, not only may the SCP be broken, but also the monotonicity of contracts, i.e., the prediction that high (low) risk averse types choose full (partial) insurance. In both cases riskiness is monotonic in risk aversion, but in the latter case there are some coverage levels associated with two different risks (low and high), which implies that the ex-ante (with respect to the risk aversion distribution) correlation between coverage and riskiness may have either sign (even though the ex-post correlation is always positive). Moreover, using another instrument (a proxy for riskiness), we give a testable implication to disentangle single crossing from non single crossing under an ex-post zero correlation result: the monotonicity of coverage as a function of riskiness. Since, controlling for risk aversion (no asymmetric information), coverage is a monotone function of riskiness, this also gives a test for asymmetric information. Finally, we relate these theoretical results to empirical tests in the recent literature, especially the work of Dionne, Gouriéroux and Vanasse (2001). In particular, they found empirical evidence that seems to be compatible with asymmetric information and non single crossing in our framework. More generally, we build a hidden information model showing how omitted variables (asymmetric information) can bias the sign of the correlation of equilibrium variables conditioning on all observable variables. We show that this may be the case when the omitted variables have a non-monotonic relation with the observable ones. Moreover, because this non-monotonic relation is deeply related to the failure of the SCP in one-dimensional screening problems, the existing literature on asymmetric information does not capture this feature. Hence, our main result is to point out the importance of the SCP in testing predictions of hidden information models.