820 results for "variable interest entity"
Abstract:
Bayesian analysis is given of an instrumental variable model that allows for heteroscedasticity in both the structural equation and the instrument equation. Specifically, the approach for dealing with heteroscedastic errors in Geweke (1993) is extended to the Bayesian instrumental variable estimator outlined in Rossi et al. (2005). Heteroscedasticity is treated by modelling the variance for each error using a hierarchical prior that is Gamma distributed. The computation is carried out by using a Markov chain Monte Carlo sampling algorithm with an augmented draw for the heteroscedastic case. An example using real data illustrates the approach and shows that ignoring heteroscedasticity in the instrument equation when it exists may lead to biased estimates.
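The augmented draw described above can be sketched for the heteroscedastic case. Under Geweke's (1993) hierarchical prior, each error has variance σ²λᵢ with ν/λᵢ ~ χ²(ν), so conditional on a residual eᵢ the update (ν + eᵢ²/σ²)/λᵢ ~ χ²(ν+1) applies. The function name, arguments, and default seed below are illustrative, not from the paper:

```python
import random

def draw_lambdas(residuals, sigma2, nu, rng=None):
    """One augmented Gibbs-style draw of the per-observation variance
    inflation factors lambda_i under the hierarchical prior
    nu/lambda_i ~ chi-squared(nu), as in Geweke (1993).

    Conditional on residual e_i: (nu + e_i**2/sigma2)/lambda_i ~ chi2(nu+1).
    """
    rng = rng or random.Random(0)
    lambdas = []
    for e in residuals:
        # chi-squared(k) draw via Gamma(k/2, scale=2)
        chi2_draw = 2.0 * rng.gammavariate((nu + 1) / 2.0, 1.0)
        lambdas.append((nu + e * e / sigma2) / chi2_draw)
    return lambdas
```

In a full sampler this draw would alternate with the structural- and instrument-equation parameter draws of Rossi et al. (2005), with a separate set of λᵢ for each equation.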
Abstract:
We test whether there are nonlinearities in the response of short- and long-term interest rates to the spread in interest rates, and assess the out-of-sample predictability of interest rates using linear and nonlinear models. We find strong evidence of nonlinearities in the response of interest rates to the spread. Nonlinearities are shown to result in more accurate short-horizon forecasts, especially of the spread.
Abstract:
The hypothesis that pronouns can be resolved via either the syntax or the discourse representation has played an important role in linguistic accounts of pronoun interpretation (e.g. Grodzinsky & Reinhart, 1993). We report the results of an eye-movement monitoring study investigating the relative timing of syntactically-mediated variable binding and discourse-based coreference assignment during pronoun resolution. We examined whether ambiguous pronouns are preferentially resolved via either the variable binding or coreference route, and in particular tested the hypothesis that variable binding should always be computed before coreference assignment. Participants’ eye movements were monitored while they read sentences containing a pronoun and two potential antecedents, a c-commanding quantified noun phrase and a non c-commanding proper name. Gender congruence between the pronoun and either of the two potential antecedents was manipulated as an experimental diagnostic for dependency formation. In two experiments, we found that participants’ reading times were reliably longer when the linearly closest antecedent mismatched in gender with the pronoun. These findings fail to support the hypothesis that variable binding is computed before coreference assignment, and instead suggest that antecedent recency plays an important role in affecting the extent to which a variable binding antecedent is considered. We discuss these results in relation to models of memory retrieval during sentence comprehension, and interpret the antecedent recency preference as an example of forgetting over time.
Abstract:
This paper considers the effect of short- and long-term interest rates, and interest rate spreads, upon real estate index returns in the UK. Using Johansen's vector autoregressive framework, it is found that the real estate index cointegrates with the term spread, but not with the short or long rates themselves. Granger causality tests indicate that movements in short-term interest rates and the spread cause movements in the returns series. However, decomposition of the forecast error variances from VAR models indicates that changes in these variables can explain only a small proportion of the overall variability of the returns, and that the effect has fully worked through after two months. The results suggest that these financial variables could potentially be used as leading indicators for real estate markets, with corresponding implications for return predictability.
Abstract:
Despite the increasing number of studies examining the correlates of interest and boredom, surprisingly little research has focused on within-person fluctuations in these emotions, making it difficult to describe their situational nature. To address this gap in the literature, this study conducted repeated measurements (12 times) on a sample of 158 undergraduate students using a variety of self-report assessments, and examined the within-person relationships between task-specific perceptions (expectancy, utility, and difficulty) and interest and boredom. This study further explored the role of achievement goals in predicting between-person differences in these within-person relationships. Utilizing hierarchical-linear modeling, we found that, on average, a higher perception of both expectancy and utility, as well as a lower perception of difficulty, was associated with higher interest and lower boredom levels within individuals. Moreover, mastery-approach goals weakened the negative within-person relationship between difficulty and interest and the negative within-person relationship between utility and boredom. Mastery-avoidance and performance-avoidance goals strengthened the negative relationship between expectancy and boredom. These results suggest how educators can more effectively instruct students with different types of goals, minimizing boredom and maximizing interest and learning.
Abstract:
Over Arctic sea ice, pressure ridges and floe and melt pond edges all introduce discrete obstructions to the flow of air or water past the ice and are a source of form drag. In current climate models form drag is accounted for only by tuning the air–ice and ice–ocean drag coefficients, that is, by effectively altering the roughness length in a surface drag parameterization. This existing approach of tuning the skin drag parameter is poorly constrained by observations and fails to describe correctly the physics associated with air–ice and ocean–ice drag. Here, the authors combine recent theoretical developments to deduce the total neutral form drag coefficients from properties of the ice cover, such as ice concentration, the vertical extent and area of ridges, freeboard and floe draft, and the size of floes and melt ponds. The drag coefficients are incorporated into the Los Alamos Sea Ice Model (CICE) to show the influence of the new drag parameterization on the motion and state of the ice cover, the most noticeable effect being a depletion of sea ice along the western boundary of the Arctic Ocean and over the Beaufort Sea. The new parameterization allows the drag coefficients to be coupled to the sea ice state and therefore to evolve spatially and temporally. The range of values predicted for the drag coefficients agrees with the range measured in several regions of the Arctic. Finally, the implications of the new form drag formulation for the spin-up or spin-down of the Arctic Ocean are discussed.
Abstract:
Data assimilation (DA) systems are evolving to meet the demands of convection-permitting models in the field of weather forecasting. On 19 April 2013 a special interest group meeting of the Royal Meteorological Society brought together UK researchers looking at different aspects of the data assimilation problem at high resolution, from theory to applications, and researchers creating our future high resolution observational networks. The meeting was chaired by Dr Sarah Dance of the University of Reading and Dr Cristina Charlton-Perez from the MetOffice@Reading. The purpose of the meeting was to help define the current state of high resolution data assimilation in the UK. The workshop assembled three main types of scientists: observational network specialists, operational numerical weather prediction researchers and those developing the fundamental mathematical theory behind data assimilation and the underlying models. These three working areas are intrinsically linked; therefore, a holistic view must be taken when discussing the potential to make advances in high resolution data assimilation.
Abstract:
Whilst common sense knowledge has been well researched in terms of intelligence and (in particular) artificial intelligence, specific, factual knowledge also plays a critical part in practice. When it comes to testing for intelligence, testing for factual knowledge is, in everyday life, frequently used as a front-line tool. This paper presents new results which were the outcome of a series of practical Turing tests held on 23rd June 2012 at Bletchley Park, England. The focus of this paper is on the employment of specific knowledge testing by interrogators. Of interest are prejudiced assumptions made by interrogators as to what they believe should be widely known, and subsequently the conclusions drawn if an entity does or does not appear to know a particular fact known to the interrogator. The paper is not at all about the performance of machines or hidden humans, but rather about the strategies based on assumptions of Turing test interrogators. Full, unedited transcripts from the tests are shown for the reader as working examples. As a result, it might be possible to draw critical conclusions with regard to the nature of human concepts of intelligence, in terms of the role played by specific, factual knowledge in our understanding of intelligence, whether this is exhibited by a human or a machine. This is specifically intended as a position paper: firstly, it claims that practicalising Turing's test is a useful exercise throwing light on how we humans think; secondly, it takes a potentially controversial stance, because some interrogators adopt a solipsist style of questioning hidden entities, judging an entity to be a thinking, intelligent human if it thinks like them and knows what they know. The paper is aimed at opening discussion with regard to the different aspects considered.
Abstract:
The analysis step of the (ensemble) Kalman filter is optimal when (1) the distribution of the background is Gaussian, (2) state variables and observations are related via a linear operator, and (3) the observational error is additive in nature and has a Gaussian distribution. When these conditions are strongly violated, a pre-processing step known as Gaussian anamorphosis (GA) can be applied. The objective of this procedure is to obtain state variables and observations that better fulfil the Gaussianity conditions in some sense. In this work we analyse GA from a joint perspective, paying attention to the effects of transformations in the joint state variable/observation space. First, we study transformations for state variables and observations that are independent of each other. Then, we introduce a targeted joint transformation with the objective of obtaining joint Gaussianity in the transformed space. We focus primarily on the univariate case, and briefly comment on the multivariate one. A key point of this paper is that, when (1)-(3) are violated, the analysis step of the EnKF will not recover the exact posterior density regardless of any transformations one may perform. These transformations do, however, provide approximations of varying quality to the Bayesian solution of the problem. Using an example in which the Bayesian posterior can be computed analytically, we assess the quality of the analysis distributions generated after applying the EnKF analysis step in conjunction with different GA options. The value of the targeted joint transformation is particularly clear for the case when the prior is Gaussian, the marginal density for the observations is close to Gaussian, and the likelihood is a Gaussian mixture.
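A common univariate construction of a Gaussian anamorphosis, mapping each sample to the standard normal quantile of its empirical CDF rank, can be sketched as follows. This is a generic empirical transform, not the paper's targeted joint transformation, and the function name and plotting-position choice are illustrative:

```python
from statistics import NormalDist

def gaussian_anamorphosis(values):
    """Empirical univariate Gaussian anamorphosis: replace each value by
    the standard normal quantile of its empirical CDF plotting position,
    so the transformed sample is (approximately) standard Gaussian."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    nd = NormalDist()
    z = [0.0] * n
    for rank, i in enumerate(order):
        p = (rank + 0.5) / n      # plotting position, kept inside (0, 1)
        z[i] = nd.inv_cdf(p)
    return z
```

Because the map is monotone, it preserves rank order; applying it separately to state variables and observations corresponds to the independent transformations discussed above, whereas the targeted joint transformation acts on the state/observation pair jointly.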
Abstract:
Although no GM crops currently are licensed for commercial production in the UK, as opposition to GM crops by consumers softens, this could change quickly. Although past studies have examined attitudes of UK farmers toward GM technologies in general, there has been little work on the impact of possible coexistence measures on their attitudes toward GM crop production. This could be because the UK Government has not engaged in any public dialogue on the coexistence measures that might be applied on farms. Based on a farm survey, this article examines farmers’ attitudes toward GM technologies and planting intentions for three crops (maize, oilseed rape, and sugar beet) based on a GM availability scenario. The article then nuances this analysis with a review of farmer perceptions of the level of constraint associated with a suite of notional farm-level coexistence measures and issues, based on current European Commission guidelines and practice in other EU Member States.
Abstract:
Discussion of the national interest often focuses on how Britain's influence can be maximized, rather than on the goals that influence serves. Yet what gives content to claims about the national interest is the means-ends reasoning which links interests to deeper goals. In ideal-typical terms, this can take two forms. The first, and more common, approach is conservative: it infers national interests and the goals they advance from existing policies and commitments. The second is reformist: it starts by specifying national goals and then asks how they are best advanced under particular conditions. New Labour's foreign policy discourse is notable for its explicit use of a reformist approach. Indeed, Gordon Brown's vision of a 'new global society' not only identifies global reform as a key means of fulfilling national goals, but also thereby extends the concept of the national interest well beyond a narrow concern with national security.
Abstract:
A procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) is proposed for operational rainfall estimation using rain gauges and radar data. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on Barnes' objective analysis scheme (OAS), whose purpose is to estimate a filtered spatial field of the variable of interest through a successive correction of residuals resulting from a Gaussian kernel smoother applied to spatial samples. The CMA-OAS first poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level. The procedure is suited to relatively sparse rain gauge networks. To illustrate the procedure, six storms are analyzed at hourly steps over 10,663 km². Results generally indicate improved quality with respect to the other methods evaluated: a standard mean-field bias adjustment, a spatially variable adjustment with multiplicative factors, and ordinary cokriging.
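The successive-correction core of Barnes' OAS can be sketched in a standard form: each pass adds a Gaussian-kernel-weighted average of the current station residuals to the analysis, then sharpens the kernel. This is the generic scheme the CMA-OAS builds on, without the paper's bias decomposition or regionalization; function names, the kernel parameter `kappa`, and the convergence factor `gamma` are illustrative:

```python
import math

def barnes_analysis(stations, grid, kappa, gamma=0.3, passes=2):
    """Barnes successive-correction objective analysis (standard form).
    stations: list of (x, y, value) samples; grid: list of (x, y) points.
    Each pass adds a Gaussian-weighted correction from the residuals,
    then shrinks the kernel width by gamma to recover finer scales."""
    def correction(points, resids, k):
        out = []
        for (px, py) in points:
            num = den = 0.0
            for (sx, sy, _), r in zip(stations, resids):
                w = math.exp(-((px - sx) ** 2 + (py - sy) ** 2) / k)
                num += w * r
                den += w
            out.append(num / den if den > 0 else 0.0)
        return out

    grid_field = [0.0] * len(grid)
    stn_field = [0.0] * len(stations)
    resids = [v for (_, _, v) in stations]
    stn_pts = [(x, y) for (x, y, _) in stations]
    k = kappa
    for _ in range(passes):
        grid_field = [g + c for g, c in zip(grid_field, correction(grid, resids, k))]
        stn_field = [s + c for s, c in zip(stn_field, correction(stn_pts, resids, k))]
        resids = [v - s for (_, _, v), s in zip(stations, stn_field)]
        k *= gamma  # sharpen the kernel on subsequent passes
    return grid_field
```

In the CMA-OAS the residuals entering such a scheme would carry the locally estimated multiplicative-additive radar bias corrections rather than raw gauge values.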
Abstract:
A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In remoter areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, since water level observations (WLOs) at the flood boundary can then be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.