924 results for Specification
Abstract:
The assimilation of measurements from the stratosphere and mesosphere is becoming increasingly common as the lids of weather prediction and climate models rise into the mesosphere and thermosphere. However, the dynamics of the middle atmosphere pose specific challenges to the assimilation of measurements from this region. Forecast-error variances can be very large in the mesosphere, and this can render assimilation schemes very sensitive to the details of the specification of forecast-error correlations. An example is shown where observations in the stratosphere are able to produce increments in the mesosphere. Such sensitivity of the assimilation scheme to misspecification of covariances can also amplify any existing biases in measurements or forecasts. Since both models and measurements of the middle atmosphere are known to have biases, the separation of these sources of bias remains an issue. Finally, well-known deficiencies of assimilation schemes, such as the production of imbalanced states or the assumption of zero bias, are proposed as explanations for the inaccurate transport resulting from assimilated winds. The inability of assimilated winds to accurately transport constituents in the middle atmosphere remains a fundamental issue limiting the use of assimilated products for applications involving longer time-scales.
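As a rough illustration of how a specified background-error covariance can spread a stratospheric observation into the mesosphere (a minimal sketch of a generic optimal-interpolation update with assumed variances and correlation, not the operational scheme discussed above):

```python
import numpy as np

# Minimal sketch (not the paper's scheme): a one-observation optimal-
# interpolation update on a two-level column (stratosphere, mesosphere),
# showing how the specified background-error covariance B spreads a
# stratospheric innovation into a mesospheric increment.

sigma_strat, sigma_meso = 1.0, 10.0   # assumed background-error std devs
rho = 0.5                             # assumed strat-meso error correlation
B = np.array([[sigma_strat**2, rho * sigma_strat * sigma_meso],
              [rho * sigma_strat * sigma_meso, sigma_meso**2]])

H = np.array([[1.0, 0.0]])            # one observation of the stratospheric level
R = np.array([[0.5**2]])              # assumed observation-error variance
innovation = np.array([2.0])          # y - H x_b: observation minus background

# Kalman gain K = B H^T (H B H^T + R)^{-1}; increment dx = K * innovation
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
dx = (K @ innovation).ravel()
print(f"stratospheric increment: {dx[0]:+.2f}")
print(f"mesospheric increment:  {dx[1]:+.2f}")
```

With the large assumed mesospheric variance and a sizeable cross-level correlation, the 2-unit stratospheric innovation produces a mesospheric increment several times larger than the stratospheric one, which is the kind of sensitivity to covariance specification the abstract describes.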
Abstract:
This paper investigates the acquisition of syntax in L2 grammars. We tested adult L2 speakers of Spanish (English L1) on the feature specification of T(ense), which differs between English and Spanish in so-called subject-to-subject raising structures. We present experimental results with the verb parecer "to seem/to appear" in different tenses, with and without experiencers, and with Tense Phrase (TP), verb phrase (vP) and Adjectival Phrase (AP) complements. The results show that advanced L2 learners can perform just like native Spanish speakers with respect to grammatical knowledge in this domain, even though the subtle differences between the two languages are not explicitly taught. We argue that these results support Full Access approaches to Universal Grammar (UG) in L2 acquisition by providing evidence that uninterpretable syntactic features can be learned in adult L2, even when such features are not directly instantiated in the same grammatical domain in the L1 grammar.
Abstract:
Many macroeconomic series, such as U.S. real output growth, are sampled quarterly, although potentially useful predictors are often observed at a higher frequency. We examine whether a mixed data-frequency sampling (MIDAS) approach can improve forecasts of output growth. The MIDAS specification used in the comparison incorporates an autoregressive term in a novel way. We find that using monthly data on the current quarter leads to significant improvements in forecasting current and next-quarter output growth, and that MIDAS is an effective way to exploit monthly data compared with alternative methods.
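A generic AR-augmented MIDAS regression of the kind described (illustrative notation only; the paper's exact specification may differ) can be written as:

```latex
% Sketch of an AR-augmented MIDAS regression (illustrative notation):
% quarterly growth y_t on lagged growth and weighted high-frequency lags
% of a monthly indicator x, with m = 3 months per quarter.
\[
  y_t = \mu + \lambda\, y_{t-1}
        + \beta \sum_{k=0}^{K} w_k(\theta)\, x^{(m)}_{t - k/m}
        + \varepsilon_t ,
  \qquad
  w_k(\theta) = \frac{\exp(\theta_1 k + \theta_2 k^2)}
                     {\sum_{j=0}^{K} \exp(\theta_1 j + \theta_2 j^2)} ,
\]
% The exponential Almon weights w_k(theta) let a small number of
% parameters govern a long distribution of high-frequency lags.
```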
Abstract:
Proneural genes such as Ascl1 are known to promote cell cycle exit and neuronal differentiation when expressed in neural progenitor cells. However, the mechanisms by which proneural genes activate neurogenesis, and in particular the genes that they regulate, are mostly unknown. We performed a genome-wide characterization of the transcriptional targets of Ascl1 in the embryonic brain and in neural stem cell cultures by location analysis and by expression profiling of embryos overexpressing or mutant for Ascl1. The wide range of molecular and cellular functions represented among these targets suggests that Ascl1 directly controls the specification of neural progenitors as well as the later steps of neuronal differentiation and neurite outgrowth. Surprisingly, Ascl1 also regulates the expression of a large number of genes involved in cell cycle progression, including canonical cell cycle regulators and oncogenic transcription factors. Mutational analysis in the embryonic brain and manipulation of Ascl1 activity in neural stem cell cultures revealed that Ascl1 is indeed required for normal proliferation of neural progenitors. This study identified a novel and unexpected activity of the proneural gene Ascl1 and revealed a direct molecular link between the phase of expansion of neural progenitors and the subsequent phases of cell cycle exit and neuronal differentiation.
Abstract:
The past few years have seen major advances in the field of NSC (neural stem cell) research, with increasing emphasis on its application in cell-replacement therapy for neurological disorders. However, the clinical application of NSCs will remain largely unfeasible until a comprehensive understanding of the cellular and molecular mechanisms of NSC fate specification is achieved. With this understanding will come an increased possibility to exploit the potential of stem cells in order to manufacture transplantable NSCs able to provide a safe and effective therapy for previously untreatable neurological disorders. Since the pathology of each of these disorders is determined by the loss or damage of a specific neural cell population, it may be necessary to generate a range of NSCs able to replace specific neurons or glia rather than generating a generic NSC population. Currently, a diverse range of strategies is being investigated with this goal in mind. In this review, we focus on the relationship between NSC specification and differentiation and discuss how this information may be used to direct NSCs towards a particular fate.
Abstract:
We model strategic interaction in a differentiated input market as a game between two suppliers and n retailers. Each of the upstream firms chooses the specification of the input it will offer. Then, retailers choose their type from a continuum of possibilities. The decisions made in the first two stages affect the degree of compatibility between each retailer's ideal input specification and that of the inputs offered by the two upstream firms. In a third stage, the upstream firms compete by setting input prices. Equilibrium may be of the two-vendor-policy type or of the technological-monopoly type.
Abstract:
In a symmetric differentiated experimental oligopoly with multiproduct firms, we test the predictive power of the corresponding Bertrand-Nash equilibria. Subjects are not informed of the specification of the underlying demand model. In the presence of intense multiproduct activity, and provided that a parallel pricing rule is imposed on multiproduct firms, strategies tend to confirm the non-cooperative multiproduct solution.
Abstract:
We propose, first, a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using principal component analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects' average risk taking and their sensitivity to variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared with low-risk lotteries, compatible with the over- (under-) weighting of small (large) probabilities predicted by PT; and gender differences, i.e. males being consistently less risk averse than females, but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635). Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity to variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example.

In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and the within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion, and that only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking: in all treatments, females prefer safer lotteries than males do. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments than in low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, by contrast, we find that the effect of incorporating losses into the outcomes is not so clear: at the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, while at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk-return is more affected by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even more complex future descriptions of human attitudes towards risk.
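As a toy illustration of the two-dimensional reduction described above (simulated choices and an assumed panel design, not the authors' data or code):

```python
import numpy as np
from sklearn.decomposition import PCA

# Minimal sketch, not the authors' code: recover two dimensions of risk
# attitude from lottery-panel choices. Rows are subjects; columns are the
# riskiness of the option chosen in each of several panels, where panels
# differ in their risk-return (risk-premium) level.

rng = np.random.default_rng(0)
n_subjects, n_panels = 200, 4
risk_premium = np.linspace(0.1, 0.4, n_panels)   # assumed panel design

# Simulate choices from two latent traits: average risk taking (a) and
# sensitivity to the risk premium (b).
a = rng.normal(0.4, 0.15, size=(n_subjects, 1))
b = rng.normal(1.0, 0.30, size=(n_subjects, 1))
choices = a + b * risk_premium + rng.normal(0, 0.05, (n_subjects, n_panels))

pca = PCA(n_components=2)
scores = pca.fit_transform(choices)
print("explained variance ratios:", pca.explained_variance_ratio_)
print("subject 0 component scores:", scores[0])
# Roughly, the first component reflects the overall level of risk taking
# and the second the slope across panels, i.e. responsiveness to
# increases in the risk premium.
```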
Abstract:
Often, firms have no information about the specification of the true demand model they face. It is, however, a well-established fact that they may use trial-and-error algorithms in order to learn how to make optimal decisions. Using experimental methods, we identify a property of the information on past actions which helps the seller of two asymmetric demand substitutes to reach the optimal prices faster and more precisely. The property concerns the possibility of disaggregating changes in each product's demand into client exit/entry and shifts from one product to the other.
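A minimal sketch of such a trial-and-error pricing rule (assumed linear demands and a simple keep-if-better step rule, not the experimental design of the paper):

```python
import random

# Minimal sketch, not the paper's design: a seller of two substitute
# products adjusts prices by trial and error, keeping a price change
# whenever it raises total profit and reversing it otherwise.

def demand(p1, p2):
    # Assumed linear demands for two asymmetric substitutes.
    q1 = max(0.0, 100 - 2.0 * p1 + 1.0 * p2)
    q2 = max(0.0, 80 - 1.5 * p2 + 0.5 * p1)
    return q1, q2

def profit(p1, p2, c1=10.0, c2=8.0):
    q1, q2 = demand(p1, p2)
    return (p1 - c1) * q1 + (p2 - c2) * q2

p = [20.0, 20.0]
best = profit(*p)
for _ in range(5000):
    i = random.randrange(2)             # pick one product at random
    step = random.choice([-1.0, 1.0])   # try a small price change
    p[i] += step
    trial = profit(*p)
    if trial >= best:
        best = trial                    # keep improving moves
    else:
        p[i] -= step                    # undo harmful moves
print(f"prices ~ {p[0]:.1f}, {p[1]:.1f}; profit ~ {best:.0f}")
```

Under the paper's finding, giving the seller disaggregated feedback (exit/entry versus product switching) would sharpen the profit signal used in the keep-or-undo step.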
Abstract:
We present a new Bayesian econometric specification for a hypothetical Discrete Choice Experiment (DCE) incorporating respondent ranking information about attribute importance. Our results indicate that a DCE debriefing question that asks respondents to rank the importance of attributes helps to explain the resulting choices. We also examine how the mode of survey delivery (online or mail) impacts model performance, finding that results are not substantively affected by the mode of delivery. We conclude that the ranking data are a complementary source of information about respondent utility functions within hypothetical DCEs.
Abstract:
The dependence of the annual mean tropical precipitation on horizontal resolution is investigated in the atmospheric version of the Hadley Centre General Environment Model (HadGEM1). Reducing the grid spacing from about 350 km to 110 km improves the precipitation distribution in most of the tropics. In particular, characteristic dry biases over South and Southeast Asia including the Maritime Continent as well as wet biases over the western tropical oceans are reduced. The annual-mean precipitation bias is reduced by about one third over the Maritime Continent and the neighbouring ocean basins associated with it via the Walker circulation. Sensitivity experiments show that much of the improvement with resolution in the Maritime Continent region is due to the specification of better resolved surface boundary conditions (land fraction, soil and vegetation parameters) at the higher resolution. It is shown that in particular the formulation of the coastal tiling scheme may cause resolution sensitivity of the mean simulated climate. The improvement in the tropical mean precipitation in this region is not primarily associated with the better representation of orography at the higher resolution, nor with changes in the eddy transport of moisture. Sizeable sensitivity to changes in the surface fields may be one of the reasons for the large variation of the mean tropical precipitation distribution seen across climate models.
Abstract:
We analyse by simulation the impact of model-selection strategies (sometimes called pre-testing) on forecast performance in both constant- and non-constant-parameter processes. Restricted, unrestricted and selected models are compared when either of the first two might generate the data. We find little evidence that strategies such as general-to-specific induce significant over-fitting, or thereby cause forecast-failure rejection rates to greatly exceed nominal sizes. Parameter non-constancies put a premium on correct specification, but in general, model-selection effects appear to be relatively small, and progressive research is able to detect the mis-specifications.
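For concreteness, a minimal sketch of one general-to-specific selection strategy of the kind studied (an assumed backward-elimination rule on t-statistics, not the paper's simulation design):

```python
import numpy as np
import statsmodels.api as sm

# Minimal sketch, not the paper's design: start from the general model
# and repeatedly drop the least significant regressor until every
# remaining |t|-statistic exceeds the critical value.

def gets_select(y, X, crit=1.96):
    cols = list(range(X.shape[1]))
    while cols:
        fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
        tvals = np.abs(fit.tvalues[1:])   # skip the constant term
        worst = int(np.argmin(tvals))
        if tvals[worst] >= crit:          # all regressors significant
            return cols, fit
        del cols[worst]                   # drop the weakest and refit
    return cols, sm.OLS(y, np.ones((len(y), 1))).fit()

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 5))               # 3 of 5 regressors are irrelevant
y = 1.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)
kept, fit = gets_select(y, X)
print("kept regressors:", kept)           # typically [0, 1]
```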
Abstract:
This chapter explores the distinctive qualities of the Matt Smith era of Doctor Who, focusing on how dramatic emphases are connected with emphases on visual style, and how this depends on the programme's production methods and technologies. Doctor Who was first made in the 1960s era of live, studio-based, multi-camera television with monochrome pictures. As technical innovations such as colour filming, stereo sound, CGI, post-production effects technology and now High Definition (HD) cameras have been routinely introduced into the programme, however, they have given Doctor Who's creators new ways of making visually distinctive narratives. Indeed, it has been argued that since the 1980s television drama has become increasingly like cinema in its production methods and aesthetic aims. Viewers' ability to watch the programme on high-specification TV sets, and to record and repeat episodes using digital media, also encourages attention to visual style in television as much as in cinema. The chapter evaluates how these new circumstances affect what Doctor Who has become and engages with arguments that visual style has been allowed to override characterisation and story in the current Doctor Who. The chapter refers to specific episodes, and frames the analysis with reference to earlier years in Doctor Who's long history. For example, visual spectacle using green-screen and CGI can function as a set-piece (at the opening or ending of an episode) but can also work 'invisibly' to render a setting realistically. Shooting on location with HD cameras provides a rich and detailed image texture, but also highlights mistakes, especially problems of lighting. The reduction of Doctor Who's budget has led to Steven Moffat's episodes relying less on visual extravagance, connecting back both to Russell T. Davies's concern to show off the BBC's investment in the series and to British traditions of gritty and intimate social drama. Pressures to capitalise on Doctor Who as a branded product are the final aspect of the chapter's analysis, where the role of Moffat as 'showrunner' links him to an American (rather than British) style of television production in which the preservation of format and brand values gives him unusual power over the look of the series.
Abstract:
Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime-switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model is better able to capture the non-stationary features of the data than the threshold autoregressive model, although both provide superior descriptions of the data compared with models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
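In generic two-regime form (illustrative notation; the fitted specifications may differ), the two models contrast a latent Markov regime with an observable threshold rule:

```latex
% Markov switching (sketch): the regime s_t follows a hidden
% two-state Markov chain with transition probabilities p_ij.
\[
  y_t = \mu_{s_t} + \phi_{s_t}\, y_{t-1} + \varepsilon_t,
  \qquad \varepsilon_t \sim N\!\bigl(0, \sigma^2_{s_t}\bigr),
  \qquad \Pr(s_t = j \mid s_{t-1} = i) = p_{ij}.
\]
% Threshold autoregression (sketch): the regime is observable,
% determined by whether a lagged value crosses a threshold r.
\[
  y_t =
  \begin{cases}
    \mu_1 + \phi_1\, y_{t-1} + \varepsilon_t, & y_{t-d} \le r,\\
    \mu_2 + \phi_2\, y_{t-1} + \varepsilon_t, & y_{t-d} > r.
  \end{cases}
\]
```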
Abstract:
This paper combines and generalizes a number of recent time-series models of daily exchange-rate series by using a SETAR model which also allows the variance equation of a GARCH specification for the error terms to be drawn from more than one regime. An application of the model to the French franc/Deutschmark exchange rate demonstrates that out-of-sample forecasts of exchange-rate volatility are also improved when the restriction that the data are drawn from a single regime is removed. This result highlights the importance of considering both types of regime shift (i.e. thresholds in variance as well as in mean) when analysing financial time series.
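A generic version of such a double-regime specification (illustrative notation) puts the threshold in both the mean and the variance equations:

```latex
% Sketch of a SETAR model with regime-dependent GARCH errors
% (illustrative notation): regime j is selected by a lagged value of y
% relative to a threshold r, in both the mean and variance equations.
\[
  y_t = \mu_j + \phi_j\, y_{t-1} + \varepsilon_t,
  \qquad \varepsilon_t \mid \mathcal{F}_{t-1} \sim N(0, h_t),
\]
\[
  h_t = \omega_j + \alpha_j\, \varepsilon_{t-1}^2 + \beta_j\, h_{t-1},
  \qquad
  j = \begin{cases} 1, & y_{t-d} \le r,\\ 2, & y_{t-d} > r. \end{cases}
\]
```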