808 results for Empirical Predictions
Abstract:
The construction market around the world has witnessed the growing eminence of construction professional services (CPSs), such as urban planning, architecture, engineering, and consultancy, while the traditional contracting sector remains strong. Nowadays, it is not uncommon to see a design firm taking over the work of a traditional main contractor in overseeing the delivery of a project, or vice versa. Although the two sectors of contracting and CPS share the same purpose of materializing the built environment, they are as different as they are interrelated. Much has been said about the nexus between the two, but little has been done to articulate it using empirical evidence. This study examined the nexus between contracting and CPS businesses by proposing and testing lead-lag effects between the two sectors in the international market. A longitudinal panel dataset composed of 23 top international contractors and CPS firms was adopted. Surprisingly, results of the panel data analyses show that CPS business does not have a significant positive causal effect on contracting as a downstream business, and vice versa. CPS and contracting subsidiaries, although within the same company, do not necessarily form a consortium to undertake the same project; rather, they often collaborate with other CPS or contracting counterparts. This paper provides valuable insights into the sophisticated nexus between contracting and CPS in the international construction market. It will support business executives' rational decision making when selecting contracting or CPS allies, or a mergers and acquisitions strategy, in the international market. The paper also provides a fresh perspective through which researchers can better investigate the diversification strategies adopted by international contracting and CPS firms.
Abstract:
This study examines the feedback practices of 110 EFL teachers from five different countries (Cyprus, France, Korea, Spain, and Thailand), all working in secondary school contexts. All provided feedback on the same student essay. The coding scheme developed to analyse the feedback operates on two axes: the stance the teachers assumed when providing feedback, and the focus of their feedback. Most teachers reacted as language teachers rather than as readers of communication. The teachers overwhelmingly focused on grammar in their feedback and assumed what we called a Provider role, supplying the correct forms for the student. A second role, Initiator, was also present, in which teachers indicated errors or issues to the learner but expected the learner to pick these up and work on them. This role was associated with a more even spread of feedback focus, where teachers also provided feedback on other areas, such as lexis, style and discourse.
The EAP teacher: prophet of doom or eternal optimist? EAP teachers' predictions of students' success
Abstract:
In the 1960s North Atlantic sea surface temperatures (SST) cooled rapidly. The magnitude of the cooling was largest in the North Atlantic subpolar gyre (SPG), and was coincident with a rapid freshening of the SPG. Here we analyze hindcasts of the 1960s North Atlantic cooling made with the UK Met Office’s decadal prediction system (DePreSys), which is initialised using observations. It is shown that DePreSys captures—with a lead time of several years—the observed cooling and freshening of the North Atlantic SPG. DePreSys also captures changes in SST over the wider North Atlantic and surface climate impacts over the wider region, such as changes in atmospheric circulation in winter and sea ice extent. We show that initialisation of an anomalously weak Atlantic Meridional Overturning Circulation (AMOC), and hence weak northward heat transport, is crucial for DePreSys to predict the magnitude of the observed cooling. Such an anomalously weak AMOC is not captured when ocean observations are not assimilated (i.e. it is not a forced response in this model). The freshening of the SPG is also dominated by ocean salt transport changes in DePreSys; in particular, the simulation of advective freshwater anomalies analogous to the Great Salinity Anomaly was key. Therefore, DePreSys suggests that ocean dynamics played an important role in the cooling of the North Atlantic in the 1960s, and that this event was predictable.
Abstract:
Decadal climate predictions exhibit large biases, which are often subtracted and forgotten. However, understanding the causes of bias is essential to guide efforts to improve prediction systems, and may offer additional benefits. Here the origins of biases in decadal predictions are investigated, including whether analysis of these biases might provide useful information. The focus is especially on the lead-time-dependent bias tendency. A “toy” model of a prediction system is initially developed and used to show that there are several distinct contributions to bias tendency. Contributions from sampling of internal variability and a start-time-dependent forcing bias can be estimated and removed to obtain a much improved estimate of the true bias tendency, which can provide information about errors in the underlying model and/or errors in the specification of forcings. It is argued that the true bias tendency, not the total bias tendency, should be used to adjust decadal forecasts. The methods developed are applied to decadal hindcasts of global mean temperature made using the Hadley Centre Coupled Model, version 3 (HadCM3), climate model, and it is found that this model exhibits a small positive bias tendency in the ensemble mean. When considering different model versions, it is shown that the true bias tendency is very highly correlated with both the transient climate response (TCR) and non–greenhouse gas forcing trends, and can therefore be used to obtain observationally constrained estimates of these relevant physical quantities.
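The central quantity in this abstract, a lead-time-dependent bias tendency, can be sketched in a few lines: average the forecast error over all start dates at each lead time, then take the slope of that bias against lead time. This is a minimal illustration with synthetic hindcasts and a hypothetical drift rate, not the DePreSys/HadCM3 experiments themselves:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hindcast set: 20 start dates, 10-year leads (hypothetical numbers).
n_starts, n_leads = 20, 10
leads = np.arange(1, n_leads + 1)
truth = rng.normal(0.0, 0.1, size=(n_starts, n_leads))

# Imagine the model drifts away from truth by 0.02 K per lead year, plus noise.
drift_per_year = 0.02
forecasts = (truth + drift_per_year * leads
             + rng.normal(0.0, 0.05, size=(n_starts, n_leads)))

# Lead-time-dependent bias: mean forecast error over all start dates.
bias = (forecasts - truth).mean(axis=0)

# Bias tendency: least-squares slope of bias versus lead time.
tendency = np.polyfit(leads, bias, 1)[0]
print(f"estimated bias tendency: {tendency:.3f} per year")
```

Averaging over many start dates is what suppresses the sampling-of-internal-variability contribution the abstract describes; with few start dates the estimated tendency would be contaminated by noise.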
Abstract:
During the last 30 years, significant debate has taken place regarding multilevel research. However, the extent to which multilevel research is overtly practiced remains to be examined. This article analyzes 10 years of organizational research within a multilevel framework (from 2001 to 2011). The goals of this article are (a) to understand what has been done, during this decade, in the field of organizational multilevel research and (b) to suggest new arenas of research for the next decade. A total of 132 articles were selected for analysis through ISI Web of Knowledge. Through a broad-based literature review, results suggest that there is equilibrium between the amount of empirical and conceptual papers regarding multilevel research, with most studies addressing the cross-level dynamics between teams and individuals. In addition, this study also found that time still has little presence in organizational multilevel research. Implications, limitations, and future directions are addressed at the end. Organizations are made of interacting layers. That is, between layers (such as divisions, departments, teams, and individuals) there is often some degree of interdependence that leads to bottom-up and top-down influence mechanisms. Teams and organizations are contexts for the development of individual cognitions, attitudes, and behaviors (top-down effects; Kozlowski & Klein, 2000). Conversely, individual cognitions, attitudes, and behaviors can also influence the functioning and outcomes of teams and organizations (bottom-up effects; Arrow, McGrath, & Berdahl, 2000). For example, an organization's reward system may influence employees' intention to quit and the presence or absence of extra-role behaviors. At the same time, many studies have shown the importance of bottom-up emergent processes that yield higher level phenomena (Bashshur, Hernández, & González-Romá, 2011; Katz-Navon & Erez, 2005; Marques-Quinteiro, Curral, Passos, & Lewis, in press).
For example, the affectivity of individual employees may influence their team's interactions and outcomes (Costa, Passos, & Bakker, 2012). Several authors agree that organizations must be understood as multilevel systems, meaning that adopting a multilevel perspective is fundamental to understanding real-world phenomena (Kozlowski & Klein, 2000). However, whether this agreement is reflected in the practice of multilevel research seems less clear. In fact, how much is known about the quantity and quality of multilevel research done in the last decade? The aim of this study is to compare what has been proposed theoretically, concerning the importance of multilevel research, with what has really been empirically studied and published. First, this article outlines a review of multilevel theory, followed by what has been theoretically “put forward” by researchers. Second, this article presents what has really been “practiced” based on the results of a review of multilevel studies published from 2001 to 2011 in business and management journals. Finally, some barriers and challenges to true multilevel research are suggested. This study contributes to multilevel research as it describes the last 10 years of research. It quantitatively depicts the types of articles being written, and where we can find the majority of the publications on empirical and conceptual work related to multilevel thinking.
Abstract:
Purpose – Today marketers operate in globalised markets, planning new ways to engage with domestic and foreign customers alike. While there is a greater need to understand these two customer groups, few studies examine the impact of customer engagement tactics on the two customer groups, focusing on their perceptual differences. Even less attention is given to customer engagement tactics in a cross-cultural framework. In this research, the authors investigate customers in China and the UK, aiming to compare their perceptual differences regarding the impact of multiple customer engagement tactics. Design/methodology/approach – Using a quantitative approach with 286 usable responses from China and the UK obtained through a combination of a person-administered survey and a computer-based survey screening process, the authors test a series of hypotheses to distinguish cross-cultural differences. Findings – Findings show that the collectivists (Chinese customers) perceive customer engagement tactics differently than the individualists (UK customers). The Chinese customers are more sensitive to price and reputation, whereas the UK customers respond more strongly to service, communication and customisation. Chinese customers' concerns with extensive price and reputation comparisons may be explained by their awareness of face (status), increased self-expression and equality. Practical implications – The findings challenge the conventional practice of using similar customer engagement tactics for a specific marketplace with little concern for multiple cultural backgrounds. The paper proposes strategies for marketers facing challenges in this globalised context. Originality/value – Several contributions have been made to the literature. First, the study showed the effects of culture on customers' perceptual differences.
Second, the study provided more information to clarify customers' different reactions towards customer engagement tactics, highlighted by concerns towards face and status. Third, the study provided empirical evidence to support the use of multiple customer engagement tactics in cross-cultural studies.
Abstract:
Purpose – This paper aims to provide a synthetic review of the empirical literature on the multinational enterprise (MNE), subsidiaries and performance. Design/methodology/approach – The paper examines the following: the theoretical and conceptual foundation of multinationality (M) and performance (P) measures; the impact of MNE strategic investment motives on performance; the influence of contextual external and internal environment factors on performance; the strategy to optimize value chain activities of the MNE by cooperating with external partners in an asymmetric network; the key drivers of enhanced shareholder value and the implications of performance; and the need to access primary data provided by firms and managers themselves when analyzing the internal functioning of the MNE and its subsidiaries. Findings – The overall message from this literature review is that empirical research should be designed on the basis of relevant theoretical and conceptual foundations of the performance construct. Originality/value – The paper provides a systematic and synthetic review of theoretical and empirical literature.
Abstract:
Current European Union regulatory risk assessment allows application of pesticides provided that recovery of nontarget arthropods in-crop occurs within a year. Despite the long-established theory of source-sink dynamics, risk assessment ignores depletion of surrounding populations and typical field trials are restricted to plot-scale experiments. In the present study, the authors used agent-based modeling of 2 contrasting invertebrates, a spider and a beetle, to assess how the area of pesticide application and environmental half-life affect the assessment of recovery at the plot scale and impact the population at the landscape scale. Small-scale plot experiments were simulated for pesticides with different application rates and environmental half-lives. The same pesticides were then evaluated at the landscape scale (10 km × 10 km) assuming continuous year-on-year usage. The authors' results show that recovery time estimated from plot experiments is a poor indicator of long-term population impact at the landscape level and that the spatial scale of pesticide application strongly determines population-level impact. This raises serious doubts as to the utility of plot-recovery experiments in pesticide regulatory risk assessment for population-level protection. Predictions from the model are supported by empirical evidence from a series of studies carried out in the decade starting in 1988. The issues raised then can now be addressed using simulation. Prediction of impacts at landscape scales should be more widely used in assessing the risks posed by environmental stressors.
Abstract:
Although over a hundred thermal indices can be used for assessing thermal health hazards, many ignore the human heat budget, physiology and clothing. The Universal Thermal Climate Index (UTCI) addresses these shortcomings by using an advanced thermo-physiological model. This paper assesses the potential of using the UTCI for forecasting thermal health hazards. Traditionally, such hazard forecasting has had two further limitations: it has been narrowly focused on a particular region or nation and has relied on the use of single ‘deterministic’ forecasts. Here, the UTCI is computed on a global scale, which is essential for international health-hazard warnings and disaster preparedness, and it is provided as a probabilistic forecast. It is shown that probabilistic UTCI forecasts are superior in skill to deterministic forecasts and that, despite global variations, the UTCI forecast is skilful for lead times up to 10 days. The paper also demonstrates the utility of probabilistic UTCI forecasts using the example of the 2010 heat wave in Russia.
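The skill advantage of probabilistic over deterministic forecasts claimed here can be illustrated with a toy Brier-score comparison. All numbers below are synthetic and hypothetical (the ensemble size, error levels, and the example event "UTCI > 32 °C" are illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 1000 forecast cases; event = UTCI exceeds 32 degC.
n_cases, n_members, threshold = 1000, 20, 32.0
truth = rng.normal(28.0, 4.0, n_cases)

# Each ensemble member = truth plus an independent forecast error.
ens = truth[:, None] + rng.normal(0.0, 2.0, (n_cases, n_members))
outcome = (truth > threshold).astype(float)

# Probabilistic forecast: fraction of members exceeding the threshold.
prob = (ens > threshold).mean(axis=1)
# Deterministic forecast: a single member, i.e. a hard 0/1 probability.
det = (ens[:, 0] > threshold).astype(float)

def brier(p, o):
    """Brier score: mean squared error of probability forecasts (lower is better)."""
    return np.mean((p - o) ** 2)

print(f"probabilistic Brier: {brier(prob, outcome):.3f}")
print(f"deterministic Brier: {brier(det, outcome):.3f}")
```

The ensemble's graded probabilities are penalised less than the single member's all-or-nothing calls near the threshold, which is the usual mechanism behind the superior skill of probabilistic forecasts.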
Abstract:
We report on the first real-time ionospheric predictions network and its capabilities to ingest a global database and forecast F-layer characteristics and "in situ" electron densities along the track of an orbiting spacecraft. A global network of ionosonde stations reported around-the-clock observations of F-region heights and densities, and an on-line library of models provided forecasting capabilities. Each model was tested against the incoming data; relative accuracies were intercompared to determine the best overall fit to the prevailing conditions; and the best-fit model was used to predict ionospheric conditions on an orbit-to-orbit basis for the 12-hour period following a twice-daily model test and validation procedure. It was found that the best-fit model often provided averaged (i.e., climatologically-based) accuracies better than 5% in predicting the heights and critical frequencies of the F-region peaks in the latitudinal domain of the TSS-1R flight path. There was a sharp contrast, however, in model-measurement comparisons involving predictions of actual, unaveraged, along-track densities at the 295 km orbital altitude of TSS-1R. In this case, extrema in the first-principle models varied by as much as an order of magnitude in density predictions, and the best-fit models were found to disagree with the "in situ" observations of Ne by as much as 140%. The discrepancies are interpreted as a manifestation of difficulties in accurately and self-consistently modeling the external controls of solar and magnetospheric inputs and the spatial and temporal variabilities in electric fields, thermospheric winds, plasmaspheric fluxes, and chemistry.
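The twice-daily selection step described in this abstract, scoring every candidate model against incoming observations and keeping the most accurate, can be sketched as follows. The two "climatology" models and the foF2 numbers are hypothetical placeholders for illustration, not the network's actual model library:

```python
import numpy as np

def percent_rms_error(predicted, observed):
    """Relative RMS error (%) of model predictions against observations."""
    return 100.0 * np.sqrt(np.mean(((predicted - observed) / observed) ** 2))

# Hypothetical candidate models: each maps local time (hours) to foF2 (MHz).
models = {
    "climatology_a": lambda t: 7.0 + 2.0 * np.sin(2 * np.pi * (t - 6) / 24),
    "climatology_b": lambda t: 6.5 + 2.5 * np.sin(2 * np.pi * (t - 5) / 24),
}

def select_best_fit(models, times, observed_foF2):
    """Score every model on the incoming data; return the best fit and all scores."""
    errors = {name: percent_rms_error(model(times), observed_foF2)
              for name, model in models.items()}
    best = min(errors, key=errors.get)
    return best, errors

# Synthetic "observations": close to model A with a small offset (illustrative only).
times = np.linspace(0.0, 24.0, 49)
obs = 7.0 + 2.0 * np.sin(2 * np.pi * (times - 6) / 24) + 0.1

best, errors = select_best_fit(models, times, obs)
print(best, errors)
```

In the operational scheme the winner of this comparison would then be used for all orbit-to-orbit predictions until the next validation cycle.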
Abstract:
Numerical simulations are presented of the ion distribution functions seen by middle-altitude spacecraft in the low-latitude boundary layer (LLBL) and cusp regions when reconnection is, or has recently been, taking place at the equatorial magnetopause. From the evolution of the distribution function with time elapsed since the field line was opened, both the observed energy/observation-time and pitch-angle/energy dispersions are well reproduced. Distribution functions showing a mixture of magnetosheath and magnetospheric ions, often thought to be a signature of the LLBL, are found on newly opened field lines as a natural consequence of the magnetopause effects on the ions and their flight times. In addition, it is shown that the extent of the source region of the magnetosheath ions that are detected by a satellite is a function of the sensitivity of the ion instrument. If the instrument one-count level is high (and/or solar-wind densities are low), the cusp ion precipitation detected comes from a localised region of the mid-latitude magnetopause (around the magnetic cusp), even though the reconnection takes place at the equatorial magnetopause. However, if the instrument sensitivity is high enough, then ions injected from a large segment of the dayside magnetosphere (in the relevant hemisphere) will be detected in the cusp. Ion precipitation classed as LLBL is shown to arise from the low-latitude magnetopause, irrespective of the instrument sensitivity. Adoption of threshold flux definitions has the same effect as instrument sensitivity in artificially restricting the apparent source region.
Abstract:
The recent identification of non-thermal plasmas using EISCAT data has been made possible by their occurrence during large, short-lived flow bursts. For steady, yet rapid, ion convection the only available signature is the shape of the spectrum, which is unreliable because it is open to distortion by noise and sampling uncertainty and can be mimicked by other phenomena. Nevertheless, spectral shape does give an indication of the presence of non-thermal plasma, and the characteristic shape has been observed for long periods (of the order of an hour or more) in some experiments. To evaluate this type of event properly one needs to compare it to what would be expected theoretically. Predictions have been made using the coupled thermosphere-ionosphere model developed at University College London and the University of Sheffield to show where and when non-Maxwellian plasmas would be expected in the auroral zone. Geometrical and other factors then govern whether these are detectable by radar. The results are applicable to any incoherent scatter radar in this area, but the work presented here concentrates on predictions with regard to experiments on the EISCAT facility.
Abstract:
This paper examines the time-varying nature of price discovery in eighteenth-century cross-listed stocks. Specifically, we investigate how quickly news is reflected in prices for two of the great moneyed companies, the Bank of England and the East India Company, over the period 1723 to 1794. These British companies were cross-listed on the London and Amsterdam stock exchanges, and news between the capitals flowed mainly via boats that transported mail. We examine in detail the historical context surrounding the defining events of the period, and use these as a guide to how the data should be analysed. We show that both trading venues contributed to price discovery, and although the London venue was more important for these stocks, its importance varies over time.