50 results for Smoothed ANOVA
Abstract:
This article examines the potential to improve numerical weather prediction (NWP) by estimating upper and lower bounds on predictability, revisiting the original study of Lorenz (1982) but applying it to the most recent version of the European Centre for Medium-Range Weather Forecasts (ECMWF) forecast system, for both the deterministic and ensemble prediction systems (EPS). These bounds are contrasted with those from an older version of the same NWP system to see how they have changed as the system improved. The computations were performed for the earlier seasons of DJF 1985/1986 and JJA 1986 and the later seasons of DJF 2010/2011 and JJA 2011 using the 500-hPa geopotential height field. Results indicate that, for this field, we may be approaching the limit of deterministic forecasting, so that further improvements might only be obtained by improving the initial state. The results also show that predictability calculations with earlier versions of the model may overestimate potential forecast skill, both because of insufficient internal variability in the model and because recent versions of the model represent the true atmospheric evolution more realistically. The same methodology is applied to the EPS to calculate upper and lower bounds of predictability of the ensemble-mean forecast, in order to explore how ensemble forecasting could extend the limits of the deterministic forecast. The results show that there is large potential to improve the ensemble predictions, but the increased predictability of the ensemble mean comes with a trade-off in information, as the forecasts become increasingly smoothed with time. From around the 10-d forecast time, the ensemble mean begins to converge towards climatology. Until this point, the ensemble mean is able to predict the main features of the large-scale flow accurately and with high consistency from one forecast cycle to the next. By the 15-d forecast time, the ensemble mean has lost information, with the anomaly of the flow strongly smoothed out. In contrast, the control forecast is much less consistent from run to run, and provides more detailed (unsmoothed) but less useful information.
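Since the method in this abstract hinges on two error-growth curves, a minimal numpy sketch may help make the Lorenz (1982) construction concrete: differences between forecasts of adjacent lead times valid at the same date give a lower bound on attainable error (the model verified against itself), while forecast-minus-analysis differences give actual skill. The array names and archive layout below are assumptions, not the paper's code.

```python
import numpy as np

def lorenz_error_curves(forecasts, analyses):
    """Lorenz (1982)-style error curves from a hypothetical forecast archive.

    forecasts : array (n_starts, n_leads, n_grid)
        forecasts[i, j] is the (j+1)-day forecast started on day i,
        e.g. 500-hPa geopotential height on a flattened grid.
    analyses  : array (n_starts, n_grid)
        Verifying analyses for the same start dates.
    """
    n_starts, n_leads, _ = forecasts.shape

    # Actual skill: (j+1)-day forecast vs the analysis valid at the same time.
    actual = np.full(n_leads, np.nan)
    for j in range(n_leads):
        i = np.arange(n_starts - (j + 1))            # starts with a verifying analysis
        diff = forecasts[i, j] - analyses[i + j + 1]
        actual[j] = np.sqrt(np.mean(diff ** 2))

    # Perfect-model lower bound: forecasts started one day apart,
    # valid at the same time (lead j+2 vs lead j+1).
    perfect = np.full(n_leads - 1, np.nan)
    for j in range(n_leads - 1):
        i = np.arange(n_starts - 1)
        diff = forecasts[i, j + 1] - forecasts[i + 1, j]
        perfect[j] = np.sqrt(np.mean(diff ** 2))

    return actual, perfect
```

The gap between the two curves indicates how much skill could still be gained from a better initial state, which is the comparison the abstract draws.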
Abstract:
Sea surface temperature (SST) can be estimated from day and night observations of the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) by optimal estimation (OE). We show that exploiting the 8.7 μm channel, in addition to the "traditional" wavelengths of 10.8 and 12.0 μm, improves OE SST retrieval statistics in validation. However, the main benefit is an improvement in the sensitivity of the SST estimate to variability in true SST. In a fair, single-pixel comparison, the 3-channel OE gives better results than the SST estimation technique presently operational within the Ocean and Sea Ice Satellite Application Facility. The operational technique applies SST retrieval coefficients, followed by a bias-correction step informed by radiative transfer simulation. However, the operational technique also includes an "atmospheric correction smoothing", which improves its noise performance and hitherto had no analogue within the OE framework. Here, we propose an analogue to atmospheric correction smoothing, based on the expectation that atmospheric total column water vapour has a longer spatial correlation length scale than SST features. The approach extends the observations input to the OE to include the averaged brightness temperatures (BTs) of nearby clear-sky pixels, in addition to the BTs of the pixel for which SST is being retrieved. The retrieved quantities are then the single-pixel SST and the clear-sky total column water vapour averaged over the vicinity of the pixel. This reduces the noise in the retrieved SST significantly. The robust standard deviation of the new OE SST compared to matched drifting buoys becomes 0.39 K for all data. The smoothed OE gives an SST sensitivity of 98% on average. This means that diurnal temperature variability and ocean frontal gradients are more faithfully estimated, and that the influence of the prior SST used is minimal (2%). This benefit is not available using traditional atmospheric correction smoothing.
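The retrieval step described here is standard optimal estimation; a minimal linearised-OE sketch (variable names, covariances and the forward model are hypothetical stand-ins, not the paper's implementation) shows how the extended observation vector and the sensitivity diagnostic fit together.

```python
import numpy as np

def oe_retrieval(y, xa, Sa, K, Se, f_xa):
    """One linearised optimal-estimation step for SST-style retrieval.

    y    : observed BTs (e.g. 3 SEVIRI channels, optionally extended
           with locally averaged clear-sky BTs of nearby pixels)
    xa   : prior state, e.g. [SST, total column water vapour]
    Sa   : prior error covariance
    K    : Jacobian d(BT)/d(state) from radiative transfer simulation
    Se   : observation (noise + forward model) error covariance
    f_xa : BTs simulated for the prior state
    """
    Sa_inv = np.linalg.inv(Sa)
    Se_inv = np.linalg.inv(Se)
    # Posterior covariance and gain of the maximum a posteriori solution
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)
    G = S_hat @ K.T @ Se_inv
    x_hat = xa + G @ (y - f_xa)
    # Averaging kernel: sensitivity of the retrieved state to the true state
    A = G @ K
    return x_hat, S_hat, A
```

In this framing, the SST diagonal element of the averaging kernel A is the "sensitivity" quoted in the abstract (98% on average for the smoothed OE), and the proposed smoothing simply lengthens y, K and Se with the averaged clear-sky BTs while adding the area-mean water vapour to the state.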
Abstract:
As in any field of scientific inquiry, advancements in the field of second language acquisition (SLA) rely in part on the interpretation and generalizability of study findings using quantitative data analysis and inferential statistics. While statistical techniques such as ANOVA and t-tests are widely used in second language research, this article reviews a class of newer statistical models that has not yet been widely adopted in the field but has garnered interest in other fields of language research. Mixed-effects models are introduced, and the potential benefits of these models for the second language researcher are discussed. A simple example of mixed-effects data analysis using the statistical software package R (R Development Core Team, 2011) is provided as an introduction to the use of these statistical techniques, and to exemplify how such analyses can be reported in research articles. It is concluded that mixed-effects models provide the second language researcher with a powerful tool for the analysis of a variety of types of second language acquisition data.
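The article's worked example is in R; as a rough equivalent, a random-intercept mixed-effects model can be sketched in Python with statsmodels. The data and variable names below are toy inventions, not the article's example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy SLA-style data set: reaction times for two learner groups,
# three observations per learner (all values are synthetic).
rng = np.random.default_rng(42)
subjects = np.repeat([f"s{i}" for i in range(6)], 3)
group = np.repeat(["L1", "L1", "L1", "L2", "L2", "L2"], 3)
rt = (np.where(group == "L2", 650.0, 580.0)     # fixed effect of group
      + rng.normal(0, 30, 6).repeat(3)          # by-subject random intercepts
      + rng.normal(0, 20, 18))                  # residual noise
df = pd.DataFrame({"rt": rt, "group": group, "subject": subjects})

# Random-intercept model: rt ~ group + (1 | subject) in lme4 notation
fit = smf.mixedlm("rt ~ group", data=df, groups=df["subject"]).fit()
print(fit.summary())
```

The key benefit the article points to is visible here: repeated measures from the same learner are modelled through the subject-level random intercept instead of being averaged away or treated as independent, as a plain ANOVA or t-test would require.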
Abstract:
This paper presents single-column model (SCM) simulations of a tropical squall-line case observed during the Coupled Ocean-Atmosphere Response Experiment of the Tropical Ocean/Global Atmosphere Programme. This case-study was part of an international model intercomparison project organized by Working Group 4 'Precipitating Convective Cloud Systems' of the GEWEX (Global Energy and Water-cycle Experiment) Cloud System Study. Eight SCM groups using different deep-convection parametrizations participated in this project. The SCMs were forced by temperature and moisture tendencies that had been computed from a reference cloud-resolving model (CRM) simulation using open boundary conditions. The comparison of the SCM results with the reference CRM simulation provided insight into the ability of current convection and cloud schemes to represent organized convection. The CRM results enabled a detailed evaluation of the SCMs in terms of the thermodynamic structure and the convective mass flux of the system, the latter being closely related to the surface convective precipitation. It is shown that the SCMs could reproduce reasonably well the time evolution of the surface convective and stratiform precipitation, the convective mass flux, and the thermodynamic structure of the squall-line system. The thermodynamic structure simulated by the SCMs depended on how the models partitioned the precipitation between convective and stratiform. However, structural differences persisted in the thermodynamic profiles simulated by the SCMs and the CRM. These differences could be attributed to the fact that the total mass flux used to compute the SCM forcing differed from the convective mass flux. The SCMs could not adequately represent the organized mesoscale circulations or the microphysical/radiative forcing associated with the stratiform region. This issue is generally known as the 'scale-interaction' problem, which can only be properly addressed in fully three-dimensional simulations. Sensitivity simulations run by several groups showed that the time evolution of the surface convective precipitation was considerably smoothed when the convective closure was based on convective available potential energy instead of moisture convergence. Finally, additional SCM simulations without a convection parametrization indicated that the impact of a convection parametrization in forced SCM runs was more visible in the moisture profiles than in the temperature profiles, because convective transport was particularly important in the moisture budget.
Abstract:
The present study aims to contribute to an understanding of the complexity of lobbying activities within the accounting standard-setting process in the UK. The paper reports detailed content analysis of submission letters to four related exposure drafts. These preceded two accounting standards that set out the concept of control used to determine the scope of consolidation in the UK, except for reporting under international standards. Regulation on the concept of control provides rich patterns of lobbying behaviour due to its controversial nature and its significance to financial reporting. Our examination is conducted by dividing lobbyists into two categories, corporate and non-corporate, which are hypothesised (and demonstrated) to lobby differently. In order to test the significance of these differences we apply ANOVA techniques and univariate regression analysis. Corporate respondents are found to devote more attention to issues of specific applicability of the concept of control, whereas non-corporate respondents tend to devote more attention to issues of general applicability of this concept. A strong association between the issues raised by corporate respondents and their line of business is revealed. Both categories of lobbyists are found to advance conceptually-based arguments more often than economic consequences-based or combined arguments. However, when economic consequences-based arguments are used, they come exclusively from the corporate category of respondents.
Abstract:
We compare and contrast the accuracy and uncertainty of forecasts of rents with those for a variety of macroeconomic series. The results show that, in general, forecasters tend to be marginally more accurate for the macroeconomic series than for rents. Across all of the series, forecasts tend to be smoothed, with forecasters under-estimating performance during economic booms and over-estimating it during recessions. We find that property forecasts are affected by economic uncertainty, as measured by disagreement across the macro-forecasters. Increased uncertainty leads to increased dispersion in the rental forecasts and a reduction in forecast accuracy.
Abstract:
Purpose – There is a wealth of studies suggesting that managers' positive perceptions/expectations can considerably influence organisational performance; unfortunately, little empirical evidence has been obtained from development studies. This research focuses on the perceptual and behavioural trait differences between successful and unsuccessful aid workers, and their relationship with organisational performance. Design/methodology/approach – Through a web-based survey, 244 valid responses were obtained from Japan International Cooperation Agency (JICA) aid managers worldwide. Five perception-related factors were extracted and used for cluster analysis to group the respondents. Each cluster's perception/behaviour-related factors and organisational performance variables were compared by ANOVA. Findings – It was found that Japanese managers' positive perceptions/expectations about their work and their local colleagues were related to higher organisational performance; conversely, negative perceptions were generally associated with negative behaviour and lower organisational performance. Moreover, in a development context, lower work-related stress and feelings of resignation toward work were strongly associated with the acceptability of the cross-cultural work environment. Practical implications – The differences in perceptual tendencies suggest caution, since these findings may mainly apply to Japanese aid managers. However, as human nature is universal, positive perception and behaviour would likely produce positive outcomes in most organisations. Originality/value – This study extended the contextualised "Pygmalion effect" and clarified the influence of perception/expectation on counterpart behaviour and organisational performance in a development aid context, where people-related issues have often been ignored. This first-time research provides empirical data on the significant role of positive perception in the incumbent role holder.
Abstract:
Reliable evidence of trends in the illegal ivory trade is important for informing decision making for elephants, but it is difficult to obtain due to the covert nature of the trade. The Elephant Trade Information System, a global database of reported seizures of illegal ivory, holds the only extensive information on illicit trade available. However, inherent biases in seizure data make it difficult to infer trends; countries differ in their ability to make and report seizures, and these differences cannot be directly measured. We developed a new modelling framework to provide quantitative evidence on trends in the illegal ivory trade from seizure data. The framework used Bayesian hierarchical latent variable models to reduce bias in seizure data by identifying proxy variables that describe the variability in seizure and reporting rates between countries and over time. The models produced bias-adjusted smoothed estimates of relative trends in illegal ivory activity for raw and worked ivory in three weight classes. Activity is represented by two indicators describing the number of illegal ivory transactions (the Transactions Index) and the total weight of illegal ivory transactions (the Weights Index) at global, regional or national levels. Globally, activity was found to be rapidly increasing and at its highest level for 16 years, more than doubling from 2007 to 2011 and tripling from 1998 to 2011. Over 70% of the Transactions Index comes from shipments of worked ivory weighing less than 10 kg, and the rapid increase since 2007 is mainly due to increased consumption in China. Over 70% of the Weights Index comes from shipments of raw ivory weighing at least 100 kg, mainly moving from Central and East Africa to Southeast and East Asia. The results tie together recent findings on trends in poaching rates, declining populations and consumption, and provide detailed evidence to inform international decision making on elephants.
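As a loose illustration of the general shape of such a framework (not the authors' ETIS model), a hierarchical latent variable model can be sketched in PyMC: a smooth latent activity trend, a country effect, and a reporting rate regressed on a proxy covariate. All data, sizes and covariate names below are hypothetical.

```python
import numpy as np
import pymc as pm

# Synthetic stand-in data: seizure counts per country-year and one
# proxy covariate for seizure/reporting effort (all values invented).
rng = np.random.default_rng(1)
n_countries, n_years = 5, 14
proxy = rng.normal(size=n_countries)
counts = rng.poisson(5.0, size=(n_countries, n_years))

with pm.Model():
    # Smoothly varying global log-activity trend (random-walk prior)
    trend = pm.GaussianRandomWalk("trend", sigma=0.2,
                                  init_dist=pm.Normal.dist(0.0, 1.0),
                                  shape=n_years)
    country = pm.Normal("country", 0.0, 1.0, shape=n_countries)

    # Latent per-country reporting rate, driven by the proxy variable
    a = pm.Normal("a", 0.0, 1.0)
    b = pm.Normal("b", 0.0, 1.0)
    report = pm.math.invlogit(a + b * proxy)

    # Expected seizures = latent activity thinned by the reporting rate
    mu = pm.math.exp(trend[None, :] + country[:, None]) * report[:, None]
    pm.Poisson("obs", mu=mu, observed=counts)

    idata = pm.sample(500, tune=500, chains=2, progressbar=False)
```

The point of the construction is the same as in the abstract: the proxy-driven reporting rate absorbs between-country differences in seizure and reporting ability, so the latent trend is a bias-adjusted, smoothed estimate of relative activity.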
Abstract:
Background Event-related desynchronization/synchronization (ERD/ERS) is a relative power decrease/increase of the electroencephalogram (EEG) in a specific frequency band during physical motor execution and mental motor imagery, and it is therefore widely used for brain-computer interface (BCI) purposes. However, what the ERD really reflects, and its frequency-band-specific role, are not yet agreed upon and remain under investigation. Understanding the underlying mechanism that causes a significant ERD would be crucial to improving the reliability of ERD-based BCIs. We systematically investigated the relationship between the conditions of actual repetitive hand movements and the resulting ERD. Methods Eleven healthy young participants were asked to close/open their right hand repetitively at three different speeds (Hold, 1/3 Hz, and 1 Hz) and four distinct motor loads (0, 2, 10, and 15 kgf). In each condition, participants repeated 20 experimental trials, each of which consisted of rest (8–10 s), preparation (1 s) and task (6 s) periods. Under the Hold condition, participants were instructed to keep clenching their hand (i.e., isometric contraction) during the task period. Throughout the experiment, EEG signals were recorded from the left and right motor areas for offline data analysis. We obtained time courses of the EEG power spectrum to examine the modulation of mu- and beta-ERD/ERS by the task conditions. Results We confirmed salient mu-ERD (8–13 Hz) and slightly weaker beta-ERD (14–30 Hz) on both hemispheres during repetitive hand grasping movements. According to a 3 × 4 ANOVA (speed × motor load), both mu- and beta-ERD during the task period were significantly weakened under the Hold condition, whereas no significant effect of motor load, and no interaction effect, was observed. Conclusions This study investigated the effect of changes in kinematics and kinetics on the ERD during repetitive hand grasping movements. The experimental results suggest that the strength of the ERD may reflect the time differentiation of hand postures in the motor planning process, or the variation in proprioception resulting from hand movements, rather than the motor command generated downstream, which recruits a group of motor neurons.
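The quantity being analysed here is conventionally the relative band-power change between a rest (reference) period and the task period, with negative values meaning desynchronization. A minimal sketch of that computation on synthetic one-channel data (not the study's recordings) follows.

```python
import numpy as np
from scipy.signal import welch

def erd_percent(eeg, fs, rest_idx, task_idx, band=(8, 13)):
    """Classical ERD/ERS index for one channel and one trial:
    ERD% = (P_task - P_rest) / P_rest * 100, negative = desynchronization.
    Band defaults to the mu rhythm (8-13 Hz)."""
    def band_power(x):
        f, p = welch(x, fs=fs, nperseg=min(len(x), fs * 2))
        sel = (f >= band[0]) & (f <= band[1])
        return p[sel].mean()

    p_rest = band_power(eeg[rest_idx])
    return 100.0 * (band_power(eeg[task_idx]) - p_rest) / p_rest

# Synthetic example: a 10 Hz mu rhythm attenuated during the task period
fs = 256
t = np.arange(0, 16, 1 / fs)
rng = np.random.default_rng(0)
mu_amp = np.where(t < 8, 1.0, 0.4)                # 8 s rest, then 8 s task
eeg = mu_amp * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
print(erd_percent(eeg, fs, rest_idx=t < 8, task_idx=t >= 8))  # strongly negative
```

Per-condition ERD values computed this way are what would then enter the 3 × 4 (speed × motor load) ANOVA described in the Results.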
Abstract:
Purpose The sensitivity of soil organic carbon (SOC) to global change drivers, according to the depth profile, is receiving increasing attention because of its importance in the global carbon cycle and its potential feedback to climate change. A better knowledge of the vertical distribution of SOC and its controlling factors, the aim of this study, will help scientists predict the consequences of global change. Materials and methods The study area was the Murcia Province (SE Spain), under semiarid Mediterranean conditions. The database used consists of 312 soil profiles collected in a systematic 12 km2 grid covering a total area of 11,004 km2. Statistical analysis of the relationships between SOC concentration and controlling factors in different soil use scenarios was conducted at fixed depths of 0–20, 20–40, 40–60, and 60–100 cm. Results and discussion SOC concentration in the top 40 cm ranged between 6.1 and 31.5 g kg−1, with significant differences according to land use, soil type and lithology, while below this depth no differences were observed (SOC concentration 2.1–6.8 g kg−1). The ANOVA showed that land use was the most important factor controlling SOC concentration in the 0–40 cm depth. Significant differences were found in the relative importance of environmental and textural factors according to land use and soil depth. In forestland, mean annual precipitation and texture were the main predictors of SOC, while in cropland and shrubland the main predictors were mean annual temperature and lithology. The total SOC stored in the top 1 m in the region was about 79 Tg, with a low mean density of 7.18 kg C m−3. The vertical distribution of SOC was shallower in forestland and deeper in cropland. A reduction in rainfall would lead to a SOC decrease in forestland and shrubland, and an increase in mean annual temperature would adversely affect SOC in cropland and shrubland. With increasing depth, the relative importance of climatic factors decreases and texture becomes more important in controlling SOC in all land uses. Conclusions Because climate change impacts will be much greater on surface SOC, strategies for C sequestration should be focused on subsoil sequestration, which is hindered in forestland due to bedrock limitations on soil depth. In these conditions, sequestration in cropland through appropriate management practices is recommended.
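The land-use comparison reported here is a standard one-way ANOVA on SOC concentrations grouped by land use; a minimal scipy sketch with invented values (not the authors' 312-profile data) shows the shape of the test.

```python
import numpy as np
from scipy import stats

# Hypothetical SOC concentrations (g kg-1) in the 0-20 cm layer for
# three land uses; means and spreads are invented for illustration.
rng = np.random.default_rng(7)
forest = rng.normal(25, 5, 40)
shrub = rng.normal(15, 4, 40)
crop = rng.normal(9, 3, 40)

# One-way ANOVA: does mean SOC differ between land uses?
f_stat, p_val = stats.f_oneway(forest, shrub, crop)
print(f"F = {f_stat:.1f}, p = {p_val:.2g}")
```

Running the same test separately at each fixed depth interval (0–20, 20–40, 40–60, 60–100 cm) reproduces the paper's finding structure: strong land-use differences near the surface, none at depth.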
Abstract:
Recent studies of the variation of geomagnetic activity over the past 140 years have quantified the "coronal source" or "open" magnetic flux F_s that leaves the solar atmosphere and enters the heliosphere, and have shown that it has risen, on average, by 34% since 1963 and by 140% since 1900. This variation is reflected in studies of the heliospheric field using isotopes deposited in ice sheets and meteorites by the action of galactic cosmic rays. The variation has also been reproduced using a model that demonstrates how the open flux accumulates and decays, depending on the rate of flux emergence in active regions and on the length of the solar cycle. The cosmic ray flux at energies > 3 GeV is found to have decayed by about 15% during the 20th century (and by about 4% at > 13 GeV). We show that the changes in the open flux do reflect changes in the photospheric and sub-surface field, which offers an explanation of why the open flux appears to be a good proxy for solar irradiance extrapolation. Correlations between F_s, solar cycle length, L, and the 11-year smoothed sunspot number, R_11, explain why the various irradiance reconstructions for the last 150 years are similar in form. Possible implications of the inferred changes in cosmic ray flux and irradiance for global temperatures on Earth are discussed.
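The R_11 series referred to is an 11-year smoothed sunspot number; one common reading of that smoothing is a centred 11-year running mean of the annual series, sketched below (the paper may use a different filter, so this is an assumption).

```python
import numpy as np

def smooth_11yr(annual_ssn):
    """Centred 11-year running mean of an annual sunspot-number series,
    returning NaN where the window is incomplete."""
    out = np.full(len(annual_ssn), np.nan)
    half = 5  # 5 years either side plus the centre year = 11
    for i in range(half, len(annual_ssn) - half):
        out[i] = np.mean(annual_ssn[i - half:i + half + 1])
    return out

# Usage with a synthetic series: an ~11-year cycle plus a slow upward trend
years = np.arange(1850, 2001)
ssn = 60 + 50 * np.sin(2 * np.pi * (years - 1850) / 11) + 0.2 * (years - 1850)
r11 = smooth_11yr(ssn)   # the smoothed series isolates the secular trend
```

Averaging over one full solar cycle suppresses the 11-year oscillation, which is why R_11 tracks the long-term (secular) component correlated with F_s and cycle length.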
Abstract:
The Sun-Earth connection is studied using long-term measurements from the Sun and from the Earth. Auroral activity is shown to correlate with high accuracy with the smoothed sunspot numbers. Similarly, both geomagnetic activity and the global surface temperature anomaly can be linked to cyclic changes in solar activity. The interlinked variations in solar magnetic activity and solar irradiance cause effects that can be observed both in the Earth's biosphere and in the electromagnetic environment. The long-term data sets suggest that the increases in geomagnetic activity and surface temperatures are related (at least partially) to longer-term solar variations, which probably include an increasing trend superposed on a cyclic behavior with a period of about 90 years.
Abstract:
We present an analysis of the accuracy of the method introduced by Lockwood et al. (1994) for the determination of the magnetopause reconnection rate from the dispersion of precipitating ions in the ionospheric cusp region. Tests are made by applying the method to synthesised data. The simulated cusp ion precipitation data are produced by an analytic model of the evolution of newly-opened field lines, along which magnetosheath ions are firstly injected across the magnetopause and then dispersed as they propagate into the ionosphere. The rate at which these newly opened field lines are generated by reconnection can be varied. The derived reconnection rate estimates are then compared with the input variation to the model, and the accuracy of the method is assessed. Results are presented for steady-state reconnection, for continuous reconnection showing a sine-wave variation in rate, and for reconnection which only occurs in square-wave pulses. It is found that the method always yields the total flux reconnected (per unit length of the open-closed field-line boundary) to an accuracy of better than 5%, but that pulses tend to be smoothed, so that the peak reconnection rate within the pulse is underestimated and the pulse length is overestimated. This smoothing is reduced if the separation between energy channels of the instrument is reduced; however, this also acts to increase the experimental uncertainty in the estimates, an effect which can be countered by improving the time resolution of the observations. The limited time resolution of the data is shown to set a minimum reconnection rate below which the method gives spurious short-period oscillations about the true value. Various examples of reconnection rate variations derived from cusp observations are discussed in the light of this analysis.
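The reported behaviour (total flux preserved, peak rate underestimated, pulse length overestimated) can be illustrated generically by passing a square-wave pulse train through a moving-average filter, a crude stand-in for the finite energy-channel separation; all numbers below are arbitrary.

```python
import numpy as np

# Square pulses of "reconnection rate": 10 s on every 40 s
t = np.arange(0.0, 120.0, 1.0)                       # seconds
rate = np.where((t % 40 >= 15) & (t % 40 < 25), 1.0, 0.0)

# Moving-average filter wider than the pulse (the smoothing effect)
window = 15
smoothed = np.convolve(rate, np.ones(window) / window, mode="same")

print(rate.sum(), smoothed.sum())   # integral (total flux) is conserved
print(rate.max(), smoothed.max())   # peak rate is underestimated (~2/3 here)
```

Because the filter conserves the integral, the total reconnected flux is recovered accurately even when the pulse shape is not, mirroring the better-than-5% flux accuracy alongside the pulse distortion described in the abstract.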
Abstract:
The concept of zero-flow equilibria of the magnetosphere-ionosphere system leads to a large number of predictions concerning the ionospheric signatures of pulsed magnetopause reconnection. These include: poleward-moving F-region electron temperature enhancements and associated transient 630nm emission; associated poleward plasma flow which, compared to the pulsed variation of the reconnection rate, is highly smoothed by induction effects; oscillatory latitudinal motion of the open/closed field line boundary; phase lag of plasma flow enhancements after equatorward motions of the boundary; azimuthal plasma flow bursts, coincident in time and space with the 630nm-dominant auroral transients, only when the magnitude of the By component of the interplanetary magnetic field (IMF) is large; azimuthal-then-poleward motion of 630nm-dominant transients at a velocity which at all times equals the internal plasma flow velocity; 557.7nm-dominant transients on one edge of the 630nm-dominant transient (initially, and for large |By|, on the poleward or equatorward edge depending on the polarity of IMF By); tailward expansion of the flow response at several km s-1; and discrete steps in the cusp ion dispersion signature between the poleward-moving structures. This paper discusses these predictions and how all have recently been confirmed by combinations of observations by optical instruments on the Svalbard Islands, the EISCAT radars and the DMSP and DE satellites.
Abstract:
This paper investigates the effect on balance of a number of Schur product-type localization schemes which have been designed with the primary function of reducing spurious far-field correlations in forecast error statistics. The localization schemes studied comprise a non-adaptive scheme (where the moderation matrix is decomposed in a spectral basis), and two adaptive schemes, namely a simplified version of SENCORP (Smoothed ENsemble COrrelations Raised to a Power) and ECO-RAP (Ensemble COrrelations Raised to A Power). The paper shows, we believe for the first time, how the degree of balance (geostrophic and hydrostatic) implied by the error covariance matrices localized by these schemes can be diagnosed. Here it is considered that an effective localization scheme is one that reduces spurious correlations adequately but also minimizes disruption of balance (where the 'correct' degree of balance or imbalance is assumed to be possessed by the unlocalized ensemble). By varying free parameters that describe each scheme (e.g. the degree of truncation in the schemes that use the spectral basis, the 'order' of each scheme, and the degree of ensemble smoothing), it is found that a particular configuration of the ECO-RAP scheme is best suited to the convective-scale system studied. According to our diagnostics this ECO-RAP configuration still weakens geostrophic and hydrostatic balance, but overall this is less so than for other schemes.
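The Schur-product localization idea common to these schemes, moderating the raw ensemble covariance by an element-wise product with a moderation matrix, can be sketched as follows. The smoothing and power steps echo the ECO-RAP recipe only loosely: the simple neighbour averaging on a periodic domain below is a placeholder for the paper's spectral-basis treatment, and the free parameters are illustrative.

```python
import numpy as np

def eco_rap_like_localize(ensemble, power=4, smooth_passes=2):
    """Sketch of Schur-product localization in the spirit of ECO-RAP:
    smooth the ensemble correlations, raise them to an even power, and
    take the element-wise (Schur) product with the raw covariance.

    ensemble : array (n_members, n_state)
    """
    P = np.cov(ensemble, rowvar=False)          # raw ensemble covariance
    d = np.sqrt(np.diag(P))
    C = P / np.outer(d, d)                      # ensemble correlations

    # Crude spatial smoothing: neighbour averaging, periodic domain
    for _ in range(smooth_passes):
        C = (np.roll(C, 1, 0) + C + np.roll(C, -1, 0)) / 3.0
        C = (np.roll(C, 1, 1) + C + np.roll(C, -1, 1)) / 3.0

    rho = C ** power                            # even power: damps weak
                                                # (spurious) correlations
    return rho * P                              # Schur product = localized P

# Usage: localize a 20-member ensemble over a 50-variable state
ens = np.random.default_rng(3).normal(size=(20, 50))
P_loc = eco_rap_like_localize(ens)
```

The tunable choices here (number of smoothing passes, the "order" given by the power) correspond to the free parameters the paper varies when diagnosing how much each configuration disrupts geostrophic and hydrostatic balance.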