879 results for Trend tests


Relevance:

20.00%

Publisher:

Abstract:

In recent years there has been a rapid growth of interest in exploring the relationship between nutritional therapies and the maintenance of cognitive function in adulthood. Emerging evidence reveals an increasingly complex picture with respect to the benefits of various food constituents on learning, memory and psychomotor function in adults. However, to date, there has been little consensus in human studies on the range of cognitive domains to be tested or the particular tests to be employed. To illustrate the potential difficulties that this poses, we conducted a systematic review of existing human adult randomised controlled trial (RCT) studies that have investigated the effects of 24 days to 36 months of supplementation with flavonoids and micronutrients on cognitive performance. There were thirty-nine studies employing a total of 121 different cognitive tasks that met the criteria for inclusion. Results showed that less than half of these studies reported positive effects of treatment, with some important cognitive domains either under-represented or not explored at all. Although there was some evidence of sensitivity to nutritional supplementation in a number of domains (for example, executive function, spatial working memory), interpretation is currently difficult given the prevailing 'scattergun approach' to selecting cognitive tests. Specifically, this practice means that it is often difficult to distinguish between a boundary condition for a particular nutrient and a lack of task sensitivity. We argue that for significant future progress to be made, researchers need to pay much closer attention to existing human RCT and animal data, as well as to more basic issues surrounding task sensitivity, statistical power and type I error.


The applicability of AI methods to Chagas' disease diagnosis is examined through the use of Kohonen's self-organizing feature maps. Electrodiagnosis indicators calculated from ECG records are used as features in the input vectors that train the network. Cross-validation results are used to refine the maps, substantially improving the interpretability of the resulting output. As a result, the map might be used to reduce the need for invasive explorations in chronic Chagas' disease.
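The workflow described above can be illustrated with a toy self-organizing map. Everything below is a sketch under stated assumptions: the training data are two synthetic clusters standing in for the ECG-derived electrodiagnosis indicators (which the abstract does not specify), and the map is a 1-D chain of four units rather than whatever topology the study actually used.

```python
import math
import random

def train_som(data, n_units=4, epochs=50, lr0=0.5, sigma0=1.5, seed=1):
    """Train a minimal 1-D Kohonen self-organizing map on feature vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    # initialise unit weight vectors randomly inside the unit hypercube
    weights = [[rng.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                   # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)   # shrinking neighbourhood width
        for x in data:
            # best-matching unit (BMU) = nearest weight vector
            bmu = min(range(n_units),
                      key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
            for i in range(n_units):
                # Gaussian neighbourhood kernel on the 1-D map
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                weights[i] = [w + lr * h * (v - w) for w, v in zip(weights[i], x)]
    return weights

def bmu_index(weights, x):
    """Index of the map unit whose weight vector is closest to x."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

# two synthetic clusters standing in for ECG-derived indicator vectors
cluster_a = [[0.1 + 0.01 * k, 0.1] for k in range(5)]
cluster_b = [[0.9, 0.9 - 0.01 * k] for k in range(5)]
som = train_som(cluster_a + cluster_b)
```

After training, inputs from the two clusters should fall on different regions of the map, which is the property the study exploits when it maps new ECG records onto the trained grid.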


Deception-detection is the crux of Turing’s experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years to the month since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing’s textual game of imitation, deception and machine intelligence. This research recovers, from the minefield of philosophical claims, counter-claims and rebuttals, Turing’s own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: a) a two-participant, interrogator-witness viva voce; b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using Loebner’s 18th Prize for Artificial Intelligence contest and Colby et al.’s 1972 transcript-analysis paradigm, this research practicalised Turing’s imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 reviewers succumbed to hidden-interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing’s two tests can assist in understanding natural dialogue and mitigate the risk from cybercrime.


The Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) project has produced a global dataset of cloud and aerosol properties from the Along Track Scanning Radiometer-2 (ATSR-2) instrument, covering the time period 1995–2001. This paper presents the validation of aerosol optical depths (AODs) over the ocean from this product against AERONET sun-photometer measurements, as well as a comparison to the Advanced Very High Resolution Radiometer (AVHRR) optical depth product produced by the Global Aerosol Climatology Project (GACP). The GRAPE AOD over ocean is found to be in good agreement with AERONET measurements, with a Pearson's correlation coefficient of 0.79 and a best-fit slope of 1.0±0.1, but with a positive bias of 0.08±0.04. Although the GRAPE and GACP datasets show reasonable agreement, there are significant differences. These discrepancies are explored, and suggest that the downward trend in AOD reported by GACP may arise from changes in sampling due to the orbital drift of the AVHRR instruments.
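The three validation statistics quoted above (Pearson correlation, best-fit slope, mean bias) have simple definitions that can be written down directly. The AOD values below are invented for illustration; they are not GRAPE or AERONET data.

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def slope_and_bias(ref, sat):
    """Ordinary least-squares slope of satellite AOD on reference AOD,
    plus the mean bias (satellite minus reference)."""
    n = len(ref)
    mr, ms = sum(ref) / n, sum(sat) / n
    slope = (sum((r - mr) * (s - ms) for r, s in zip(ref, sat))
             / sum((r - mr) ** 2 for r in ref))
    return slope, ms - mr

# hypothetical sun-photometer (reference) vs. satellite retrievals
aeronet = [0.05, 0.10, 0.20, 0.35, 0.50]
grape = [0.14, 0.18, 0.27, 0.44, 0.58]   # invented values, ~0.08 high
r = pearson_r(aeronet, grape)
slope, bias = slope_and_bias(aeronet, grape)
```

With these invented data the slope is near 1 and the bias near +0.08, mimicking the qualitative pattern the abstract reports (good correlation, unit slope, small positive offset).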


Blood clotting response (BCR) resistance tests are available for a number of anticoagulant rodenticides. However, during the development of these tests many of the test parameters have been changed, making meaningful comparisons between results difficult. It was recognised that a standard methodology was urgently required for future BCR resistance tests and, accordingly, this document presents a reappraisal of published tests and proposes a standard protocol for future use (see Appendix). The protocol can be used to provide information on the incidence and degree of resistance in a particular rodent population; to provide a simple comparison of resistance factors between active ingredients, thus giving clear information about cross-resistance for any given strain; and to provide comparisons of susceptibility or resistance between different populations. The methodology is statistically sound, being based on the ED50 response, and requires far fewer animals than the resistance tests in current use. Most importantly, tests can be used to give a clear indication of the likely practical impact of resistance on field efficacy. The present study was commissioned and funded by the Rodenticide Resistance Action Committee (RRAC) of CropLife International.
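A minimal sketch of how an ED50 can be read off dose-response data of the kind a BCR test produces. This uses simple linear interpolation on a log-dose scale as a stand-in for the full probit/logit fit a real protocol would specify; the doses and response counts below are hypothetical.

```python
import math

def ed50(doses, responded, tested):
    """Estimate the ED50 (dose producing a 50% response) by linear
    interpolation of response proportions on a log-dose scale. A real
    analysis would fit a probit or logit model instead."""
    props = [r / n for r, n in zip(responded, tested)]
    for i in range(len(doses) - 1):
        p0, p1 = props[i], props[i + 1]
        if p0 <= 0.5 <= p1:
            f = (0.5 - p0) / (p1 - p0)
            log_d = (math.log(doses[i])
                     + f * (math.log(doses[i + 1]) - math.log(doses[i])))
            return math.exp(log_d)
    raise ValueError("50% response not bracketed by the dose range")

# hypothetical BCR data: dose (mg/kg), number responding, number tested
doses = [0.5, 1.0, 2.0, 4.0]
responded = [1, 3, 8, 10]
tested = [10, 10, 10, 10]
ed = ed50(doses, responded, tested)
```

A resistance factor of the kind the protocol compares across active ingredients would then be the ratio of the ED50 for a resistant strain to that of a susceptible one.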


We investigate for 26 OECD economies whether their current account imbalances to GDP are driven by stochastic trends. Regarding bounded stationarity as the more natural counterpart of sustainability, results from Phillips–Perron tests for unit root and bounded unit root processes are contrasted. While the former hint at stationarity of current account imbalances for 12 economies, the latter indicate bounded stationarity for only six economies. Through panel-based test statistics, current account imbalances are diagnosed as bounded non-stationary. Thus, (spurious) rejections of the unit root hypothesis might be due to the existence of bounds reflecting hidden policy controls or financial crises.
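The unit-root logic behind the abstract can be illustrated with a Dickey-Fuller-style regression. This is a simplified stand-in for the Phillips-Perron tests the paper uses: it omits the nonparametric serial-correlation correction and the bounded-unit-root adjustment, and the simulated series below are invented, not OECD current-account data.

```python
import random

def df_stat(y):
    """t-statistic on rho in the regression dy_t = a + rho*y_{t-1} + e_t.
    Strongly negative values are evidence against a unit root. (A
    Phillips-Perron test adds a nonparametric correction for serially
    correlated errors; a bounded-unit-root test further adjusts the
    critical values. Both refinements are omitted in this sketch.)"""
    x = y[:-1]
    dy = [y[t + 1] - y[t] for t in range(len(y) - 1)]
    n = len(x)
    mx, md = sum(x) / n, sum(dy) / n
    sxx = sum((v - mx) ** 2 for v in x)
    rho = sum((v - mx) * (d - md) for v, d in zip(x, dy)) / sxx
    a = md - rho * mx
    resid = [d - a - rho * v for d, v in zip(dy, x)]
    s2 = sum(e * e for e in resid) / (n - 2)
    return rho / (s2 / sxx) ** 0.5

rng = random.Random(0)
walk, bounded = [0.0], [0.0]
for _ in range(500):
    walk.append(walk[-1] + rng.gauss(0, 1))             # unit root: shocks persist
    bounded.append(0.8 * bounded[-1] + rng.gauss(0, 1))  # mean-reverting
```

The mean-reverting series produces a strongly negative statistic while the random walk does not, which is the basic contrast the paper's bounded/unbounded test comparison builds on.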


Total ozone trends are typically studied using linear regression models that assume a first-order autoregression of the residuals [so-called AR(1) models]. We consider total ozone time series over 60°S–60°N from 1979 to 2005 and show that most latitude bands exhibit long-range correlated (LRC) behavior, meaning that ozone autocorrelation functions decay by a power law rather than exponentially as in AR(1). At such latitudes the uncertainties of total ozone trends are greater than those obtained from AR(1) models, and the expected time required to detect ozone recovery is correspondingly longer. We find no evidence of LRC behavior in southern middle and high subpolar latitudes (45°–60°S), where the long-term ozone decline attributable to anthropogenic chlorine is greatest. We thus confirm an earlier prediction based on an AR(1) analysis that this region (especially the highest latitudes, and especially the South Atlantic) is the optimal location for the detection of ozone recovery, with a statistically significant ozone increase attributable to chlorine likely to be detectable by the end of the next decade. In northern middle and high latitudes, on the other hand, there is clear evidence of LRC behavior. This increases the uncertainties on the long-term trend attributable to anthropogenic chlorine by about a factor of 1.5 and lengthens the expected time to detect ozone recovery by a similar amount (from ∼2030 to ∼2045). If the long-term changes in ozone are instead fit by a piecewise-linear trend rather than by stratospheric chlorine loading, then the strong decrease of northern middle- and high-latitude ozone during the first half of the 1990s and its subsequent increase in the second half of the 1990s projects more strongly onto the trend and makes a smaller contribution to the noise. This both increases the trend and weakens the LRC behavior at these latitudes, to the extent that ozone recovery (according to this model, and in the sense of a statistically significant ozone increase) is already on the verge of being detected. The implications of this rather controversial interpretation are discussed.
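The AR(1)-versus-LRC distinction can be made concrete with a sample autocorrelation function. The AR(1) coefficient 0.6, the power-law exponent 0.4, and the simulated series are arbitrary illustrative choices, not values from the study: the point is only that an exponential autocorrelation is negligible by lag 20 while a power law matched at lag 1 remains sizeable.

```python
import random

def acf(y, lag):
    """Sample autocorrelation of a series at a given lag."""
    n = len(y)
    m = sum(y) / n
    c0 = sum((v - m) ** 2 for v in y)
    return sum((y[t] - m) * (y[t + lag] - m) for t in range(n - lag)) / c0

# simulate an AR(1) series; phi and the length are arbitrary choices
rng = random.Random(7)
phi = 0.6
y = [0.0]
for _ in range(4000):
    y.append(phi * y[-1] + rng.gauss(0, 1))

r1 = acf(y, 1)                  # close to phi
ar1_pred_20 = phi ** 20         # exponential decay: essentially zero by lag 20
lrc_pred_20 = r1 * 20 ** -0.4   # power-law decay matched at lag 1 stays large
```

Because long-range correlations leave substantial dependence at large lags, the effective number of independent observations shrinks, which is why trend uncertainties grow relative to the AR(1) case.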


A series of imitation games, involving both 3-participant tests (simultaneous comparison of two hidden entities) and 2-participant tests (direct interrogation of a hidden entity), was conducted at Bletchley Park on the 100th anniversary of Alan Turing’s birth: 23 June 2012. From the ongoing analysis of over 150 games involving judges (expert and non-expert, male and female, adult and child), machines and hidden humans (foils for the machines), we present six particular conversations between human judges and a hidden entity that produced unexpected results. From this sample we focus on a feature of Turing’s machine intelligence test that the mathematician/code breaker did not consider in his examination of machine thinking: the subjective nature of attributing intelligence to another mind.


Hourly data (1994–2009) of surface ozone concentrations at eight monitoring sites have been investigated to assess target-level and long-term objective exceedances and their trends. The European Union (EU) ozone target value for human health (60 ppb maximum daily 8-hour running mean) has been exceeded in a number of years at almost all sites, but never beyond the set limit of 25 exceedances in one year. Second-highest annual hourly and 4th-highest annual 8-hourly mean ozone concentrations have shown a statistically significant negative trend for the inland sites of Cork-Glashaboy, Monaghan and Lough Navar, and no significant trend for the Mace Head site. Peak afternoon ozone concentrations averaged over the three-year period 2007 to 2009 have been found to be lower than corresponding values over the three-year period 1996 to 1998 for two sites: Cork-Glashaboy and Lough Navar. The EU long-term objective value of AOT40 (Accumulated Ozone exposure over a Threshold of 40 ppb) for the protection of vegetation (3 ppm-hour, calculated from May to July) has been exceeded, on an individual-year basis, at two sites: Mace Head and Valentia. The critical level for the protection of forest (10 ppm-hour from April to September) has not been exceeded at any site except Valentia in 2003. AOT40-vegetation shows a significant negative trend for a 3-year running average at the Cork-Glashaboy (-0.13±0.02 ppm-hour per year), Lough Navar (-0.05±0.02 ppm-hour per year) and Monaghan (-0.03±0.03 ppm-hour per year; not statistically significant) sites. No statistically significant trend was observed for the coastal site of Mace Head. Overall, with the exception of the Mace Head and Monaghan sites, ozone measurement records at Irish sites show a downward trend in the peak values that affect human health and vegetation.
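The two ozone metrics used above have direct definitions that are easy to compute from hourly data. The sketch below uses an invented 24-hour concentration profile; for the vegetation metric a full implementation would also restrict the sum to daylight hours in May-July, as noted in the docstring.

```python
def aot40(hourly_ppb, threshold=40.0):
    """Accumulated Ozone exposure over a Threshold of 40 ppb, in ppm-hour:
    the sum of hourly exceedances of the threshold. (The vegetation metric
    additionally restricts this to daylight hours over May-July.)"""
    return sum(max(c - threshold, 0.0) for c in hourly_ppb) / 1000.0

def max_8h_mean(day_ppb):
    """Maximum daily 8-hour running mean, the quantity compared against
    the 60 ppb EU target value for human health."""
    return max(sum(day_ppb[i:i + 8]) / 8.0 for i in range(len(day_ppb) - 7))

# an invented 24-hour profile with an 8-hour afternoon peak of 70 ppb
sample = [30.0] * 8 + [70.0] * 8 + [30.0] * 8
exposure = aot40(sample)       # 8 hours of 30 ppb exceedance = 0.24 ppm-hour
peak_8h = max_8h_mean(sample)  # the all-70 window gives 70.0 ppb
```

With this profile the day would count as one exceedance of the 60 ppb human-health target, and would contribute 0.24 ppm-hour towards the seasonal AOT40 total.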


We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated, in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.
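The two scoring rules, and the intuition behind a forecast-encompassing check, can be sketched in a few lines. The weight-grid search below is a heuristic illustration of "forecast 1 encompasses forecast 2" (the optimal weight on the second forecast is zero); it is not the formal test statistic or limiting distribution the paper develops, and the probabilities and outcomes are invented.

```python
import math

def brier(p, y):
    """Quadratic (Brier) score: mean squared error of probability forecasts."""
    return sum((pi - yi) ** 2 for pi, yi in zip(p, y)) / len(p)

def log_score(p, y):
    """Logarithmic score: average negative log-likelihood (lower is better)."""
    return -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for pi, yi in zip(p, y)) / len(p)

def best_weight(p1, p2, y, grid=101):
    """Score-minimising weight on forecast 2 in the combination
    (1-w)*p1 + w*p2. A weight near zero suggests forecast 1
    encompasses forecast 2."""
    def combined_score(w):
        return brier([(1 - w) * a + w * b for a, b in zip(p1, p2)], y)
    weights = [g / (grid - 1) for g in range(grid)]
    return min(weights, key=combined_score)

# invented recession-probability forecasts and binary outcomes
p1 = [0.9, 0.1, 0.8, 0.2]   # sharp, well-calibrated forecaster
p2 = [0.5, 0.5, 0.5, 0.5]   # uninformative forecaster
y = [1, 0, 1, 0]
```

Here the uninformative forecaster adds nothing, so the score-minimising weight on it is zero, the situation in which the null of encompassing would not be rejected.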


Tests, as learning events, are often more effective than are additional study opportunities, especially when recall is tested after a long retention interval. To what degree, though, do prior test or study events support subsequent study activities? We set out to test an implication of Bjork and Bjork’s (1992) new theory of disuse—that, under some circumstances, prior study may facilitate subsequent study more than does prior testing. Participants learned English–Swahili translations and then underwent a practice phase during which some items were tested (without feedback) and other items were restudied. Although tested items were better recalled after a 1-week delay than were restudied items, this benefit did not persist after participants had the opportunity to study the items again via feedback. In fact, after this additional study opportunity, items that had been restudied earlier were better recalled than were items that had been tested earlier. These results suggest that measuring the memorial consequences of testing requires more than a single test of retention and, theoretically, a consideration of the differing status of initially recallable and nonrecallable items.


In this study, changes in rainfall, temperature and river discharge over the last three decades in Central Vietnam are analysed. Trends and rainfall indices are evaluated using non-parametric tests at different temporal levels. To overcome the sparseness of the locally available gauge network, the high-resolution APHRODITE gridded dataset is used in addition to the existing rain gauges. Finally, linkages between discharge changes and trends in rainfall and temperature are explored. Results are indicative of an intensification of rainfall (+15%/decade), with more extreme and longer events. A significant increase in winter rainfall and a decrease in consecutive dry days provide strong evidence for a lengthening wet season in Central Vietnam. In addition, trends based on APHRODITE suggest a strong orographic signal in winter and annual trends. These results underline the local variability in the impacts of global-scale climatic change; consequently, it is important that change-detection investigations are conducted at the local scale. A very weak signal is detected in the trend of minimum temperature (+0.2°C/decade). River discharge trends show an increase in mean discharge (31 to 35%/decade) over the last decades, of which between 54 and 74% is explained by the increase in precipitation. The maximum discharge also responds significantly to precipitation changes, namely the lengthened wet season and the increase in extreme rainfall events. Such trends can be linked with a likely increase in floods in Central Vietnam, which is important for future adaptation planning, management and flood preparedness in the region. Copyright © 2012 John Wiley & Sons, Ltd.
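The abstract does not name its non-parametric trend tests, but the Mann-Kendall test is the standard choice for rainfall and discharge series, so a minimal version is sketched here as an assumption. It omits the tie correction a production implementation would need.

```python
import math

def mann_kendall(series):
    """Mann-Kendall S statistic and its normal-approximation Z score
    (no tie correction). |Z| > 1.96 indicates a trend significant at
    roughly the 5% level."""
    n = len(series)
    # S counts concordant minus discordant pairs
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var)   # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    return s, z

# a strictly increasing toy series: every pair is concordant
s, z = mann_kendall([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
```

Being rank-based, the test is insensitive to the skewed distributions typical of rainfall, which is why it suits the indices analysed in the study.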


Although difference-stationary (DS) and trend-stationary (TS) processes have been subject to considerable analysis, there have been no direct comparisons in which each, in turn, is the data-generation process (DGP). We examine the consequences of incorrectly choosing between these models for forecasting, for both known and estimated parameters. Three sets of Monte Carlo simulations illustrate the analysis: they evaluate the biases in conventional standard errors when each model is mis-specified, compute the relative mean-square forecast errors of the two models under both DGPs, and investigate autocorrelated errors, so that each model can better approximate the converse DGP. The outcomes are surprisingly different from established results.
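The two DGPs and their natural forecasting models can be simulated in a few lines, which is the skeleton of the kind of Monte Carlo exercise described above. The drift of 0.1, the unit noise variance, and the sample size are arbitrary illustrative choices.

```python
import random

rng = random.Random(42)
n = 200

# trend-stationary DGP: y_t = 0.1*t + noise
ts = [0.1 * t + rng.gauss(0, 1) for t in range(n)]
# difference-stationary DGP: y_t = y_{t-1} + 0.1 + noise (drifting random walk)
ds = [0.0]
for _ in range(n - 1):
    ds.append(ds[-1] + 0.1 + rng.gauss(0, 1))

def forecast_ts(y, h):
    """TS model: fit a linear time trend by OLS, extrapolate h steps ahead."""
    m = len(y)
    t = range(m)
    mt, my = sum(t) / m, sum(y) / m
    b = (sum((a - mt) * (v - my) for a, v in zip(t, y))
         / sum((a - mt) ** 2 for a in t))
    return (my - b * mt) + b * (m - 1 + h)

def forecast_ds(y, h):
    """DS model: random walk with drift, last value plus estimated drift * h."""
    drift = (y[-1] - y[0]) / (len(y) - 1)
    return y[-1] + drift * h
```

A full Monte Carlo study would repeat this over many replications, apply each forecasting model to each DGP, and average squared forecast errors to obtain the relative mean-square forecast errors the paper computes.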


We test whether there are nonlinearities in the response of short- and long-term interest rates to the spread in interest rates, and assess the out-of-sample predictability of interest rates using linear and nonlinear models. We find strong evidence of nonlinearities in the response of interest rates to the spread. Nonlinearities are shown to result in more accurate short-horizon forecasts, especially of the spread.
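A deliberately crude illustration of a nonlinear (threshold) response of interest-rate changes to the spread: the mean change is estimated separately in two spread regimes. This two-regime mean model is a stand-in for whatever nonlinear specification the paper actually estimates, and the data below are invented.

```python
def fit_threshold_means(spread, d_rate, thr=0.0):
    """Two-regime sketch of a nonlinear response: the mean interest-rate
    change is estimated separately for spreads above and below a
    threshold. Returns (above-threshold mean, below-threshold mean)."""
    hi = [d for s, d in zip(spread, d_rate) if s > thr]
    lo = [d for s, d in zip(spread, d_rate) if s <= thr]
    return sum(hi) / len(hi), sum(lo) / len(lo)

def forecast(spread_now, params, thr=0.0):
    """Forecast the next rate change from the regime the spread is in."""
    hi_mean, lo_mean = params
    return hi_mean if spread_now > thr else lo_mean

# invented spreads and subsequent rate changes
params = fit_threshold_means([1.0, 2.0, -1.0, -2.0], [0.5, 0.7, -0.1, -0.3])
```

A linear model would force a single slope across both regimes; letting the response differ by regime is the simplest version of the nonlinearity whose out-of-sample forecasting value the paper assesses.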