992 results for temporal aggregation
Abstract:
Lack of access to insurance exacerbates the impact of climate variability on smallholder farmers in Africa. Unlike traditional insurance, which compensates proven agricultural losses, weather index insurance (WII) pays out in the event that a weather index is breached. In principle, WII could be provided to farmers throughout Africa, but there are two data-related hurdles. First, most farmers do not live close enough to a rain gauge with a sufficiently long record of observations. Second, mismatches between weather indices and yield may expose farmers to uncompensated losses, and insurers to unfair payouts, a phenomenon known as basis risk. In essence, basis risk results from complexities in the progression from meteorological drought (rainfall deficit) to agricultural drought (low soil moisture). In this study, we use a land-surface model to describe the transition from meteorological to agricultural drought. We demonstrate that spatial and temporal aggregation of rainfall results in a clearer link with soil moisture, and hence a reduction in basis risk. We then use an advanced statistical method to show how optimal aggregation of satellite-based rainfall estimates can reduce basis risk, enabling remotely sensed data to be utilized robustly for WII.
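A minimal sketch of the aggregation effect, using a synthetic daily rainfall series and a crude leaky-bucket soil-moisture proxy in place of the paper's land-surface model (the names, constants, and gamma rainfall model are illustrative assumptions, not the authors' setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily rainfall and a soil-moisture proxy that integrates
# rainfall with a slow decay (a crude stand-in for a land-surface model).
days = 3650
rain = rng.gamma(shape=0.3, scale=5.0, size=days)   # mm/day
soil = np.zeros(days)
for t in range(1, days):
    soil[t] = 0.97 * soil[t - 1] + rain[t]          # leaky-bucket memory

def aggregate(x, window):
    """Non-overlapping sums over `window` days (temporal aggregation)."""
    n = len(x) // window * window
    return x[:n].reshape(-1, window).sum(axis=1)

# Correlation between the aggregated rainfall index and end-of-window soil
# moisture: a tighter link implies lower basis risk for a rainfall index.
for window in (1, 10, 30, 90):
    idx = aggregate(rain, window)
    sm = soil[window - 1 : len(idx) * window : window]
    print(f"window={window:3d} d  corr={np.corrcoef(idx, sm)[0, 1]:.2f}")
```

Under the few-week soil memory assumed here, the correlation typically rises as the aggregation window grows toward that timescale, which is the sense in which temporal aggregation tightens the rainfall-soil moisture link.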
Abstract:
This paper reinterprets results of Ohanissian et al. (2003) to show the asymptotic equivalence of temporally aggregating a series and using a smaller bandwidth when estimating long memory by Geweke and Porter-Hudak's (1983) estimator, provided that the same number of periodogram ordinates is used in both cases. The equivalence is in the sense that the two estimators' joint distribution is asymptotically normal with common mean, common variance, and unit correlation. Furthermore, I prove that the same applies to the estimator of Robinson (1995). Monte Carlo simulations show that this asymptotic equivalence is a good approximation in finite samples. Finally, a real example with the daily US Dollar/French Franc exchange rate series is provided.
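For concreteness, a minimal sketch of the Geweke and Porter-Hudak (1983) log-periodogram estimator discussed above (the bandwidth rule m = sqrt(n) is a common illustrative choice, not one the paper prescribes):

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke-Porter-Hudak log-periodogram estimate of the long-memory
    parameter d, using the first m Fourier frequencies."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))                  # a common bandwidth choice
    # Periodogram at Fourier frequencies lam_j = 2*pi*j/n, j = 1..m.
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    fft = np.fft.fft(x - x.mean())[1 : m + 1]
    periodogram = np.abs(fft) ** 2 / (2 * np.pi * n)
    # Regress log I(lam_j) on -log(4 sin^2(lam_j / 2)); the slope is d-hat.
    regressor = -np.log(4 * np.sin(lam / 2) ** 2)
    return np.polyfit(regressor, np.log(periodogram), 1)[0]
```

Aggregating a series shortens it and thereby shrinks the set of usable periodogram ordinates, just as choosing a smaller m does directly; the equivalence result says that, ordinate for ordinate, the two routes deliver asymptotically the same estimator.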
Abstract:
Temporal replicate counts are often aggregated to improve model fit by reducing zero-inflation and count variability and, in the case of migration counts collected hourly throughout a migration season, aggregation allows one to ignore nonindependence among counts. However, aggregation can represent a loss of potentially useful information on the hourly or seasonal distribution of counts, which might impact our ability to estimate reliable trends. We simulated 20-year hourly raptor migration count datasets with a known rate of change to test the effect of aggregating hourly counts to daily or annual totals on our ability to recover the known trend. We simulated data for three types of species, to test whether results varied with species abundance or migration strategy: a commonly detected species, e.g., Northern Harrier, Circus cyaneus; a rarely detected species, e.g., Peregrine Falcon, Falco peregrinus; and a species typically counted in large aggregations with overdispersed counts, e.g., Broad-winged Hawk, Buteo platypterus. We compared the accuracy and precision of estimated trends across species and count types (hourly/daily/annual) using hierarchical models that assumed a Poisson, negative binomial (NB), or zero-inflated negative binomial (ZINB) count distribution. We found little benefit of modeling zero-inflation or the hourly distribution of migration counts. For the rare species, analyzing daily totals with an NB or ZINB distribution gave the highest probability of detecting an accurate and precise trend. In contrast, trends of the common and overdispersed species benefited from aggregation to annual totals; for the overdispersed species in particular, trends estimated from annual totals were more precise and had lower probabilities of estimating a trend (1) in the wrong direction, or (2) with credible intervals that excluded the true trend, compared with hourly and daily counts.
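A toy version of the simulate-aggregate-fit pipeline (the trend value, counts per year, and the Poisson-only fit are illustrative assumptions, not the authors' design), using statsmodels for a log-linear trend on annual totals:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# 20 years of counts with a known -3%/yr log-linear trend; 100 count
# periods per year stand in for the hourly records of the study.
years = np.arange(20)
true_trend = -0.03
period_mean = 2.0 * np.exp(true_trend * years)       # mean count per period
counts = rng.poisson(np.repeat(period_mean, 100))    # simulated raw counts

annual = counts.reshape(20, 100).sum(axis=1)         # aggregate to years
X = sm.add_constant(years.astype(float))
fit = sm.GLM(annual, X, family=sm.families.Poisson()).fit()
print(f"estimated trend {fit.params[1]:+.4f}  (true {true_trend:+.4f})")
```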
Abstract:
Byers, D., Peel, D., & Thomas, D. (2007). Habit, aggregation and long memory: Evidence from television audience data. Applied Economics, 39(3), 321-327.
Abstract:
Chambers (1998) explores the interaction between long memory and aggregation. For continuous-time processes, he takes the aliasing effect into account when studying temporal aggregation; for discrete-time processes, however, he appears not to do so. This note gives the spectral density function of temporally aggregated long memory discrete-time processes in light of the aliasing effect. The results differ from those in Chambers (1998) and are supported by a small simulation exercise. As a consequence, the order of integration may not be invariant to temporal aggregation, specifically if d is negative and the aggregation is of the stock type.
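For reference, the standard aliasing identities at issue, in notation of my own choosing: with underlying spectral density f and aggregation over m periods,

```latex
% Stock (skip) sampling, Y_t = X_{mt}:
f_Y(\lambda) \;=\; \frac{1}{m} \sum_{k=0}^{m-1}
  f\!\left(\frac{\lambda + 2\pi k}{m}\right),
  \qquad -\pi \le \lambda \le \pi .

% Flow aggregation, Y_t = \sum_{j=0}^{m-1} X_{mt-j}:
f_Y(\lambda) \;=\; \frac{1}{m} \sum_{k=0}^{m-1}
  \left| \frac{1 - e^{-i(\lambda + 2\pi k)}}
              {1 - e^{-i(\lambda + 2\pi k)/m}} \right|^{2}
  f\!\left(\frac{\lambda + 2\pi k}{m}\right).
```

In the flow case the numerator satisfies $1 - e^{-i(\lambda + 2\pi k)} = 1 - e^{-i\lambda}$, which vanishes as $\lambda \to 0$ for every k, suppressing the folded-in terms. In the stock case the aliased terms contribute a positive constant at frequency zero, so when d is negative (f(0) = 0) the aggregate's spectral density no longer vanishes at the origin and its order of integration moves toward zero, consistent with the result above.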
Abstract:
In learning from trial and error, animals need to relate behavioral decisions to environmental reinforcement even though it may be difficult to assign credit to a particular decision when outcomes are uncertain or subject to delays. When considering the biophysical basis of learning, the credit-assignment problem is compounded because the behavioral decisions themselves result from the spatio-temporal aggregation of many synaptic releases. We present a model of plasticity induction for reinforcement learning in a population of leaky integrate-and-fire neurons which is based on a cascade of synaptic memory traces. Each synaptic cascade correlates presynaptic input first with postsynaptic events, next with the behavioral decisions, and finally with external reinforcement. For operant conditioning, learning succeeds even when reinforcement is delivered with a delay so large that temporal contiguity between decision and pertinent reward is lost due to intervening decisions which are themselves subject to delayed reinforcement. This shows that the model provides a viable mechanism for temporal credit assignment. Further, learning speeds up with increasing population size, so the plasticity cascade simultaneously addresses the spatial problem of assigning credit to synapses in different population neurons. Simulations on other tasks, such as sequential decision making, serve to contrast the performance of the proposed scheme with that of temporal difference-based learning. We argue that, due to their comparative robustness, synaptic plasticity cascades are attractive basic models of reinforcement learning in the brain.
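A toy illustration of the cascade idea for a single synapse (the dynamics, probabilities, and constants below are mine, not the paper's model): each trace low-pass filters the product of the previous stage with the next, more distal, learning signal.

```python
import numpy as np

rng = np.random.default_rng(2)

# One synapse with a three-stage trace cascade (illustrative constants).
w, trace1, trace2, trace3 = 0.0, 0.0, 0.0, 0.0
for step in range(10_000):
    pre = rng.random() < 0.2                      # presynaptic spike
    post = rng.random() < 0.1 + 0.5 * w * pre     # postsynaptic spike
    decision = post                               # decision follows activity
    reward = 1.0 if (pre and decision) else 0.0   # delayed in the full model
    trace1 = 0.9 * trace1 + (pre and post)        # pre x post coincidences
    trace2 = 0.9 * trace2 + trace1 * decision     # gate by the decision
    trace3 = 0.9 * trace3 + trace2 * reward       # gate by reinforcement
    w = min(1.0, w + 0.001 * trace3)              # reward-driven potentiation
print(f"final weight: {w:.2f}")
```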
Abstract:
In this paper, we introduce a new approach for volatility modeling in discrete and continuous time. We follow the stochastic volatility literature by assuming that the variance is a function of a state variable. However, instead of assuming that the loading function is ad hoc (e.g., exponential or affine), we assume that it is a linear combination of the eigenfunctions of the conditional expectation (resp. infinitesimal generator) operator associated to the state variable in discrete (resp. continuous) time. Special examples are the popular log-normal and square-root models, where the eigenfunctions are the Hermite and Laguerre polynomials respectively. The eigenfunction approach has at least six advantages: (i) it is general, since any square-integrable function may be written as a linear combination of the eigenfunctions; (ii) the orthogonality of the eigenfunctions leads to the traditional interpretations of linear principal component analysis; (iii) the implied dynamics of the variance and squared return processes are ARMA and, hence, simple for forecasting and inference purposes; (iv) more importantly, the approach generates fat tails for the variance and return processes; (v) in contrast to popular models, the variance of the variance is a flexible function of the variance; (vi) these models are closed under temporal aggregation.
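Schematically, and in notation of my own choosing, the loading function is expanded on eigenfunctions of the conditional expectation operator of the state variable:

```latex
\sigma_t^{2} \;=\; \sum_{i=1}^{p} a_i\, \varphi_i(s_t),
\qquad
\mathbb{E}\!\left[ \varphi_i(s_{t+1}) \mid s_t \right]
  \;=\; \lambda_i\, \varphi_i(s_t),
\quad i = 1, \dots, p .
```

Since each eigenfunction behaves as an AR(1) in conditional mean, any finite linear combination yields variance (and squared-return) dynamics of ARMA form, which is advantage (iii); Hermite and Laguerre polynomials recover the log-normal and square-root special cases.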
Abstract:
This paper derives the spectral density function of aggregated long memory processes in light of the aliasing effect. The results differ from previous analyses in the literature, and a small simulation exercise provides evidence in our favour. The main result indicates that flow aggregates of long memory processes should be less biased than stock aggregates, although both retain the degree of long memory. This result is illustrated with the daily US Dollar/French Franc exchange rate series.
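A small self-contained simulation in the spirit of the exercise described (the sample size, d, aggregation level, and bandwidth are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(3)

def arfima0d0(n, d, trunc=1000):
    """Simulate ARFIMA(0, d, 0) via a truncated MA(inf) representation."""
    j = np.arange(1, trunc)
    psi = np.concatenate(([1.0], np.cumprod((j - 1 + d) / j)))
    eps = rng.standard_normal(n + trunc)
    return np.convolve(eps, psi, mode="valid")[:n]

def gph(x, m):
    """Log-periodogram estimate of d from the first m ordinates."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1 : m + 1]) ** 2 / (2 * np.pi * n)
    return np.polyfit(-np.log(4 * np.sin(lam / 2) ** 2), np.log(I), 1)[0]

n, d, m_agg = 30_000, 0.3, 5
x = arfima0d0(n, d)
stock = x[m_agg - 1 :: m_agg]                        # point-in-time sampling
flow = x[: n // m_agg * m_agg].reshape(-1, m_agg).sum(axis=1)
bw = int(np.sqrt(len(stock)))
print(f"d-hat stock {gph(stock, bw):.3f} | d-hat flow {gph(flow, bw):.3f}"
      f" | true d {d}")
```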
Abstract:
In recent years, the concept of a composite performance index, borrowed from economic and business statistics, has gained popularity in the field of road safety. The construction of a Composite Safety Performance Index (CSPI) involves two key steps: selecting the most appropriate indicators to be aggregated and choosing the method used to aggregate them.
Over the last decade, various aggregation methods for estimating the CSPI have been suggested in the literature. However, recent studies indicate that most of these methods suffer from deficiencies at both the theoretical and the operational level; these include correlation and compensability between indicators, as well as a high "degree of freedom" that enables one to readily manipulate them to produce desired outcomes.
The purpose of this study is to introduce an alternative aggregation method for estimating the CSPI that is free from the aforementioned deficiencies. In contrast with current aggregation methods, which generally use linear combinations of road safety indicators to estimate a CSPI, the approach advocated in this study is based on non-linear combinations of indicators and can be summarized in two main steps: the pairwise comparison of road safety indicators and the development of marginal and composite road safety performance functions. The method has been successfully applied to identify and rank temporal and spatial hotspots for Northern Ireland, using road traffic collision data recorded in the UK STATS19 database. The results highlight promising features of the proposed approach, including its stability and consistency, which enable it to significantly reduce the deficiencies associated with current aggregation methods. Over time, the introduced method could evolve into an intelligent support system for road safety assessment.
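The authors' pairwise-comparison construction is not reproduced here, but the linear-versus-non-linear contrast it rests on can be made concrete with a generic example: a weighted geometric mean is a simple non-compensatory, non-linear aggregator, unlike the weighted arithmetic mean (the indicator values and weights below are made up):

```python
import numpy as np

# Three normalized road-safety indicators for four hypothetical regions
# (rows); values in [0, 1], higher is better. Data are made up.
scores = np.array([
    [0.9, 0.2, 0.8],
    [0.6, 0.6, 0.6],
    [0.3, 0.9, 0.7],
    [0.8, 0.8, 0.1],
])
weights = np.array([0.4, 0.3, 0.3])              # sum to 1

linear = scores @ weights                        # compensatory aggregation
geometric = np.prod(scores ** weights, axis=1)   # non-compensatory
print("linear   :", np.round(linear, 3))
print("geometric:", np.round(geometric, 3))
```

In the linear index a very weak indicator can be fully offset by strong ones (compensability); the geometric form penalizes unbalanced profiles, which is the kind of deficiency the proposed method targets.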
Abstract:
Objective: The authors quantified nonverbal synchrony (the coordination of patient's and therapist's movement) in a random sample of same-sex psychotherapy dyads. The authors contrasted nonverbal synchrony in these dyads with a control condition and assessed its association with session-level and overall psychotherapy outcome. Method: Using an automated, objective video analysis algorithm (Motion Energy Analysis, MEA), the authors calculated nonverbal synchrony in 104 videotaped psychotherapy sessions from 70 Caucasian patients (37 women, 33 men; mean age = 36.5 years, SD = 10.2) treated at an outpatient psychotherapy clinic. The sample was randomly drawn from an archive (N = 301) of routinely videotaped psychotherapies. Patients and their therapists assessed session impact with self-report postsession questionnaires. A battery of pre- and post-therapy symptom questionnaires measured therapy effectiveness. Results: Nonverbal synchrony was higher in genuine interactions than in pseudointeractions (a control condition generated by a specifically designed shuffling procedure). Furthermore, nonverbal synchrony was associated with session-level process as well as therapy outcome: it was increased in sessions rated by patients as manifesting high relationship quality and in patients experiencing high self-efficacy. Higher nonverbal synchrony characterized psychotherapies with higher symptom reduction. Conclusions: The results suggest that nonverbal synchrony embodies the patients' self-reported quality of the relationship and further variables of the therapy process. This hitherto overlooked facet of therapeutic relationships might prove useful as an indicator of therapy progress and outcome.
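The synchrony-versus-pseudointeraction logic can be sketched on synthetic motion-energy series (this is not the MEA implementation; the shared-component construction, the lag range, and the segment-shuffling control are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

def max_lagged_corr(a, b, max_lag=10):
    """Peak absolute cross-correlation over small non-negative lags."""
    return max(
        abs(np.corrcoef(a[lag:], b[: len(b) - lag])[0, 1])
        for lag in range(max_lag + 1)
    )

# Synthetic "motion energy": patient and therapist share a slow component.
n = 2000
shared = np.convolve(rng.standard_normal(n), np.ones(25) / 25, mode="same")
patient = shared + 0.5 * rng.standard_normal(n)
therapist = np.roll(shared, 3) + 0.5 * rng.standard_normal(n)

genuine = max_lagged_corr(patient, therapist)
# Pseudointeraction control: shuffle the therapist series in segments, so
# its distribution is kept but its timing relative to the patient is not.
pseudo_series = therapist.reshape(40, 50)[rng.permutation(40)].reshape(-1)
pseudo = max_lagged_corr(patient, pseudo_series)
print(f"genuine {genuine:.2f}  vs  pseudointeraction {pseudo:.2f}")
```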
Abstract:
Type 1 diabetes (T1D) is a common, multifactorial disease with strong familial clustering. In Finland, the incidence of T1D among children aged 14 years or under is the highest in the world, and it has increased by approximately 2.4% per year. Although most new T1D cases are sporadic, first-degree relatives are at increased risk of developing the disease. This study was designed to examine the familial aggregation of T1D and one of its serious complications, diabetic nephropathy (DN). More specifically, the study aimed (1) to determine the concordance rates of T1D in monozygotic (MZ) and dizygotic (DZ) twins, to estimate the relative contributions of genetic and environmental factors to the variability in liability to T1D, and to study the age at onset of diabetes in twins; (2) to obtain long-term empirical estimates of the risk of T1D among siblings of T1D patients and the factors related to this risk, especially the effect of age at onset of diabetes in the proband and the birth cohort effect; (3) to establish whether DN aggregates in a Finnish population-based cohort of families with multiple cases of T1D, to assess its magnitude, and in particular to find out whether the risk of DN in siblings varies according to the severity of DN in the proband and/or the age at onset of T1D; (4) to assess the recurrence risk of T1D in the offspring of a Finnish population-based cohort of patients with childhood-onset T1D, to investigate potential sex-related effects in the transmission of T1D from diabetic parents to their offspring, and to study whether there is a temporal trend in the incidence. The study population comprised the Finnish Young Twin Cohort (22,650 twin pairs), a population-based cohort of patients with T1D diagnosed at the age of 17 years or earlier between 1965 and 1979 (n = 5,144), and all their siblings (n = 10,168) and offspring (n = 5,291). A polygenic, multifactorial liability model was fitted to the twin data. Kaplan-Meier analyses provided the cumulative incidence of T1D and DN, Cox proportional hazards models were fitted to the data, and Poisson regression was used to evaluate temporal trends in incidence. Standardized incidence ratios (SIRs) between the first-degree relatives of T1D patients and the background population were determined. The twin study showed that the vast majority of affected MZ twin pairs remained discordant. Pairwise concordance for T1D was 27.3% in MZ and 3.8% in DZ twins; the probandwise concordance estimates were 42.9% and 7.4%, respectively. The model with additive genetic and individual environmental effects was the best-fitting liability model for T1D, with 88% of the phenotypic variance due to genetic factors. The second paper showed that the 50-year cumulative incidence of T1D in the siblings of diabetic probands was 6.9%. A young age at diagnosis in the proband considerably increased the risk: if the proband was diagnosed at the age of 0-4, 5-9, 10-14, or 15 and over, the corresponding 40-year cumulative risks in siblings were 13.2%, 7.8%, 4.7%, and 3.4%, respectively. The cumulative incidence increased with increasing birth year; however, the SIR among children aged 14 years or under was approximately 12 throughout the follow-up. The third paper showed that diabetic siblings of probands with nephropathy had a 2.3 times higher risk of DN compared with siblings of probands free of nephropathy.
The presence of end-stage renal disease (ESRD) in the proband increased the risk three-fold for diabetic siblings. Being diagnosed with diabetes during puberty (ages 10-14) or a few years before (ages 5-9) increased the susceptibility to DN in the siblings. The fourth paper revealed that 7.8% of the offspring of male probands were affected by the age of 20, compared with 5.3% of the offspring of female probands. Offspring of fathers with T1D had a 1.7 times greater risk of being affected with T1D than offspring of mothers with T1D. The excess risk in the offspring of affected fathers was driven by paternal age at onset: a young age at onset of diabetes in fathers greatly increased the risk of T1D in the offspring, but no such pattern was seen in the offspring of diabetic mothers. The SIR among offspring aged 14 years or under remained fairly constant throughout the follow-up, at approximately 10. The present study has provided new knowledge on the T1D recurrence risk in first-degree relatives and the factors modifying that risk. Twin data demonstrated a high genetic liability for T1D and high heritability; the vast majority of affected MZ twin pairs, however, remain discordant for T1D. This study confirmed the strong impact of a young age at onset of diabetes in the proband on the increased risk of T1D in first-degree relatives, the only exception being the absence of this pattern in the offspring of T1D mothers. Both the sibling and the offspring recurrence risk studies revealed dynamic changes in the cumulative incidence of T1D in first-degree relatives, whereas SIRs among the first-degree relatives of T1D patients seem to remain fairly constant. The study demonstrates that the penetrance of the susceptibility genes for T1D may be low, although strongly influenced by environmental factors. Familial aggregation of DN was confirmed for the first time in a population-based study: although the majority of sibling pairs with T1D were discordant for DN, its presence in one sibling doubles, and the presence of ESRD triples, the risk of DN in the other diabetic sibling. An encouraging observation was that although the proportion of children diagnosed with T1D at the age of 4 or under is increasing, they seem to have a decreased risk of DN, or at least a delayed onset.
Abstract:
Instruction reuse is a microarchitectural technique that improves the execution time of a program by removing redundant computations at run-time. Although this is the job of an optimizing compiler, compilers often fail to do so because of their limited knowledge of run-time data. In this paper we examine instruction reuse of integer ALU and load instructions in network processing applications. Specifically, this paper attempts to answer the following questions: (1) How much instruction reuse is inherent in network processing applications? (2) Can reuse be improved by reducing interference in the reuse buffer? (3) What characteristics of network applications can be exploited to improve reuse? (4) What is the effect of reuse on resource contention and memory accesses? We propose an aggregation scheme that combines a high-level concept of network traffic, i.e., "flows", with a low-level microarchitectural feature of programs, i.e., repetition of instructions and data, along with an architecture that exploits temporal locality in incoming packet data to improve reuse. We find that, for the benchmarks considered, 1% to 50% of instructions are reused, while the speedup achieved varies between 1% and 24%. As a side effect, instruction reuse reduces memory traffic and can therefore also be considered a scheme for low power.
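A sketch of the reuse buffer as a lookup from (PC, source-operand values) to a previously computed result (the capacity, eviction policy, and packet example are illustrative, not the paper's architecture):

```python
# Memoize ALU results keyed by program counter and operand values;
# a hit skips re-execution of the instruction.
class ReuseBuffer:
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.table = {}                      # (pc, op1, op2) -> result
        self.hits = self.misses = 0

    def execute(self, pc, op1, op2, alu_fn):
        key = (pc, op1, op2)
        if key in self.table:
            self.hits += 1                   # reuse: no ALU work needed
            return self.table[key]
        self.misses += 1
        result = alu_fn(op1, op2)
        if len(self.table) >= self.capacity:
            self.table.pop(next(iter(self.table)))  # crude FIFO eviction
        self.table[key] = result
        return result

# Packets from the same flow repeat header fields, so keying on operand
# values captures the flow-level repetition the scheme exploits.
rb = ReuseBuffer()
for src in [0xC0A80001, 0xC0A80001, 0xC0A80002, 0xC0A80001]:
    rb.execute(pc=0x400, op1=src, op2=0xFFFF, alu_fn=lambda a, b: a & b)
print(f"hits={rb.hits} misses={rb.misses}")  # hits=2 misses=2
```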
Abstract:
Dynamic service aggregation techniques can exploit skewed access popularity patterns to reduce the costs of building interactive VoD systems. These schemes seek to cluster and merge users into single streams by bridging the temporal skew between them, thus improving server and network utilization. Rate adaptation and secondary content insertion are two such schemes. In this paper, we present and evaluate an optimal scheduling algorithm for inserting secondary content in this scenario. The algorithm runs in polynomial time and is optimal with respect to total bandwidth usage over the merging interval. We present constraints on content insertion that keep the overall QoS of the delivered stream acceptable, and show how our algorithm can satisfy these constraints. We report simulation results that quantify the substantial gains due to content insertion. We discuss dynamic scenarios with user arrivals and interactions, and show that content insertion reduces the channel bandwidth requirement to almost half. We also discuss differentiated service techniques, such as N-VoD and premium no-advertisement service, and show how our algorithm can support these as well.
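The basic merging arithmetic can be sketched as follows (a simplification of mine, not the paper's optimal scheduler): while the leading stream plays inserted secondary content it makes no progress through the primary content, so each second of secondary content closes a second of temporal skew, and any rate adaptation on the trailing stream closes the rest.

```python
def merge_time(gap, speedup=0.0, ad_seconds=0.0):
    """Seconds until a trailing stream catches the leader when the leader
    shows `ad_seconds` of inserted content and the trailer plays at
    (1 + speedup) x normal rate. Illustrative arithmetic only."""
    # While ads run, the leader stalls, so the gap closes at (1 + speedup)
    # content-seconds per wall-clock second.
    t_ads = min(ad_seconds, gap / (1 + speedup))
    gap -= t_ads * (1 + speedup)
    if gap <= 1e-12:
        return t_ads
    if speedup <= 0:
        return float("inf")          # no remaining mechanism to close it
    # Afterwards both streams advance; the gap closes at `speedup` per second.
    return t_ads + gap / speedup

# 120 s of skew, 90 s of ads, 5% playback speedup on the trailer -> 600.0 s
print(merge_time(gap=120, ad_seconds=90, speedup=0.05))
```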