28 results for Conditional and Unconditional Interval Estimator
Abstract:
Interference with time estimation from concurrent nontemporal processing has been shown to depend on the short-term memory requirements of the concurrent task (Fortin & Breton, 1995; Fortin, Rousseau, Bourque, & Kirouac, 1993). In particular, it has been claimed that active processing of information in short-term memory produces interference, whereas simply maintaining information does not. Here, four experiments are reported in which subjects were trained to produce a 2,500-msec interval and then perform concurrent memory tasks. Interference with timing was demonstrated for concurrent memory tasks involving only maintenance. In one experiment, increasing set size in a pitch memory task systematically lengthened temporal production. Two further experiments suggested that this was due to a specific interaction between the short-term memory requirements of the pitch task and those of temporal production. In the final experiment, subjects performed temporal production while concurrently remembering the durations of a set of tones. Interference with interval production was comparable to that produced by the pitch memory task. Results are discussed in terms of a pacemaker-counter model of temporal processing, in which the counter component is supported by short-term memory.
Abstract:
This correspondence introduces a new orthogonal forward regression (OFR) model identification algorithm that uses D-optimality for model structure selection and is based on M-estimators of the parameters. The M-estimator is a classical robust parameter estimation technique for handling bad data conditions such as outliers. Computationally, the M-estimator can be derived using an iteratively reweighted least squares (IRLS) algorithm. D-optimality is a model structure robustness criterion from experimental design that tackles ill-conditioning in the model structure. OFR, often based on the modified Gram-Schmidt procedure, is an efficient method that performs structure selection and parameter estimation simultaneously. The basic idea of the proposed approach is to incorporate an IRLS inner loop into the modified Gram-Schmidt procedure. In this manner, the OFR algorithm for parsimonious model structure determination is extended to bad data conditions, with improved performance via parameter M-estimators that are inherently robust to outliers. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm.
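The IRLS inner loop the abstract refers to can be illustrated in isolation. Below is a minimal sketch of a Huber M-estimator for linear regression fitted by IRLS; the threshold `delta`, the MAD scale estimate, and the function names are illustrative assumptions, not the paper's OFR algorithm.

```python
import numpy as np

def huber_weights(residuals, delta=1.345):
    """Huber weight function: 1 for |r| <= delta, delta/|r| beyond it."""
    a = np.abs(residuals)
    w = np.ones_like(a)
    mask = a > delta
    w[mask] = delta / a[mask]
    return w

def irls_m_estimate(X, y, delta=1.345, n_iter=50, tol=1e-8):
    """Iteratively reweighted least squares for a robust linear fit."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
    for _ in range(n_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust MAD scale
        w = huber_weights(r / scale, delta)            # downweight outliers
        W = np.diag(w)
        beta_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

On data containing a gross outlier, the IRLS estimate stays close to the true coefficients while ordinary least squares is pulled away, which is the robustness property the paper exploits inside the Gram-Schmidt loop.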
Abstract:
Dense deployments of wireless local area networks (WLANs) are becoming the norm in many cities around the world. However, increased interference and traffic demands can severely limit the achievable aggregate throughput unless an effective channel assignment scheme is used. In this work, a simple and effective distributed channel assignment (DCA) scheme is proposed. It is shown that, in order to maximise throughput, each access point (AP) should simply choose the channel with the minimum number of active neighbour nodes (i.e. nodes associated with neighbouring APs that have packets to send). However, applying such a scheme in practice depends critically on the ability to estimate the number of neighbour nodes in each channel, for which no practical estimator had been proposed before. In view of this, an extended Kalman filter (EKF) estimator of the number of nodes per AP is proposed. It not only provides fast and accurate estimates but can also exploit the channel switching information of neighbouring APs. Extensive packet-level simulation results show that the proposed minimum neighbour and EKF estimator (MINEK) scheme is highly scalable and provides significant throughput improvement over other channel assignment schemes.
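The abstract does not give the EKF's state and observation models, so as a rough illustration of the filtering idea, here is a generic scalar Kalman filter tracking a slowly varying quantity (such as a neighbour-node count) through noisy measurements; the random-walk state model and the tuning parameters `q` and `r` are assumptions, not MINEK's design.

```python
class ScalarKalman:
    """Scalar Kalman filter with a random-walk state model."""

    def __init__(self, x0=0.0, p0=1.0, q=0.1, r=1.0):
        self.x = x0   # state estimate (e.g. active-node count)
        self.p = p0   # estimate variance
        self.q = q    # process noise variance (state drift)
        self.r = r    # measurement noise variance

    def update(self, z):
        # predict: random walk leaves the mean unchanged, inflates variance
        p_pred = self.p + self.q
        # correct with measurement z
        k = p_pred / (p_pred + self.r)   # Kalman gain
        self.x = self.x + k * (z - self.x)
        self.p = (1.0 - k) * p_pred
        return self.x
```

Feeding a stream of noisy observations around a true value drives the estimate toward that value while shrinking its variance, which is the fast-and-accurate tracking behaviour the abstract claims for the EKF estimator.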
Abstract:
We consider the finite sample properties of model selection by information criteria in conditionally heteroscedastic models. Recent theoretical results show that certain popular criteria are consistent in that they will select the true model asymptotically with probability 1. To examine the empirical relevance of this property, Monte Carlo simulations are conducted for a set of non-nested data generating processes (DGPs), with the set of candidate models consisting of all the model types used as DGPs. In addition, not only is the best model considered but also those with similar values of the information criterion, called close competitors, thus forming a portfolio of eligible models. To supplement the simulations, the criteria are applied to a set of economic and financial series. In the simulations, the criteria are largely ineffective at identifying the correct model, either as best or as a close competitor, the parsimonious GARCH(1, 1) model being preferred for most DGPs. In contrast, asymmetric models are generally selected to represent actual data. This leads to the conjecture that the properties of parameterizations of processes commonly used to model heteroscedastic data are more similar than may be imagined, and that more attention needs to be paid to the behaviour of the standardized disturbances of such models, both in simulation exercises and in empirical modelling.
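The "best model plus close competitors" selection rule can be sketched directly. The snippet below computes AIC values and collects every candidate within a fixed margin of the best; the margin of 2 is a common rule of thumb, not necessarily the paper's threshold, and the model names and log-likelihoods are illustrative.

```python
def aic(loglik, n_params):
    """Akaike information criterion: -2 log L + 2 k (smaller is better)."""
    return -2.0 * loglik + 2.0 * n_params

def best_and_close(criteria, margin=2.0):
    """Return the best model and its close competitors.

    criteria: dict mapping model name -> information criterion value.
    A model is a close competitor if its criterion lies within `margin`
    of the best value.
    """
    best = min(criteria, key=criteria.get)
    close = [m for m in criteria
             if m != best and criteria[m] - criteria[best] <= margin]
    return best, close
```

The same portfolio logic applies unchanged to BIC or any other criterion where smaller values indicate a better model.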
Abstract:
Given a nonlinear model, a probabilistic forecast may be obtained by Monte Carlo simulation. At a given forecast horizon, Monte Carlo simulations yield sets of discrete forecasts, which can be converted to density forecasts. The resulting density forecasts will inevitably be degraded by model mis-specification. To enhance their quality, one can mix them with the unconditional density. This paper examines the value of combining conditional density forecasts with the unconditional density. The findings have positive implications for issuing early warnings in disciplines ranging from economics to meteorology; UK inflation forecasts are considered as an example.
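The mixing idea can be written as a simple linear opinion pool. Below is a minimal sketch assuming Gaussian densities and a fixed mixing weight `w`; in practice the weight might be chosen by maximising an out-of-sample log score, and none of these choices are taken from the paper.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def mix_densities(p_cond, p_uncond, w):
    """Linear pool: w * conditional density + (1 - w) * unconditional."""
    return lambda x: w * p_cond(x) + (1.0 - w) * p_uncond(x)
```

When the conditional forecast is badly centred, the mixture assigns more density to outcomes the misspecified model considered unlikely, which is why blending with the unconditional density can improve early-warning performance.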
Abstract:
This paper studies the effects of increasing formality, via tax reduction and simplification schemes, on micro-firm performance, using the 1997 Brazilian SIMPLES program. We develop a simple theoretical model showing that SIMPLES affects only a segment of the micro-firm population, for which the effect of formality on firm performance can be identified and analyzed along the one-dimensional quantiles of the conditional firm revenue distribution. To estimate the effect of formality, we use an econometric approach that compares eligible and non-eligible firms born in a local interval around the introduction of SIMPLES, with an estimator that combines quantile regression and the regression discontinuity identification strategy. The empirical results corroborate the positive effect of formality on micro-firms' performance and produce a clear characterization of who benefits from these programs.
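As a simplified illustration of the regression discontinuity component, the sketch below estimates a mean jump at a cutoff from local linear fits on each side; the function name, the bandwidth choice, and the data are illustrative assumptions, and the paper's quantile-regression component is not reproduced here.

```python
import numpy as np

def rd_jump(running, outcome, cutoff, bandwidth):
    """Sharp RD estimate: difference of local linear fits at the cutoff."""
    def fit_at_cutoff(mask):
        x = running[mask] - cutoff
        X = np.column_stack([np.ones(x.size), x])
        coef, *_ = np.linalg.lstsq(X, outcome[mask], rcond=None)
        return coef[0]  # fitted value at the cutoff (intercept)
    left = (running < cutoff) & (running >= cutoff - bandwidth)
    right = (running >= cutoff) & (running <= cutoff + bandwidth)
    return fit_at_cutoff(right) - fit_at_cutoff(left)
```

Here the running variable would be the firm's creation date relative to the introduction of SIMPLES, and the jump at the cutoff is the estimated treatment effect for firms born right around the reform.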
Abstract:
The evaluation of investment fund performance has been one of the main developments of modern portfolio theory. Most studies employ the technique developed by Jensen (1968), which compares a particular fund's returns to a benchmark portfolio of equal risk. However, the standard measures of fund manager performance are known to suffer from a number of problems in practice. In particular, previous studies implicitly assume that the risk level of the portfolio is stationary through the evaluation period; that is, unconditional measures of performance do not account for the fact that risk and expected returns may vary with the state of the economy. Many of the problems encountered in previous performance studies therefore reflect the inability of traditional measures to handle the dynamic behaviour of returns. As a consequence, Ferson and Schadt (1996) suggest an approach to performance evaluation called conditional performance evaluation, which is designed to address this problem. This paper applies such a conditional measure of performance to a sample of 27 UK property funds over the period 1987-1998. The results suggest that once the time-varying nature of the funds' betas is corrected for, by the addition of market indicators, average fund performance shows an improvement over that found with traditional methods of analysis.
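A Ferson-Schadt style conditional regression lets beta vary with a lagged information variable by adding an interaction term to the Jensen regression. The sketch below is a minimal illustration with synthetic, noise-free data; the function name and the choice of instrument are assumptions, not the paper's specification.

```python
import numpy as np

def conditional_alpha(r_fund, r_mkt, z_lag):
    """OLS of fund excess returns on the market and a z * market interaction.

    The implied time-varying beta is beta_t = beta0 + beta1 * z_lag,
    and the intercept is the conditional performance measure (alpha).
    Returns (alpha, beta0, beta1).
    """
    X = np.column_stack([np.ones_like(r_mkt), r_mkt, z_lag * r_mkt])
    coef, *_ = np.linalg.lstsq(X, r_fund, rcond=None)
    return coef[0], coef[1], coef[2]
```

With a lagged instrument such as the dividend yield (demeaned), a positive `beta1` means the fund's market exposure rises when the instrument is high, and `alpha` measures performance after netting out that predictable variation.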
Abstract:
Despite the fact that mites were used at the dawn of forensic entomology to elucidate the postmortem interval, their use in current cases remains quite low for procedural reasons such as inadequate taxonomic knowledge. Special interest is focused on the phoretic stages of some mite species, because phoront-host specificity allows us, on many occasions, to deduce the presence of the carrier (usually Diptera or Coleoptera) even when it has not been seen in the sampling performed in situ or in the autopsy room. In this article, we describe two cases where Poecilochirus austroasiaticus Vitzthum (Acari: Parasitidae) was sampled in the autopsy room. In the first case, we could sample the host, Thanatophilus ruficornis (Küster) (Coleoptera: Silphidae), which was still carrying phoretic stages of the mite on its body. By observing starvation/feeding periods as a function of digestive tract filling, that attachment allowed us to establish chronological cycles of phoretic behavior, showing maximum peaks of phoronts during arrival at and departure from the corpse and the lowest values during the phase of host feeding. From the sarcosaprophagous fauna, we were able to determine in this case a minimum postmortem interval of 10 days. In the second case, we found no Silphidae at the place where the corpse was found or at the autopsy, but a postmortem interval of 13 days could be established from the high specificity of this interspecific relationship and the departure of this family of Coleoptera from the corpse.
Abstract:
Although early modern acting companies were adept at using different kinds of venue, performing indoors imposed a significant change in practice. Since indoor theatres required artificial lighting to augment the natural light admitted via windows, candles were employed; but the technology was such that candles could not last untended throughout an entire performance. Performing indoors thus introduced a new component into stage practice: the interval. This article explores what extant evidence (such as it is) might tell us about the introduction of act breaks, how they may have worked, and the implications for actors, audiences and dramatists. Ben Jonson's scripting of the interval in two late plays, The Staple of News and The Magnetic Lady, is examined for what it may suggest about actual practice, and the ways in which the interval may have been considered integral to composition and performance are explored through a reading of Middleton and Rowley's The Changeling. The interval offered playwrights a form of structural punctuation, drawing attention to how acts ended and began; actors could use the space to bring on props for use in the next act; spectators might use the pause between acts to reflect on what had happened and, perhaps, anticipate what was to come; and stage-sitters, the evidence indicates, often took advantage of the hiatus in the play to assert their presence in the space to which all eyes naturally were drawn.
Abstract:
We study a series of transient entries into the low-latitude boundary layer (LLBL) by all four Cluster spacecraft during an outbound pass through the mid-afternoon magnetopause ([X_GSM, Y_GSM, Z_GSM] ≈ [2, 7, 9] R_E). The events take place during an interval of northward IMF, as seen in the data from the ACE satellite and lagged by a propagation delay of 75 min that is well-defined by two separate studies: (1) the magnetospheric variations prior to the northward turning (Lockwood et al., 2001, this issue) and (2) the field clock angle seen by Cluster after it had emerged into the magnetosheath (Opgenoorth et al., 2001, this issue). With an additional lag of 16.5 min, the transient LLBL events correlate well with swings of the IMF clock angle (in GSM) to near 90°. Most of this additional lag is explained by ground-based observations, which reveal signatures of transient reconnection in the pre-noon sector that then take 10-15 min to propagate eastward to 15 MLT, where they are observed by Cluster. The eastward phase speed of these signatures agrees very well with the motion deduced by cross-correlation of the signatures seen on the four Cluster spacecraft. The evidence that these events are reconnection pulses includes: transient erosion of the noon 630 nm (cusp/cleft) aurora to lower latitudes; transient and travelling enhancements of the flow into the polar cap, imaged by the AMIE technique; and poleward-moving events moving into the polar cap, seen by the EISCAT Svalbard Radar (ESR). A pass of the DMSP-F15 satellite reveals that the open field lines near noon have been opened for some time: the more recently opened field lines were found closer to dusk, where the flow transient and the poleward-moving event intersected the satellite pass.
The events at Cluster have ion and electron characteristics predicted and observed by Lockwood and Hapgood (1998) for a Flux Transfer Event (FTE), with allowance for magnetospheric ion reflection at Alfvénic disturbances in the magnetopause reconnection layer. Like FTEs, the events are about 1 R_E in their direction of motion and show a rise in the magnetic field strength; but unlike FTEs in general, they show no pressure excess in their core and hence no characteristic bipolar signature in the boundary-normal component. However, most of the events were observed when the magnetic field was southward, i.e. on the edge of the interior magnetic cusp, or when the field was parallel to the magnetic equatorial plane. Only when the satellite begins to emerge from the exterior boundary (when the field was northward) do the events start to show a pressure excess in their core and the consequent bipolar signature. We identify the events as the first observations of FTEs at middle altitudes.