988 results for TIME STATISTICS
Abstract:
When dealing with the design of service networks, such as health and EMS services, banking, or distributed ticket-selling services, the location of service centers has a strong influence on the congestion at each of them and, consequently, on the quality of service. In this paper, several models are presented to account for service congestion. The first model addresses the location of the smallest number of single-server centers such that all the population is served within a standard distance and nobody waits in line longer than a given time limit, or with more than a predetermined number of other clients. We then formulate several maximal coverage models, with one or more servers per service center. A new heuristic is developed to solve the models and tested on a 30-node network.
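As a rough illustration of this kind of congestion-constrained covering problem, the sketch below enumerates center subsets on a tiny hypothetical four-node network. The distances, demand rates, service rate, and the M/M/1 waiting-time constraint are all illustrative assumptions, not the paper's formulation or heuristic.

```python
# A minimal sketch (not the paper's model): choose the fewest single-server centers
# on a toy network so that every node lies within a coverage distance of an open
# center and the M/M/1 queue at each center keeps the expected waiting time below a
# limit. All node names, rates and distances are hypothetical.
from itertools import combinations

dist = {  # symmetric travel distances between 4 hypothetical nodes
    ('A', 'A'): 0, ('A', 'B'): 3, ('A', 'C'): 7, ('A', 'D'): 5,
    ('B', 'B'): 0, ('B', 'C'): 4, ('B', 'D'): 6,
    ('C', 'C'): 0, ('C', 'D'): 2,
    ('D', 'D'): 0,
}
def d(i, j):
    return dist.get((i, j), dist.get((j, i)))

demand = {'A': 2.0, 'B': 1.0, 'C': 1.5, 'D': 0.5}   # call rates per hour
mu = 6.0           # service rate of a single server (calls per hour)
D_max = 5          # standard coverage distance
W_max = 0.25       # waiting-time limit (hours)

nodes = list(demand)

def feasible(centers):
    # assign each node to its nearest open center within the standard distance
    load = {c: 0.0 for c in centers}
    for i in nodes:
        reachable = [c for c in centers if d(i, c) <= D_max]
        if not reachable:
            return False
        load[min(reachable, key=lambda c: d(i, c))] += demand[i]
    # M/M/1 expected wait Wq = rho / (mu - lambda) must stay below the limit
    return all(lam < mu and (lam / mu) / (mu - lam) <= W_max for lam in load.values())

best = next(set(c) for k in range(1, len(nodes) + 1)
            for c in combinations(nodes, k) if feasible(set(c)))
print("smallest feasible set of centers:", best)
```

The brute-force enumeration stands in for the paper's heuristic and only serves to make the coverage-plus-queueing constraints concrete.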
Abstract:
In this paper, generalizing results in Alòs, León and Vives (2007b), we show that the dependence of jumps in the volatility under a jump-diffusion stochastic volatility model has no effect on the short-time behaviour of the at-the-money implied volatility skew, although the corresponding Hull and White formula depends on the jumps. Towards this end, we use Malliavin calculus techniques for Lévy processes based on Løkka (2004), Petrou (2006), and Solé, Utzet and Vives (2007).
Abstract:
We have analyzed the spatial accuracy of European foreign trade statistics compared to Latin American ones. We have also included the USA's data because of the importance of this country in Latin American trade. We have developed a method for mapping discrepancies between exporters and importers, trying to isolate systematic spatial deviations. Although our results do not allow a unique explanation, they present some interesting clues to the distribution channels in the Latin American continent, as well as some spatial deviations in the statistics of individual countries. Connecting our results with the literature specialized in the accuracy of foreign trade statistics, we can revisit Morgenstern (1963) as well as Federico and Tena (1991). Morgenstern had a very pessimistic view of the reliability of this statistical source, but his main concern was focused on trade balances, not on gross export or import values. Federico and Tena (1991) demonstrated how accuracy increases with aggregation, both geographical and by product. But they still hold a pessimistic view with regard to distribution questions, remarking that it might be more accurate to use import sources in this latter case. We have found that the data set coming from foreign trade statistics for a sample in 1925, whether from exporters or importers, is a valuable tool for the geography of trade patterns, although in some specific cases it needs some spatial adjustments.
Abstract:
We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments in the theory of prediction of individual sequences. We show that if the sequence is a realization of a stationary and ergodic random process, then the average number of mistakes converges, almost surely, to that of the optimum, given by the Bayes predictor.
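A minimal sketch in the spirit of randomized individual-sequence prediction, not the authors' exact algorithm: two toy experts are combined with exponential weights and the forecaster's bit is drawn at random according to the weighted vote. The expert set, the learning rate eta, and the function name are illustrative assumptions.

```python
# Hedged sketch of a randomized expert-weighting predictor for a binary sequence.
import random, math

def predict_sequence(bits, eta=1.0, seed=0):
    rng = random.Random(seed)
    experts = [
        lambda past: 0 if not past else past[-1],   # expert 1: repeat the last bit
        lambda past: 1 - past[-1] if past else 1,   # expert 2: flip the last bit
    ]
    losses = [0.0] * len(experts)
    mistakes = 0
    for t, y in enumerate(bits):
        past = bits[:t]
        preds = [e(past) for e in experts]
        w = [math.exp(-eta * l) for l in losses]    # exponential weights on past losses
        p1 = sum(wi for wi, p in zip(w, preds) if p == 1) / sum(w)
        guess = 1 if rng.random() < p1 else 0       # randomized prediction
        mistakes += (guess != y)
        losses = [l + (p != y) for l, p in zip(losses, preds)]
    return mistakes / len(bits)

print(predict_sequence([0, 1] * 50))   # alternating sequence: the "flip" expert dominates
```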
Abstract:
In this paper we use Malliavin calculus techniques to obtain an expression for the short-time behavior of the at-the-money implied volatility skew for a generalization of the Bates model, where the volatility need not be a diffusion nor a Markov process, as the examples in Section 7 show. This expression depends on the derivative of the volatility in the sense of Malliavin calculus.
Abstract:
This work proposes novel network analysis techniques for multivariate time series. We define the network of a multivariate time series as a graph where vertices denote the components of the process and edges denote non-zero long-run partial correlations. We then introduce a two-step LASSO procedure, called NETS, to estimate high-dimensional sparse long-run partial correlation networks. This approach is based on a VAR approximation of the process and allows us to decompose the long-run linkages into the contributions of the dynamic and contemporaneous dependence relations of the system. The large-sample properties of the estimator are analysed, and we establish conditions for consistent selection and estimation of the non-zero long-run partial correlations. The methodology is illustrated with an application to a panel of U.S. blue chips.
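The sketch below illustrates the general two-step LASSO idea under strong simplifying assumptions: a plain VAR(1), node-wise Lasso regressions on the residuals for the contemporaneous links, and fixed penalties. It is not the NETS estimator itself, and the function name nets_sketch and the tuning values are hypothetical.

```python
# Hedged two-step LASSO sketch: step 1 fits sparse VAR(1) equations (dynamic
# dependence); step 2 runs node-wise LASSO on the VAR residuals (contemporaneous
# dependence); edges are the union of non-zero links.
import numpy as np
from sklearn.linear_model import Lasso

def nets_sketch(Y, alpha_var=0.05, alpha_res=0.05):
    T, N = Y.shape
    X, Z = Y[:-1], Y[1:]                        # lagged and current observations
    A = np.zeros((N, N))                        # VAR(1) coefficient matrix
    for i in range(N):                          # step 1: sparse dynamic links
        A[i] = Lasso(alpha=alpha_var, fit_intercept=False).fit(X, Z[:, i]).coef_
    resid = Z - X @ A.T
    C = np.zeros((N, N))                        # contemporaneous links
    for i in range(N):                          # step 2: node-wise LASSO on residuals
        others = [j for j in range(N) if j != i]
        C[i, others] = Lasso(alpha=alpha_res, fit_intercept=False).fit(
            resid[:, others], resid[:, i]).coef_
    # keep an edge i--j when any dynamic or contemporaneous link is non-zero
    edges = (np.abs(A) > 1e-8) | (np.abs(C) > 1e-8) | (np.abs(A.T) > 1e-8) | (np.abs(C.T) > 1e-8)
    np.fill_diagonal(edges, False)
    return A, C, edges

rng = np.random.default_rng(0)
Y = np.zeros((200, 5))
for t in range(1, 200):                          # simulate a simple diagonal VAR(1)
    Y[t] = 0.4 * Y[t - 1] + rng.standard_normal(5)
A, C, edges = nets_sketch(Y)
print("estimated edge matrix:\n", edges.astype(int))
```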
Abstract:
This study presents new evidence concerning the uneven processes of industrialization in nineteenth-century Spain and Italy, based on a disaggregate analysis of the productive sectors from which the behaviour of the aggregate indices is comprised. The use of multivariate time-series analysis techniques can aid our understanding and characterization of these two processes of industrialization. The identification of those sectors with key roles in leading industrial growth provides new evidence concerning the factors that governed the behaviour of the aggregates in the two economies. In addition, the analysis of the existence of interindustry linkages reveals the scale of the industrialization process and, where significant differences exist, accounts for many of the divergences recorded in the historiography for the period 1850-1913.
Abstract:
The turn-on process of a multimode VCSEL is investigated from a statistical point of view. Special attention is paid to quantities such as time jitter and bit error rate. The single-mode performance of VCSELs during current modulation is compared to that of edge-emitting lasers.
Abstract:
A theory is presented to explain the statistical properties of the growth of dye-laser radiation. Results are in agreement with recent experimental findings. The different roles of pump-noise intensity and correlation time are elucidated.
Abstract:
Conviction statistics were the first criminal statistics available in Europe during the nineteenth century. Their main weaknesses as crime measures and for comparative purposes were identified by Alphonse de Candolle in the 1830s. Currently, they are seldom used by comparative criminologists, although they provide a less valid but more reliable measure of crime and formal social control than police statistics. This article uses conviction statistics, compiled from the four editions of the European Sourcebook of Crime and Criminal Justice Statistics, to study the evolution of persons convicted in European countries from 1990 to 2006. Trends in persons convicted for six offences (intentional homicide, assault, rape, robbery, theft, and drug offences) in up to 26 European countries are analysed. These trends are established for the whole of Europe as well as for a cluster of Western European countries and a cluster of Central and Eastern European countries. The analyses show similarities between both regions of Europe at the beginning and at the end of the period under study. After a general increase in the rate of persons convicted in the early 1990s across the whole of Europe, trends followed different directions in Western and in Central and Eastern Europe. However, during the 2000s, a certain stability in the rates of persons convicted for intentional homicide can be observed throughout Europe, accompanied by a general decrease in the rate of persons convicted for property offences and an increase in the rate of those convicted for drug offences. The latter goes together with an increase in the rate of persons convicted for non-lethal violent offences, which only reached some stability at the end of the time series. These trends show that there is no general crime drop in Europe. After a discussion of possible theoretical explanations, a multifactor model, inspired by opportunity-based theories, is proposed to explain the trends observed.
Abstract:
We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known: the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on U.S. dollar-deutsche mark futures, finding good agreement between theory and the observed data.
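A minimal simulation sketch of such a continuous-time random walk, assuming an exponential pausing-time density and a Laplace jump-size density purely for illustration; the paper estimates these densities from data rather than assuming them, and the parameter values below are arbitrary.

```python
# Hedged CTRW sketch for a log-price: waiting times between jumps are exponential,
# jump magnitudes are Laplace, and the price change over a horizon t is the sum of
# all jumps occurring before t.
import numpy as np

def ctrw_price_change(t_horizon, mean_wait=1.0, jump_scale=0.05, rng=None):
    rng = rng or np.random.default_rng()
    t, x = 0.0, 0.0
    while True:
        t += rng.exponential(mean_wait)        # pausing time between successive jumps
        if t > t_horizon:
            return x
        x += rng.laplace(0.0, jump_scale)      # magnitude of the jump

rng = np.random.default_rng(1)
samples = np.array([ctrw_price_change(100.0, rng=rng) for _ in range(5000)])
print("mean %.4f, std %.4f" % (samples.mean(), samples.std()))
```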
Abstract:
Photon migration in a turbid medium has been modeled in many different ways. The motivation for such modeling is based on technology that can be used to probe potentially diagnostic optical properties of biological tissue. Surprisingly, one of the more effective models is also one of the simplest. It is based on statistical properties of a nearest-neighbor lattice random walk. Here we develop a theory allowing one to calculate the number of visits by a photon to a given depth, if it is eventually detected at an absorbing surface. This mimics cw measurements made on biological tissue and is directed towards characterizing the depth reached by photons injected at the surface. Our development of the theory uses formalism based on the theory of a continuous-time random walk (CTRW). Formally exact results are given in the Fourier-Laplace domain, which, in turn, are used to generate approximations for parameters of physical interest.
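For reference, the standard CTRW propagator in the Fourier-Laplace domain is the Montroll-Weiss relation; the paper's formally exact expressions for the number of visits to a given depth build on this kind of transform-domain result, but are not reproduced here.

\[
\hat{P}(k,s) \;=\; \frac{1-\hat{\psi}(s)}{s}\,\frac{1}{1-\hat{\psi}(s)\,\hat{\lambda}(k)},
\]

where \(\psi(t)\) is the pausing-time density, \(\lambda(x)\) is the single-step displacement density, and the hats denote the Laplace transform in \(t \to s\) and the Fourier transform in \(x \to k\).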
Abstract:
Robust estimators for accelerated failure time models with asymmetric (or symmetric) error distribution and censored observations are proposed. It is assumed that the error model belongs to a log-location-scale family of distributions and that the mean response is the parameter of interest. Since scale is a main component of the mean, scale is not treated as a nuisance parameter. A three-step procedure is proposed. In the first step, an initial high-breakdown-point S-estimate is computed. In the second step, observations that are unlikely under the estimated model are rejected or downweighted. Finally, a weighted maximum likelihood estimate is computed. To define the estimates, functions of censored residuals are replaced by their estimated conditional expectation given that the response is larger than the observed censored value. The rejection rule in the second step is based on an adaptive cut-off that, asymptotically, does not reject any observation when the data are generated according to the model. Therefore, the final estimate attains full efficiency at the model, with respect to the maximum likelihood estimate, while maintaining the breakdown point of the initial estimator. Asymptotic results are provided. The new procedure is evaluated with the help of Monte Carlo simulations. Two examples with real data are discussed.
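A highly simplified sketch of the three-step idea for uncensored log-normal data: the initial S-estimate is replaced by a least-absolute-deviations fit, the adaptive cut-off by a fixed threshold, and the censored-residual handling is omitted entirely. It only conveys the fit / reject / refit structure, and the function name and constants are hypothetical.

```python
# Hedged three-step sketch for an uncensored log-normal AFT model:
# (1) initial robust fit, (2) reject observations with large standardized residuals,
# (3) refit on the retained data (MLE reduces to least squares in this case).
import numpy as np
from scipy.optimize import minimize

def robust_aft_sketch(X, log_t, cutoff=2.5):
    p = X.shape[1]
    # step 1: least-absolute-deviations fit as a stand-in for the S-estimate
    lad = minimize(lambda b: np.abs(log_t - X @ b).sum(), np.zeros(p), method="Nelder-Mead")
    r = log_t - X @ lad.x
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust scale via the MAD
    # step 2: hard-reject observations that are unlikely under the fitted model
    keep = np.abs(r / scale) <= cutoff
    # step 3: maximum likelihood on the retained data (ordinary least squares here)
    beta, *_ = np.linalg.lstsq(X[keep], log_t[keep], rcond=None)
    return beta, scale, keep

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
log_t = X @ np.array([1.0, 0.5]) + rng.normal(scale=0.3, size=n)
log_t[:10] += 5.0                                          # gross outliers
print(robust_aft_sketch(X, log_t)[0])                      # close to (1.0, 0.5) despite outliers
```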